Set Up Ubuntu/XFCE4 on Headless Raspberry Pi 4 via VNC

Background

The new Raspberry Pi 4 with 8GB RAM is an attractive little beast that is well suited for a home server.  You can set it up basically fanless.  I was intrigued, and I wanted to set it up headless, running a Bitcoin full node, an ElectrumX server, plus Lightning Network (see my next post).

However, while there are many tutorials around the Internet, I just couldn’t find one that suits my needs:

  • It must be 64bit.  Come on!  We are in 2021 already.
    • That rules out the 32bit Raspberry Pi OS.
    • The 64bit Raspberry Pi OS (beta) runs fine on the board, but it lacks some basic Linux/UNIX features (e.g., middle-mouse-button pasting), and the desktop settings keep getting lost.  It is just frustrating.
  • That leaves 64bit Ubuntu, which should be a decent base OS.
  • I want to boot from USB, without a slow SD card.
  • I want to run headless with a remote graphical UI.

The hardware

Below are the components I purchased.  The total cost is $270.  Not exactly dirt cheap, but it is a pretty meaty piece of hardware.

Putting it together is simpler than setting up a toaster.  See a couple of pictures below as well.

Step 1 – Prepare SSD drive and USB boot

  • Download and install rpi-imager on your host PC (Linux, Windows, etc.)
  • Flash Raspberry Pi OS onto an SD card (I used the 64bit RPI OS beta, but I think the 32bit one should work too). This will be a throw-away stepping stone.
  • Insert the SD card into the RPI4 and boot from it.
    • Insert a USB keyboard/mouse into the black USB slots (USB 2.0)
    • Connect an HDMI cable to a monitor
    • Use a USB Type-C power supply, ideally > 2.4A
  • Install (if not already installed) and run rpi-imager on the RPI4 booted from the SD card.
  • (IMPORTANT) Choose the “Ubuntu 20.10 64bit Server” OS image and flash it onto the 1TB Samsung SSD drive.
    • I found Ubuntu 20.04 LTS doesn’t work!
  • Follow this link to update the EEPROM firmware.
  • Follow this link to boot from USB (i.e., the SSD drive) first instead of the SD card. A rough sketch of both steps follows this list.
  • Remove the SD card, and now you should be able to boot from the SSD drive.
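
The two “follow this link” steps are not reproduced here. As a rough sketch (assuming the rpi-eeprom tools and raspi-config on Raspberry Pi OS; treat this as an outline rather than a verified recipe):

# Sketch only: update the EEPROM firmware, then prefer USB in the boot order.
sudo apt update && sudo apt install -y rpi-eeprom   # usually preinstalled
sudo rpi-eeprom-update        # show current vs. latest bootloader firmware
sudo rpi-eeprom-update -a     # stage the latest firmware (applied at reboot)
sudo reboot
# After the reboot, set the boot order to try USB before the SD card:
sudo raspi-config             # Advanced Options -> Boot Order -> USB Boot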

Step 2 – Fix up the board

  • Connect an Ethernet cable to enable Internet access
  • Fix the sluggish USB mouse (see the sketch after this list)
    • Add “usbhid.mousepoll=0” to /boot/firmware/cmdline.txt; reboot
  • Install argon 1 fan control
    • curl https://download.argon40.com/argon1.sh | bash
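
The cmdline.txt tweak mentioned above can be scripted; a sketch (back up the file first, since cmdline.txt must stay a single line):

sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
sudo sed -i '1 s/$/ usbhid.mousepoll=0/' /boot/firmware/cmdline.txt   # append to line 1
sudo reboot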

Step 3 – Install XFCE desktop

Note: choose “lightdm” when installing xfce4.

sudo apt update
sudo apt dist-upgrade
sudo apt install xfce4 xfce4-goodies
sudo apt install tigervnc-standalone-server novnc firefox

Create the /etc/lightdm/lightdm.conf file:

[Seat:*]
allow-guest=false
autologin-user=ubuntu
user-session=xfce
#greeter-session=unity-greeter
xserver-command=/etc/X11/xinit/xserverrc

After this you should be able to log into the graphical XFCE desktop.

Step 4 – Set up VNC and headless

  • Configure Wi-Fi, assuming that is how you want to connect the RPI4
  • Set up the VNC password
    • sudo vncpasswd /etc/vncpasswd
  • Modify the /etc/X11/xinit/xserverrc file:
#!/bin/sh
# Strip the "vtN -novtswitch" arguments that lightdm passes to the X server,
# then launch Xvnc in place of the regular X server.
ARG=$(echo "$@" | sed -e "s#vt. -novtswitch##")
GEOMETRY="1600x1000"
DEPTH="24"
# single client mode; VNC password auth; TLS encryption optional
exec Xvnc -desktop ubuntu -SecurityTypes "VncAuth,TLSVnc" -NeverShared -DisconnectClients -geometry $GEOMETRY -depth $DEPTH -rfbauth /etc/vncpasswd $ARG
  • Make sure Wi-Fi connects before any user logs in, so that you can reach the machine via VNC (an nmcli alternative follows this list)
    • To do this, launch NetworkManager: select “Applications”/”Settings”/”Advanced Network Configuration”
    • Select the Wi-Fi network you are connecting to, then click the button at the bottom to edit the connection
    • Under the “General” tab, check “All users may connect to this network”
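
Alternatively, the same “all users” change can be made from the command line with nmcli; a sketch, where “MyWifi” is a placeholder for your connection name:

nmcli connection show                               # find the Wi-Fi connection name
sudo nmcli connection modify "MyWifi" connection.permissions ""   # empty = all users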

After this setup, if you reboot the RPI4, you will get a text console on the attached display.

Meanwhile, you should be able to access the RPI4 over VNC remotely:

  • On your PC, install a VNC viewer (e.g., TigerVNC viewer)
  • Launch the viewer and enter the IP address with port 5900
  • Enter the VNC password
  • You should see the XFCE desktop

Step 5 – Optional noVNC to enable web browser access

  • Install the noVNC package
    • sudo apt install novnc net-tools
  • Run /usr/share/novnc/utils/launch.sh
  • On your PC, open a browser and point it at “http://<RPI4 IP address>:6080/vnc_auto.html”
  • Enter the VNC password and you will be connected to the desktop

You can automate this by adding the launch command to /etc/rc.local or by creating a small systemd service file, as sketched below.
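
A minimal sketch of such a systemd unit (the unit name, port, and options here are my own choices, not from the original setup; launch.sh accepts --listen and --vnc arguments):

sudo tee /etc/systemd/system/novnc.service >/dev/null <<'EOF'
[Unit]
Description=noVNC web client
After=network.target

[Service]
ExecStart=/usr/share/novnc/utils/launch.sh --listen 6080 --vnc localhost:5900
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now novnc.service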

AWS ARM64 vs. X86_64 – Bitcoin Performance Comparison

I’m running some experiments with the Bitcoin Core software on AWS. I’m intrigued by 2 questions:

  • Which performs better, ARM64 or X86_64?
  • What is the performance loss of 32bit ARM vs. 64bit ARM?

The Experiment

  • I fired up a t3.medium instance with the Ubuntu Desktop 20.04 (x86_64) image
  • I fired up a second instance of the t4g.medium type with the Ubuntu Desktop 20.04 (arm64) image
  • To run the 32bit ARM program, I followed the chroot approach documented in my previous post and set up an armhf Ubuntu 20.04 (focal) chroot environment
  • Then I downloaded all 3 versions of Bitcoin Core from its download site: 32bit ARM, 64bit ARM and 64bit Intel
  • Run bitcoin-qt from scratch (delete the ~/.bitcoin directory if any) and measure the time it takes to verify the first 200,000 blocks (see the timing sketch after this list)
    • Each configuration runs 2 times; we then take the average
    • Other system variables are monitored to ensure a similar running environment. Specifically, networking and disk do not appear to be factors
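
For reference, a rough headless timing sketch of this measurement (my actual runs used bitcoin-qt and manual observation; the bitcoind/bitcoin-cli loop below is only an illustration):

rm -rf ~/.bitcoin                 # start from scratch
bitcoind -daemon
sleep 10                          # give the RPC interface time to come up
start=$(date +%s)
until [ "$(bitcoin-cli getblockcount 2>/dev/null || echo 0)" -ge 200000 ]; do
    sleep 30
done
echo "first 200,000 blocks verified in $(( $(date +%s) - start )) seconds"
bitcoin-cli stop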

The Results

See the results listed below. For this Bitcoin workload, the 64bit ARM build clearly wins: the 32bit ARM build and the 64bit Intel instance each deliver only about 80% of its performance.

category                           | 64bit ARM     | 32bit ARM     | 64bit Intel
Instance type                      | t4g.medium    | t4g.medium    | t3.medium
CPU                                | Graviton2     | Graviton2     | Intel Xeon Platinum 8000 series
# CPU cores                        | 2             | 2             | 2
RAM (GB)                           | 4             | 4             | 4
run duration #1                    | 8’44” (524″)  | 10’24” (624″) | 11’16” (676″)
run duration #2                    | 7’30” (450″)  | 9’58” (598″)  | 9’04” (544″)
run duration average               | 487″          | 611″          | 610″
relative performance to 64bit ARM  | 100%          | 80%           | 80%
Pricing (us-west-2/Oregon) ($/hr)  | 0.0336        | 0.0336        | 0.0416

Geekbench Performance on AWS Graviton 2

There is much touting about the new AWS Graviton 2 (ARM64) offering as a game changer. Let us run some benchmarks to test it out.

Settings

We pick 3 EC2 instance types to compare:

  • a1 – First-generation ARM64 AWS Graviton CPU
  • m6g – Second-generation ARM64 Graviton 2 CPU
  • m5 – Intel Xeon Platinum 8259CL CPU

We run Geekbench 4 on the xlarge instances of these EC2 types. We mostly focus on 64bit performance, but we will touch on 32bit performance as well.

Overview

Instance type                    | a1.xlarge | m6g.xlarge | m5.xlarge
vCPU                             | 4         | 4          | 4
Memory (GB)                      | 8         | 16         | 16
Hourly price (us-east-1, Linux)  | 0.102     | 0.154      | 0.192
64bit single-core score          | 1899      | 3609       | 3647
64bit multi-core score           | 5227      | 11142      | 8017

From the above table, several observations are obvious:

  • Graviton 2 doubles the performance of Graviton 1.
  • For single-core performance, Graviton 2 is similar to the Intel Xeon CPU.
  • For multi-core performance, Graviton 2 scales up much better, likely because Intel uses hyper-threading, where the vCPU count is only half the true CPU core count. By contrast, the vCPU count on Graviton CPUs is the true core count.

I also listed the pricing. It looks like Graviton 2 is a good deal!

Look into the Details

This link gives detailed scores for each test suite and each instance type. A few highlighted cells indicate interesting contrasts between Intel Xeon and Graviton 2:

  • Intel Xeon is 20 times faster than Graviton 2 in the AES test! This is *very* likely due to a non-optimized implementation for ARM64, i.e., one that is not using NEON instructions. Otherwise the performance should be far more comparable.
  • Intel Xeon is 2x better in SGEMM and 50% better in SFFT, both of which rely heavily on Intel AVX/SSE instructions, while ARM64 uses NEON instructions.
  • Graviton 2 shines in the memory area: 2x better in Memory Copy and 4x better in Memory Bandwidth.

32bit Performance

While 32bit performance probably does not matter much on these servers, it is still interesting to take a look. Below is the overview comparison table, and this link gives the detailed scores.

Instance type            | a1.xlarge | m6g.xlarge | m5.xlarge
32bit single-core score  | 1707      | 2975       | 3053
32bit multi-core score   | 4692      | 9016       | 6886

Overall we see patterns similar to the 64bit case:

  • Graviton 2 is about 2x faster than Graviton 1.
  • Intel Xeon performs about the same as Graviton 2 in single-core and falls behind in multi-core performance.
  • The detailed scores also reflect a pattern similar to the 64bit case.

Develop 32bit Applications on 64bit Linux Machines

Overview

Nowadays almost all PCs and servers are 64bit. Still, once in a while one may need to develop and run 32bit apps. In this post we will go through several methods, for both x86_64 and ARM64 machines.

I can think of 4 ways of doing this:

  • Run a virtual machine, which in turn runs a 32bit OS. Not covered in this post.
  • Set up a 32bit chroot environment. (Recommended)
  • Set up a 32bit architecture with Debian multi-arch support. (Recommended)
  • Use the gcc multi-lib feature to build 32bit binaries.

X86_64/Ubuntu 20.04

Here we go through the detailed steps for these approaches on X86_64/Ubuntu 20.04. The instructions should apply to other Debian-based distributions and other versions as well.

GCC Multi-Lib Approach

Although this is the simplest approach, I don’t recommend it, because GCC support varies across CPU architectures and the command-line options vary as well, making it hard to generalize and to port existing software. I believe multi-arch is the future direction.

  • Install GCC multi-lib package
sudo apt install gcc-multilib
  • Compile with -m32 option
gcc -m32 hello.c -o hello
file hello
./hello
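
The examples in this post compile a trivial hello.c. For completeness, a minimal one can be created like this:

cat > hello.c <<'EOF'
#include <stdio.h>

int main(void)
{
    printf("hello from a 32bit binary\n");
    return 0;
}
EOF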

Multi-Arch Approach

  • Add 32bit as secondary architecture
sudo dpkg --add-architecture i386
  • Install dev tools plus the 32bit-specific dev tools and libraries
sudo apt update
sudo apt install -y build-essential
sudo apt install -y crossbuild-essential-i386
sudo apt install -y libc6:i386
sudo apt install -y libstdc++6:i386
  • Develop 32bit apps with the i686-linux-gnu- cross-compile prefix
i686-linux-gnu-gcc hello.c -o hello
file hello
./hello
  • Note: for Ubuntu 18.04, the crossbuild-essential-i386 meta-package does not exist. One needs to install the packages below explicitly
sudo apt install -y dpkg-cross g++-i686-linux-gnu  gcc-i686-linux-gnu

Chroot Approach

For more detailed descriptions, please refer to http://logan.tw/posts/2018/02/24/manage-chroot-environments-with-schroot/

  • Install packages
sudo apt-get install schroot debootstrap
  • Install a 32bit environment (e.g., Ubuntu 20.04) (note the last URL argument can be omitted)
sudo debootstrap --arch=i386 focal /var/chroot/focal32 http://archive.ubuntu.com/ubuntu
  • Edit /etc/schroot/schroot.conf file
[focal32]
description=Ubuntu 20.04 Focal Fossa 32bit
directory=/var/chroot/focal32
root-users=ubuntu,jsun
users=ubuntu,jsun
type=directory
personality=linux32
  • Create and enter a 32bit chroot session
schroot -c focal32          # enter as a normal user
schroot -c focal32 -u root  # enter as the root user
  • From inside the session, one can install the build-essential meta-package and develop 32bit apps as if inside a 32bit machine (see the sketch after this list).
  • (Optional) One can even run 32bit graphical apps from inside the chroot environment.
export DISPLAY=:0.0
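
As a sketch, the same cycle can also be driven non-interactively from outside the session (reusing the hello.c from earlier; schroot runs the command after “--” inside the chroot):

schroot -c focal32 -u root -- apt-get update
schroot -c focal32 -u root -- apt-get install -y build-essential
schroot -c focal32 -- gcc hello.c -o hello
schroot -c focal32 -- file hello    # should report a 32-bit x86 ELF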

ARM64/Ubuntu 18.04

ARM64 is also called aarch64. The name “ARM64” comes from the Linux community, while “aarch64” is the official name from ARM. We will use the two terms interchangeably.

Multi-Lib Approach

GCC for aarch64 does not support multi-lib, so this option is not available.

Multi-Arch Approach

It is similar to the i386 case, except that the architecture is “armhf” and the 32bit cross-compile prefix is “arm-linux-gnueabihf-”.

sudo dpkg --add-architecture armhf
sudo apt update
sudo apt install -y build-essential
sudo apt install -y crossbuild-essential-armhf
sudo apt install -y libc6:armhf
sudo apt install -y libstdc++6:armhf

arm-linux-gnueabihf-gcc hello.c -o hello
file hello
./hello

Chroot Approach

The chroot approach is similar to the i386 case, with 2 exceptions:

  • The architecture name is “armhf” instead of “i386”
  • The repo URL is “http://ports.ubuntu.com/ubuntu-ports”, instead of “http://archive.ubuntu.com/ubuntu”

Note we can also set up the root filesystem from the Debian repos. Below is an example that sets up Debian sid for the armhf 32bit architecture:

  • Install packages and bootstrap the root filesystem
sudo apt-get install schroot debootstrap
sudo debootstrap --arch=armhf sid /var/chroot/sid-armhf http://ftp.debian.org/debian/
  • Add the following to the /etc/schroot/schroot.conf file
[sid32]
description=Debian Sid 32bit
directory=/var/chroot/sid-armhf
root-users=ubuntu
users=ubuntu
type=directory
personality=linux32
  • Create and enter a sid32 session: “schroot -c sid32”

About MIPS64

  • Multi-lib works for MIPS64.
    • sudo apt install gcc-multilib g++-multilib
    • Compile with the -mabi=32 flag, i.e., “gcc -mabi=32 hello.c”
  • Multi-arch does not work for MIPS64, at least from a 32bit app development perspective.
  • I have not tried the chroot approach on MIPS64 yet. Let me know if you have had any success.

Last Words

  • In an Ubuntu chroot environment, the initial apt repo source is limited to main, so you may find many packages missing. To fix this, edit /etc/apt/sources.list (adjust the release name to match your chroot):
deb http://archive.ubuntu.com/ubuntu xenial main universe multiverse

How to build/run Android Cuttlefish emulator on AWS

Background

Cuttlefish is a new virtual-machine-based Android emulator.  Earlier I wrote an article on how to build/run it on PC and ARM64 machines.  Towards the end of 2019, AWS introduced the a1.metal instance, which allows KVM to run on their ARM64 machines.  This opens the possibility of running cuttlefish on AWS (which itself opens up a lot of possibilities!)

The steps are similar to those I mentioned before.  This article summarizes them specifically for the AWS a1.metal instance, running Ubuntu 19.10.

Build AOSP cuttlefish images on x86_64

This step is done on a PC, while all the remaining steps are done on the AWS a1.metal instance.

  • Refer to https://source.android.com/setup/build/building to set up your host machine
  • Check out the source from master and build the distribution packages, which will be transferred to the a1.metal instance later.  This step takes a very long time.
mkdir cuttlefish
cd cuttlefish
repo init -u https://android.googlesource.com/platform/manifest
repo sync -j8
source build/envsetup.sh
lunch aosp_cf_arm64_phone-userdebug
make dist

If you want to check out a branch and download as little as possible, the following commands do so for the android10-gsi branch, without git history:

repo init --depth=1 -u https://android.googlesource.com/platform/manifest -b android10-gsi
repo sync -f --force-sync --no-clone-bundle --no-tags -j$(nproc)

Setup AWS a1.metal

Start an Ubuntu 19.10 instance on a1.metal.  First, we install a GUI for better debugging and viewing.

sudo apt-get update -y 
sudo apt-get install lxde xrdp -y
sudo passwd ubuntu

Setup to run x86_64 binaries

Many tools (e.g., crosvm) in cuttlefish are still built as x86_64 binaries, not as arm64.  The current solution is to use qemu-user to run those binaries (*ouch!*).  As such, we set up x86_64 (amd64) as the secondary architecture on a1.metal.

  • Override the /etc/apt/sources.list file with the following content
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan-updates main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan-backports main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan-security main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan-updates main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan-backports main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan-security main restricted universe multiverse
  • Install qemu-user-static and add amd64 secondary architecture
sudo apt install qemu-user-static
sudo dpkg --add-architecture amd64
sudo apt install libc6:amd64
  • After this you should be able to run simple x86_64 binaries such as “cat” (see the sketch after this list).
    • scp over the “cat” program from your x86_64 Linux machine
    • it should run!
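
A quick sanity-check sketch (the host name and binary are placeholders):

scp user@x86-host:/bin/cat ./cat-amd64   # any small x86_64 binary will do
chmod +x cat-amd64
./cat-amd64 /etc/os-release              # qemu-user-static runs it via binfmt_misc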

Build and install cuttlefish-common package

  • Install the packages needed for the build
sudo apt install dpkg-dev
sudo apt install cdbs config-package-dev debhelper
  • Download and build
mkdir cuttlefish-common
cd cuttlefish-common
git clone https://github.com/google/android-cuttlefish.git
cd android-cuttlefish
dpkg-buildpackage --no-sign
  • Install
sudo apt install bridge-utils dnsmasq-base f2fs-tools libarchive-tools libfdt1 libwayland-client0 net-tools python2
sudo dpkg -i ../cuttlefish-common_0.9.13_arm64.deb

Setup cuttlefish for running

  • Copy (scp) the following files from the x86_64 PC host:
out/dist/aosp_cf_arm64_phone-img-eng.jsun.zip
out/dist/cvd-host_package.tar.gz
  • Untar cvd-host_package.tar.gz into the ~/cuttlefish/host directory
  • Unzip the zip file into the ~/cuttlefish/image directory
  • Add your user to the proper groups before running; power off the machine; then power it on again.
sudo usermod -aG kvm $USER
sudo usermod -aG cvdnetwork $USER
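
After powering back on, a quick sanity check that the group changes took effect:

groups            # should now include kvm and cvdnetwork
ls -l /dev/kvm    # verify the kvm group can read/write it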

Run Cuttlefish

You have 2 choices for viewing the screen of the emulated Android device:

  1. Use remote desktop and view the screen via a local VNC connection to the Cuttlefish emulator
  2. Use a web browser and view the screen via a remote WebRTC connection

View via Local VNC

  • Start a remote desktop viewer and connect to your EC2 a1.metal instance (note: you need to open port 3389)
  • The steps below run on the remote EC2 a1.metal instance via remote desktop
  • Start an LXTerminal and run the following commands to start cuttlefish
export ANDROID_PRODUCT_OUT=~/cuttlefish/image/
export ANDROID_HOST_OUT=~/cuttlefish/host/
export PATH=$PATH:$ANDROID_HOST_OUT/bin
launch_cvd -decompress_kernel=true
  • Start a browser and download a VNC viewer
    • Download the TightVNC viewer (jar file): https://www.tightvnc.com/download.php
    • Install Java if not done yet: sudo apt install openjdk-11-jre
  • Start a second LXTerminal and run this command: java -jar tightvnc-jviewer.jar
    • Use 127.0.0.1 as the IP address and 6444 as the port number
  • To stop the cvd, run stop_cvd

View via WebRTC

  • Note you need to open port 8443 on the EC2 a1.metal instance
  • On the EC2 instance run
launch_cvd -start_webrtc -webrtc_public_ip=`curl http://169.254.169.254/latest/meta-data/public-ipv4` -decompress_kernel=true
  • On your local PC, start a browser and connect via “https://<public IP>:8443”

As of this writing (April 5, 2020), the WebRTC method is not working yet.  It shows a black screen.

Install NumPy and Matplotlib on Chromebook/Linux(Beta)

Linux (Beta) on Chromebook is really cool. You can easily and officially turn it on by following one very simple step. Apparently there are also ways to enable audio capture and GPU acceleration as well.

I was playing with a Lenovo N23 Yoga, which is an AArch64 (ARM64) Chromebook powered by the MTK8173 chipset. It turned out to be an interesting experience. Specifically, I was able to do some simple Python development.

Step 1 – Enable Linux (beta)

Follow the instructions at this page.

Step 2 – Install debian packages

The Linux environment appears to be a container-type environment running Debian Linux. Python3 is already installed.

sudo apt update
sudo apt install python3-pip pkg-config libpng-dev libfreetype6-dev python3-tk

Step 3 – Install PIP3 packages

pip3 install cython
pip3 install numpy
pip3 install matplotlib

Step 4 – Try it out

Create the following Python file try.py:

import numpy
import matplotlib.pyplot as plt

x = numpy.random.normal(5.0, 1.0, 100000)

plt.hist(x, 100)
plt.show()

Then run it by typing “python3 try.py”. If everything goes well, you should see a histogram like the following:

Install WordPress on Ubuntu 18.04

As I said on my Home Page, I finally spent some time and migrated my ancient homepage to WordPress, the most popular web publishing platform (over 60% CMS market share).

However, quite contrary to my expectations, the process is rather complicated. I went through quite a few hiccups, taking more than a couple of days, to reach today’s state. So I decided to write my experience down, hopefully to save someone else the pain.

For the Impatient

  • Don’t follow the official instructions. It looks easy and simple, but the WordPress version it installs is too old and does not seem to be easily updatable.
  • Don’t use the WordPress docker container. It is a lot of trouble to get MySQL set up and connected, and you still end up with a limited environment where many plug-ins don’t work. In my case, I need to send emails (who doesn’t?), and that is VERY HARD to do from a container.
  • I suppose Bitnami WordPress on AWS might be a fine choice. But I find more flexibility in just doing it myself, and it is actually not that hard with good instructions.
  • I ended up installing WordPress directly on Ubuntu 18.04.
    • I install WordPress in a subdirectory of the WWW root (wordpress/). This way I can migrate my original web site (in classic HTML) to the WWW directory so that it co-exists with the new shiny WordPress pages.
    • I chose Apache2 and MySQL. You can use Nginx and MariaDB as well.

Installation

  • Install the required packages
apt update
apt install apache2 mysql-server php php-mysql
apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip
  • Configure MySQL (run these statements from a “sudo mysql” shell)
create database wordpress;
grant SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER on wordpress.* to 'wp_admin'@'localhost' identified by 'somepasswordofyourown';
  • install wordpress
cd /tmp && wget https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz -C /var/www/html/
mkdir /var/www/html/wordpress/uploads
chown -R www-data:www-data /var/www/html/wordpress/
  • Create the /etc/apache2/sites-available/wordpress.conf file:
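# NOTE: the "Order allow,deny" / "Allow from all" directives below are
# Apache 2.2 style; on Apache 2.4 they rely on mod_access_compat
# ("Require all granted" is the modern equivalent).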
<Directory /var/www/html/wordpress>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    DirectoryIndex index.php
    Order allow,deny
    Allow from all
</Directory>
<Directory /var/www/html/wordpress/wp-content>
    Options FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
  • Enable WordPress in apache2
sudo apache2ctl configtest      # test syntax
sudo a2ensite wordpress
sudo a2enmod rewrite
sudo systemctl restart apache2
  • Initialize WordPress
    • Open a browser and go to “http://<server ip/name>/wordpress”
    • Enter the site name, database name (“wordpress”), database user (“wp_admin”) and password.
    • You are done! Well, sort of. The rest is about WordPress itself, which is another topic.


How to Cross-Build Debian/MIPS Kernel

Background

I was trying to install Debian/MIPS on QEMU and found out that I needed to build/update the kernel.  The existing instructions mostly cover native builds, which are painfully slow on QEMU; I tried once and it took almost 10 hours!  There are some references on cross-building the Debian kernel, but none applies straightforwardly.  So I decided to write a blog on how I did it.

Prepare the host

  • My host is Ubuntu 18.04
  • I’m building a 64bit MIPS little-endian kernel for the Malta board
  • Install the build-related packages:
    • apt install -y build-essential linux-source bc kmod cpio flex libncurses5-dev bison libssl-dev
  • Install the cross-compile tools.  Fortunately, we only need gcc and binutils for compiling the kernel:
    • apt install -y binutils-mips64-linux-gnuabi64 gcc-mips64-linux-gnuabi64

Set up the source

  • Download and unpack the kernel source and config from the Debian site
    • cd download
    • wget http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-source-4.19_4.19.67-2+deb10u2_all.deb
    • wget http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb
  • Extract the kernel source from the .deb file
    • dpkg -x download/linux-source-4.19_4.19.67-2+deb10u2_all.deb .
    • tar xf usr/src/linux-source-4.19.tar.xz
  • Copy and modify the kernel config as you see fit
    • dpkg -x download/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb .
    • unxz usr/src/linux-config-4.19/config.mips64el_none_5kc-malta.xz
    • patch -p0 -b < kconfig.patch
    • cp usr/src/linux-config-4.19/config.mips64el_none_5kc-malta linux-source-4.19/.config
  • Make deb packages for the kernel
    • cd linux-source-4.19
    • make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- oldconfig
    • make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- KDEB_PKGVERSION=1 -j`nproc` bindeb-pkg

Download the source

The script file and kernel config patch below work together to execute the above steps automatically.
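
The original attachments are not reproduced here; a minimal sketch of such a script, chaining the steps above (the kconfig.patch step is omitted since the patch itself is not included), might look like:

#!/bin/sh
# Sketch: cross-build the Debian MIPS kernel per the steps above.
set -e
mkdir -p download
(cd download && \
 wget http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-source-4.19_4.19.67-2+deb10u2_all.deb && \
 wget http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb)
dpkg -x download/linux-source-4.19_4.19.67-2+deb10u2_all.deb .
tar xf usr/src/linux-source-4.19.tar.xz
dpkg -x download/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb .
unxz usr/src/linux-config-4.19/config.mips64el_none_5kc-malta.xz
cp usr/src/linux-config-4.19/config.mips64el_none_5kc-malta linux-source-4.19/.config
cd linux-source-4.19
make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- oldconfig
make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- KDEB_PKGVERSION=1 -j"$(nproc)" bindeb-pkg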

How to Build/Run Android Cuttlefish Emulator on PC/ARM64

Cuttlefish is a new virtual-machine-based Android emulator.  It uses virtio devices instead of the emulated devices of the original Android emulator.  As such, it needs lighter VM support (to the extent that it can run on an ARM64 host), unlike the Android Emulator, which requires a heavily modified QEMU to emulate various devices.  The virtio architecture can potentially offer better performance as well.

Refer to a slide deck on cuttlefish, or find a local copy of it here.

Many thanks to Alistair Delva from Google, who provided much technical guidance in going through this exercise.

How to Build/Run Cuttlefish on PC (X86_64)

My host is Ubuntu 18.04.  Refer to https://source.android.com/setup/build/building

Build and install cuttlefish-common package

  • Download and build:
git clone https://github.com/google/android-cuttlefish.git
cd android-cuttlefish
dpkg-buildpackage --no-sign
  • Install:
dpkg -i ../cuttlefish-common_0.9.9_amd64.deb
  • It requires dnsmasq-base and a few other packages; install them as requested
  • Check the status with /etc/init.d/cuttlefish-common status

Build cuttlefish

  • Check out the AOSP pie-gsi branch and build:
repo init -u https://android.googlesource.com/platform/manifest -b pie-gsi
repo sync -j8
source build/envsetup.sh
lunch aosp_cf_x86_64_phone-userdebug
make

Run cuttlefish

  • Add your user to the proper groups before running; power off the machine; then power it on again.  (Strange, rebooting did not seem to work somehow.)  You may need to run “sudo apt install qemu-kvm” if you get the error “group kvm doesn’t exist”.
sudo usermod -aG kvm $USER
sudo usermod -aG cvdnetwork $USER
  • Run: launch_cvd
  • Install TightVNC to view the phone
    • Download the TightVNC viewer (jar file): https://www.tightvnc.com/download.php
    • Install Java if not done yet: sudo apt install openjdk-11-jre
    • java -jar tightvnc-jviewer.jar  # use 127.0.0.1:6444
  • Run stop_cvd to kill the cvd

How to Build/Run Cuttlefish on ARM64

My ARM64 board is a rockpro64, running Ubuntu 18.04.

[on X86_64] Cross-build cuttlefish for ARM64

  • Similar to the above, except that we build a different target and make the distribution packages
lunch aosp_cf_arm64_phone-userdebug
make dist

Note the following output files, which need to be copied to the ARM64 host:

out/dist/aosp_cf_arm64_phone-img-eng.jsun.zip
out/dist/cvd-host_package.tar.gz

[on X86_64] Configure and build arm64 kernel

You will need CONFIG_BINFMT_MISC; otherwise the step below will fail.  Check for /proc/sys/fs/binfmt_misc to be sure.

In addition, you will need a few other kernel configs, which according to Alistair are only supported in kernels after 4.9.  Here is the set of configs I added to the rockpro64 default v5.2 kernel:

CONFIG_BINFMT_MISC=y
CONFIG_EVENTFD=y
CONFIG_VSOCKETS=y
CONFIG_VHOST_NET=m
CONFIG_VHOST_SCSI=m
CONFIG_VHOST_VSOCK=m
CONFIG_VHOST=m
CONFIG_VIRTIO_BLK_SCSI=m
CONFIG_VIRTIO_INPUT=m
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_VIRTIO_VSOCKETS=m

Specifically, here are the exact commands I used to build my rockpro64 kernel on my Ubuntu PC:

git clone https://github.com/ayufan-rock64/linux-mainline-kernel.git
cd linux-mainline-kernel/
git checkout -b 5.2.0-1116-ayufan-js 5.2.0-1116-ayufan
vi arch/arm64/configs/rockchip_linux_defconfig    # add the above configs to the end
vi dev.mk   # BUG? change HOSTCC=aarch64-linux-gnu-gcc to HOSTCC=gcc
./dev-make kernel-image-and-modules
./dev-make kernel-package

Copy the .deb package file over to the ARM64 host and install it with the “dpkg -i <pkg file>” command.  Reboot afterwards.

[on ARM64] Setup to run x86_64 binaries

Do the following as the root user:

apt install qemu-user-static
dpkg --add-architecture amd64
[You may need to correct the /etc/apt/sources.list file here.  See an example at this link]
apt install libc6:amd64

After this you should be able to run simple x86_64 binaries such as “cat”.  Give it a try.

[on ARM64] Build and install cuttlefish-common package

This is similar to the x86_64 case, except that you do this step on the ARM64 host.

The following packages are needed before you can install cuttlefish-common:

apt install bridge-utils libarchive-tools libfdt1 python iptables

[on ARM64] Setup and run cuttlefish

  • Copy the following files from the x86_64 PC host:
out/dist/aosp_cf_arm64_phone-img-eng.jsun.zip
out/dist/cvd-host_package.tar.gz
  • Untar and unzip them into 2 directories, say /home/jsun/work/cuttlefish/host and /home/jsun/work/cuttlefish/image
  • Add your user to the proper groups before running; power off the machine; then power it on again.  You may need to create the kvm group first and make sure /dev/kvm is readable/writable by the kvm group
sudo usermod -aG kvm $USER
sudo usermod -aG cvdnetwork $USER
  • Run the following commands to start cuttlefish
export ANDROID_PRODUCT_OUT=/home/jsun/work/cuttlefish/image/
export ANDROID_HOST_OUT=/home/jsun/work/cuttlefish/host/
export PATH=$PATH:$ANDROID_HOST_OUT/bin
launch_cvd -decompress_kernel=true
  • The rest is similar to the x86_64 case

Appendix – Install Ubuntu Desktop on ARM64

Many ARM64 Ubuntu distros are minimal or server images, which means no desktop is included.  It is easy to install one.  However, without a few key steps (see the first few commands below), you can easily get some headaches.

Below are the commands I used to install Xubuntu on the rockpro64, starting from their minimal Ubuntu 18.04 distro.

  1. Download the Ubuntu 18.04 minimal img for the SD card (refer to this page)
  2. Copy it to the SD card:
dd if=./bionic-minimal-rockpro64-0.8.3-1141-arm64.img of=/dev/sdb bs=4M
  3. Use parted or gparted to expand /dev/sdb7 to take over the whole SD card
  4. Boot up the rockpro64 and install the desktop:
sudo su
locale-gen
localectl set-locale LANG="en_US.UTF-8"
apt update
apt install -y xubuntu-desktop 

How to Trio-Boot Linux/Windows

Background

As a typical programmer, my working environment is Linux (compiling, building and debugging).  Meanwhile, my digital social life depends on Windows (Skype, games, etc.).  When you buy a new laptop, as I just did with a Lenovo T490S, a big question is how to make both ends happy.

Common solutions such as dual booting do not quite cut it, because the context-switching overhead is too big.  A VM solution like VMware Player or VirtualBox is another common choice, and it gets pretty close: you lose about 15% to 25% performance in the Linux VM, but you can access both systems at the same time.

I like to push the envelope a little bit further, so I implemented something I call trio-boot:

    • You can dual-boot into either the Linux or the Windows host
    • Once you are in the Windows host OS, you can also use VMware Player to boot the *same* Linux as a guest OS

This gives the convenience of the VM solution, while also preserving the performance and compatibility of a native Linux host when needed.  Note that whether you boot natively or start from VMware Player, it is the same Linux environment, which is very important.

Steps At-a-Glance

  • Prepare Windows to leave disk space for Linux partition
  • Install Linux (e.g., Ubuntu 18.04) to the spare space and do dual booting
  • Boot into Windows and install VMWare Player 15
  • Create a 64bit Linux VM that includes a virtual disk (/dev/sda) and the native Linux partition on the physical disk

Prepare Windows

  • Start “Create and format disk partitions” from “Control Panel”
  • Shrink the main windows partition to leave sufficient space for Linux
    • If the PC has been used for a while, you may find some non-movable files that prevent you from shrinking by a sufficient amount.  Google around for solutions.

Install Ubuntu 18.04

  • Download 64bit Ubuntu ISO image (https://ubuntu.com/download/desktop )
  • Use Rufus to burn an Ubuntu 18.04 USB disk (16+GB) (https://rufus.ie/ )
  • Boot PC with Ubuntu 18.04 USB disk
  • Choose “Install Ubuntu” option when prompted
  • At “Install Type” page, choose “Something else” (See https://i.stack.imgur.com/KURnS.png)
  • Create a new partition with the ext4 fs type, mounted as root “/”
    • In my case it created a new partition called “nvme0n1p5”
  • Continue installation as usual

At the end you should be able to dual-boot Windows and Ubuntu.

Start Ubuntu in VMware Player

The key to this step is to add 2 disks to the VM: 1 virtual disk, which will hold the MBR and serve as the boot disk, and 1 physical partition that holds the Linux installation.  A theoretically simpler solution is to give the VM both the EFI partition and the Linux partition and skip the virtual disk.  However, that approach does not always work.

  • Boot into Windows and install VMware Player
  • Create a 64bit Linux VM with a 0.1GB virtual disk
    • Add a second hard disk, which is the physical disk, and select only the Linux partition
    • I chose the NVMe disk type to match my physical disk type, but that probably does not matter
  • Boot the VM with the Ubuntu CD image and the “Try Ubuntu” option
  • Change root to the native Linux partition
    • mount /dev/nvme0n1p5 /mnt
    • mount -B /sys /mnt/sys
    • mount -B /proc /mnt/proc
    • mount -B /dev /mnt/dev
    • chroot /mnt
    • fdisk -l   # it should show /dev/sda as well as the nvme0n1 disk partitions
  • Remove the auto-mount of the EFI partition (since it is not available in the VM environment)
    • vi /etc/fstab
    • Add “#” to the line that contains “/boot/efi”
  • Prepare /dev/sda
    • fdisk /dev/sda
    • Create a new primary partition with default values, i.e., a Linux partition that takes over the whole disk
      • Note: by default fdisk will initialize the partition table as a DOS-type disk with MBR boot.  Keep this setting.
  • Install grub on /dev/sda: grub-install /dev/sda
  • Power off the VM; disconnect the CD-ROM; restart the VM, and choose Ubuntu as the startup OS
  • (optional) Install VM tools to enable copy’n’paste, desktop resizing, etc.
    • sudo apt install open-vm-tools open-vm-tools-desktop
  • (optional) Mount hgfs automatically on every boot
    • Create the /etc/rc.local file with the following code
#!/bin/bash
# mount the VMware shared folders (create the mount point first if needed)
mkdir -p /mnt/hgfs
vmhgfs-fuse .host:/ /mnt/hgfs/ -o allow_other -o uid=1000
    • Add the executable attribute: “sudo chmod +x /etc/rc.local”
    • Enable /etc/rc.local: “sudo systemctl enable rc-local”
  • Restart the VM

TIPS

  • Always shut down the VM completely before rebooting into Linux natively.  Otherwise you will be left with an inconsistent hard disk state.

Additional notes on 9/22, 2019

In my original setup, the computer boots into grub2, where I can choose to start Windows or Ubuntu.  Quite a few times, for some unknown reason, the computer booted straight into Ubuntu, which can cause disasters if VMware Player under Windows is running the same Ubuntu image and Windows went into hibernation.

To overcome this, I finally settled on the following:

  • Enter the BIOS and set the Windows boot manager as the default startup OS.
    • I need to press ENTER + F12 on my Dell machine in order to boot Ubuntu natively
  • In the Windows VMware Player, I edited grub.cfg and removed Windows, so that VMware Player always boots Ubuntu