Develop 32bit Applications on 64bit Linux Machines

Overview

Nowadays almost all PCs and servers are 64bit, but once in a while one may need to develop and run 32bit apps. In this post we will go through several methods, for both x86_64 and ARM64 machines.

I can think of 4 ways of doing this:

  • Run a virtual machine, which in turn runs a 32bit OS. (Not covered in this post.)
  • Set up a 32bit chroot environment. (Recommended)
  • Set up a 32bit secondary architecture with Debian multi-arch support. (Recommended)
  • Use the gcc multi-lib feature to build 32bit binaries.

X86_64/Ubuntu 20.04

Here we go through detailed steps for these approaches on X86_64/Ubuntu 20.04. The instructions should apply to other Debian-based distributions and versions as well.

GCC Multi-Lib Approach

Although this is the simplest approach, I don’t recommend it because GCC support varies across CPU architectures and the command-line options vary as well, making it hard to generalize and port existing software. I believe multi-arch is the future direction.

  • Install GCC multi-lib package
sudo apt install gcc-multilib
  • Compile with the -m32 option (a sample hello.c appears after this list)
gcc -m32 hello.c -o hello
file hello
./hello
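
The hello.c used throughout this post is never shown; any trivial C program will do. For example, you could create a minimal one like this:

cat > hello.c <<'EOF'
#include <stdio.h>

int main(void)
{
    printf("hello, 32bit world\n");
    return 0;
}
EOF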

Multi-Arch Approach

  • Add 32bit as secondary architecture
sudo dpkg --add-architecture i386
  • Install dev tools and 32bit specific dev tools and libraries
sudo apt update
sudo apt install -y build-essential
sudo apt install -y crossbuild-essential-i386
sudo apt install -y libc6:i386
sudo apt install -y libstdc++6:i386
  • Develop 32bit apps with the i686-linux-gnu- cross-compile prefix (a library-linking example appears after this list)
i686-linux-gnu-gcc hello.c -o hello
file hello
./hello
  • Note: on Ubuntu 18.04, the crossbuild-essential-i386 meta-package does not exist. One needs to install the packages below explicitly
sudo apt install -y dpkg-cross g++-i686-linux-gnu gcc-i686-linux-gnu
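
A nice property of multi-arch is that ordinary 32bit library packages can be installed alongside their 64bit counterparts and linked against directly. Here is a hedged example, assuming zlib is representative and zlib1g-dev:i386 is co-installable on your release:

sudo apt install -y zlib1g-dev:i386
i686-linux-gnu-gcc hello.c -o hello -lz   # link against the i386 zlib
file hello                                # should report a 32-bit Intel 80386 executable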

Chroot Approach

For more detailed descriptions, please refer to http://logan.tw/posts/2018/02/24/manage-chroot-environments-with-schroot/

  • Install packages
sudo apt-get install schroot debootstrap
  • Install a 32bit environment (e.g., Ubuntu 20.04) (note: the last URL argument can be omitted)
sudo debootstrap --arch=i386 focal /var/chroot/focal32 http://archive.ubuntu.com/ubuntu
  • Edit /etc/schroot/schroot.conf file
[focal32]
description=Ubuntu 20.04 Focal Fossa 32bit
directory=/var/chroot/focal32
root-users=ubuntu,jsun
users=ubuntu,jsun
type=directory
personality=linux32
  • Create and enter 32bit chroot session
schroot -c focal32          # enter as a normal user
schroot -c focal32 -u root  # enter as the root user
  • From inside the session, one can install the build-essential meta-package and develop 32bit apps as if on a 32bit machine (see the session sketch after this list).
  • (Optional) One can even run 32bit graphic apps from inside the chroot environment.
export DISPLAY=:0.0
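
Putting it together, a typical first session might look like the sketch below (package installation inside the chroot mirrors what you would do on a real 32bit machine):

schroot -c focal32 -u root   # enter as root once to install tools
apt update && apt install -y build-essential
exit
schroot -c focal32           # develop as a normal user afterwards
gcc hello.c -o hello
file hello                   # reports a 32-bit i386 executable
exit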

ARM64/Ubuntu 18.04

ARM64 is also called aarch64. “ARM64” originated in the Linux community, while “aarch64” is the official name from ARM. We will use the two terms interchangeably.

Multi-Lib Approach

GCC for aarch64 does not support multi-lib, so this option is not available.

Multi-Arch Approach

It is similar to the i386 case, except that the architecture is “armhf” and the 32bit cross-compile prefix is “arm-linux-gnueabihf-”.

sudo dpkg --add-architecture armhf
sudo apt update
sudo apt install -y build-essential
sudo apt install -y crossbuild-essential-armhf
sudo apt install -y libc6:armhf
sudo apt install -y libstdc++6:armhf

arm-linux-gnueabihf-gcc hello.c -o hello
file hello
./hello

Chroot Approach

The chroot approach is similar to the i386 case, with 2 exceptions:

  • The architecture name is “armhf” instead of “i386”
  • The repo URL is “http://ports.ubuntu.com/ubuntu-ports”, instead of “http://archive.ubuntu.com/ubuntu”

Note that we can also set up the root filesystem from Debian repos. Below is an example setting up Debian sid for the armhf 32bit architecture:

  • Install packages and bootstrap root filesystem
sudo apt-get install schroot debootstrap
sudo debootstrap --arch=armhf sid /var/chroot/sid-armhf http://ftp.debian.org/debian/
  • Add the following to the /etc/schroot/schroot.conf file
[sid32]
description=Debian Sid 32bit
directory=/var/chroot/sid-armhf
root-users=ubuntu
users=ubuntu
type=directory
personality=linux32
  • Create and enter a sid32 session: “schroot -c sid32” (a verification sketch follows)
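
Once inside, a quick way to confirm you are really in a 32bit armhf environment (a sketch; the exact uname output depends on the kernel):

schroot -c sid32
dpkg --print-architecture   # should print: armhf
uname -m                    # the linux32 personality typically reports armv7l or armv8l
exit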

About MIPS64

  • Multilib works for MIPS64.
    • sudo apt install gcc-multilib g++-multilib
    • compile with -mabi=32 flags, i.e., “gcc -mabi=32 hello.c”
  • Multi-arch does not work for MIPS64, at least from a 32bit app development perspective
  • I have not tried chroot approach on MIPS64 yet. Let me know if you have had any success.

Last Words

  • In an Ubuntu chroot environment, the initial apt repo source is limited to main, so you may find many packages missing. To fix this, edit /etc/apt/sources.list, using the release name that matches your chroot, e.g., for focal:
deb http://archive.ubuntu.com/ubuntu focal main universe multiverse

How to build/run Android Cuttlefish emulator on AWS

Background

Cuttlefish is a new virtual-machine based Android emulator. Earlier I wrote an article on how to build/run it on PC and ARM64 machines. Towards the end of 2019, AWS introduced the a1.metal instance, which allows KVM to run on their ARM64 machines. This opens the possibility of running Cuttlefish on AWS (which itself opens a lot of possibilities!)

The steps are similar to those I mentioned before. This article summarizes them specifically for the AWS a1.metal instance, running Ubuntu 19.10.

Build AOSP cuttlefish images on x86_64

This first step is done on a PC, while all the remaining steps are done on the AWS a1.metal instance.

  • Refer to https://source.android.com/setup/build/building to set up your host machine
  • Check out the source from master and build the distribution packages, which will be transferred to the a1.metal instance later. This step takes a very long time.
mkdir cuttlefish
cd cuttlefish
repo init -u https://android.googlesource.com/platform/manifest
repo sync -j8
source build/envsetup.sh
lunch aosp_cf_arm64_phone-userdebug
make dist

If you want to check out a branch and download as little as possible, the following commands do so with the android10-gsi branch, without git history.

repo init --depth=1 -u https://android.googlesource.com/platform/manifest -b android10-gsi
repo sync -f --force-sync --no-clone-bundle --no-tags -j$(nproc)

Setup AWS a1.metal

Start an Ubuntu 19.10 instance on a1.metal. First, we install a GUI for easier debugging and viewing.

sudo apt-get update -y 
sudo apt-get install lxde xrdp -y
sudo passwd ubuntu

Setup to run x86_64 binaries

Many tools (e.g., crosvm) in cuttlefish are still built as x86_64 binaries, not as arm64. The current solution is to use qemu-user to run those binaries (*ouch!*). As such, we set up x86_64 (amd64) as a secondary architecture on a1.metal.

  • Override /etc/apt/sources.list file with the following content
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan-updates main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan-backports main restricted universe multiverse
deb [arch=arm64,armhf] http://ports.ubuntu.com/ubuntu-ports eoan-security main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan-updates main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan-backports main restricted universe multiverse
deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ eoan-security main restricted universe multiverse
  • Install qemu-user-static and add amd64 secondary architecture
sudo apt install qemu-user-static
sudo dpkg --add-architecture amd64
sudo apt install libc6:amd64
  • After this you should be able to run simple x86_64 binaries such as “cat”.
    • scp over the “cat” program from your x86_64 Linux machine
    • it should run! (see the sketch after this list)
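
For example (a sketch; the host name is a placeholder, and a dynamically linked “cat” works here because libc6:amd64 was installed above):

scp user@my-x86-64-host:/bin/cat ./cat-amd64
file cat-amd64              # should report an x86-64 ELF binary
./cat-amd64 /etc/hostname   # binfmt_misc + qemu-user-static run it transparently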

Build and install cuttlefish-common package

  • Install packages needed for the build
sudo apt install dpkg-dev
sudo apt install cdbs config-package-dev debhelper
  • Download and build
mkdir cuttlefish-common
cd cuttlefish-common
git clone https://github.com/google/android-cuttlefish.git
cd android-cuttlefish
dpkg-buildpackage --no-sign
  • Install
sudo apt install bridge-utils dnsmasq-base f2fs-tools libarchive-tools libfdt1 libwayland-client0 net-tools python2
sudo dpkg -i ../cuttlefish-common_0.9.13_arm64.deb

Setup cuttlefish for running

  • Copy (scp) the following files from the x86_64 PC host:
out/dist/aosp_cf_arm64_phone-img-eng.jsun.zip
out/dist/cvd-host_package.tar.gz
  • untar cvd-host_package.tar.gz into the ~/cuttlefish/host directory
  • unzip the zip file into the ~/cuttlefish/image directory (the exact commands are spelled out after this list)
  • Add your user to the proper groups before running; power off the machine; then power on again.
sudo usermod -aG kvm $USER
sudo usermod -aG cvdnetwork $USER
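
Spelled out, the unpacking steps above might look like this (the zip file name depends on your build user):

mkdir -p ~/cuttlefish/host ~/cuttlefish/image
tar xzf cvd-host_package.tar.gz -C ~/cuttlefish/host
unzip aosp_cf_arm64_phone-img-eng.jsun.zip -d ~/cuttlefish/image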

Run Cuttlefish

You have 2 choices to view the screen of emulated Android device:

  1. Use remote desktop and view screen via local VNC connection to Cuttlefish emulator
  2. Use a web browser and view the screen via remote WebRTC connection

View via Local VNC

  • Start a remote desktop viewer and connect to your EC2 a1.metal instance (Note: you need to open port 3389)
  • The steps below run on the remote EC2 a1.metal instance via remote desktop
  • Start a LXTerminal and run the following commands to start cuttlefish
export ANDROID_PRODUCT_OUT=~/cuttlefish/image/
export ANDROID_HOST_OUT=~/cuttlefish/host/
export PATH=$PATH:$ANDROID_HOST_OUT/bin
launch_cvd -decompress_kernel=true
  • Start browser and download VNC viewer
    • download tightvnc viewer (jar file): https://www.tightvnc.com/download.php
    • install java if not done yet: sudo apt install openjdk-11-jre
  • Start a second LXTerminal and run: java -jar tightvnc-jviewer.jar
    • Use 127.0.0.1 as the IP address and 6444 as the port number
  • To stop cvd, run stop_cvd

View via WebRTC

  • Note: you need to open port 8443 on the EC2 a1.metal instance
  • On EC2 instance run
launch_cvd -start_webrtc -webrtc_public_ip=`curl http://169.254.169.254/latest/meta-data/public-ipv4` -decompress_kernel=true
  • On your local PC, start a browser and connect via “https://<public IP>:8443”

As of this writing (April 5, 2020), the WebRTC method is not working yet. It shows a black screen.

Install NumPy and Matplotlib on Chromebook/Linux(Beta)

Linux (Beta) on Chromebook is really cool. You can easily and officially turn it on by following a few simple steps. Apparently there are also ways to enable audio capture and GPU acceleration.

I was playing with a Lenovo N23 Yoga, which is an AArch64 (ARM64) Chromebook powered by the MTK8173 chipset. It turned out to be an interesting experience. Specifically, I was able to do some simple Python development.

Step 1 – Enable Linux (beta)

Follow the instructions at this page.

Step 2 – Install debian packages

The Linux environment seems to be a container-type environment, running Debian Linux. Python3 is already installed.

sudo apt update
sudo apt install python3-pip pkg-config libpng-dev libfreetype6-dev python3-tk

Step 3 – Install PIP3 packages

pip3 install cython
pip3 install numpy
pip3 install matplotlib

Step 4 – Try it out

Create the following Python file try.py:

import numpy
import matplotlib.pyplot as plt

x = numpy.random.normal(5.0, 1.0, 100000)

plt.hist(x, 100)
plt.show()

Then run it by typing “python3 try.py”. If everything goes well, you should see a histogram of the normal-distribution samples.

Install WordPress on Ubuntu 18.04

As I said on my Home Page, I finally spent some time and migrated my ancient homepage to WordPress, the most popular web publishing platform (over 60% of the CMS market).

However, contrary to my expectations, the process is rather complicated. I went through quite a few hiccups, taking more than a couple of days, to reach today’s state. So I decided to write my experience down, hopefully saving someone else the pain.

For the Impatient

  • Don’t follow the official instructions. It looks easy and simple, but the WordPress you get is too old and does not seem to be easily updatable.
  • Don’t use the WordPress docker container. It is a lot of trouble to get MySQL set up and connected, and you still end up with a limited environment in which many plug-ins don’t work. In my case, I need to send emails (who doesn’t?), and that is VERY HARD to do from a container.
  • I suppose Bitnami WordPress on AWS might be a fine choice, but I find more flexibility in just doing it myself, and it is actually not that hard with good instructions.
  • I ended up installing WordPress directly on Ubuntu 18.04
    • I installed WordPress in a subdirectory of the WWW root (wordpress/). This way I can migrate my original web site (in classic HTML) into the WWW directory and have it co-exist with the shiny new WordPress pages.
    • I chose Apache2 and MySQL. You can use Nginx and MariaDB as well.

Installation

  • install required packages
apt update
apt install apache2 mysql-server php php-mysql
apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip
  • configure mysql: open a client with “sudo mysql”, then run the statements below (a quick credential check appears at the end of this section)
create database wordpress;
grant SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER on wordpress.* to 'wp_admin'@'localhost' identified by 'somepasswordofyourown';
  • install wordpress
cd /tmp && wget https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz -C /var/www/html/
mkdir /var/www/html/wordpress/uploads
chown -R www-data:www-data /var/www/html/wordpress/
  • create /etc/apache2/sites-available/wordpress.conf file:
<Directory /var/www/html/wordpress>
    Options FollowSymLinks
    AllowOverride Limit Options FileInfo
    DirectoryIndex index.php
    Order allow,deny
    Allow from all
</Directory>
<Directory /var/www/html/wordpress/wp-content>
    Options FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
  • enable WordPress in apache2
sudo apache2ctl configtest      # test syntax
sudo a2ensite wordpress
sudo a2enmod rewrite
sudo systemctl restart apache2
  • Initialize WordPress
    • Open a browser and go to “http://<server ip/name>/wordpress”
    • Enter site name, database name (“wordpress”), database user (“wp_admin”) and password.
    • You are done! Well, sort of. The rest is about WordPress itself, which is another topic.
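
As a quick sanity check of the database step above, you can log in with the WordPress credentials (the password below is the placeholder from the grant statement):

mysql -u wp_admin -p'somepasswordofyourown' wordpress -e 'SHOW TABLES;'
# an empty result is fine before initialization; an access-denied error is not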

How to Cross-Build Debian/MIPS Kernel

Background

I was trying to install Debian/MIPS on QEMU and found out that I needed to build/update the kernel. The existing instructions mostly cover native builds, which are painfully slow on QEMU; I tried once and it took almost 10 hours! There are some references on cross-building the Debian kernel, but none applies straightforwardly. So I decided to write up how I did it.

Prepare the host

  • My host is Ubuntu 18.04
  • I’m building 64bit MIPS little endian kernel for malta board
  • Install build-related packages:
    • apt install -y build-essential linux-source bc kmod cpio flex libncurses5-dev bison libssl-dev
  • Install cross-compile tools. Fortunately we only need gcc and binutils for compiling the kernel:
    • apt install -y binutils-mips64-linux-gnuabi64 gcc-mips64-linux-gnuabi64

Set up the source

  • Download and unpack kernel source and config from debian site
    • mkdir download && cd download
    • wget http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-source-4.19_4.19.67-2+deb10u2_all.deb
    • wget http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb
  • Extract the kernel source from .deb file
    • cd .. && dpkg -x download/linux-source-4.19_4.19.67-2+deb10u2_all.deb .
    • tar xf usr/src/linux-source-4.19.tar.xz
  • Copy and modify the kernel config as you see fit
    • dpkg -x download/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb .
    • unxz usr/src/linux-config-4.19/config.mips64el_none_5kc-malta.xz
    • patch -p0 -b < kconfig.patch
    • cp usr/src/linux-config-4.19/config.mips64el_none_5kc-malta linux-source-4.19/.config
  • Make deb packages for kernel
    • cd linux-source-4.19
    • make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- oldconfig
    • make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- KDEB_PKGVERSION=1 -j`nproc` bindeb-pkg

Download the source

The script file and kernel config patch below work together to execute the above steps automatically.
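
As a sketch, a script reconstructing the manual steps above might look like the following (the version strings are the ones used above, and kconfig.patch is the patch referenced earlier, not reproduced here):

#!/bin/bash
# cross-build-debian-mips-kernel.sh - automates the steps above
set -e

mkdir -p download && cd download
wget -nc http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-source-4.19_4.19.67-2+deb10u2_all.deb
wget -nc http://security.debian.org/debian-security/pool/updates/main/l/linux/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb
cd ..

# unpack the kernel source
dpkg -x download/linux-source-4.19_4.19.67-2+deb10u2_all.deb .
tar xf usr/src/linux-source-4.19.tar.xz

# unpack, patch and copy the kernel config
dpkg -x download/linux-config-4.19_4.19.67-2+deb10u2_mips64el.deb .
unxz -f usr/src/linux-config-4.19/config.mips64el_none_5kc-malta.xz
patch -p0 -b < kconfig.patch
cp usr/src/linux-config-4.19/config.mips64el_none_5kc-malta linux-source-4.19/.config

# cross-build the Debian kernel packages
cd linux-source-4.19
make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- oldconfig
make ARCH=mips CROSS_COMPILE=mips64-linux-gnuabi64- KDEB_PKGVERSION=1 -j`nproc` bindeb-pkg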

How to Build/Run Android Cuttlefish Emulator on PC/ARM64

Cuttlefish is a new virtual-machine based Android emulator. It uses virtio devices instead of the emulated devices found in the original Android Emulator. As such, it needs lighter VM support (to the extent that it can run on an ARM64 host), unlike the Android Emulator, which requires a heavily modified QEMU to emulate various devices. The virtio architecture can potentially offer better performance as well.

Refer to a slide deck on cuttlefish, or find a local copy of it here.

Many thanks to Alistair Delva from Google, who provided much technical guidance in going through this exercise.

How to Build/Run Cuttlefish on PC (X86_64)

My host is Ubuntu 18.04. Refer to https://source.android.com/setup/build/building

Build and install cuttlefish-common package

  • Download and build:
git clone https://github.com/google/android-cuttlefish.git
cd android-cuttlefish
dpkg-buildpackage --no-sign
  • Install:
dpkg -i ../cuttlefish-common_0.9.9_amd64.deb
  • It requires dnsmasq-base and a few other packages; install them as requested
  • Check status with /etc/init.d/cuttlefish-common status

Build cuttlefish

  • Check out the AOSP pie-gsi branch and build:
repo init -u https://android.googlesource.com/platform/manifest -b pie-gsi
repo sync -j 8
source build/envsetup.sh
lunch aosp_cf_x86_64_phone-userdebug
make

Run cuttlefish

  • Add your user to the proper groups before running; power off the machine; then power on again. (Strange, rebooting did not seem to work somehow.) You may need to run “sudo apt install qemu-kvm” if you get the error “group kvm doesn’t exist”.
sudo usermod -aG kvm $USER
sudo usermod -aG cvdnetwork $USER
  • Run: launch_cvd
  • Install tightvnc to view phone
    • download tightvnc viewer (jar file): https://www.tightvnc.com/download.php
    • install java if not done yet:  sudo apt install openjdk-11-jre
    • java -jar tightvnc-jviewer.jar  #use 127.0.0.1:6444
  • Run stop_cvd to kill the cvd

How to Build/Run Cuttlefish on ARM64

My ARM64 board is a rockpro64, running Ubuntu 18.04.

[on X86_64] Cross-build cuttlefish for ARM64

  • Similar to the above, except build with a different target and with distribution packages
lunch aosp_cf_arm64_phone-userdebug
make dist

Note the following output files, which need to be copied to the ARM64 host:

out/dist/aosp_cf_arm64_phone-img-eng.jsun.zip
out/dist/cvd-host_package.tar.gz

[on X86_64] Configure and build arm64 kernel

You will need CONFIG_BINFMT_MISC; otherwise the step below will fail. Check /proc/sys/fs/binfmt_misc to be sure.
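
A quick check (assuming a standard Ubuntu kernel config is present under /boot):

grep BINFMT_MISC /boot/config-$(uname -r)   # expect CONFIG_BINFMT_MISC=y
ls /proc/sys/fs/binfmt_misc/                # should list at least "register" and "status"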

In addition, you will need a few other kernel configs, which according to Alistair are only supported in kernels after 4.9. Here is the set of configs I added to the default rockpro64 v5.2 kernel.

CONFIG_BINFMT_MISC=y
CONFIG_EVENTFD=y
CONFIG_VSOCKETS=y
CONFIG_VHOST_NET=m
CONFIG_VHOST_SCSI=m
CONFIG_VHOST_VSOCK=m
CONFIG_VHOST=m
CONFIG_VIRTIO_BLK_SCSI=m
CONFIG_VIRTIO_INPUT=m
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_VIRTIO_VSOCKETS=m

Specifically, here are the exact commands I used to build my rockpro64 kernel on my Ubuntu PC:

git clone https://github.com/ayufan-rock64/linux-mainline-kernel.git
cd linux-mainline-kernel/
git checkout -b 5.2.0-1116-ayufan-js 5.2.0-1116-ayufan
vi arch/arm64/configs/rockchip_linux_defconfig    # add the above configs to the end
vi dev.mk   # BUG? change HOSTCC=aarch64-linux-gnu-gcc to HOSTCC=gcc
./dev-make kernel-image-and-modules
./dev-make kernel-package

Copy the .deb package file to the arm64 host and install it with the “dpkg -i <pkg file>” command, then reboot.
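
For example (a sketch; the host name and package file name are placeholders):

scp <pkg file> ubuntu@rockpro64:
ssh ubuntu@rockpro64 "sudo dpkg -i <pkg file> && sudo reboot"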

[on ARM64] Setup to run x86_64 binaries

Do the following as the root user:

apt install qemu-user-static
dpkg --add-architecture amd64
[You may need to correct the /etc/apt/sources.list file here. See an example at this link]
apt install libc6:amd64

After this you should be able to run simple x86_64 binaries such as “cat”.  Give it a try.

[on ARM64] Build and install cuttlefish-common package

This is similar to the x86_64 case, except that you do this step on the ARM64 host.

The following packages are needed before you can install cuttlefish-common:

apt install bridge-utils libarchive-tools libfdt1 python iptables

[on ARM64] Setup and run cuttlefish

  • Copy the following files from the x86_64 PC host:
out/dist/aosp_cf_arm64_phone-img-eng.jsun.zip
out/dist/cvd-host_package.tar.gz
  • untar and unzip them into 2 directories, say /home/jsun/work/cuttlefish/host and /home/jsun/work/cuttlefish/image
  • Add your user to the proper groups before running; power off the machine; then power on again. You may need to create the kvm group first and make sure /dev/kvm is readable/writable by the kvm group
sudo usermod -aG kvm $USER
sudo usermod -aG cvdnetwork $USER
  • Run the following commands to start cuttlefish
export ANDROID_PRODUCT_OUT=/home/jsun/work/cuttlefish/image/
export ANDROID_HOST_OUT=/home/jsun/work/cuttlefish/host/
export PATH=$PATH:$ANDROID_HOST_OUT/bin
launch_cvd -decompress_kernel=true
  • The rest is similar to the x86_64 case

Appendix – Install Ubuntu Desktop on ARM64

Many ARM64 Ubuntu distros are minimal or server images, which means no desktop is included. It is easy to install one; however, without a few key steps (see the first few commands below), you can easily run into headaches.

Below are the commands I used to install Xubuntu on a rockpro64, starting from their minimal Ubuntu 18.04 distro.

  1. Download the Ubuntu 18.04 minimal img for SD card (refer to this page)
  2. Copy it to the SD card
dd if=./bionic-minimal-rockpro64-0.8.3-1141-arm64.img of=/dev/sdb bs=4M
  3. Use parted or gparted to expand /dev/sdb7 to take over the whole SD space
  4. Boot up the rockpro64:
sudo su
locale-gen
localectl set-locale LANG="en_US.UTF-8"
apt update
apt install -y xubuntu-desktop 

How to Trio-Boot Linux/Windows

Background

As a typical programmer, I work in Linux (compiling, building and debugging), while my digital social life depends on Windows (Skype, games, etc.). When you buy a new laptop, as I just did with a Lenovo T490S, the big question is how to make both ends happy.

A common solution such as dual booting does not quite cut it because the context-switching overhead is too big. A VM solution like VMware Player or VirtualBox is another common choice, which gets pretty close: you lose about 15% to 25% performance in the Linux VM, but you can access both at the same time.

I wanted to push the envelope a little bit further and implemented what I call trio-boot:

    • You can dual-boot into either Linux or Windows host
    • Once you are in the Windows host OS, you can also use VMware Player to boot the *same* Linux as a guest OS

This gives the convenience of the VM solution while preserving the performance and compatibility of a native Linux host when needed. Note that whether you boot natively or start from VMware Player, it is the same Linux environment, which is very important.

Steps At-a-Glance

  • Prepare Windows to leave disk space for Linux partition
  • Install Linux (e.g., Ubuntu 18.04) to the spare space and do dual booting
  • Boot into Windows and install VMWare Player 15
  • Create a 64bit Linux VM that includes a virtual disk (/dev/sda) and native Linux partition on the physical disk

Prepare Windows

  • Start “Create and format disk partitions” from “Control Panel”
  • Shrink the main windows partition to leave sufficient space for Linux
    • If the PC has been used for a while, you may find some non-movable files that prevent you from shrinking by a sufficient amount. Google around for solutions.

Install Ubuntu 18.04

  • Download 64bit Ubuntu ISO image (https://ubuntu.com/download/desktop )
  • Use rufus to burn a Ubuntu 18.04 USB disk (16+GB) (https://rufus.ie/ )
  • Boot PC with Ubuntu 18.04 USB disk
  • Choose “Install Ubuntu” option when prompted
  • At “Install Type” page, choose “Something else” (See https://i.stack.imgur.com/KURnS.png)
  • Create a new partition with the ext4 fs type, mounted as root “/”
    • In my case it created a new partition called “nvme0n1p5”
  • Continue installation as usual

At the end you should be able to dual-boot Windows and Ubuntu

Start Ubuntu in VMware Player

The key to this step is to add 2 disks to the VM: 1 virtual disk, which will hold the MBR and serve as the boot disk, and 1 physical partition that holds the Linux already installed. A theoretically simpler solution is to add both the EFI partition and the Linux partition to the VM and skip the virtual disk; however, this approach does not always work.

  • Boot into Windows and install VMware Player
  • Create 64bit Linux VM with 0.1GB virtual disk
    • add a second hard disk, which is the physical disk; select only the Linux partition
    • I chose the NVMe disk type to match my physical disk type, but that probably does not matter
  • Boot VM with Ubuntu CD image with “Try Ubuntu” option
  • Change root to native Linux partition
    • mount /dev/nvme0n1p5 /mnt
    • mount -B /sys /mnt/sys
    • mount -B /proc /mnt/proc
    • mount -B /dev /mnt/dev
    • chroot /mnt
    • fdisk -l   # it should show /dev/sda as well as the nvme0n1 disk partitions
  • remove the auto-mount of the EFI partition (since it is not available in the Linux VM environment)
    • vi /etc/fstab
    • add “#” to the line that contains “/boot/efi”
  • prepare /dev/sda
    • fdisk /dev/sda
    • create a new primary partition with default values, i.e., a linux partition that takes over the whole disk
      • Note by default it will initialize the disk partition table as a DOS-type disk with MBR boot.  Keep this setting.
  • Install grub on /dev/sda, grub-install /dev/sda
  • power off VM; disconnect CD-ROM; restart VM, and choose Ubuntu as startup OS
  • (optional) install VM tools to enable copy/paste, desktop resizing, etc.
    • sudo apt install open-vm-tools open-vm-tools-desktop
  • (optional) Mount hgfs automatically on every boot
    • create /etc/rc.local file with the following code
#!/bin/bash
mkdir -p /mnt/hgfs   # make sure the mount point exists
vmhgfs-fuse .host:/ /mnt/hgfs/ -o allow_other -o uid=1000
    • add executable attribute, “sudo chmod +x /etc/rc.local”
    • enable /etc/rc.local, “sudo systemctl enable rc-local”
  • restart VM

TIPS

  • Always shut down the VM completely before rebooting into Linux natively. Otherwise you will be left with an inconsistent hard disk state.

Additional notes on 9/22, 2019

In my original setup, the computer boots into GRUB2, where I can choose to start Windows or Ubuntu. Quite a few times, for some unknown reason, the computer booted straight into Ubuntu, which can cause disasters if VMware Player in Windows is using the same Ubuntu image and Windows went into hibernation.

To overcome this, I finally settled on the following:

  • Enter BIOS and set windows boot manager as the default startup OS.
    • I need to press ENTER + F12 on my Dell machine in order to boot Ubuntu natively
  • In the Windows VMware Player, I edit grub.cfg and remove Windows so that VMware Player always boots Ubuntu

Build Vyatta AMI

Overview

The following are the major steps involved:

  1. Build 64bit Vyatta ISO image
  2. Install the ISO image to a virtual machine and obtain the root FS image
  3. Modify the above root fs image to adapt to AWS EC2 environment (dark magic, haha)
  4. Create EBS-based AMI from the above modified root fs image

Build 64bit Vyatta ISO image

My starting points are two places:

  1. The README file from the vyatta repository. It gives very good instructions on building Vyatta.
  2. This site developed a script that builds Vyatta from source, basically following the above instructions. The only difference is that they were building for 32bit while we are building for 64bit.

This script is my final script used to build the ISO image. A couple of points noted below:

  1. You need a 64bit Debian build machine. I used Debian 6.0.9. See my Debian setup at the end of this page.
  2. I used the vyatta daisy branch, which is v6.6r1. See here for more versioning details.
  3. The 64-bit version already has a kernel configured for running on virtual machines (specifically Xen, which is what AWS EC2 uses). However, for 32-bit versions, one needs to modify build-iso/pkgs/linux-image/debian/rules.gen to use i386/config.586-vyatta-virt instead of i386/config.586-vyatta. The config file appears three times (for setup, build and binary). I don’t fully understand why, so I just modified all three.

Install ISO

This is relatively straightforward. To save space, I installed the ISO on a 2GB hard disk with VMWare Workstation.

  1. Boot up the VM with the live CD.
  2. Log in with “vyatta”/“vyatta”.
  3. Type “install system”. This will install Vyatta onto the hard disk.
  4. Boot up the virtual machine with the live CD (actually the ISO itself) again. Then do something like

    dd if=/dev/sda of=<my root fs image> bs=1M

Modify the root fs image

A lot of trial-and-error and dark magic happens in this step. I will try to cover as much as possible.

Mount the root fs image under /mnt/ec2-image

mount -oloop /opt/ec2/images/vyatta-64bit.img /mnt/ec2-image/

Create menu.lst for pv-grub

Create /mnt/ec2-image/boot/grub/menu.lst (note: the suffix starts with the lowercase letter “l”, not the digit “1”)

default=0
timeout=0
title Jun Vyatta 64bit
root (hd0)
kernel /boot/vmlinuz-3.3.8-1-amd64-vyatta ro root=/dev/xvda1 console=hvc0 rd_NO_PLYMOUTH
initrd /boot/initrd.img-3.3.8-1-amd64-vyatta

Fix /etc/fstab

vi /mnt/ec2-image/etc/fstab

/dev/xvda1  /           ext4         noatime           0    1
/dev/xvda3  swap        swap         defaults          0    0

Fix vyatta config

vi /mnt/ec2-image/opt/vyatta/etc/config/config.boot. The main changes are: a) remove the eth0 MAC address, b) add the sshd service, c) change the console.

interfaces {
    ethernet eth0 {
        address dhcp
    }
    loopback lo {
    }
}
service {
    ssh {
        port 22
    }
}
system {
    config-management {
        commit-revisions 20
    }
    console {
        device hvc0 {
            speed 9600
        }
    }
    host-name vyatta-64bit
    login {
        user vyatta {
            authentication {
                encrypted-password $1$2LZU31YS$ShE9ovJPjaJGZDCw9iLW20
            }
            level admin
        }
    }
    ntp {
        server 0.vyatta.pool.ntp.org {
....

Set up ssh key access for ‘vyatta’ user

  1. Copy/install /mnt/ec2-image/etc/init.d/ec2-ssh-key
  2. Copy/install /mnt/ec2-image/opt/vyatta/sbin/vyatta_config_ssh
  3. ln -s ../init.d/ec2-ssh-key /mnt/ec2-image/etc/{rc2.d,rc3.d,rc4.d,rc5.d}/S05ec2-ssh-key
  4. Change “CONCURRENCY” to “none” in /mnt/ec2-image/etc/rc file.

Also remove default password for ‘vyatta’ user in /mnt/ec2-image/etc/passwd file.

vyatta:x:1000:100::/home/vyatta:/bin/vbash

Create initial host SSH key

Add the following to the /mnt/ec2-image/etc/rc.local file:

#
# [jsun] generate host key if not available
#
if [ ! -f /etc/ssh/ssh_host_key ]; then
        dpkg-reconfigure openssh-server
fi

/sbin/ifconfig

exit 0

Don’t remember hw-id for eth0

This feature screws up the AMI instance each time it is stopped and re-started, because the eth0 hw-id will be different. A simple solution is to not remember it at all.

--- backup/opt/vyatta/sbin/vyatta_interface_rescan	2014-03-25 16:53:02.000000000 -0700
+++ /mnt/ec2-image/opt/vyatta/sbin/vyatta_interface_rescan	2014-03-25 15:41:02.505167405 -0700
@@ -132,7 +132,8 @@
 	my $ifpath = interface_type($ifname) . " $ifname";
 
 	syslog(LOG_INFO, "add config for %s hw-id %s", $ifname, $hwaddr);
-	$xcp->create_node(['interfaces',$ifpath,"hw-id $hwaddr"]);
+	#$xcp->create_node(['interfaces',$ifpath,"hw-id $hwaddr"]);
+	$xcp->create_node(['interfaces',$ifpath,"address dhcp"]);
 
 	# Add existing phy entry for wireless
 	if ($ifname =~ /^wlan/) {

Create AMI images

Even though we have a perfect FS image for the AMI above, there is no easy way to create one. *sigh*

The theory

Vyatta uses its own Linux kernel. We rely on an AWS EC2 feature called PV-GRUB, using the AKI for 64bit and a partitionless disk (the hd0 version).

We rely on an AMI build host in AWS that helps us create a volume with the root fs image content. We then create a snapshot from the volume, and then create an AMI from the snapshot. Finally, we copy the AMI to the target regions if they differ from the build host’s region.

Note that we will ssh into the build host, so it is much more convenient to set up key-based access to it. Also, the login user must be in the “disk” group.

We use the EC2 PHP SDK; please install it first.

The script

The PHP script used to create the AMI is listed here.

Run this script from the local host (e.g., the Debian 6 Vyatta build host). The usage is pretty simple:

  1. ./create-ebs-image.php --delete-image : will delete the AMI images in the target regions.
  2. ./create-ebs-image.php --delete-snapshot : will delete both the AMI images and snapshots in the target regions.
  3. ./create-ebs-image.php --run : will create and deploy AMI images from the local root fs image

How to update a single package

One does not need to go through all the above steps just to change some source code in a package. If you already have the Vyatta ISO build environment and an EC2 instance running the Vyatta AMI, use the following procedure instead.

  1. modify the source code of a pkg, e.g., vyatta-bash. (BTW, all pkgs are under build-iso/vyatta/build-iso/pkgs.)
  2. clean the pkg : tools/submod-clean -d vyatta-bash
  3. re-build .deb package file: tools/submod-mk vyatta-bash
  4. upload the pkg to the EC2 instance: scp -i <ssh pem> vyatta-bash_999.dev_amd64.deb vyatta@<ec2 ip address>:
  5. install the package: sudo dpkg -i vyatta-bash_999.dev_amd64.deb

TODO

  1. It would be good if we could modify the vyatta build process to obtain the installed root fs image directly, instead of installing the ISO and copying the root disk.

Appendix

Set up Debian 6 build host

- debian 6 net iso install VM;
	system utilities
	ssh server
	graphic install

- install kernel header 
	sudo apt-get install linux-headers-$(uname -r)

- install vmware-tools

- install build packages
apt-get install       ssh build-essential sudo bzip2 curl autoconf git devscripts \
      debhelper autotools-dev automake libtool bison flex lintian \
      libglib2.0-dev libapt-pkg-dev libboost-filesystem1.42-dev \
      libncurses5-dev libdb-dev libssl-dev cdbs libmozjs-dev \
      libreadline5-dev libpam0g-dev libcap-dev libsnmp-dev gawk unzip \
      kernel-package libatm1-dev git-buildpackage libnfnetlink-dev \
      libnetfilter-conntrack-dev libattr1-dev rsync libxml2-dev \
      libedit-dev libpcap0.8-dev libpci-dev lsb-release quilt ruby \
      genisoimage liblzo2-dev unifont libpopt-dev libgmp3-dev \
      libcurl4-openssl-dev libopensc2-dev libldap2-dev libkrb5-dev \
      hardening-wrapper libgcrypt11-dev libpcre3-dev libprelude-dev \
      libgnutls-dev libperl-dev python-all-dev python-setuptools \
      live-helper syslinux libsort-versions-perl libexpat1-dev \
      libfile-sync-perl gcc-multilib libfreetype6-dev libusb-dev \
      libdevmapper-dev libmysqlclient-dev autogen libdumbnet-dev

Build CentOS 6 AMI

Overview

This is a faithful translation of the excellent tutorial by Jeff Hunter into a BASH script. The result is so useful that I felt it was worth sharing. 🙂

If you are patient enough, you should read the tutorial for all the gory details. If you are not, just follow the steps below. If you are lucky, you can build a CentOS 6 AMI in a hurry.

Pre-requisites

    1. CentOS build host: should have at least 10GB of extra space
    2. Install host tools:
yum -y install e2fsprogs ruby java-1.6.0-openjdk unzip MAKEDEV
    3. Install AWS tools:

# mkdir -p /opt/ec2/tools
# curl -o /tmp/ec2-api-tools.zip http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
# unzip /tmp/ec2-api-tools.zip -d /tmp
# cp -r /tmp/ec2-api-tools-*/* /opt/ec2/tools

# curl -o /tmp/ec2-ami-tools.zip http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
# unzip /tmp/ec2-ami-tools.zip -d /tmp
# cp -rf /tmp/ec2-ami-tools-*/* /opt/ec2/tools

The script

You can find the script here.

Note you need to configure the following parameters at the beginning the script. Most certainly you need to supply EC2_PRIVATE_KEY, EC2_CERT, AWS_ACCOUNT_NUMBER, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, EC2_KEYPAIR, EC2_SECURITY_GROUP.

export JAVA_HOME=/usr
export EC2_HOME=/opt/ec2/tools
#export EC2_URL=https://ec2.amazonaws.com
export EC2_URL=https://ec2.us-west-1.amazonaws.com
export EC2_PRIVATE_KEY=/home/jsun/files/aws-nsp-x509-pk-4USZFXUMLDXAV5Q3BNUUYPURLA6VZWRH.pem
export EC2_CERT=/home/jsun/files/aws-nsp-x509-cert-4USZFXUMLDXAV5Q3BNUUYPURLA6VZWRH.pem

export AWS_ACCOUNT_NUMBER=XXXXXXXXXX
export AWS_ACCESS_KEY_ID=XXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXX
export AWS_AMI_BUCKET=vyatta-ami/x86-64/Linux/CentOS/6.5

IMG_BASE_NAME=centos-6-x86_64
S3_REGION=us-west-1
AMI_PVGRUB=aki-f77e26b2
EC2_KEYPAIR=XXXX
EC2_SECURITY_GROUP=XXXX

Also note you may need to change AMI_PVGRUB depending on the region and architecture. Refer to the tutorial for details. Here is a list of them for us-west-1:

[root@localhost ~]# ec2-describe-images --owner amazon --region us-west-1 | grep "amazon\/pv-grub-hd0" | awk '{ print $1, $2, $3, $5, $7 }'
IMAGE aki-960531d3 amazon/pv-grub-hd00_1.04-i386.gz available i386
IMAGE aki-920531d7 amazon/pv-grub-hd00_1.04-x86_64.gz available x86_64
IMAGE aki-8e0531cb amazon/pv-grub-hd0_1.04-i386.gz available i386
IMAGE aki-880531cd amazon/pv-grub-hd0_1.04-x86_64.gz available x86_64
IMAGE aki-e97e26ac amazon/pv-grub-hd00_1.03-i386.gz available i386
IMAGE aki-eb7e26ae amazon/pv-grub-hd00_1.03-x86_64.gz available x86_64
IMAGE aki-f57e26b0 amazon/pv-grub-hd0_1.03-i386.gz available i386
IMAGE aki-f77e26b2 amazon/pv-grub-hd0_1.03-x86_64.gz available x86_64

If you are lucky, run the script commands in the following order, and you should have a CentOS instance running in AWS. 🙂


commands:
  init     : perform teardown and create new img file/dirs, set up yum
  setup    : mount image, bind run-time dirs
  install  : install centos image (after setup)
  configure: configure the OS img (after install)
  teardown : unbind and un-mount
  bundle   : build img bundle for upload (after install/configure/teardown)
  upload   : upload image (after bundle)
  register : register AMI (after upload)
  run <id> : run a small instance of the registered AMI

Find out the IP address of the new instance, and ssh into it

ssh -i my_aws.pem root@<pub ip address>

Tricks and Tips

  1. It takes a long time (>2 minutes) for the instance to boot up. Be patient, and don’t panic too soon.
  2. If somehow you cannot log into the instance with the key pair, you can always pre-create /root/.ssh directory in the OS image and pre-create the authorized_keys file underneath it.