Experiments on Ethereum Staking Upload Bandwidth

I have been running a Rocket Pool Ethereum staking node for about a month now. I have to say the experience is relatively smooth and the support is great. In fact it is so smooth that setting up the node itself is not worth a blog post. 🙂 So far it has already produced its first block.

The only issue is upload bandwidth. The node was using almost 5 Mbps, about half of my ISP service allowance. While technically this is fine, I would feel more comfortable with more headroom. Plus, I plan to add minipools. So I looked around and found that I could reduce the number of peers to reduce bandwidth. However, there is very little information on how much bandwidth can actually be saved by reducing the peer count. So I set out to do an experiment.

My ETH1 client is Besu (Java) and my ETH2 client is Lighthouse (Rust). BTW, I’m choosing clients purely based on the programming language.

By default Besu has 25 peers and Lighthouse has 80 peers. In week 1 of the experiment, I used the default peer numbers. In week 2, I reduced the ETH1 peer count by about half, to 13 peers. That did not yield much bandwidth saving. In week 3, I also reduced the ETH2 peer count by half, to 40 peers.
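For reference, here is a rough sketch of how the peer counts can be capped if you run the clients directly; Rocket Pool users would set the equivalent options through “rocketpool service config”. The flag names below (Besu’s “--max-peers” and Lighthouse’s “--target-peers”) are to the best of my knowledge, so double-check them against your client versions.

# ETH1: cap Besu at 13 peers
besu --max-peers=13 <other options>

# ETH2: ask the Lighthouse beacon node to target 40 peers
lighthouse bn --target-peers 40 <other options>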

See the results below. The numbers are taken from my router. Note I only focus on upload bandwidth, not only because that is the issue of concern, but also because my download bandwidth is highly variable and those numbers are not reliable.

The staking node uses about 75% of my total upload bandwidth, so the true percentage change at the node is about 33% larger than what the router numbers show. From the table it seems we can save about 25% of the staking node's upload bandwidth when we cut the peer counts roughly in half for both the ETH1 and ETH2 clients.

week                    | week 1 | week 2 | week 3
ETH1 (Besu) peers       | 25     | 13     | 14
ETH2 (Lighthouse) peers | 80     | 80     | 40
total upload (GB)       | 327.4  | 312.1  | 266.4
daily upload (GB)       | 46.8   | 44.6   | 38.1
Mbps                    | 4.33   | 4.12   | 3.52
% against week 1        | 100%   | 95%    | 81%
staking node %          | 100%   | 93.5%  | 75%

PS – A few weeks later, I added a second minipool. I was expecting the bandwidth to increase. However, the daily upload actually dropped to about 27.5GB (2.55 Mbps). This is puzzling. It could be due to a drop in the number of actually connected peers (around 33 now). Or it could be due to “maturing” nodes or connections? In any case I am happy that upload bandwidth no longer appears to be an issue. And most likely I will create 2 more minipools after the Ethereum Shanghai upgrade and the Rocket Pool LEB8 introduction.

Unboxing Ray-Ban Stories

Ray-Ban Stories is a smart sunglasses product co-developed by Ray-Ban and Facebook (now called Meta). Yesterday I got a pair and tried it on for about one hour. This post summarizes my experience of setting it up, wearing it, using it and interacting with the app (“Facebook View”).

Unboxing

I was very impressed by the little details of the unboxing. The shipping box can be used for return shipping, with a return address label included. The plastic wrap has a tab for tearing it apart. I have never seen that before! Similarly there is a tab for tearing the sealing label on the box. No more hassle looking for a knife or a pair of scissors!

The packaging felt premium. Not bad for a device sold for $299. Mine was actually $379 due to the transition lenses. It came with a USB type-C cable and a charging case. That is pretty much it.

Turning On

The tutorial guide was very illustrative. The first step is to turn on the device with a switch at the left corner of the glasses (see pic below).

The initial BT pairing proved to be a disaster. The glasses refused to enter the blinking blue LED mode. Several tries later they entered a blinking white LED mode, in which BT pairing obviously cannot succeed. From there on, nothing seemed to work. I had to search online for how to do a factory reset. Skipping some details here, I believe I actually did a reset. BT pairing eventually worked as expected.

The app then wanted to do a firmware upgrade before anything else, which is kind of expected. However it complained about not having enough battery. So I had to charge the glasses for another 20 minutes, a big letdown for an enthusiastic user.

Wearing the glasses is pretty comfortable. They do not feel like a burdensome gadget. The arms do feel a little rigid and thicker than normal. (ADDED on Dec 6th: the glasses tend to slip down. I ordered a nose pad set which hopefully can stop that.)

Using the Glasses

The glasses can be controlled either by voice (“hey, facebook. take a photo”) or by pressing a button + tapping the arm. I found both relatively intuitive.

I will cut it short and go straight to the likes and dislikes. Below is a list of likes:

  • App has a good tutorial guide
  • App has an easy montage feature to combine several video clips, even with music
  • App also has a flashback feature for animating a picture. I probably need a bit more practice to master it.
  • It has a verbal warning when battery drops to 10%
  • Generally good image/video quality, except when taking shots indoors, perhaps due to low light
  • The phone can still communicate with and control the device while it is being charged. Good.

Here are a few dislikes:

  • Is the initial BT pairing failure due to the glasses having already been paired somehow? If so, there should be a better cue to guide users. Factory resetting a device that is fresh out of the box seems really harsh for an end user.
  • Not enough initial battery to do the initial firmware update. Must wait for 20 minutes before playing with it.
  • Why can’t we update the firmware while charging it? That should give sufficient power.
  • App has a “Facebook View is active” notification that is always on. What is the purpose? Tapping on it does not even bring up the app.
    • Instead, I might need a notification to remind me about the battery level of the glasses and to turn them off if necessary.
    • ADDED on Dec 6th: After 2 days, I found this notification is REALLY annoying. It is there. It cannot be dismissed. And it is completely useless!
  • Using voice to stop the video recording is a little strange. There is no audio prompt after saying “Hey, Facebook”.
  • I myself don’t know whether a video recording is on or not. This is not a big problem because you can only record a 30-second clip and typically you will master the recording after a few trials.
  • Sometimes there is an audio cue played shortly after stopping a video. I still don’t understand what it is. Maybe it is telling me processing is done and I can start recording again? That sound does not always play.

Below are some pictures taken during that session. (Somehow I could not share the video due to WP limitations. *sigh*)

Last Words about Battery

I played with it intensively for about 1 hour and used up all of the 34% battery charge it came with. During that period, I took 19 video clips (380 seconds total) and 17 pictures. That seems consistent with the Ray-Ban website's claim of about 6 hours of moderate usage and 3 hours of continuous usage.

Save Bash Output to a File – Automatically

I often have a need to save the output of a bash script to another file, e.g., a log file. I know I could use redirection (“>”) or “tee”, but I would have to type it on the command line. This post talks about doing it from within the script itself.

A Google search does not yield many meaningful results. To save the hassle for myself and potentially others, here is the straight, no-BS code.

#!/bin/bash

readonly file="output"
# send stdout through a process substitution that tees everything into the log file
exec 1> >(tee "$file")
# point stderr at the (already redirected) stdout, so errors are captured too
exec 2>&1
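If you prefer to keep the output of earlier runs, a small variation appends to a timestamped log file instead (just a sketch; the file naming is arbitrary):

#!/bin/bash

# append instead of overwrite, with a timestamp in the file name
readonly file="output-$(date +%Y%m%d-%H%M%S).log"
exec 1> >(tee -a "$file")
exec 2>&1

echo "this line goes to both the terminal and $file"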

Experience of Donation With Cardano

I just made a donation of 15 ADA to the Cardano Forest project. I would like to share my experience and some thoughts.

The process is relatively simple:

  • View the project page in a web browser
  • Click the donate button and copy the receiver address
  • Switch to the Yoroi wallet app on the phone
  • Click the “Send” button, paste the receiver address, enter 15 ADA and click “Continue” to finish sending

First, this is really just a payment use case, which in theory should not be too different from, say, a PayPal-enabled transaction. However, there are 3 important differences.

  • No intermediary party needed. It is just one address paying another address. No platform company like PayPal or banks needed to facilitate the transaction.
  • No personal information exchange. None of email, name, or phone number get exchanged.
  • NFT token to ensure your donation can be tracked to the true beneficiary. You will get an NFT token for every tree planted, and each ADA plants 1 tree. There is still room for cheating here, but the whole process is definitely more transparent and more trackable.

You can find out more at their web site. It is a good cause, and I encourage everyone to donate, not only for the cool and new experience, but also for the cause itself.

A few wishes that would make the experience even better:

  • I wish there were a “pay” button on the donation web page which triggers the Yoroi mobile app directly. This is more of an Android/iOS issue.
  • I wish the receiver address and the related QR code could embed the ADA amount, and perhaps even a short memo.
  • I wish the Yoroi mobile app could scan a QR code from a picture, which is useful when you are using the same phone for both web viewing and QR code scanning.

Please consider staking with us @ MYADA pool

Increase Root Volume Size for AWS EC2 Linux Instances

It turns out it is extremely simple to increase the root volume size for AWS EC2 Linux instances. In this article we use Ubuntu 20.04 as an example and show how it works in four simple steps, without restarting the instance.

  1. Increase volume size – Go to AWS console; find the volume used for the instance as the root device; Choose “Modify volume” action item; increase the size to the desired number
  2. Log into the AWS machine and type “lsblk” to verify the root device size has been increased. Also confirm that the partition size remains the same as before.
  3. Expand partition size to fill up the drive. For example, if root device is /dev/xvda and root partition is the first partition, you would run “sudo growpart /dev/xvda 1”. Run “lsblk” again to verify partition size.
  4. Resize the filesystem to use the new space. For the previous example, one would run “sudo resize2fs /dev/xvda1”. Run “df -h /” to verify. (The consolidated command sequence is shown right after this list.)
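For reference, here is the whole sequence as shell commands, assuming the root device is /dev/xvda, the root partition is the first one (/dev/xvda1), and the filesystem is ext4 (an XFS root would need xfs_growfs instead):

# verify the device now shows the new size
lsblk
# grow partition 1 of /dev/xvda to fill the enlarged volume
sudo growpart /dev/xvda 1
# grow the ext4 filesystem into the new partition space
sudo resize2fs /dev/xvda1
# confirm the root filesystem has the extra space
df -h /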

Voilà! You are done!

Install Full Ubuntu on Portable USB Drive

Background

My goal is to install a fully updateable Ubuntu 20.04 onto a USB stick, so that I can boot it up on any Intel-based PC or laptop. However, due to what I consider a bug in Ubuntu, this is actually harder than it should be. So I wrote this blog post in the hope that it might help others, as well as my future self, in similar shoes.

Note that the objective is different from the so-called LiveUSB Ubuntu with persistence, where the Ubuntu OS itself remains a static ISO image and updates are added on top in a separate persistent partition. My goal is to install a standard Ubuntu OS on a USB disk, which can be updated and upgraded just like on a normal PC, except that a) it is on a portable USB drive or disk and b) it is portable across different PCs. I suppose this setup gives the installation a longer life span, and potentially even allows you to upgrade the OS later.

In the following steps, I will also show an optional feature which creates an encrypted home directory.

Assumptions and Prerequisites

  • You need an Intel x86_64 PC
    • We assume it supports UEFI and GPT partitions, which are standard for all recent ones
  • A USB drive that holds the Ubuntu ISO image for installation, a.k.a. the installation media drive. This needs to be at least 4GB in size.
  • A second USB drive or disk that will hold the installed Ubuntu OS, a.k.a. the installation target drive. This one needs to be at least 16GB in size.

Step 1 – flash Ubuntu ISO image to the installation media drive

I will not repeat the process here. Please refer to one of the many existing guides online.
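If you prefer the command line, a minimal sketch with dd is below. The ISO file name is just an example, and /dev/sdX must be replaced with the actual media drive device; double-check it, because dd will happily overwrite whatever it points at.

# flash the ISO onto the installation media drive (DESTROYS all data on /dev/sdX)
sudo dd if=ubuntu-20.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync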

Step 2 – Prepare the partition table on the target drive

  • Insert installation media drive into PC.
  • Interrupt normal booting sequence and choose the media USB drive as the boot device
    • Different PCs have different processes for this. On a Lenovo PC, one has to press ENTER on bootup, and then press F12 to select the boot device
  • Select “Try Ubuntu” when presented the option
  • Insert target USB drive
    • Identify which drive is target USB drive by examining the output “lsblk”
    • In most cases, if you follow the instructions exactly, it would be “/dev/sdb”
  • Once Ubuntu is up and running, start a terminal and type “sudo gparted /dev/sdb” (replace “/dev/sdb” with the right USB device for your target drive). A command-line alternative using parted is sketched after this list.
    • create GPT partition table
      • click “Device”/”Create Partition Table …”
      • select “gpt” as partition table type
      • See Pic #1 below
    • create 100MB fat32 partition as ESP partition
      • Click “Partition”/”New”;
      • Enter “100MB” as size and select “FAT32” as file type
      • See Pic #2 below
    • set “esp”, “boot” attributes to the new ESP partition
      • Apply changes to actually create the partition
      • Select the ESP partition and then select “Partition”/”Manage Flags”
      • In the pop-up window, select “esp” and “boot” flag
      • See pic #3 below
    • create an ext4 partition that takes the rest of space for root partition
      • See pic #4 below
    • (optional) if you would like to have an encrypted home partition, create an ext4 root partition with a size of 10GB or more, and leave the rest of the free space open for the encrypted home partition later.
      • Pic #5 shows the partition table at this step.
Pic #5 – after creating the ESP and root partitions
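If you prefer the command line over the gparted GUI, roughly the same layout can be created with parted. This is only a sketch, assuming the target drive is /dev/sdb and skipping the optional encrypted-home split:

# create a GPT partition table on the target drive (DESTROYS existing data)
sudo parted /dev/sdb mklabel gpt
# 100MB FAT32 ESP partition, flagged as esp/boot
sudo parted /dev/sdb mkpart ESP fat32 1MiB 101MiB
sudo parted /dev/sdb set 1 esp on
sudo parted /dev/sdb set 1 boot on
# ext4 root partition taking the rest of the drive
sudo parted /dev/sdb mkpart root ext4 101MiB 100%
# put a FAT32 filesystem on the ESP (the installer will format the root partition)
sudo mkfs.vfat -F 32 /dev/sdb1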

Step 3 – Install Ubuntu

Once we finish the above step, quit gparted and we are ready to install Ubuntu into the target USB drive.

  • click the “Install Ubuntu” icon on the desktop to start the installation
    • select “something else” in partition page. See Pic #6 below.
    • select the ESP partition on the target USB drive as the “ESP” partition. See Pic #7 below.
    • select root partition and mount as “/”. See Pic #8 below.
    • (optional) Create encrypted physical volume
      • Select the free space left during creating partitions
      • Click “+” to create a new partition/volume
      • select “encrypted physical partition”.
      • See Pic #9 below.
    • wait for a while, select “/dev/mapper/sdb3_crypt” as “/home”. See Pic #10 below
    • Finish installation.

Step 4 – Fix EFI on target USB drive

At this point you might get the illusion that everything is working, because if you reboot the PC you will be able to select either Ubuntu or Windows to boot, and both work. However, there are 2 very serious problems:

  • If you boot into BIOS and select the USB disk as boot device, it won’t work.
  • Even worse, your PC is likely not able to boot up Windows either if you remove the target USB disk.

The reason for these problems is that, even though we told the Ubuntu installer to install Ubuntu on the USB disk, which implies it should use the ESP partition on the target USB disk, it still uses the ESP partition on the PC's built-in disk. It thus messes up the EFI partition on the PC and leaves an empty EFI partition on the target USB disk. See more details in this very old bug report.

So the first thing we need to do is install the Ubuntu loader into the USB ESP partition and install grub onto the USB disk.

  • Reboot into BIOS firmware and select “ubuntu” as the boot target
  • switch EFI mount
    • Type “lsblk” to verify that ESP partition on PC built-in drive is mounted
    • umount it, “sudo umount /boot/efi”
    • mount the right one, “sudo mount /dev/sda1 /boot/efi”
  • install grub boot on USB EFI partition
    • “sudo modprobe efivars”
    • “sudo grub-install -d /usr/lib/grub/x86_64-efi --efi-directory=/boot/efi/ --removable /dev/sda”
    • Note the “--removable” flag is important in the above command, as it allows the USB to boot on any Intel-based PC
  • Reboot into BIOS firmware and select target USB as boot device. It should work now.
    • You can try the USB disk on other PCs, and it should work as well.

At this point, /etc/fstab probably still mounts the PC's EFI partition at /boot/efi, which is wrong and will break when you boot the target USB from another PC. You can either delete the /boot/efi line in /etc/fstab, or replace the UUID with the one for the EFI partition on the target USB. You can find the UUID with the “blkid” command (e.g., blkid /dev/sda1).
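A sketch of what that looks like (the device name and UUID are examples; use whatever blkid reports on your system):

# find the UUID of the ESP partition on the target USB drive
sudo blkid /dev/sda1
# /dev/sda1: UUID="1A2B-3C4D" TYPE="vfat" ...   (example output)

# then make the /boot/efi line in /etc/fstab reference that UUID, e.g.:
# UUID=1A2B-3C4D  /boot/efi  vfat  umask=0077  0  1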

Step 5 – Restore PC boot loader

Now let us fix the problem of the PC not booting up Windows (if you encounter it).

  • Boot up the PC into BIOS and select the newly made Ubuntu USB drive as boot device. This will boot into Ubuntu.
  • Open a terminal
  • Remove Ubuntu from PC EFI partition, and PC will boot up windows again
    • Mount the host EFI partition at /mnt: “sudo mount /dev/nvme0n1p1 /mnt”
    • “cd /mnt/EFI/”
    • “sudo rm -rf ubuntu”
  • NOTE 1: If your PC already has another Ubuntu system installed, the bootloader entry “ubuntu” will collide with it and the previous Ubuntu will not be able to boot. You can follow this guide to restore booting into the previous Ubuntu system.
  • NOTE 2: Annoyingly, if your PC already has another Ubuntu system installed, each time you boot up with the target USB disk Ubuntu, it will modify the bootloader entry and leave the previously installed Ubuntu unable to boot. It is possible to rename the previous Ubuntu bootloader entry in some tricky way so that both can live peacefully. That probably warrants another blog post.

Clone a Website Behind Login with WinHTTrack

I have a very old Fedora server that stopped actively running almost 10 years ago. I need to back up its data for archival purposes and physically dispose of it.

It used to run a wiki site based on MoinMoin. If I just back up its static files plus database files, chances are I will never be able to re-activate those wiki pages and see them again. So I thought a better idea would be to clone the web pages and turn them into a static local web site, which I can still access easily.

It turns out the journey was more complicated than expected. It took me more than a couple of hours to finally nail it down, so I figure it is worth a post here.

The Software

A quick search shows HTTrack seems to be the best software for this job. It has a GUI version for Linux, called WebHTTrack. It also has a Windows version called WinHTTrack. During my fiddling around, I ended up using WinHTTrack. In retrospect, the method used here should also work for WebHTTrack.

The Challenge

WinHTTrack is generally user friendly. The biggest problem is that the moin wiki requires a user login to view the content.

The cookie capture method (“Add URL” followed by “Capture URL”) does not work for the moin wiki. Only the first page works. Later pages still return “You are not allowed to view this page”.

The Solution

Step 1 – prepare cookies.txt file

  • Install and launch firefox browser
  • Install cookies.txt extension
  • Log into the web site and start browsing
  • Click on the “cookies.txt” extension and export cookies.txt for the current site (a sketch of the file format is shown after this list)
  • Also note the Firefox user agent
    • Click on the top-right “settings” icon
    • Then click on “Help”/”Troubleshooting Information”
    • Note the “User Agent” string. We will use it later.
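For reference, the exported cookies.txt is in the classic Netscape cookie file format: one tab-separated line per cookie with domain, subdomain flag, path, secure flag, expiry (Unix time), name and value. A sketch with hypothetical values for a MoinMoin session cookie:

# Netscape HTTP Cookie File
# fields below are separated by tabs
wiki.example.com	FALSE	/	FALSE	1735689600	MOIN_SESSION_80_ROOT	0123456789abcdef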

Step 2 – run WinHTTrack once

  • Start WinHTTrack and set up the project normally without worrying about login
    • Specifically, you don’t need to use “Add URL” button to do anything special. Just type in the URL in the text input area.
  • Click on “Finish” to start downloading
    • It will warn that the site seems empty. That is expected.

Step 3 – copy over the cookies.txt

  • Copy the exported cookies.txt file to the top level directory of the local clone.
    • For example, if HTTrack working directory is “%USER%\Documents\httrack-websites” and your project name is “erick-wiki”, then the destination directory is “%USER%\Documents\httrack-websites\erick-wiki”

Step 4 – run it again with proper options

A few options have to be set properly:

  • Click on “Set options …” and a new pop-up window shows up
  • On “Spider” tab, set “Spider” to “no robots.txt rules”
  • On “Browser ID” tab, set “Browser Identity” to the “User Agent” of Firefox browser noted in step 1
  • On “Scan Rules” tab, click on “Exclude links”; another pop-up window shows up
    • Here you can exclude the URLs you do not want to clone
    • Most importantly, you need to exclude the “logout” URL. Otherwise cookies.txt will be updated/deleted and you will not be able to continue cloning
      • In my case, the URL will end with “?action=logout”. So I set “criterion” to “Links containing:”, and set “String” to “action=logout”
  • (Optional) On “Experts Only” tab, change “Travel mode” to “can go both up and down”
  • (Optional) On “Flow control”, set “number of connections” to a higher number for faster cloning. Note: your web server may have a surge limit, as MoinMoin does, in which case you will need to turn off that feature on the web server in order to increase bandwidth.
  • Once all set, click on “Next” and “Finish” to start cloning.

That is it!

Build, Flash and Un-Flash AOSP Image on Pixel Phones

In this post we will go through a full cycle of building a custom AOSP image, flashing it onto a Pixel phone, and then restoring the original stock image. We will use the Pixel 4 (codenamed “flame”) as the example device. We will walk through it in a succinct manner without detailed explanations.

Prepare the Device

  • Note the Android build number
    • Click “Settings”/”About Phone”.
    • In my case it is “QQ3A.200805.001”
  • Enable developer options
    • click on “Build Number” row 7 times to enable developer options
  • Enable “OEM unlocking”
    • Toggle “Settings”/”System”/”Advanced”/”Developer Options”/”OEM unlocking”

Build AOSP Image

Prepare the host

My host is Ubuntu 20.04. Generally we are following this guide from Google. Below is a quick gist.

sudo apt-get install git-core gnupg flex bison build-essential zip curl zlib1g-dev gcc-multilib g++-multilib libc6-dev-i386 lib32ncurses5-dev x11proto-core-dev libx11-dev lib32z1-dev libgl1-mesa-dev libxml2-utils xsltproc unzip fontconfig
sudo apt install python2    #repo needs python
sudo usermod -aG plugdev $LOGNAME
sudo apt-get install android-sdk-platform-tools-common   # install udev rules and adb

Download AOSP source

Generally we are following this guide from Google. Below are some specific steps.

  • Find exact branch/tag that matches device current build.
    • Go to the Android build tag page
    • Search for the build number noted at the previous step
    • Find corresponding AOSP tag. In my case, it is “android-10.0.0_r41”
  • Check out this branch
mkdir android-10.0.0_r41-shallow
cd android-10.0.0_r41-shallow
repo init --depth=1 -u https://android.googlesource.com/platform/manifest -b android-10.0.0_r41
repo sync  --force-sync --no-clone-bundle --no-tags -j$(nproc)

Fetch Proprietary Binaries

  • Go to Google proprietary binary page
  • Find the section corresponding to your device (Pixel 4 in my case) and the build number (“QQ3A.200805.001”)
  • Download tarballs from Google and Qualcomm
  • Untar them under the root directory of AOSP source tree
cd ~/work/android/android-10.0.0_r41-shallow
tar xzf ~/Downloads/google_devices-flame-qq3a.200805.001-a7a5499d.tgz
tar xzf ~/Downloads/qcom-flame-rp1a.200720.009-1157dde5.tgz
./extract-google_devices-flame.sh
./extract-qcom-flame.sh

Build images

source build/envsetup.sh
lunch aosp_flame-userdebug
m

Note “flame” is the codename for Pixel 4. If you have a different device, you should use a different target. See Pixel factory image page for device codenames.

Command ‘m’ starts the build process, which can take very long. My laptop has an AMD Ryzen 7 PRO 4750U processor (1.70 GHz, up to 4.10 GHz Max Boost, 8 cores, 16 threads, 8 MB cache) and a fast NVMe SSD. It takes about 2.5 hours to finish the build.

Flash AOSP Built Image

  • On the phone, enable OEM unlocking and USB debugging
    • Toggle “Settings”/”System”/”Advanced”/”Developer options”/”USB debugging”
    • Toggle “Settings”/”System”/”Advanced”/”Developer options”/”OEM unlocking”
  • On the PC, in the same terminal where you just built the AOSP image:
adb devices             # it should show the phone device attached over ADB
adb reboot bootloader   # reboot to fastboot
fastboot devices        # it should show the phone device attached in fastboot mode
fastboot flashing unlock
fastboot flashall -w

During the “fastboot flashing unlock” step, you need to follow the instructions on the screen: press the up/down volume buttons to select “unlock bootloader”, and press the power button to confirm.

If things go smoothly, the phone will reboot a couple of times and eventually boot into the freshly built AOSP image with root access.

Restore to Stock ROM Image

After you have played enough with the AOSP image, you may want to return to the original retail state. Google has made this easy recently.

  • Open Chrome browser and go to “https://flash.android.com”
  • Allow browser to connect to the device
  • Go to Pixel 4 factory image page and find the desired build number to flash
    • Just for fun, I’m selecting Android 11, build number RQ1A.210205.004.
  • Select “Wipe device” and “Lock bootloader”
  • Click “Confirm” to start the process.

What If Things Go Wrong?

Things go wrong when the phone does not boot up anymore. It can be due to a bad AOSP build, or a mismatch between the AOSP image and the baseband/firmware version (BTW, this is the reason why this guide asks you to download the AOSP version that matches the existing build). In that case, download the matching factory image from the Pixel factory image page and flash it back with fastboot:

unzip flame-qq3a.200805.001-factory-d93c74e6.zip
cd flame-qq3a.200805.001/
fastboot devices   # make sure phone is connected
./flash-all.sh 

(Bonus) How to Add a System Privileged App

Before Android 10, you could simply re-mount /system as rw and push the apk into /system/priv-app/

adb root
adb shell mount -o rw,remount /system
adb push my-app.apk /system/priv-app
adb reboot    # to complete the app installation

Since Android 10, the device boots with system-as-root and /system cannot simply be re-mounted as rw. Instead you will need to use “adb remount”, which enables an overlay FS for any modification.

adb root
adb remount     # if this is first time, "adb reboot" to disable verity first
adb push my-app.apk /system/priv-app
adb reboot      # to complete the app installation

Oftentimes you need to assign signature|privileged level permissions to system apps (which is usually why you want to install a system app in the first place). Starting from Android 8.1, you will need to explicitly whitelist these permissions in system xml files.

For example, to give the READ_PRIVILEGED_PHONE_STATE permission to a testing app, net.junsun.idattestation, you need to create the following file on the phone, /etc/permissions/priv-app/privapp-permissions-idattestation.xml

<?xml version="1.0" encoding="utf-8"?>
<!--
This XML file declares which signature|privileged permissions should be granted to privileged
applications that come with the platform
-->
<permissions>
    <privapp-permissions package="net.junsun.idattestation">
        <permission name="android.permission.READ_PRIVILEGED_PHONE_STATE"/>
    </privapp-permissions>
</permissions>
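Pushing that whitelist file and checking the result can look like the sketch below, using the example file name and package from above (adjust the destination path if your build reads permission XMLs from somewhere else):

adb root
adb remount      # overlayfs-based remount on Android 10+
adb push privapp-permissions-idattestation.xml /etc/permissions/priv-app/
adb reboot
# after reboot, check that the permission shows up as granted
adb shell dumpsys package net.junsun.idattestation | grep READ_PRIVILEGED_PHONE_STATE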

Setup Bitcoin/ElectrumX/Lightning Network on Raspberry Pi 4

In the previous post I described the process of setting up a headless Raspberry Pi 4 (8GB) with remote VNC desktop access. It includes a 1TB SSD drive, which is needed to run a bitcoin full node with the full archive of blockchain transactions.

In this post I will describe the process I used to set up a powerful multi-purpose Bitcoin node for the cryptocurrency community, which includes the following elements.

  • Bitcoin core – the default and standard bitcoin full node software that started the whole thing
  • ElectrumX server – decentralized cryptocurrency wallet server. Wallet software needs it to make payments/transfers, etc.
  • Lightning Network Daemon (LND) – a layer-2 network built on top of Bitcoin to facilitate fast and cheap small payments.

Install and Run Bitcoin Core

wget https://bitcoin.org/bin/bitcoin-core-0.21.0/bitcoin-0.21.0-aarch64-linux-gnu.tar.gz
tar xzf bitcoin-0.21.0-aarch64-linux-gnu.tar.gz
sudo cp -a bitcoin-0.21.0/* /usr/local/
  • Run bitcoin-qt -txindex and start populating the blockchain data. It will take 2+ days and will consume >350GB in disk space!
    • DON’T SELECT “prune” MODE! Otherwise Electrum server won’t work.
  • Configure bitcoin for hardening and interactions with ElectrumX/LND
    • Create ~/.bitcoin/bitcoin.conf file. See sample setup below.
    • In this setup, we don’t listen for incoming peers over IPv4/IPv6, for enhanced privacy. New transactions will be broadcast to the Tor network only.
    • I also disabled the wallet feature, again for better privacy.
    • zmq (ZeroMQ) is needed for LND connection.
# index all transactions; LND/ElectrumX needs it
txindex=1

# bitcoin-qt accept RPC or not; needed for EPS/ElectrumX
server=1

# EPS needs wallet feature; electrumx does not
disablewallet=1

# don't broadcast transactions from our own wallet; will submit through Tor
# only useful when disablewallet==0
walletbroadcast=0

# # Maximum number of inbound+outbound connections. default 125
maxconnections=32

# RPC credentials used by ElectrumX and LND
rpcuser=YOUR_RPC_USER
rpcpassword=YOUR_RPC_PASSWORD

# Listening for peers, enabled by default except when 'connect' is being used
# we don't export bitcoin ports or accept any incoming connections,
# because we would expose our own transactions to peers associated with our IP
# We only connect outgoing through tor
listen=0

# outgoing traffic use onion/tor only (default any)
proxy=127.0.0.1:9050
onlynet=onion

# debug logging - values: 0, 1, net, ....
debug=0

# needed by LND; I don't understand
zmqpubrawblock=tcp://127.0.0.1:28332
zmqpubrawtx=tcp://127.0.0.1:28333

Install ElectrumX Server

  • Install dependencies
sudo apt install -y python3-pip libleveldb-dev
sudo pip3 install aiohttp aiorpcx
sudo pip3 install pylru
sudo pip3 install plyvel
  • Clone the repo and install into /usr/local.
    • Note there are 2 primary electrumx repos on github.com. Make sure to choose the “spesmilo” one. The other one, albeit older, stopped supporting bitcoin after the recent segwit address introduction.
git clone https://github.com/spesmilo/electrumx.git
cd ~/electrumx
sudo python3 setup.py install
  • Add user to ssl-cert group to access the private SSL key
sudo usermod -a -G ssl-cert $USER
  • If bitcoin core has finished the initial data sync, you can start the ElectrumX server with the following script. Make sure you forward ports 50001 and 50002 to the Raspberry Pi 4.
#!/bin/bash

# REQUIRED env variables
export DB_DIRECTORY=$HOME/.electrumx/db
export COIN=Bitcoin
DAEMON_URL="http://YOUR_RPC_USER:YOUR_RPC_PASSWORD@localhost:8332"

# Optional env variables
export PUBLIC_IP=$(curl -s http://whatismyip.akamai.com/)
export SERVICES=rpc://localhost:8000,tcp://:50001,ssl://:50002
export REPORT_SERVICES=ssl://$PUBLIC_IP:50002
export SSL_CERTFILE=/etc/ssl/certs/ssl-cert-snakeoil.pem
export SSL_KEYFILE=/etc/ssl/private/ssl-cert-snakeoil.key

# replace with your wallet address!
export DONATION_ADDRESS="bc1q52we5s8qyddhvrfscjn4qyn5nvq8hf92k9rt2w"

# start the ElectrumX server
electrumx_server
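Once the server has synced, you can point a desktop Electrum wallet at it to verify it works. A sketch, assuming Electrum is installed on another machine and “yourhost” resolves to the node’s public IP:

# connect Electrum only to our own server over SSL (port 50002)
electrum --oneserver --server yourhost:50002:s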

Install LND

The Lightning Network is in some sense more complex than bitcoin. You are strongly encouraged to review some basic concepts before setting up the software. There are multiple LN server implementations; LND is a popular one.

  • Install the pre-built binary image for ARM64
wget https://github.com/lightningnetwork/lnd/releases/download/v0.12.0-beta.rc6/lnd-linux-arm64-v0.12.0-beta.rc6.tar.gz
tar xzf lnd-linux-arm64-v0.12.0-beta.rc6.tar.gz
sudo cp lnd-linux-arm64-v0.12.0-beta.rc6/* /usr/local/bin
  • Set up config file
mkdir ~/.lnd
wget -O ~/.lnd/lnd.conf https://raw.githubusercontent.com/lightningnetwork/lnd/master/sample-lnd.conf
  • Below is a sample setup based on my own node
listen=0.0.0.0:9735
nolisten=false
rpclisten=0.0.0.0:10009
externalip=xxx.xxx.xxx.xxx
alias=YourFavoriteNameForThisNode
[Bitcoin]
bitcoin.active=true
bitcoin.mainnet=true
bitcoin.node=bitcoind
bitcoin.basefee=1
bitcoin.feerate=1
[Bitcoind]
bitcoind.dir=~/.bitcoin
bitcoind.rpcuser=YOUR_RPC_USER
bitcoind.rpcpass=YOUR_RPC_PASSWORD
bitcoind.zmqpubrawblock=tcp://127.0.0.1:28332
bitcoind.zmqpubrawtx=tcp://127.0.0.1:28333
bitcoind.estimatemode=ECONOMICAL
  • Run LND with the “lnd” command (a small startup wrapper is sketched after this list)
    • On the first run, you will need to create a wallet with lncli create
    • On later runs, you will need to unlock the wallet with lncli unlock
      • If you would like to automate LND startup, you will likely need the lncli unlock --stdin option to pass in the wallet password from a script.
  • (Optional) if you would like to set up a public LND node so that others can reach you, enable port forwarding on port 9735. You need to be a public node to participate in LN routing.
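Here is a minimal sketch of such a startup wrapper, assuming the wallet password is kept in ~/.lnd/wallet_password (keep that file readable only by your user):

#!/bin/bash

# start lnd in the background
lnd &

# give lnd a moment to bring up its RPC interface
sleep 15

# unlock the wallet non-interactively with the stored password
lncli unlock --stdin < ~/.lnd/wallet_password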

(Optional) Install RTL

RTL stands for “Ride The Lightning”, which provides a web interface to visually interact with LND. Otherwise you can only use the command-line interface, which can be a little boring (and hard).

  • Install package and dependencies
git clone https://github.com/Ride-The-Lightning/RTL.git
cd RTL/
sudo apt install nodejs
sudo apt install npm
npm install --only=prod
cp sample-RTL-Config.json RTL-Config.json
vi RTL-Config.json
node rtl.js 
  • In order to run “node rtl.js” successfully, you need to configure RTL-Config.json properly. Below is a sample setup derived from my own node.
{
  "multiPass": "YOUR_RTL_PASSWORD",
  "port": "3000",
  "defaultNodeIndex": 1,
  "SSO": {
    "rtlSSO": 0,
    "rtlCookiePath": "",
    "logoutRedirectLink": ""
  },
  "nodes": [
    {
      "index": 1,
      "lnNode": "Node 1",
      "lnImplementation": "LND",
      "Authentication": {
        "macaroonPath": "/home/ubuntu/.lnd/data/chain/bitcoin/mainnet/",
        "configPath": "/home/ubuntu/.lnd/lnd.conf",
        "swapMacaroonPath": ""
      },
      "Settings": {
        "userPersona": "MERCHANT",
        "themeMode": "DAY",
        "themeColor": "PURPLE",
        "channelBackupPath": "/home/ubuntu/.lnd/backup",
        "enableLogging": false,
        "lnServerUrl": "https://localhost:8080",
        "swapServerUrl": "https://localhost:8081",
        "fiatConversion": false
      }
    }
  ]
}
  • Once RTL starts running, you can open a browser and point it at the Raspberry Pi 4's IP address on port 3000. You can see stats and perform actions with LND.

How You May Benefit

  • Use your own ElectrumX server for increased trust and privacy.
    • In this guide, we disable bitcoin core from listening on public IP addresses. All new transactions are broadcast via Tor. That increases privacy and anonymity.
  • You can set up a donation wallet address in ElectrumX. It seems people do donate, although it is not that common.
  • Once you set up LND properly, you can collect routing fees.

Show Your Support

This post is only complete if I can show you how I set up the donation and LN connection. 🙂 And your reading is only complete when you perform one of the following actions. 😛

For donation, Bitcoin wallet address is “bc1qyc48kmpweyx2kpqhq8n6r0ckr2fqsfvz2qxpx4”

For Lightning Network, my node public key is “0362a4372375bd24c5b8f8bb2ea85ae2ccf783808477e68cfb06121694b34d1927”

Appendix A – Reference Links on Lightning Network

Appendix B – How to create inbound liquidity for lightning network

Android AOSP Repo Checkout Method and Size Comparison

The Android Open Source Project (AOSP) has grown very large, often pushing over 100GB in disk size. In this post, we look at 3 different ways to check out AOSP and compare their sizes.

1. Full clone – regular checkout with complete history, branches, tags and all files

repo init -u https://android.googlesource.com/platform/manifest -b android-11.0.0_r4
repo sync -j$(nproc)

2. Partial clone – initially clone only the smaller files; the left-out files are fetched over the network later when needed

repo init -u https://android.googlesource.com/platform/manifest -b android-11.0.0_r4 --partial-clone --clone-filter=blob:limit=10M
repo sync -j$(nproc)

3. Shallow clone – don’t clone history, previous revisions, or other branches/tags

repo init --depth=1 -u https://android.googlesource.com/platform/manifest -b android-11.0.0_r4
repo sync  -f --force-sync --no-clone-bundle --no-tags -j$(nproc)

Below is the disk space consumed by each method. Clearly the winner size-wise is the shallow clone. The disadvantage is that you don’t have any revision/branch/tag history. If that is OK with you, it is the way to go. The partial clone might work for you if you have a fast network, but it does not seem to save much space.

Method        | Size
Full clone    | 105GB
Partial clone | 98GB
Shallow clone | 77GB

(Added later on 2/10, 2021) Additionally, one may wish to set up a local AOSP mirror to reduce checkout time. The total size of a full AOSP mirror is around 540GB. A sketch of initializing a client checkout from the mirror follows the commands below.

mkdir aosp-mirror
cd aosp-mirror
repo init -u https://android.googlesource.com/mirror/manifest --mirror
repo sync
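A minimal sketch of initializing a client checkout from that local mirror (assuming the mirror lives at ~/aosp-mirror):

mkdir android-11.0.0_r4
cd android-11.0.0_r4
# point repo at the manifest inside the local mirror instead of googlesource.com
repo init -u ~/aosp-mirror/platform/manifest.git -b android-11.0.0_r4
repo sync -j$(nproc)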