How I Rooted OnePlus 12 with Magisk

There are many conflicting sources on the Internet. Specifically, I tried this one and it did not work. Below is a short recap of what worked for me.

Short Recap

  • Follow the official Magisk installation guide
    • The OnePlus 12 has a ramdisk and uses init_boot.img
    • Get the OnePlus 12 image zip file from this site
    • Use a payload dumper to extract init_boot.img from here (example commands after this list)
    • Patch init_boot.img and flash the patched version according to the guide
  • By now, Magisk should be installed and you should have root access
  • Install Magisk Module Manager to install modules
    • For unknown reasons, I could not install modules with the Magisk app itself, nor through the manual method
  • (Bonus) If you like to create your own Magisk module, I used the template (see the sketches after this list)
    • Specifically, if you want to modify a file under /system_ext, use the path /system/system_ext.
    • For example, if you want to add a file /system_ext/foo, use /system/system_ext/foo instead.
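
For reference, the extraction and flashing steps looked roughly like the commands below. I am assuming the payload-dumper-go tool here, so adjust the flags and file names to whatever dumper you actually use.

payload-dumper-go -partitions init_boot payload.bin   # extract init_boot.img from the zip's payload.bin

# patch init_boot.img with the Magisk app on the phone, copy the patched image back, then:
adb reboot bootloader
fastboot flash init_boot magisk_patched.img   # your patched file name will differ
fastboot reboot

And as a concrete illustration of the /system/system_ext remapping, a module file meant to land at /system_ext/foo sits in the module tree like this (the module name is a placeholder):

my-module/module.prop
my-module/system/system_ext/foo    # overlays /system_ext/foo on the device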

That is it!

My Bucket List

It is time to write down my bucket list, v2024.

Countries

  • Turkey
  • Egypt
  • France
  • Antarctica
  • Thailand
  • Really, as many countries as possible…

Places

  • Mt. Everest Base Camp
  • Machu Picchu
  • Tibet
  • Xinjiang

Experience

  • Stay in an over-the-water bungalow; learn scuba diving
  • African safari
  • RV all over North America
  • See the northern lights

Events

  • Tennis Grand Slams – all four of them
  • Burning Man
  • The New Year’s Eve ball drop in Times Square, New York

Personal achievements

  • Form a band and play guitar
  • Become a Rust expert
  • Sail across an ocean

Setup NaiveProxy Server

NaiveProxy is a unique tool that can potentially get through even the strictest censorship firewalls. I thought I would give it a try.

Setup the server

This is actually the simple part.

  • Follow this page to create a Caddyfile (see “Server setup”)
  • Go to this page to download the binaries you need
  • Follow this page to set up a systemd service (registration commands sketched below).
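
Once you have written the unit file from the linked page, registering it boils down to something like the following; the unit name naiveproxy-caddy is just a placeholder.

sudo cp naiveproxy-caddy.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now naiveproxy-caddy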

Set up client on Ubuntu

The client side is a little tricky and cumbersome to use.

  • Go to this download page to download the right naive client
  • Follow the readme page and create a config.json file in the same directory (format sketched after this list)
  • Open Ubuntu Settings/Network/Network Proxy
  • Choose “Manual”, and fill in “127.0.0.1:1080” for “Socks Host”, while leaving the others empty (IMPORTANT!). See the picture below.
  • After that, download and use the “start-naive” and “stop-naive” scripts (sketched after this list) to switch the proxy on and off.
    • Note that only browsers work with this scheme
    • And it seems only Chrome works while Firefox does not (a bug?)
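
For the config.json, the naiveproxy README describes a format along these lines; the proxy URL below is a placeholder for your own server and credentials.

{
  "listen": "socks://127.0.0.1:1080",
  "proxy": "https://user:pass@example.com"
}

And here is a minimal sketch of what the start-naive / stop-naive scripts can look like, assuming the naive binary and config.json live in the same directory and you are on GNOME (the gsettings keys are the standard GNOME proxy settings):

#!/bin/sh
# start-naive (sketch): launch the naive client, then switch GNOME to the manual SOCKS proxy
./naive config.json &
gsettings set org.gnome.system.proxy.socks host '127.0.0.1'
gsettings set org.gnome.system.proxy.socks port 1080
gsettings set org.gnome.system.proxy mode 'manual'

#!/bin/sh
# stop-naive (sketch): turn the GNOME proxy off, then stop the naive client
gsettings set org.gnome.system.proxy mode 'none'
pkill -x naive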

Setup TailScale VPN Server on Synology RT6600ax Router

TailScale is a great VPN. It is even greater if it runs all the time on a router! Currently it is available as a third-party package for Synology NAS (x64-based) machines, but not for routers (usually ARM64-based). *sigh* This blog post describes a way to set it up.

Grab the binaries

  • Copy (scp) these two files over to the Synology router, say into a “tailscale” subdirectory of your home directory (example below).
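
Something along these lines; I am assuming the two files are the tailscale and tailscaled binaries, and the user and host names are placeholders.

ssh admin@synology-router 'mkdir -p ~/tailscale'
scp tailscale tailscaled admin@synology-router:~/tailscale/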

Testing

  • Download the script that starts/stops the tailscale daemon
    • This script is derived from TailScale’s original script and is adapted to SRM environment
  • MODIFY the script with your own path for the PKGVAR variable
  • Now type “./start-stop-status start” and “./start-stop-status status”
  • The first time you run it, type “./tailscale --socket tailscaled.sock up --advertise-exit-node” and perform the web-based login/setup. See my previous post.

Start up TailScale automatically

Copy the start-stop-status script to the /usr/local/etc/rc.d/ directory:

sudo cp start-stop-status /usr/local/etc/rc.d/tailscale 

HELP! However, this setup currently does not seem to work, while it should. I’m still investigating.

Set Up TailScale VPN Server on AWS

TailScale website has an excellent collection of documents. I just want to quickly jot down what I did for simpler reference later. A couple of notes:

  • It runs on AWS EC2 instance (t3a.micro) with Ubuntu 22.04
  • It is a server node, a.k.a. exit node in TailScale terms.

Install

curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list

sudo apt-get update

sudo apt-get install tailscale

First-time setup

We need to create an account first. Then we will be asked to log in and make this machine a server, i.e., an exit node.

sudo tailscale up   # perform web login
sudo tailscale up --advertise-exit-node
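
One step worth adding here: per TailScale’s exit-node documentation, IP forwarding must be enabled on the Linux host, otherwise the exit node will not route traffic for its peers.

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf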

Log into TailScale website,

  • click on the “…” setting button for the server
  • select “Edit routing …” menu
  • Check “Use as exit node” button.

Usage

To use it yourself, simply turn on TailScale on your phone or PC and select the above server as the exit node (a Linux client example follows below).
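
On a Linux client, for example, selecting the exit node looks roughly like this; the exit node is identified by its Tailscale IP or machine name, and the value below is a placeholder.

sudo tailscale up --exit-node=100.64.0.1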

To share the server with others, go to tailscale web portal and click on “Share …” button for the server.

Disable/remove TailScale

sudo systemctl disable --now tailscaled     # to disable

sudo apt purge tailscale                    # to uninstall

Set Up Dynamic DNS for AWS EC2 Instance with Lambda Service

Background

It’s pretty stupid and annoying (maybe intentional?) that AWS provides a DNS service (Route 53) but does not provide dynamic DNS for its own instances!!! This blog post describes a method to achieve just that with various AWS features, namely EC2 instance events, Lambda functions, and Route 53.

Specifically,

  • You must already host some domains with Route 53
  • You want to launch EC2 instances with public IP addresses
  • You want to assign some cool domain names to those EC2 instances automatically
  • When those instances are stopped, you want those domain names to be removed automatically

I have been doing this for 5 or 6 years now. This blog post is an attempt to capture what I did and keep my memory fresh! The approach was based on some early articles, most likely an early version of this one. However, I spent time developing my own version, which has by now diverged significantly. For example, my version supports multiple domains and doesn’t use a database. Also, this blog will focus on AWS console operations instead of the CLI.

Usage

Suppose you own a domain called mydomain.com and you are hosting it with Route 53. When you launch an EC2 instance, you can set its name tag to “ddns-fun.mydomain.com” during launch or startup time. See the pictures below. After the instance starts running, you will automatically have an A-type DNS record “fun.mydomain.com” pointing to the instance’s IP address.

When you stop or terminate the instance, the DNS record will be removed automatically.

Overview of the process

It is relatively complicated. Below is an overview.

  • The central piece is a Lambda function written in Python 3.x called ddns_lambda. This function receives events when EC2 instances are started or stopped. It examines the name tag or DNS records to determine whether it should add some DNS records or remove them. (A rough sketch of such a handler follows this list.)
  • In order for ddns_lambda to run with the right permissions and access the resources, you will create an IAM policy, called ddns-lambda-policy, and an IAM role, called ddns-lambda-role.
  • Lastly, you will create an event-triggering rule that monitors EC2 instance start/stop. When such an event happens, the ddns_lambda function will be called.
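
To make the overview concrete, here is a rough, hypothetical sketch of what such a handler can look like. This is not the actual code from the zip file used later; the ZONE_NAMES list and the “ddns-” tag prefix follow the usage description above, and error handling is omitted.

import boto3

# Hosted zones this function is allowed to manage (assumption: trailing dots included)
ZONE_NAMES = ["mydomain.com."]

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

def lambda_handler(event, context):
    instance_id = event["detail"]["instance-id"]
    state = event["detail"]["state"]           # "running", "stopped" or "terminated"

    # Look up the instance's Name tag and public IP
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance = reservations[0]["Instances"][0]
    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
    name_tag = tags.get("Name", "")
    if not name_tag.startswith("ddns-"):
        return                                  # not managed by this function

    fqdn = name_tag[len("ddns-"):] + "."        # e.g. fun.mydomain.com.
    zone = next((z for z in ZONE_NAMES if fqdn.endswith(z)), None)
    if zone is None:
        return

    zone_id = route53.list_hosted_zones_by_name(DNSName=zone)["HostedZones"][0]["Id"].split("/")[-1]

    if state == "running":
        change = {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": fqdn,
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": instance["PublicIpAddress"]}],
            },
        }
    else:
        # stopped or terminated: delete the existing A record, if there is one
        rrsets = route53.list_resource_record_sets(
            HostedZoneId=zone_id, StartRecordName=fqdn, StartRecordType="A", MaxItems="1"
        )["ResourceRecordSets"]
        if not rrsets or rrsets[0]["Name"] != fqdn or rrsets[0]["Type"] != "A":
            return
        change = {"Action": "DELETE", "ResourceRecordSet": rrsets[0]}

    route53.change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch={"Changes": [change]}
    )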

Let us dive in!

Create a policy for DDNS lambda role

  • Go to AWS/Services/IAM/Policies
  • Click on “Create policy” on the top-right
  • On the “Specify permissions” page, choose the JSON option and enter the following code. The policy allows EC2 instance queries, writing logs, and full access to Route 53.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
  • Name the policy as “ddns-lambda-policy”
  • Click on “Create policy”. Done!

Create a role for DDNS lambda function

  • Go to AWS/Services/IAM/Roles
  • Click on “Create role” button on the top-right
  • Select “AWS services” (default) and pick “Lambda” as the target service
  • Next page, search and select “ddns-lambda-policy”
  • Next page, name role as “ddns-lambda-role”
  • Click on “Create role”. Done!

Create EC2 instance event trigger

  • Go to AWS/Services/Amazon EventBridge/Rules
  • Click on “Create rule”
  • Name it “ec2_lambda_ddns_rule”. Click on “Next”
  • Leave most settings as default and scroll down to the “Event pattern” section
    • “AWS service” – select “EC2”
    • “Event type” – select “EC2 Instance State-change Notification”
    • “Event Type Specification 1” – select “Specific state(s)”
    • Select the three states “running”, “terminated”, and “stopped”
    • At this point, you should have the event pattern shown below. Then click “Next”
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running", "terminated", "stopped"]
  }
}
  • For “Target 1”, select “AWS service”, “Lambda function”, “ddns_lambda”
  • Then just click “Next”, “Next”, “Create rule”. Done!

Create and test DDNS lambda function

Create function template

  • Go to AWS/Services/Lambda/Functions
  • Click on “Create function” on the top-right
    • Choose “Author from scratch”
    • Function name – enter “ddns_lambda” for example
    • Runtime – Choose any python3.x
    • Architecture – Choose the “arm64” architecture for a tiny bit of cost savings
    • Execution role – choose “Use an existing role” and select “ddns-lambda-role”
  • Click on “Create function” at the bottom.

Upload the code

  • First, download the code and save this zip file
  • Go to AWS/Lambda/Functions/ddns_lambda, the function template you just created.
  • Click and select “Upload from”/”.zip file” menu item to import the code into source code console
  • IMPORTANT!!! Update zone_names variable with your own domain names. It supports multiple domain names as an array.
  • Scroll down. Update “Runtime settings”/”handler” to be “union-py3.lambda_handler”

Configure the function

  • Go to AWS/Lambda/Functions/ddns_lambda
  • Click on “Configuration” tab in “Code source” panel
    • Click on “General configuration”, change timeout to 100 seconds.
    • Click on “Triggers”
      • Click on “add triggers”
      • Select “EventBridge (CloudWatch Events)”
      • Choose “Existing rules” and select “ec2_lambda_ddns_rule” created above
      • Click “Add”

Deploy and testing

  • Go to AWS/Lambda/Functions/ddns_lambda
  • In the “Code source” panel, click on the “Deploy” button. Done!

Now you can launch an instance, stop or terminate it, and see whether the corresponding DNS records are added or removed. A few tips on debugging:

  • Watch lambda function logs
    • Go to AWS/Lambda/Functions/ddns_lambda
    • Click on “Monitor” tab
    • Click on “View CloudWatch logs”
    • I usually do “Live Tail” monitoring

That is it! Let me know what you think.

Set Up Your Own NordVPN Meshnet VPN Server

China’s Great Firewall is having a dreadful effect on my decision about whether I should go visit. In typical hacker fashion, I decided to roll up my sleeves and take the matter into my own hands: set up my own VPN servers.

Long story short, two solutions emerged: OpenVPN and NordVPN Meshnet. Both have some commercial backing, so it is not exactly under my own control in some sense, but both solutions are free. This article talks about NordVPN Meshnet.

Set up VPN Server on AWS Ubuntu 22.04

I mostly followed this page, which describes the process well. Below are the commands I used.

<register nordvpn account>  # max 10 devices are allowed

sh <(curl -sSf https://downloads.nordcdn.com/apps/linux/install.sh)

nordvpn login --token
nordvpn set technology nordlynx
nordvpn set meshnet on
nordvpn mesh peer list

nordvpn mesh peer inv send <email>  # invite others

nordvpn mesh peer routing allow <peer node>  # allow others to connect and route

# to stop/disable nordvpn
sudo systemctl disable --now nordvpnd

# to uninstall nordvpn
sudo apt-get --purge remove 'nordvpn*'

Set up on Ubuntu Client

The primary source of information is this page. Below are the commands I used.


<register nordvpn account>

sh <(curl -sSf https://downloads.nordcdn.com/apps/linux/install.sh)

nordvpn login --token
nordvpn set technology nordlynx
nordvpn set meshnet on

# accept invitation, if using someone else's server
nordvpn mesh inv list
nordvpn mesh inv accept <email of Server user, if needed>

# connect and route internet traffic via meshnet VPN server
nordvpn mesh peer list
nordvpn mesh peer connect <server node>

# status and disconnect
nordvpn status
nordvpn disconnect

Set up Android Client and Other Clients

  • install NordVPN Android app
  • Sign up and/or log into NordVPN account
  • (optional) If using others’ VPN server, accept their invitation first
  • Then follow this page to start VPN

Other platforms are most likely similar, but I have not tried them. See iOS page and macOS page. Note if you use someone else’s VPN servers, you will need to accept their invitation first so that you can see their servers on your peer list.

Also note that NordVPN mixes Meshnet features with its own paid VPN services, which makes the UI very confusing. Just follow this guide and steer clear of the paid subscriptions.

Build Spec Tools for CPU2006 Benchmark

CPU2006 is an old, obsolete benchmark, but these days we may still need to build and run it. The biggest problem is usually building the tools needed by the benchmark itself, called the spec tools.

Below are the steps I used to build spec tools for AArch64 (64bit ARM) and RISC-V 64 on Ubuntu OS (22.04 and 23.04).

  • Install the Fortran compiler: sudo apt install gfortran
  • Obtain CPU2006 install ISO image (need license)
  • Untar and install the source tree (install_archives/cpu2006.tar.xz)
  • cd cpu2006/tools/src
  • Obtain and update all the config.guess and config.sub files with the latest ones, then kick off the build (see below)
wget -O config.guess 'https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD'
wget -O config.sub 'https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD'
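
The build step itself is not spelled out above; from memory, it is the buildtools script under tools/src (newer compilers sometimes need extra environment tweaks, so check the SPEC documentation if it fails):

# still inside cpu2006/tools/src
./buildtools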

Once the build is done, copy the attached config files below into the config/ directory and run with them. For example,

runspec --config ubuntu_riscv64 --action=setup int fp
runspec --config ubuntu_aarch64 --noreportable --size=test --iterations=1 mcf gcc
runspec --config ubuntu_aarch64 --size=ref --iterations=1 int fp

Attached files:

Experiments on Ethereum Staking Upload Bandwidth

I have had a Rocket Pool Ethereum staking node set up for about a month now. I have to say the experience is relatively smooth and the support is great. In fact, it is so smooth that setting up the node itself is not worth a blog post. 🙂 So far it has already produced its first block.

The only issue is an upload bandwidth concern. The node was using almost 5 Mbps, about half of my ISP upload allowance. While technically this is fine, I would be more comfortable with more headroom. Plus, I plan to add minipools. So I looked around and found that I could reduce the number of peers to reduce bandwidth. However, there is very little information on how much bandwidth is actually saved when you reduce the number of peers. So I set out to do an experiment.

My ETH1 client is Besu (Java) and my ETH2 client is Lighthouse (Rust). BTW, I’m choosing clients purely based on the programming language.

By default, Besu has 25 peers and Lighthouse has 80 peers. In week 1 of the experiment, I used the default peer counts. In week 2, I reduced the ETH1 peers by half, to 13 peers. That did not yield much bandwidth saving. In week 3, I reduced the ETH2 peers by half, to 40 peers.
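
As a reference for where those peer counts live: if you run the clients directly, the knobs are Besu’s --max-peers and Lighthouse’s --target-peers (a Rocket Pool setup normally exposes them through its service configuration); roughly:

# Besu (ETH1): cap peers, default 25
besu --max-peers=13
# Lighthouse beacon node (ETH2): cap peers, default 80
lighthouse bn --target-peers=40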

See the results below. The numbers are taken from the router. Note that I only focus on upload bandwidth, not only because that is the issue of concern, but also because my download bandwidth is highly variable and those numbers are not reliable.

The Rocket Pool node uses about 75% of my total upload bandwidth, so the true percentage change for the node is amplified by about 33% relative to the change in the total. From the table, it seems we can save about 25% of the staking node’s upload bandwidth when we cut peers in half for both the ETH1 and ETH2 clients.

week                       week 1    week 2    week 3
ETH1 (Besu) peers          25        13        14
ETH2 (Lighthouse) peers    80        80        40
total upload (GB)          327.4     312.1     266.4
daily upload (GB)          46.8      44.6      38.1
Mbps                       4.33      4.12      3.52
% against week 1           100%      95%       81%
staking node %             100%      93.5%     75%

PS: A few weeks later, I added a second minipool. I was expecting the bandwidth to increase. However, the daily upload actually dropped to about 27.5 GB (2.55 Mbps). This is puzzling. It could be due to a drop in the number of peers actually connected (around 33 now), or it could be due to “maturing” nodes or connections. In any case, I’m happy that upload bandwidth no longer appears to be an issue. Most likely I will create two more minipools after the Ethereum Shanghai upgrade and the Rocket Pool LEB8 introduction.