//Running Debian Buster with 64bit mainline Kernel on a Raspberry Pi 3b+

The Raspberry Pi 3b+ runs on a quad-core Cortex-A53 processor, which uses the Armv8 architecture. This allows it to run 64-bit (AArch64) kernels while remaining backwards compatible with Armv7, so it can also run 32-bit (armhf) kernels.

One of the most convenient ways to get started with a Raspberry Pi is the Debian-based Raspbian distribution. Raspbian maintains its own customized Linux kernel, drivers and firmware, and provides 32-bit OS images.

For my use case, however, I required the following:

  • get a server-friendly 64-bit OS running on the Pi3b+ (for example Debian)
  • use a mainline kernel
  • avoid proprietary binary blobs if possible
  • boot from an SSD instead of an SD card or USB stick (connected via a USB adapter)

It turned out that most of this was fairly easy to achieve thanks to the Raspberry Pi 3 Debian port efforts. The raspi3-image-spec project on GitHub contains instructions and the configuration for building a 64-bit Debian Buster image that uses the mainline linux-image-arm64 kernels, ready to boot.

The image creation process is configured via raspi3.yaml. I modified it to leave out the wifi packages (I don't need wifi) and to use netfilter tables (nft) as a replacement for the now-deprecated iptables, but both changes could also have been made after the initial boot if desired.

The image contains two partitions. The first is formatted as vfat, labeled RASPIFIRM, and holds the kernel, firmware and raspi-specific boot configuration. The second, an ext4 partition labeled RASPIROOT, contains the OS. Since partition labels are used, the raspi should be able to find the partitions regardless of how they are connected or which drive type they are on (SSD, SD card reader, USB stick..).
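Label-based mounting in /etc/fstab looks roughly like this (a sketch; the exact mount options in the generated image may differ):

```
LABEL=RASPIFIRM  /boot/firmware  vfat  defaults           0  2
LABEL=RASPIROOT  /               ext4  defaults,noatime   0  1
```

Because the entries reference LABEL= instead of a device node like /dev/sda2, the kernel can enumerate the drives in any order without breaking the boot.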

Once the custom 64-bit Debian image is built, simply write it to the SSD using dd, plug it in and power the raspi on. Once it has booted, don't forget to change the root password, create users, review the ssh settings (disable root login), configure netfilter tables etc.
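Writing the image could look like this (a sketch; /dev/sdX is a placeholder for the SSD's device node and raspi3.img for the built image file - double-check the target with lsblk first, dd overwrites it without asking):

```shell
# identify the target disk before doing anything destructive
lsblk

# write the image to the SSD and flush all buffers to disk
dd if=raspi3.img of=/dev/sdX bs=4M status=progress conv=fsync
sync
```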

Some additional tweaks

Reduced the GPU memory partition to 32MB to leave more RAM available while still being able to use a terminal on a 1080p HDMI screen: set cma=32M in /boot/firmware/cmdline.txt. I also added a swapfile to free up additional memory by letting the kernel write inactive pages to the SSD; the default image does not have swap configured.
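Setting up the swapfile only needs standard tools (a sketch, run as root; the 1GiB size and the /swapfile path are arbitrary choices of mine):

```shell
# allocate a 1 GiB swapfile on the root filesystem
dd if=/dev/zero of=/swapfile bs=1M count=1024 status=progress
chmod 600 /swapfile        # swap files must not be world-readable
mkswap /swapfile           # format the file as swap space
swapon /swapfile           # enable it immediately

# make it permanent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```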

Added the

force_turbo=1

flag to /boot/firmware/config.txt. This locks the ARM cores to their maximum clock frequency (1.4GHz). Since the mainline Linux kernel didn't ship with a Pi 3 compatible cpufreq implementation yet, the Pi3b+ would otherwise just boot at the idle clock (600MHz) and never change it based on load. This is just a workaround until the CPU frequency scaling drivers make it into the kernel (update: the commits landed in kernel 5.3). I haven't experienced any CPU temperature issues so far, but I am also not running CPU-intensive tasks.

Since these files get overwritten on kernel updates, I added most of the /etc folder to a local git repository, which makes diffs trivial and simplifies maintenance. If the raspi stops booting, you can simply plug the SSD's USB adapter into a workstation and edit the config files from there, diff, revert etc. - very convenient.
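A minimal version of this could look as follows (a sketch, run as root since /etc is root-owned; it assumes git's user.name/user.email are configured):

```shell
cd /etc
git init                         # turn /etc into a git repository
git add .                        # track the current configuration
git commit -m "baseline config"

# after a kernel or package update, review and record what changed
git diff
git commit -am "post-update config"
```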

USB Issues

I ran into an issue where the SSD connected via USB got disconnected after 5-30 days. This was very annoying to debug, because the kernel could not finish writing its log file - virtually everything lived on that disconnected drive. After swapping out everything hardware-wise, compiling custom kernels (on the Raspberry Pi btw :) ), and even falling back to the Raspberry Pi kernel, which uses different USB drivers (dwc_otg instead of dwc2), I could not get rid of the disconnects - only delay them. Newer kernels seemed to improve the situation, but what finally solved the issue for me was an old USB 2.0 hub I put in front of the SSD-USB adapter. Problem solved.


That's about it. In case you are wondering what I am using the raspi for: the weblog you are reading right now is running on it (besides other things, like a local wiki), for over six months by now, while drawing only 2-5W - fairly stable :)


//Converting from CVS to GIT

This post is a short description of how to convert a CVS repository to git. I used the tool cvs2svn from Tigris.org for this task. As an example I will use the project jake2.

0. Install cvs2svn and cvs

(the cvs2svn package not only does CVS-to-SVN conversion, it also contains the shell commands cvs2git and cvs2bzr)

sudo apt-get install cvs
sudo apt-get install cvs2svn

1. Download the CVS repository

This step is mandatory. In contrast to SVN, you can't read out the complete versioning history via the CVS protocol alone. To convert from CVS you need direct file access to the repository.

Sourceforge, for example, supports repository synchronization via rsync to allow backups. That's what I used to get a copy of the jake2 repository.


rsync -av rsync://jake2.cvs.sourceforge.net/cvsroot/jake2/* .

2. Convert into GIT fast-import format

Take a look at /usr/share/doc/cvs2svn/examples/cvs2git-example.options.gz and extract the example configuration file. The file is very well documented, so I won't repeat things here. The most important step is to provide an author mapping from CVS to git authors (search for 'author_transforms').
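The mapping in the options file looks roughly like this (a sketch with made-up names; check the example file for the exact syntax, the options file is plain Python):

```
author_transforms = {
    'jrandom' : ('J. Random',    'jrandom@example.com'),
    'mbien'   : ('Michael Bien', 'mbien@example.com'),
}
```

Every CVS login that appears in the history should get an entry, otherwise the bare login ends up as author name in the converted commits.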

cvs2git --options=cvs2git.options

This will produce a dump file and a blob file for step 3.

3. Create GIT repository and import CVS history


mkdir gitrepo
cd gitrepo
git init
cat ../repo-tmp/git-blob.dat ../repo-tmp/git-dump.dat | git fast-import

4. Manual cleanup

cvs2git will do its best to convert CVS tags and branches into git tags and branches, and usually adds an artificial commit to document that. Now you will probably want to take a look at the repository with a history browser like gitk or giggle... and clean it up a bit, e.g. remove branches which could be tags, etc. If this is too much work, you might consider tweaking the configuration from step 2 and converting again with a different strategy.
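Turning a branch that should have been a tag into one might look like this (a sketch; 'release-1_0' is a made-up branch name):

```shell
# create an annotated tag pointing at the branch tip ...
git tag -a v1.0 release-1_0 -m "release 1.0"

# ... and remove the now redundant branch
git branch -D release-1_0
```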

- - - -

Btw, you can find the jake2 git repo here.


//There is no place like 127.0.0.1

I have decided to host my blog myself and thought I'd report about some highlights ;). The reason why I hadn't blogged for a long time was actually a configuration issue of the old setup, which confused <myDomain>/myBlog with <myBrother'sDomain>/myBlog. Self-hosting is also a good opportunity to play with enterprise software at home on a daily basis (not only the usual try-and-remove evaluation with constructed dummy projects or brainstormed scenarios).

I initially evaluated OpenSolaris, since I was really eager to try out ZFS. But I decided to stay with Ubuntu 9.04 server edition (aka Jaunty Jackalope), since I already use the desktop version on my workstation. The other reason was the lower memory footprint of Ubuntu (256MB) compared to OpenSolaris (512MB), which is relevant if you run your site on hardware that is over 10 years old, like I do.

Ubuntu has great software repositories which provide everything you need to get started (e.g. the Sun JDK 6.13 and in theory also GlassFish, NetBeans, Eclipse, your favourite database etc., but in those cases I prefer zip files ;-) ).

The installation, with extras like software RAID (RAID 1) and firewall configuration, went flawlessly. The combination of GlassFish 2.1 + Roller 4.0.1 + HSQL 1.8 also worked on the first attempt.

A few things I noticed:

  • 5 years ago the whole process would have taken far longer than two evenings
  • -XX:OnOutOfMemoryError=<send me a mail command> (since java 1.6) is really cool
  • Cloud computing is overrated (web services should always be weather independent)
  • There is no place like 127.0.0.1

Don't worry, the upcoming entries should be more technical - this is mainly a test ;-)


//Hello World

I am Michael Bien and this is my new weblog.

A few words about me: I am a freelancer, project owner of the NetBeans OpenGL Pack and the FishFarm project, and a JOGL (JSR 231 - Java Bindings for OpenGL) committer. You will hopefully find interesting blog entries here about Java, computer graphics, open source software and new technologies.

Have fun with Java and enjoy reading!