Design Solutions Research & Design Hub

Compiling Old Kernels Under Today’s GNU/Linux

Written by Kris Bahnsen

Docker Smooths the Way

When it comes to embedded computing software, the old and the new are sometimes at odds. Getting legacy software to work properly with modern operating systems can be challenging. In this article, Kris explains how to use Docker virtualization to enable older kernels to operate under modern GNU/Linux distributions.

  • How to use Docker virtualization to enable older kernels to operate under modern GNU/Linux distributions.

  • What Docker is and how it works

  • How to set up your build directory

  • How to set up Docker

  • Technologic’s TS-72XX series of SBCs

  • 2.4 Linux kernel

  • Docker

Like any embedded computing technology company, we’ve identified and cater to a market that needs the same product shipped every time. This is important both for ensuring products work consistently across a given application and for effectively serving long-lifecycle implementations. This means that, once a product has reached its “Fully Developed” state, we strive to deliver the same product with the same software to eliminate any “gotchas” in end user code. Our Product Lifecycle statement provides more information on that subject [1].

All that said, there are times when it’s necessary for either us or embedded systems developer customers to modify and build software that was designed and rolled out many years ago. With the Linux kernel and GNU Project always marching forward, support for older software projects goes untested and neglected. Given a long enough time span, these older projects may reach a point where modern tools can no longer properly compile them due to deprecated or changed features.

To illustrate these issues, we’ll focus on Technologic’s TS-72XX series of SBC products based on the 2.4 Linux kernel (TS-7200, TS-7250 (Figure 1a), TS-7260 (Figure 1b), TS-7300, and TS-7400). These products still ship with a 2.4.26 kernel, and either a Debian Sarge- or a Busybox-based distribution, depending on the product and its booted media. Compiling the original kernel is documented in our product manuals, but is somewhat cumbersome on modern GNU/Linux distributions due to the tools available and even the modern 64-bit architecture of desktop computers. We will be using Docker to create a simple userspace, based on an original i386 Debian Sarge release, that is capable of building the 2.4 kernel using the original and recommended cross compiler and manual instructions.

FIGURE 1 – The TS-72XX series of products are based on the 2.4 Linux kernel. The TS-7250 (a, top) is a compact, full-featured SBC based upon the Cirrus EP9302 Arm9 CPU, and provides speed and reliability for embedded control systems. Also based on the EP9302, the TS-7260 (b, bottom) specifically targets ultra-low power applications such as solar or battery-powered devices.

At its core, Docker is an OS-level virtualization platform. Similar to chroots or jails on Unix-like platforms, Docker containers are instances of a userspace that is virtualized and isolated from both the host and other containers. Docker goes beyond a simple chroot by giving every container its own process and network namespaces, thanks to supporting kernel features. But Docker also offers services such as a cross-platform image distribution system with public and private access—all the way through enterprise-grade management and support. Our use as described here is completely free and open to end users.

Docker uses “containers” which are a standard unit of software that packages up code and all its dependencies (Figure 2). This allows the application to run quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

FIGURE 2 – Docker uses containers which are a standard unit of software that packages up code and all its dependencies. This allows the application to run quickly and reliably from one computing environment to another.

This process is compatible with other products beyond our previously mentioned TS-72XX series boards. Setting up a Docker environment is very similar to doing so with a chroot under Unix-like operating systems. But the Docker ecosystem makes setting up this environment a lot easier. This process can be expanded to allow for compatibility with newer or older kernels, or for cross-compiling userspace applications so long as Docker container images are readily available for a specific environment.

These instructions are all about setting up the specific environment that can be used to successfully compile an older kernel. They assume familiarity with Linux and that all of the necessary tools (Docker, wget and so forth) are already installed or the reader is comfortable installing them as needed. These commands were used on a modern 64-bit Debian Stretch installation and have also been tested successfully on a macOS 10.15 installation. Once the container is set up, the kernel compile process outlined in a given manual should then be used to compile the kernel and modules, and install these files to the SBC.

Note that all of these steps can and should be run as a local system user rather than a superuser account. In general, the Docker workflow discourages running any component of Docker (other than the “dockerd” daemon) as a superuser. Instead, the normal user should be added to the “docker” group. All of the commands below were run as a standard user from a terminal shell.
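As a quick sanity check, the following sketch reports whether the current user’s groups already include “docker” (the group name is Docker’s default; adding a user is typically done with sudo usermod -aG docker, followed by logging out and back in):

```shell
# Report whether the current user's groups include "docker".
# If not, the usual fix is: sudo usermod -aG docker <user>, then re-login.
if id -nG | grep -qw docker; then
    echo "in docker group"
else
    echo "not in docker group"
fi
```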

Step 1: Set up your build directory, with tools, to persist between Docker instances:

mkdir kernel-build
cd kernel-build
tar xvf crosstool-linux-gcc-3.3.4-glibc-2.3.2-0.28rc39.tar.bz2
tar xvf tskernel-2.4.26-ts11-feb232011.tar.gz

Step 2: The compiler is hard-coded in the kernel Makefile via the CROSS_COMPILE variable. Usually, the product manual will briefly touch on how to set it up. Because we have a known compiler and Docker setup, sed can be used for a quick one-line change. (The reason /work/ is used will be discussed later.)

sed -i -e "s_^CROSS\_COMPILE.*_CROSS\_COMPILE = /work/usr/local/opt/crosstool/arm-linux/gcc-3.3.4-glibc-2.3.2/bin/arm-linux-_" linux24/Makefile
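To confirm the substitution behaves as intended, the same sed expression can be dry-run against a scratch Makefile before touching the real one (the scratch path here is illustrative):

```shell
# Recreate the substitution against a throwaway Makefile and inspect the result.
scratch=$(mktemp -d)
printf 'CROSS_COMPILE =\n' > "$scratch/Makefile"
sed -i -e "s_^CROSS\_COMPILE.*_CROSS\_COMPILE = /work/usr/local/opt/crosstool/arm-linux/gcc-3.3.4-glibc-2.3.2/bin/arm-linux-_" "$scratch/Makefile"
# Should print the full cross-compiler prefix path.
grep '^CROSS_COMPILE' "$scratch/Makefile"
rm -rf "$scratch"
```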

Step 3: Set up Docker. This uses debian/eol:sarge from Docker Hub [2] to create a userspace environment similar to the one that would have been used to build the original kernel. The two echo commands make a quick Dockerfile to describe the image and install necessary utilities. Note that additional utilities can be added to the Docker image by appending them to the apt-get … command in docker/Dockerfile. The image is then built with a specific tag to identify it.

mkdir docker
echo "FROM debian/eol:sarge" > docker/Dockerfile
echo "RUN apt-get update && apt-get install -y build-essential libncurses5-dev" >> docker/Dockerfile
docker build --tag "technologic-sarge" docker/
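For reference, the two echo lines above produce this two-line docker/Dockerfile:

```dockerfile
FROM debian/eol:sarge
RUN apt-get update && apt-get install -y build-essential libncurses5-dev
```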

Step 4: Enter the Docker container using the above image. Several of the arguments passed here are important:

--rm —Simply removes the container after it is exited/closed; the image stays in place for the future. In some cases, retaining the container is useful for logging or debugging purposes. This is not necessary in our application.
-i —Makes the container interactive.
-t —Allocates a PTY device to the container to allow the current terminal to connect STDIN and STDOUT of the container.
--volume $(pwd):/work —Maps/mounts the present directory kernel-build/ to the /work/ directory inside of the container. Note that, once in the container, the only persistent changes by default will be to this or other mapped volumes. Any modified files that exist outside of mapped volumes will disappear once the container is exited. This is inherent to the design of a Docker container.
-w /work —Sets the working directory inside the container to /work/
-e HOME=/work —Sets the /work/ directory as the user’s home directory inside the container
--user $(id -u):$(id -g) —Sets the user inside the container to the current user and group ID. Root is not needed inside the container for the kernel compilation process because it can all happen as a normal user.
technologic-sarge —This is the tag of the image to use.
bash —Run bash once the container is set up.

Here is the complete command:

docker run --rm -it --volume $(pwd):/work -w /work -e HOME=/work --user $(id -u):$(id -g) technologic-sarge bash
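Because the run command is long, it can be convenient to drop it into a small launcher script (a sketch; the name run-sarge.sh is arbitrary):

```shell
# Save the long "docker run" invocation as a reusable launcher script.
# Run it from inside kernel-build/ so "$(pwd)" maps the intended directory.
cat > run-sarge.sh <<'EOF'
#!/bin/sh
exec docker run --rm -it \
    --volume "$(pwd)":/work \
    -w /work \
    -e HOME=/work \
    --user "$(id -u)":"$(id -g)" \
    technologic-sarge bash
EOF
chmod +x run-sarge.sh
# Syntax-check the script without executing it.
sh -n run-sarge.sh && echo "script parses"
```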

Step 5: At this point, the terminal should be inside of the container that is Debian Sarge. The linux24/ directory contains the previously extracted kernel sources and the cross compiler is already set up. The first time this process is used, we recommend that the following commands be run to clean up any stale files inside of the kernel archive:

cd linux24
make mrproper && make distclean && make clean

Once in the linux24/ directory, the commands outlined in the respective manual can then be used to configure and compile the kernel and modules. Note that if make modules_install is run, by default the modules will be installed to /lib/modules/ which is not a persistent directory inside the Docker container. It is recommended to archive the folder to a location in the /work/ directory (which is the home directory for the container user and is persistent) for use later. For a 2.4 series kernel, this would roughly look like:

make <platform>_config
make oldconfig
make dep clean
make Image && make modules
make modules_install
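The modules_install step above drops files under /lib/modules/, which vanishes with the container, so that tree should be archived into /work before exiting. The sketch below demonstrates the tar invocation using stand-in paths (inside the container, the source would be /lib/modules and the destination /work/; the version string 2.4.26-ts11 is taken from the kernel archive name):

```shell
# Demonstrate archiving an installed-modules tree into a persistent volume.
# Stand-in paths: /lib/modules -> $demo/lib/modules, /work -> $demo/work.
demo=$(mktemp -d)
mkdir -p "$demo/lib/modules/2.4.26-ts11" "$demo/work"
touch "$demo/lib/modules/2.4.26-ts11/modules.dep"
tar czf "$demo/work/modules-2.4.26-ts11.tar.gz" -C "$demo/lib" modules
tar tzf "$demo/work/modules-2.4.26-ts11.tar.gz"   # lists the archived tree
rm -rf "$demo"
```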

Simply type “exit” to close the container and return to the standard workstation terminal. At this point, the kernel binary exists in linux24/arch/arm/boot/. This, along with the modules, can be unpacked to an SD card or transferred to the SBC via any modern means from outside the Docker container.

Internally, we’re spinning up more and more Docker instances for consistent build systems like this. In the past, we’ve run into situations where specific libraries or utilities are no longer readily available. In these instances, we’ve had to scramble to assemble a build system that is still compatible with shipping software.

Using Docker to implement build systems allows us to run a few commands to generate a stable and consistent environment across many different platforms. The Docker image can be set up to already have all of the necessary dependencies needed for the target. This eliminates missing software or tools from the list of potential build issues, allowing developers to focus on their code and not the development tools.

Author’s Notes: The “debian/eol:sarge” Docker Hub entry is maintained by the Debian Project. The base image inherently uses the Debian Project’s archive repository for packages, and any packages that exist in that repository can be installed.


[1] Technologic Systems Product Lifecycle statement
[2] “debian/eol:sarge” on Docker Hub



Kris Bahnsen has been involved with electronics since the moment he could pick up a soldering iron. He's been an engineer with Technologic Systems since 2008. Kris spends his free time tinkering with various embedded electronics, software and hardware security, cars and maintaining a small collection of arcade cabinets.


Copyright © KCK Media Corp.
All Rights Reserved
