
Build the Linux kernel for ARM


This tutorial describes how to build the Linux kernel for ARM and boot test it with a virtual machine. Basic kernel build configuration is covered too.

This tutorial was originally conceived as part of a set of tutorials tailored to help newcomers develop for the Linux kernel Industrial I/O (IIO) subsystem. It is a continuation of the “Use QEMU and libvirt to setup a Linux kernel test environment” tutorial.

Command Summary

If you have not read this tutorial yet, skip this section. It is a summary for readers who have already gone through the tutorial and just want to recall a specific command.

git clone git://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git iio --depth=10

export ARCH=arm64; export CROSS_COMPILE=<cross_compiler_prefix_or_path>

make defconfig
make -j$(nproc) Image.gz modules

guestmount -w -a <disk_image> -m <disk_partition> <local_directory>
guestunmount mountpoint_arm64

make INSTALL_MOD_PATH=<path_to_rootfs> modules_install

Configuring, building, and installing the Linux kernel

In this section we will go through the steps to build Linux images for ARM64 machines.

Summary of this part of the workshop:

  1. Clone the Linux kernel
  2. Configure and build the Linux kernel
  3. Install the kernel modules and image

1) Clone the Linux kernel

There are several repositories that contain the source code for the Linux kernel. These repositories are known as trees. Some trees are widely known, such as Linus Torvalds’ tree (mainline) and the Linux stable tree. In general, a Linux tree is a repository where some development for the kernel happens. Many of these repositories are hosted at kernel.org.

Some examples of Linux kernel trees are Linus Torvalds’ mainline tree, the linux-stable tree, and subsystem-specific trees such as the Industrial I/O (IIO) tree.

For this workshop, we’ll be using the IIO subsystem tree, so download (clone) it with git.

# Run these in your host machine
cd $IIO_DIR
git clone git://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git iio --depth=10
export IIO_TREE=$(readlink -f iio)

The --depth argument limits the commit history downloaded along with the code, so the clone takes up less disk space. If you have plenty of disk space, I suggest cloning without the depth flag, because commit logs are often a good source of information when you are trying to understand kernel code. At the time of writing, the IIO tree (with full commit history) was roughly 5 GB.
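
If you start with a shallow clone and later decide you want the full history after all, git can fetch the missing commits in place instead of re-cloning (a quick sketch, run from inside the clone):

# Convert a shallow clone into a full clone, fetching the remaining history
cd $IIO_TREE
git fetch --unshallow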

2) Build the Linux kernel

The Kernel Build System (kbuild) is based on make and other GNU tools and allows a highly modular and customizable build process for the Linux kernel. By default, kbuild uses the configuration options stored in the .config file under the root directory of the Linux source files. Those options hold values for configuration symbols associated with kernel resources such as drivers, tools, and features in general. Nearly all directories inside the kernel source tree have a Kconfig file which defines the symbols for the resources that live alongside it. Top Kconfig files include (source) Kconfig files from subdirectories, thus creating a tree of configuration symbols. When needed, kbuild generates configuration options from Kconfig symbols and stores the values for them in a .config file. kbuild Makefiles then use the configuration values to compile code conditionally and to decide which objects to include in a kernel image or its modules [1] [2].

There are sets of predefined configuration options for building kernels for different machines and purposes. These are called defconfig files, and they store only the values that differ from the defaults for configuration symbols. For instance, defconfig files for 32-bit ARM machines live under arch/arm/configs/, and the arm64 defconfig lives under arch/arm64/configs/. We will create a .config file from the arm64 defconfig. We must also specify our target architecture for the build.

cd $IIO_TREE
export ARCH=arm64
make defconfig
make olddefconfig
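
If you want to inspect or flip an individual symbol by hand, the kernel ships a scripts/config helper that edits .config for you (a minimal sketch; CONFIG_IIO is used here only as an illustrative symbol):

# List IIO-related symbols in the freshly generated .config
grep '^CONFIG_IIO' .config

# Build the IIO core as a module, then let kbuild resolve any dependencies
./scripts/config --module CONFIG_IIO
make olddefconfig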

If you saved the list of VM modules in the first part of the workshop, you may now use that to reduce the number of modules selected for compilation and thus reduce the time to build the kernel and amount of VM disk space required to install the modules.

scp -i ~/.ssh/rsa_iio_arm64_virt root@192.168.122.38:~/vm_mod_list .
make LSMOD=vm_mod_list localmodconfig
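
If you skipped that step in the first part, the list can be regenerated at any time inside the running VM (lsmod prints the modules currently loaded, which is the format localmodconfig expects) and then copied out again with the scp command above:

# Run this inside the VM to record the currently loaded modules
lsmod > ~/vm_mod_list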

Different processor architectures have distinct instruction sets and register names. Because of that, the binaries produced by a compiler for architecture A will not work on a machine of architecture B. So, we need to use a compiler that produces binaries compatible with the instruction set of the machine we want to run our kernel on (arm64).

Most distros provide a GCC package with a compiler that runs on x86 host machines and produces binaries for arm64 targets. On Debian, the package name is gcc-aarch64-linux-gnu, so that’s what a Debian user would have to install.

sudo apt install gcc-aarch64-linux-gnu

See the Complementary Commands section for advice if you are not using the Debian package.

We may now tell the build system which cross compiler to use.

export CROSS_COMPILE=aarch64-linux-gnu-
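
A quick sanity check that the prefix resolves to a working cross compiler:

# Should print the version banner of the aarch64 cross GCC
${CROSS_COMPILE}gcc --version | head -n1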

The kernel has many build targets, though we will only use the Image.gz and modules targets. Use make help to view a list of available targets. Finally, let’s build the Linux kernel. Run the make command from the Linux kernel source root directory.

make -j$(nproc) Image.gz modules

Sometimes I forget to do the exports or switch terminals, so it is handy to keep the full version of the build command around.

make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image.gz modules

Nevertheless, it is likely that the above command will fail due to missing software required for the build. Fortunately, kbuild does a good job of telling you what is missing, so one can usually identify what to install by analysing the build output messages. On Debian-based systems, developers often need to install flex, bison, and ncurses.

sudo apt install flex bison libncurses-dev

There is also a minimal requirements to compile the kernel page with a list of the software required to build Linux and instructions on how to check that your system has the minimal required versions.
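
The kernel tree also carries a small helper that prints the versions of most of these tools at once, which makes comparing against that page quicker:

# Print the versions of the toolchain and utilities the build relies on
./scripts/ver_linux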

The make command will instruct kbuild Makefiles to start the build process. The main goal of the kbuild Makefiles is to produce the kernel image (vmlinux) and modules [2]. Akin to Kconfig files, kbuild Makefiles are also present in most kernel directories, often working with the values assigned for the symbols defined by the former.

The whole build is done recursively — a top Makefile descends into its subdirectories and executes each subdirectory’s Makefile to generate the binary objects for the files in that directory. Then, these objects are used to generate the modules and the Linux kernel image. [1]

If everything goes right, you should see an Image file generated under arch/arm64/boot/ and files such as modules.order under the Linux source root directory.
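
A quick way to confirm the image was built for the right architecture (the file utility typically recognizes ARM64 kernel images):

# Both the uncompressed and compressed images land under arch/arm64/boot/
ls -lh arch/arm64/boot/Image arch/arm64/boot/Image.gz
file arch/arm64/boot/Image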

3) Install the kernel modules

Mount the VM root filesystem and install the modules there.

cd $VM_DIR
mkdir mountpoint_arm64
# Be sure the VM is shut down
guestmount -w -a iio_arm64.qcow2 -m /dev/sda2 mountpoint_arm64/
cd $IIO_TREE
make INSTALL_MOD_PATH=$VM_DIR/mountpoint_arm64/ modules_install
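# Optional sanity check: the freshly installed modules should now sit in a
# directory named after the kernel release you just built
ls $VM_DIR/mountpoint_arm64/lib/modules/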
guestunmount mountpoint_arm64

Change the VM start command/script to point to the newly generated kernel image.

#!/bin/bash

IIO_DIR=$HOME/iio_workshop
VM_DIR=$IIO_DIR/vm_dir/
BOOT_DIR=$VM_DIR/iio_arm64_boot/
IIO_TREE=$IIO_DIR/iio/

qemu-system-aarch64 \
  -M virt,gic-version=3 \
  -m 2G -cpu cortex-a57 \
  -smp 2 \
  -netdev user,id=net0 -device virtio-net-device,netdev=net0 \
  -initrd $BOOT_DIR/initrd.img-6.1.0-5-arm64 \
  -kernel $IIO_TREE/arch/arm64/boot/Image \
  -append "console=ttyAMA0 loglevel=8 root=/dev/vda2 rootwait" \
  -device virtio-blk-pci,drive=hd \
  -drive if=none,file=$VM_DIR/iio_arm64.qcow2,format=qcow2,id=hd \
  -nographic

Log into the VM and run uname -a to check you are now running the kernel just built. Congratulations, you’ve compiled and boot-tested a Linux kernel.

To finish our development setup, update the virsh VM to use our kernel images. Here’s how to do it by recreating the virsh VM.

virsh undefine iio-arm64

Update create_vm_virsh_iio_workshop.sh with the path to our images.

#!/bin/bash

# Part 2 version - custom kernel - adapted to run with sudo/root and custom resized qemu
IIO_DIR=<full_path_to_your_iio_workshop_directory>
VM_DIR=$IIO_DIR/vm_dir/
BOOT_DIR=$VM_DIR/iio_arm64_boot/
IIO_TREE=$IIO_DIR/iio/

virt-install \
        --name "iio-arm64" \
        --arch aarch64 \
        --machine virt \
        --cpu cortex-a57 \
        --memory 2048 \
        --osinfo detect=on,require=off \
        --check path_in_use=off \
        --features acpi=off \
        --import \
        --disk path=$VM_DIR/iio_arm64.qcow2 \
        --boot kernel=$IIO_TREE/arch/arm64/boot/Image,initrd=$BOOT_DIR/initrd.img-6.1.0-5-arm64,kernel_args="console=ttyAMA0 loglevel=8 root=/dev/vda2 rootwait" \
        --network bridge:virbr0 \
        --graphics none

Run the VM create script.

sudo ./create_vm_virsh_iio_workshop.sh

3.1) Installing the kernel image

Often, kernel developers also need to explicitly install the Linux kernel image on their target test machines. Essentially, installing a new kernel image just means replacing the vmlinuz/Image/zImage/bzImage/uImage file which contains the Linux boot executable. However, some platforms (such as x86 and arm64) have fancier boot procedures, with boot loaders (e.g. GRUB) that won’t find kernel images without very specific configuration pointing to them, which might mount temporary file systems (initrd), load drivers prior to mounting the root filesystem, and so on. To help set up those additional boot files and configuration, the Linux kernel has an install rule. So, kernel developers may also run make install or make install INSTALL_PATH=<path_to_bootfs> when deploying kernels to those platforms.
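
For reference, in this cross-compiled setup the equivalent invocation would look roughly like the sketch below, with <path_to_bootfs> being wherever the target’s boot partition is mounted (we do not actually need it here, as explained next):

# Illustrative only: install the kernel image and related boot files into a mounted boot partition
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_PATH=<path_to_bootfs> install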

For this setup we shall not bother with that. Because we instructed QEMU (with -kernel) and libvirt (with --boot kernel=...) to pick up the kernel image from our build directory (which happens to also be the source directory in our setup), and we are reusing the initrd file, we don’t need to run the installation rule.

Complementary Commands

One may also download cross compiler toolchains from different vendors. For instance, ARM provides an equivalent cross compiler that you may download if you have trouble finding a suitable distro package.

wget -O $IIO_DIR/gcc-aarch64-linux-gnu.tar.xz https://developer.arm.com/-/media/Files/downloads/gnu-a/10.3-2021.07/binrel/gcc-arm-10.3-2021.07-x86_64-aarch64-none-linux-gnu.tar.xz
tar -xf $IIO_DIR/gcc-aarch64-linux-gnu.tar.xz -C $IIO_DIR

Sometimes identifying the cross compiler for your combination of host and target machines requires some understanding of what is called the compiler triplet. Conceptually, the compiler triplet should contain three fields: the name of the CPU family/model, the vendor, and the operating system name [3]. However, sometimes the vendor is omitted, so one may find a triplet like x86_64-freebsd (FreeBSD kernel for 64-bit x86 CPUs) [3]. It is also common to see the operating system information split into two separate fields, one indicating the kernel and the other describing the runtime environment or C library in use. The Debian package for x86-64 gcc is an example of this triplet format mutation: gcc-x86-64-linux-gnu (compiler for 64-bit x86 targets that will run a Linux kernel and have GNU glibc in their runtime). But things can get even more unintuitive when system call conventions or Application Binary Interfaces (ABIs) are specified in the OS field, as in arm-linux-gnueabi (compiler for 32-bit ARM targets that will run Linux using the EABI system call convention) or arm-none-eabi (compiler for 32-bit ARM targets that will run no OS (bare metal) using the EABI convention).

In any case, you may point CROSS_COMPILE to the full prefix of a cross compiler that is not in your PATH. For example:

export CROSS_COMPILE=$IIO_DIR/gcc-aarch64-linux-gnu/bin/aarch64-none-linux-gnu-
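
Asking the compiler for its target triplet is a handy way to double-check you grabbed the right toolchain:

# Should print something like aarch64-none-linux-gnu
${CROSS_COMPILE}gcc -dumpmachine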

Conclusion

This post described how to build the Linux kernel and install it into a virtual machine. To accomplish that, it also covered basic concepts of Linux kernel build configuration to guide readers into generating feasible .config files. By this point you should be able to configure, build, install, and boot test the Linux kernel.

History

  1. V1: Release

References

[1] Javier Martinez Canillas. “Kbuild: the Linux Kernel Build System”. (2012) URL: https://www.linuxjournal.com/content/kbuild-linux-kernel-build-system

[2] Michael Elizabeth Chastain and Kai Germaschewski and Sam Ravnborg. “Linux Kernel Makefiles”. (2023) URL: https://www.kernel.org/doc/html/latest/kbuild/makefiles.html

[3] “Target Triplet”. (2019) URL: https://wiki.osdev.org/Target_Triplet

