Westermo NetBox is a toolbox for embedded systems based on Buildroot.
NetBox provides easy access to all Westermo-specific customizations made to Linux and other Open Source projects used in WeOS. You can use it as the base for any application, but it is strongly recommended for all container applications running in WeOS. Official WeOS container applications will be based on NetBox.
NetBox is built using the Buildroot External Tree facility. This is a layered approach which enables customizing without changing Buildroot. You may use NetBox the same way NetBox uses Buildroot; see the App-Demo project for an example -- click Use this template -- to create your own.
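Under the hood this is plain Buildroot. A minimal sketch of the external-tree mechanism, assuming NetBox is checked out in `~/src/netbox` with Buildroot as the `buildroot/` submodule (paths and submodule name are illustrative; the NetBox top-level Makefile wraps all of this for you):

```
# Illustrative only: BR2_EXTERNAL points Buildroot at the external layer.
~/src/netbox$ make -C buildroot BR2_EXTERNAL=$PWD O=$PWD/output list-defconfigs
```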
To contribute, see the file HACKING for details.
NetBox uses the same versioning as Buildroot, with an appended `-rN` to denote the revision of Buildroot with Westermo extensions. E.g., the first release is 2020.02-r1.
The NetBox project follows the Westermo product platform naming. This makes it easy to match a container image to the Westermo device it works on:
| Architecture | Platform Name | Nightly App   | Nightly OS   |
|--------------|---------------|---------------|--------------|
| arm9         | Basis         | basis.app     | basis.os     |
| powerpc      | Coronet       | coronet.app   | coronet.os   |
| arm pj4      | Dagger        | dagger.app    | dagger.os    |
| aarch64      | Envoy         | envoy.app     | envoy.os     |
| aarch64      | Ember         | [ember.app][] | [ember.os][] |
| x86_64       | Zero          | zero.app      | zero.os      |
Note: the Envoy platform also includes support for the Marvell ESPRESSObin (Globalscale) and MACCHIATObin (SolidRun) boards.
In addition to the various NetBox platforms, there are two major flavors available. The current first-class citizen is apps, but it is also possible to build an entire operating system image, including the Linux kernel and the same userland already available to apps. To select a pre-configured NetBox flavor for a given platform, use one of:
```
netbox_app_$platform
netbox_os_$platform
```
The build environment requires the following tools, tested on Ubuntu 21.04 (x86_64): make, gcc, g++, m4, python, and the OpenSSL development package.
On Debian/Ubuntu based systems:
```
~$ sudo apt install build-essential m4 libssl-dev python
```
To run in Qemu, either enable the host-side build in `make menuconfig`, or, for quicker builds, use the version shipped with your Linux host.
On Debian/Ubuntu based systems:
```
~$ sudo apt install qemu-system
```
For smooth sailing, after install, add the following line to the file `/etc/qemu/bridge.conf` (create the file if it does not exist):

```
allow all
```
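If you prefer a one-liner, something like this should do (it also creates the `/etc/qemu` directory in case that is missing):

```
~$ sudo mkdir -p /etc/qemu
~$ echo "allow all" | sudo tee -a /etc/qemu/bridge.conf
```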
For network access to work out of the box in your Qemu system, install the virt-manager package; this creates a host bridge called `virbr0`:
```
~$ sudo apt install virt-manager
```
First, clone the repository and optionally check out the tagged release you want to use. The build system clones the Buildroot submodule on the first build, but you can also run the command manually:
```
~$ cd ~/src
~/src$ git clone https://github.com/westermo/netbox.git
~/src$ cd netbox
~/src/netbox$ git submodule update --init
```
Second, select your target `_defconfig`; see the `configs/` directory, or use `make list-defconfigs` to see all available Buildroot and NetBox configs. Here we select the defconfig for the Zero (x86-64) NetBox app flavor:
```
~/src/netbox$ make netbox_app_zero_defconfig
```
Note: if you want to use the `gdbserver` on the target, this is the point where you have to enable it in `make menuconfig`. The setting you want is under Toolchain --> "Copy gdb server to the Target". You also want "Build options" --> "build packages with debugging symbols".
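If you prefer setting this non-interactively, the Buildroot symbols behind these menu entries are, to the best of our knowledge, the following; verify the exact names in your Buildroot version:

```
# Assumed Buildroot symbols -- double-check in make menuconfig.
BR2_TOOLCHAIN_EXTERNAL_GDB_SERVER_COPY=y  # Toolchain --> Copy gdb server to the Target
BR2_ENABLE_DEBUG=y                        # Build options --> build packages with debugging symbols
```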
Third, type `make` and fetch a cup of coffee, because the first build will take some time:
```
~/src/netbox$ make
```
Done. See the `output/images/` directory for the resulting SquashFS based root file system: `netbox-app-zero.img`.
Tip: the same source tree can easily be used to build multiple defconfigs. Use the Buildroot `O=` variable to change the default `output/...` directory, e.g. to `O=/home/$LOGNAME/src/netbox/zero` in one terminal window and `O=/home/$LOGNAME/src/netbox/coronet` in another. This way, when working with packages, e.g. editing code, you can build for multiple targets at the same time without cleaning the tree.
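For instance, a two-target workflow might look like this (the directories are just examples):

```
# Terminal 1: configure and build Zero in its own output directory
~/src/netbox$ make O=/home/$LOGNAME/src/netbox/zero netbox_app_zero_defconfig
~/src/netbox$ make O=/home/$LOGNAME/src/netbox/zero

# Terminal 2: build Coronet in parallel, without touching the Zero tree
~/src/netbox$ make O=/home/$LOGNAME/src/netbox/coronet netbox_os_coronet_defconfig
~/src/netbox$ make O=/home/$LOGNAME/src/netbox/coronet
```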
To update your local copy of NetBox from git, you need to update both NetBox and the Buildroot submodule, like when you first cloned (above):
```
~/src/netbox$ git pull
~/src/netbox$ git submodule update --init
```
All NetBox OS builds are supported by Qemu. This is a cornerstone of NetBox, and the principal testing strategy at Westermo. It can be highly useful for quick turnarounds when developing and testing new features. Make sure you have built one of the OS images before running, e.g.:
```
~/src/netbox$ make netbox_os_zero_defconfig
```
Any feature targeting OSI layer 3 and above needs nothing else to run. For more advanced test setups, with multiple networked Qemu nodes, we highly recommend Qeneth.
To start a single node:
```
~/src/netbox$ make run
```
Note: you may need `sudo`, unless you have set up your system with capabilities: https://troglobit.com/2016/12/11/a-life-without-sudo/
By default, this command starts the `utils/qemu` script and tries to connect one interface to a host bridge called `virbr0`. That bridge only exists if you installed virt-manager (above); if not, have a look at the `utils/qemu` script arguments and environment variables, or try:
```
~/src/netbox$ make QEMU_NET=tap run
```
Qemu nodes start from the same read-only SquashFS image as built for all targets. For persistent storage, a disk image file on the host system is used. This is controlled by the environment variable `$QEMU_MNT`, which defaults to `VENDOR-config-PLATFORM.img`, provided `~/.cache` exists. E.g., for NetBox Zero OS: `~/.cache/netbox-config-zero.img`. See the helper script `utils/qemu` for more information.
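Assuming the variable is picked up from the environment like the other `QEMU_*` knobs, you should be able to point the persistent store elsewhere on the command line:

```
~/src/netbox$ make run QEMU_MNT=/tmp/zero-test.img
```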
When persistent storage is enabled and working, the `/mnt` directory on the target system is used to store an OverlayFS of the target's `/etc`, `/root`, and `/var` directories. I.e., changing a file in either of these directories (exceptions in `/var` exist) is persistent across reboots.
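A quick way to verify persistence from the target console (the file name is just an example):

```
# echo "persistence test" > /etc/test.txt
# reboot
...
# cat /etc/test.txt
persistence test
```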
NetBox supports 9P file sharing between the host and Qemu targets. Set the directory to share, using an absolute path, in `QEMU_HOST`:
```
~/src/netbox$ make run QEMU_HOST=/tmp
```
When booting your target system with `make run`, the host's `/tmp` directory is available as `/host` on the target system.
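For example, to hand a locally built binary to the target through the share (`my-test-binary` is hypothetical):

```
~$ cp my-test-binary /tmp/                               # on the host
# cp /host/my-test-binary /tmp/ && /tmp/my-test-binary   # on the target
```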
Here is an example run of a Zero OS build; the persistent store for all your configuration (in `/etc` or `/home`) is stored in a disk image file named `~/.cache/netbox-config-zero.img`:
```
~/src/netbox$ make distclean
~/src/netbox$ make netbox_os_zero_defconfig
~/src/netbox$ make
~/src/netbox$ make run
```
Note: you may still need to call `sudo make run`, see the note on capabilities, above.
If you remembered "Copy gdb server to the Target", above, you can debug failing programs on your target (Qemu) system. You also need the `gdb-multiarch` program installed on your host system; the regular `gdb` only supports your host's architecture:
```
~$ sudo apt install gdb-multiarch
```
To open the debug port in Qemu, start NetBox with `QEMU_GDB=1`; this opens `localhost:4712` as your debug port (4711 is used for kgdb):
```
$ make run QEMU_GDB=1
```
When logged in, start the `gdbserver` service:
```
# initctl enable gdbserver
# initctl reload
```
From your host, in another terminal (with the same `$O` set!) in the same NetBox directory, you can now connect to your target and attach to, or remotely start, the program you want to debug. NetBox has a few extra tricks up its sleeve when it comes to remote debugging. The below commands are defined in the `.gdbinit` file:
```
$ make debug
GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1) 9.2
Copyright (C) 2020 Free Software Foundation, Inc.
For help, type "help".
(gdb) user-connect
(gdb) user-attach usr/sbin/querierd 488
0x00007f5812afc425 in select () from ./lib64/libc.so.6
(gdb) cont
Continuing.
```
For more information on how to use GDB, see the manual, or, if you want to know a little more about what happens behind the scenes, see the blog post about Debugging an embedded system.
The NetBox app builds can be run in LXC, or LXD, on your PC. With LXD it is even possible to run non-native archs, like Arm64, using the Linux "binfmt misc" mechanism, which runs all binaries through `qemu-aarch64`. This is only documented in the predecessor to NetBox, [myrootfs][].
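On Debian/Ubuntu hosts, the "binfmt misc" handlers typically come with the qemu-user-static package; a quick sanity check might look like:

```
~$ sudo apt install qemu-user-static
~$ ls /proc/sys/fs/binfmt_misc/ | grep aarch64
qemu-aarch64
```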
To run a NetBox app in LXC, first install all dependencies (lxc-utils, libvirt, etc.) and create the necessary directories:
```
$ sudo mkdir -p /var/lib/lxc/images/
$ sudo mkdir -p /var/lib/lxc/foo/mnt
```
Since we are playing it safe, we've built the Zero (x86_64) NetBox app image, so let's install it in the `images/` directory. Images can be shared between multiple LXC container apps:
```
$ sudo cp output/images/netbox-app-zero.img /var/lib/lxc/images/foo.img
```
The LXC `config` file might need some tweaking, in particular if you use a different path to the `.img` file. You probably want to change the host bridge as well. Here we use `lxcbr0` only because it is the default in libvirt installs on Debian/Ubuntu and gives us NAT:ed access to the Internet from our app(s) via the host. All this is already set up by libvirt, so we can focus on the LXC container `config`:
```
$ sudo sh -c "cat >>/var/lib/lxc/foo/config" <<-EOF
lxc.uts.name = foo
lxc.tty.max = 4
lxc.pty.max = 1024
#lxc.hook.pre-mount = pre-mount.sh /var/lib/lxc/images/foo.img /var/lib/lxc/foo/rootfs
#lxc.rootfs.path = overlayfs:/var/lib/lxc/foo/rootfs:/var/lib/lxc/foo/delta0
#lxc.rootfs.options = -t squashfs
lxc.rootfs.path = loop:/var/lib/lxc/images/foo.img
lxc.mount.auto = cgroup:mixed proc:mixed sys:mixed
#lxc.mount.entry = run run tmpfs rw,nodev,relatime,mode=755 0 0
#lxc.mount.entry = shm dev/shm tmpfs rw,nodev,noexec,nosuid,relatime,mode=1777,create=dir 0 0
lxc.mount.entry = /var/lib/lxc/foo/mnt mnt none bind 0 0
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = lxcbr0
#lxc.init.cmd = /sbin/init finit.debug
#lxc.seccomp.profile = /usr/share/lxc/config/common.seccomp
lxc.apparmor.profile = lxc-container-default-with-mounting
EOF
```
The last two lines are needed on systems with Seccomp and/or AppArmor. Uncomment the one you need; see the host's dmesg when `lxc-start` fails with mysterious error messages. For convenience, the Debian/Ubuntu one is uncommented already.
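For example, to look for the relevant denials on the host after a failed start:

```
$ sudo dmesg | grep -iE 'apparmor|seccomp' | tail
```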
Note 1: you may have to create the directory that is bind mounted into the container, as configured in `/var/lib/lxc/foo/config` on line 11:
```
$ sudo mkdir -p /var/lib/lxc/foo/mnt
```
Note 2: you may have to add the following two lines to your AppArmor profile to enable writable /etc, /var, /home, and /root directories. The file is `/etc/apparmor.d/lxc/lxc-default-with-mounting`:

```
mount fstype=tmpfs,
mount fstype=overlay,
```
Reload AppArmor, or restart your system, to activate the changes. Then start the container with:
```
$ sudo lxc-start -n foo
```
To see what actually happens when it starts up, append `-F`. Attach to the container's `/dev/console` with:
```
$ sudo lxc-console -n foo -t 0 -e '^p'
```
The last option, `-e '^p'`, remaps the control key sequence used to detach from your container and return to your host: Ctrl-p q.
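When you are done, the container can be stopped from the host with the standard LXC tooling:

```
$ sudo lxc-stop -n foo
```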