As Solo5 is essentially a piece of "middleware" interfacing unikernel-style applications with their host systems, it is not an end-developer product as such.
To get started as a developer with Solo5, please refer primarily to the instructions provided by the unikernel project you intend to develop applications with:
- MirageOS: https://mirage.io/wiki/install
- IncludeOS: http://www.includeos.org/get-started.html
That said, this document provides general information -- not specific to any unikernel -- about building Solo5 and running Solo5-based unikernels.
Solo5 itself has the following build dependencies:
- a 64-bit Linux, FreeBSD or OpenBSD system (see also Supported targets for further requirements),
- a C11 compiler; recent versions of GCC and clang are supported,
- GNU make,
- full host system headers (on Linux, kernel headers are not always installed by default),
- on Linux only, pkg-config and libseccomp >= 2.3.3 are required (see the example below).
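On a Debian or Ubuntu system, for example, the build dependencies can typically be installed with something like the following; the package names here are an assumption and differ between distributions:

sudo apt-get install build-essential pkg-config libseccomp-dev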
To build Solo5, run:
./configure.sh
make # gmake on BSD systems
This will build all targets and tenders supported by the host system and
generate Solo5 toolchain wrappers in toolchain/bin.
To install Solo5 system-wide (or in an alternate location specified by
./configure.sh --prefix=PREFIX), you can use make install.
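For example, to configure, build and install into a user-writable location (the path shown is only an illustration):

./configure.sh --prefix=$HOME/solo5
make            # gmake on BSD systems
make install    # gmake install on BSD systems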
As of Solo5 0.7.0, experimental support is provided for producing a Solo5
toolchain which can cross-compile targets (bindings) to a different
architecture than that of the host system. This is yet to be fully
documented; in the meantime, see ./configure.sh --help and the top-level
GNUmakefile for other install-* targets useful for such a setup.
To run the built-in self-tests (requires bash and coreutils):
tests/setup-tests.sh # As root, sets up `tap100` for networking
tests/run-tests.sh # Root required to run network tests
Supported targets, host operating systems/hypervisors and processor architectures:
Production:
- hvt: Linux/KVM, using solo5-hvt as a tender, on the x86_64 architecture.
- hvt: FreeBSD vmm, using solo5-hvt as a tender, on the x86_64 architecture. FreeBSD 11.4 or later is required; FreeBSD 12 or later is recommended for capsicum(4) support.
- hvt: Linux/KVM, using solo5-hvt as a tender, on the aarch64 architecture. You will need hardware capable of running a recent (v4.14+) 64-bit mainline kernel and a 64-bit Linux distribution.
Experimental:
- hvt: OpenBSD vmm, using solo5-hvt as a tender, on the x86_64 architecture. OpenBSD 6.4 or later is required; 6.7 or later is recommended for full W^X support.
- spt: Linux systems on the x86_64, ppc64le and aarch64 architectures, using solo5-spt as a tender. A Linux distribution with libseccomp >= 2.3.3 is required.
- muen: The Muen Separation Kernel, on the x86_64 architecture. Muen commit f10bd6b or later is required.
Limited:
- virtio: Any hypervisor which virtualizes an x86_64 system with virtio devices (e.g. Linux/KVM with QEMU as a monitor, Google Compute Engine). See below under "virtio: Limitations" for details.
- xen: Xen hypervisor 4.10 or later on x86_64 only, using the PVHv2 domU ABIs. The xen bindings are not a full Solo5 target; they exist for the purpose of providing low-level bootstrap code to MirageOS and do not provide any network or block I/O functionality.
Solo5 can be installed via opam and provides a toolchain which can be used to compile a simple program as a unikernel.
$ opam install solo5
$ eval $(opam env)
$ cat >main.c <<EOF
#include <solo5.h>
int solo5_app_main(const struct solo5_start_info *info) {
    return 0;
}
EOF
$ export ARCH=x86_64 # can be changed to aarch64
$ $ARCH-solo5-none-static-cc -c main.c -o main.o
$ cat >manifest.json <<EOF
{
"type": "solo5.manifest",
"version": 1,
"devices": []
}
EOF
$ solo5-elftool gen-manifest manifest.json manifest.c
$ $ARCH-solo5-none-static-cc -c manifest.c -o manifest.o
$ $ARCH-solo5-none-static-ld -z solo5-abi=hvt manifest.o main.o -o main.hvt
$ solo5-hvt main.hvt
            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
Solo5: Bindings version v0.7.5
Solo5: Memory map: 512 MB addressable:
Solo5: reserved @ (0x0 - 0xfffff)
Solo5: text @ (0x100000 - 0x103fff)
Solo5: rodata @ (0x104000 - 0x105fff)
Solo5: data @ (0x106000 - 0x10afff)
Solo5: heap >= 0x10b000 < stack < 0x20000000
Solo5: solo5_exit(0) called
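The same object files can be linked for a different Solo5 target by changing the solo5-abi value passed to the linker wrapper. As a sketch, assuming the spt target and tender are available on your system (they are built on Linux hosts), the following produces and runs an spt binary instead:

$ $ARCH-solo5-none-static-ld -z solo5-abi=spt manifest.o main.o -o main.spt
$ solo5-spt main.spt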
Solo5 itself does not provide a high-level management or orchestration layer for
unikernels -- this is intended to be provided by separate downstream projects.
If you are coming from Linux containers, you can think of Solo5 as conceptually
occupying the same space in the stack as, for example, runc.
If you are looking for a high-level stack for deploying unikernels, one such project is Albatross.
The following examples use the standalone test_net unikernel, which is built as part of the normal Solo5 build process.
test_net is a minimalist network test which will respond only to ARP and ICMP echo requests sent to the hard-coded IP address of 10.0.0.2. It accepts two possible command line arguments: either verbose for verbose operation or limit to terminate after having sent 100,000 ICMP echo replies.
By convention, we will use the tap100 interface to talk to the unikernel.
To set up the tap100 interface on Linux, run (as root):
ip tuntap add tap100 mode tap
ip addr add 10.0.0.1/24 dev tap100
ip link set dev tap100 up
To set up vmm and the tap100 interface on FreeBSD, run (as root):
kldload vmm
kldload if_tap
sysctl -w net.link.tap.up_on_open=1
ifconfig tap100 create 10.0.0.1/24 link0
To set up vmm and the tap100 interface on OpenBSD, run (as root):
cd /dev
./MAKEDEV tap100
ifconfig tap100 inet 10.0.0.1 netmask 255.255.255.0
In case your unikernel needs to access the internet (which is not the case in this example), you may need to enable IP forwarding and set up NAT on your host system.
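On Linux, for example, this can be done (as root) roughly as follows; the outbound interface name eth0 is an assumption, so substitute your own, and note that these settings do not persist across reboots:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE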
The hvt ("hardware virtualized tender") target supports Linux, FreeBSD and OpenBSD systems and uses hardware virtualization to isolate the guest unikernel.
On Linux, the solo5-hvt tender only requires access to /dev/kvm and
/dev/net/tun, and thus does NOT need to run as root provided you have
granted the user in question the correct permissions. Most recent Linux
distributions provide a kvm group for this purpose, and /dev/net/tun is
normally world-writable.
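For example, on distributions that provide the kvm group, it is usually enough to add your user to that group and log in again:

sudo usermod -aG kvm $USER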
On FreeBSD, root privileges are currently required to run the solo5-hvt tender
in order to access the vmm APIs.
On OpenBSD, the solo5-hvt tender must be started as root; however, it will
drop privileges to the standard _vmd user and further use pledge(2) to
lower its privileges.
To launch the unikernel, in tests/test_net/ run:
../../tenders/hvt/solo5-hvt --mem=2 --net:service0=tap100 -- test_net.hvt verbose
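While the unikernel is running, you can verify from another terminal on the host that it responds to ICMP echo requests:

ping 10.0.0.2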
Use ^C to terminate the unikernel.
The option --mem=2 requests that 2 MB of host memory be allocated, but not
committed, to the unikernel. If it is not specified, a default of 512 MB is
used.
The option --net:service0=tap100 requests that the tender attach the network
device with the logical name service0, declared in the unikernel's
application manifest, to the host's TAP interface named tap100.
All devices declared by the unikernel must be attached for it to be allowed
to run. To query a unikernel's application manifest from an existing binary,
you can use solo5-elftool:
../../elftool/solo5-elftool query-manifest test_net.hvt
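For reference, a manifest declaring the single network device with the logical name service0 used by test_net would look roughly like the following sketch; the NET_BASIC device type identifier is an assumption here, so check the manifests shipped with the Solo5 test programs for the authoritative spelling:

{
    "type": "solo5.manifest",
    "version": 1,
    "devices": [ { "name": "service0", "type": "NET_BASIC" } ]
}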
The spt ("sandboxed process tender") target currently supports Linux systems only, and uses a strict (minimal whitelist) seccomp sandbox to isolate the guest unikernel, which runs as a user process on the host.
The solo5-spt tender does not require any special privileges to run.
To launch the unikernel, in tests/test_net/ run:
../../tenders/spt/solo5-spt --mem=2 --net:service0=tap100 -- test_net.spt verbose
Use ^C to terminate the unikernel.
The solo5-spt tender has the same common options as solo5-hvt. Refer to
the hvt example in the previous section for a brief description.
The solo5-virtio-run script provides a wrapper to correctly launch
qemu-system-x86_64 or bhyve on the host system. Using it is not required; by
default it will print the commands used to set up and launch the guest VM.
You can run these manually if desired.
To launch the unikernel, in tests/test_net/ run:
../../scripts/virtio-run/solo5-virtio-run.sh -n tap100 -- test_net.virtio verbose
Use ^C to terminate the unikernel.
The virtio target produces a unikernel that uses the multiboot protocol for booting. If your hypervisor can boot a multiboot-compliant kernel directly then this is the preferred method.
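For example, QEMU can load a multiboot kernel directly via -kernel. A sketch of booting test_net this way, reusing the tap100 interface set up earlier, might look like the following; the machine and device flags are assumptions and may need adjusting for your QEMU version:

qemu-system-x86_64 -machine q35 -display none -serial stdio \
    -kernel test_net.virtio -append "verbose" \
    -netdev tap,id=n0,ifname=tap100,script=no,downscript=no \
    -device virtio-net-pci,netdev=n0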
If your hypervisor requires a full disk image to boot, you can use the solo5-virtio-mkimage tool to build one.
solo5-virtio-mkimage supports the following image formats:
- raw: A raw disk image, written out as a sparse file.
- tar: A disk image suitable for uploading to Google Compute Engine.
Note that solo5-virtio-mkimage has a fairly specific set of requirements on
the host system, and in general runs only on Linux distributions with
non-broken syslinux packages available. For this reason, the script has
built-in support for running as a Docker container using the -d option.
For example, to produce a raw disk image containing the test_hello unikernel
and supply a kernel command line of "Hello, Solo5!", in tests/test_hello run:
../../scripts/virtio-mkimage/solo5-virtio-mkimage.sh -d -f raw test_hello.img test_hello.virtio "Hello, Solo5!"
A QEMU command line suitable for booting this image is:
qemu-system-x86_64 -machine q35 -display none -serial stdio -drive file=test_hello.img,if=virtio,format=raw
Use ^C to terminate the unikernel.
The virtio target was the initial target supported by Solo5 -- while we are keeping it around as it is useful to some users, it is essentially a "compatibility layer" providing the Solo5 interface atop "legacy" full-system virtualization. As the goals of Solo5 have since evolved, we do not expect to devote a substantial amount of time to the further development of virtio.
Therefore, we recommend that new deployments use the hvt or spt target instead.
As the virtio bindings internally support only a single network and/or block device, if multiple such devices are declared in the application manifest, only the first valid "acquire" call of each device type will succeed. Likewise, if multiple such virtual hardware devices are presented to the VM by the hypervisor, only the first instance of each device type can be used by the unikernel.
The following virtual hardware devices are supported by the virtio target:
- the serial console, fixed at COM1 and 115200 baud
- the KVM paravirtualized clock, if available
- a single virtio network device attached to the PCI bus
- a single virtio block device attached to the PCI bus
Note that virtio does not support ACPI power-off. This can manifest itself in delays shutting down Solo5 guests running on hypervisors which wait for the guest to respond to ACPI power-off before performing a hard shutdown.
The muen target provides bindings to run Solo5-based unikernels as native subjects on the Muen Separation Kernel. Building systems with Muen is outside the scope of this guide; for details, refer to this in-depth article.