The package installer will put the containerlab binary in the /usr/bin directory and create the /usr/bin/clab -> /usr/bin/containerlab symlink. The symlink lets users save on typing when they use containerlab: clab <command>.
Containerlab runs on Windows powered by the Windows Subsystem for Linux (aka WSL), where you can run Containerlab directly or in a Devcontainer. Open up the Containerlab on Windows documentation for more details.
Running containerlab on macOS is possible on both ARM (M1/M2/M3/etc.) and Intel chipsets. For a long time, we had many caveats around M-chipset Macs, but with the introduction of ARM64-native NOSes like Nokia SR Linux and Arista cEOS, combined with Rosetta emulation for x86_64-based NOSes, it is now possible to run containerlab on ARM-based Macs.
Since we wanted to share our experience with running containerlab on macOS in detail, we have created a separate Containerlab on macOS guide.
Containerlab is also available as a container image. The latest containerlab release can be pulled with:
docker pull ghcr.io/srl-labs/clab
To pick any of the released versions starting from release 0.19.0, use the version number as a tag, for example, docker pull ghcr.io/srl-labs/clab:0.19.0
Since containerlab itself deploys containers and creates veth pairs, its run instructions are a bit more involved, but it is still a copy-pasteable command.
For example, if your lab files are contained within the current working directory - $(pwd) - then you can launch the containerlab container as follows:
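A minimal sketch of such an invocation, assuming the host's Docker socket and the current directory are shared with the container; the exact flags and mounts may differ per setup, and the topology filename is only an illustration:

```shell
# Illustrative sketch: run containerlab from its container image.
# The Docker socket lets clab orchestrate nodes on the host engine,
# and the lab directory is bind-mounted under the same path so that
# relative paths inside the topology file keep working.
docker run --rm -it --privileged \
    --network host \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --pid="host" \
    -w "$(pwd)" \
    -v "$(pwd):$(pwd)" \
    ghcr.io/srl-labs/clab containerlab deploy -t mylab.clab.yml
```

The --privileged, host-network, and host-PID flags are needed because containerlab creates veth pairs and enters node namespaces, which requires host-level access.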
Deploy your lab. You can see a demo of this workflow in this YouTube video.
Or use the Devcontainer if running another VM is not your thing.
If you run an Intel Mac, you can still use OrbStack to deploy a VM, but you will not need to worry about hard-to-find ARM64 images, since your processor runs x86_64 natively.
For quite some time, we have been saying that containerlab and macOS are a challenging mix. This statement has been echoed through multiple workshops/demos and was based on the following reasons:
ARM64 Network OS images: With the shift to the ARM64 architecture made by Apple (and Microsoft), we found ourselves in a situation where 99% of existing network OSes were not compiled for ARM64. This meant that containerlab users would have to rely on x86_64 emulation via Rosetta or QEMU, which imposes a significant performance penalty, often making the emulation unusable for practical purposes.
Docker on macOS: Since containerlab relies on Docker for container orchestration, it needs Docker to be natively installed on the host. On macOS, Docker is always provided as a Linux/arm64 VM that runs the Docker daemon, with docker-cli running natively on macOS. You can imagine that dealing with a VM that runs a network topology poses some UX challenges, like getting access to the exposed ports or dealing with files on macOS that need to be accessible to the Docker VM.
Linux on macOS? It is not only Docker that containerlab is built on. We leverage some Linux kernel APIs (like netlink), either directly or via Docker, to set up links, namespaces, bind mounts, etc. Naturally, Darwin (the macOS kernel) is not Linux, and while it is POSIX-compliant, it is not a drop-in replacement for Linux. This means that some of the Linux-specific features containerlab relies on are simply not present on macOS.
Looking at the above challenges, one might think that containerlab on macOS is a lost cause. However, things have recently taken a turn for the better, and we are happy to say that for certain labs, Containerlab on macOS might even be a better (!) choice overall.
As a long-time macOS user, Roman recorded an in-depth video demonstrating how to run containerlab topologies on macOS using the tools of his choice. You can watch the video or jump straight to the text version of the guide below.
The first thing one needs to understand is that if you run macOS on an ARM chipset (M1+), you should use ARM64 network OS images whenever possible. This will give you the best performance and compatibility.
With Rosetta virtualisation it is possible to run x86_64 images on ARM64, but this comes with a performance penalty that may even prevent some nodes from working at all.
VM-based images
VM-based images built with hellt/vrnetlab require nested virtualization support, which is only available on M3 and newer chips with macOS 15 and above.
If you happen to satisfy these requirements, please let us know in the comments which images you were able to run on your M3+ Mac.
Finally, some good news on this front: vendors have started to release, or at least announce, ARM64 versions of their network OSes. Nokia was first to release a preview version of its freely distributed SR Linux for ARM64, and Arista announced imminent cEOS availability sometime in 2024.
You can also get an FRR container for the ARM64 architecture from their container registry.
And if all you have to play with is pure control plane, you can use Juniper cRPD, which is also available for ARM64.
Yes, SR Linux, cEOS, cRPD, and FRR do not cover all the network OSes out there, but it is a good start, and we hope that more vendors will follow the trend.
The good news is that almost all of the popular applications we see being used in containerlab topologies are already built for ARM. Your streaming telemetry stack (gnmic, prometheus/influx, grafana), regular client-emulating endpoints such as Alpine, and the collection of network-related tools in the network-multitool image have supported the ARM architecture for a while. You can leverage the vast ecosystem of multi-arch applications available in the public registries.
If the image you're looking for is not available for ARM64, you can still try running the AMD64 version of the image under Rosetta emulation. Rosetta is a macOS translation layer that allows you to run x86_64 code on the ARM64 architecture.
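For example, Docker's --platform flag can explicitly request the AMD64 variant of an image on an ARM64 host; the Docker VM then transparently handles the binary translation. The image name below is just an illustration:

```shell
# Pull and run the x86_64 variant of a multi-arch image on an ARM64 host.
docker pull --platform linux/amd64 alpine:latest

# Inside the container, the reported machine type should be the emulated
# x86_64 architecture rather than the host's arm64.
docker run --rm --platform linux/amd64 alpine:latest uname -m
```

Expect slower startup and runtime for emulated images, especially for anything CPU-bound.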
It has been known to work for the following images:
Ever since macOS switched to the ARM architecture for its processors, people in the "containers camp" have been busy making sure that Docker works well on the new architecture.
But before we start talking about Docker on ARM Macs, let's remind ourselves how Docker works on macOS with Intel processors.
At the heart of any product or project that enables the Docker engine on a Mac is a Linux VM that hosts the Docker daemon, aka "the engine". This VM is created and managed by the application that sets up Docker on your desktop OS. The Linux VM is a mandatory piece because the whole container ecosystem is built around Linux kernel features and APIs. Therefore, running Docker on any host with an operating system other than Linux requires a Linux VM.
As shown above, on Intel Macs, macOS runs the Darwin kernel on top of the AMD64 (aka x86_64) architecture, and consequently, the Docker VM runs the same architecture. Matching the host architecture allows for performant virtualization, since no processor emulation is needed.
Now let's see how things change when we switch to ARM Macs:
The diagram looks 99% the same as for Intel Macs, the only difference being the architecture that macOS runs on and, consequently, the architecture of the Docker VM. Now the host runs the ARM64 architecture, and the Docker VM is ARM64 as well.
Native vs Emulation
If Docker runs exactly the same on ARM Macs as it does on Intel Macs, then why is it suddenly a problem to run containerlab on ARM Macs?
Well, it all comes down to the requirement for ARM64 network OS images that we discussed earlier. Now that your Docker VM runs Linux/ARM64, only ARM64-native software can run in it natively, and we, as a network community, do not have many ARM64-native network OSes. It is getting better, but we are not there yet to claim 100% parity with the x86_64 world.
You should strive to run native images as much as possible, as this gives you the best performance and compatibility. But how do you tell if an image is ARM64-native? A lot of applications that you might want to run in your containerlab topologies are already ARM64-native and often available as multi-arch images.
You can run docker image inspect and grep the Architecture field to see if the image is ARM64-native:
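A sketch of such a check; the image name below is only an example, and the --format variant is an alternative that prints just the field without grep:

```shell
# Grep the Architecture field from the image metadata;
# "arm64" means the image will run natively on Apple silicon.
docker image inspect ghcr.io/nokia/srlinux | grep Architecture

# Equivalent, using a Go template to print only the field:
docker image inspect --format '{{.Architecture}}' ghcr.io/nokia/srlinux
```

Note that the reported architecture is that of the locally pulled variant; a multi-arch image may offer other architectures in the registry.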
Yes, the New Year is not quite here, but the gifts are. We have a new table style and layout that you will see whenever you run any of the containerlab commands that display tabular data.
Before, we used the classic table style, which was not very pretty but, more importantly, quite W I D E. We've seen you struggling with it, shrinking the terminal font size to fit the table view. It was not a great UX.
So we decided to change that, and in this release we introduce a new table style that looks nice(r) and is much more compact! Here is a 1:1 comparison of the table output for the SR Linux ACL lab, which has three nodes in it:
Old table style:
As you can see, the table is now almost half the width of the old one, which means you are less likely to have to shrink the font size to fit the table. Simply lovely.
Of course, it is not just the style that made the difference: you may notice that we removed some columns, like Container ID and the node index. We also made each node row use the vertical space, combining the Kind/Image and v4/v6 fields. This allowed us to narrow the overall table width.
Font matters
There is a small price to pay for the new table style: it might be sensitive to the font family you use. In the terminal, most fonts will work brilliantly, but when you dump the table into some UIs it might not look as pretty.
For example, when dumping the tables to the beautiful chalk.ist, select the Nova font.
We are curious to hear your feedback, negative or positive. If you feel that we should make the style configurable, please let us know in Discord.
Transparent management mode for VM-based nodes (beta)
As predicted, we have seen growth in container-native network OSes over the past couple of years. Slowly but surely, we are moving to a better place, where we can run networking topologies fully in containers.
But there is still a lot of legacy infrastructure out there, and we needed to support it. That was the prime motivation for integrating vrnetlab into containerlab and wrapping these fatty VMs with a thin container layer.
One particular quirk of vrnetlab was that the VMs used the QEMU user-mode network stack, which is a bit of a pain to work with. It boils down to all VMs having the same management interface IP address, which is, of course, not ideal. This was especially critical for network management systems, which went crazy when they saw the same IP address on all VMs calling home. It was time to fix that.
Thanks to @vista- and the work he started in hellt/vrnetlab#268, we started to chip away at what we call a "transparent management mode" for vrnetlab. In this mode, each VM gets a distinct IP address assigned to its management interface that matches the IP address you see in the containerlab table. With some tc magic we were able to achieve functional management connectivity while keeping telnet/console access intact.
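Conceptually, the tc part of this can be sketched as a pair of mirred redirects between the container's data interface and the VM's tap interface; the interface names below are hypothetical, and the actual rules vrnetlab installs may differ:

```shell
# Sketch: splice the container's eth0 and the VM's tap interface together
# at L2 so the VM's management NIC appears directly on eth0 with its own
# distinct IP address, bypassing the QEMU user-mode stack.
tc qdisc add dev eth0 clsact
tc filter add dev eth0 ingress matchall action mirred egress redirect dev tap0
tc qdisc add dev tap0 clsact
tc filter add dev tap0 ingress matchall action mirred egress redirect dev eth0
```

The appeal of tc redirection over a Linux bridge is that it preserves the frames as-is, including MAC addresses, which keeps the VM unaware of the plumbing in front of it.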
The ultimate goal Containerlab pursues is to make networking labs a commodity: it shouldn't matter what OS you are using, what platform you are on, or how skilled you are with containers.
Over time, we have approached this lofty goal through iterative improvements, starting by making sure it is easy to install containerlab on any Linux distro using the quick setup script.
Then we made it easy to run containerlab on borrowed and free compute - that is how the Codespaces integration story started and was picked up by the community.
For this release, we are taking another step and releasing two new integrations that will help you reduce the mean-time-to-lab even further.
The devcontainer integration is a way to start a lab on a laptop, desktop, server, or VM without installing anything on the host besides Docker. If you remember how easy it was to start a lab in Codespaces, you will be happy to get the same UX now with your local compute.
DevPod takes the devcontainer experience and adds better UX on top of it.
An open-source project by Loft Labs, DevPod makes it possible to use the same devcontainer specification to create a "workspace" that works with almost any IDE known to man and deploys it on a wide range of providers.
It took us a while, but we finally refreshed the macOS documentation. The availability of Nokia SR Linux as a native arm64 image was definitely a catalyst for this, but not the only one.
After @hellt did a video on running containerlab on the arm64 architecture, where he featured OrbStack in the role of a virtual machine manager for macOS, we've been getting a lot of feedback from users saying that they finally got to run labs on their Macs.
We also refreshed the Windows documentation that revolves around WSL. It was a bit outdated, and WSL is still improving quite a lot.
With Windows 11 it became even better, and the tireless team of our contributors - @kaelemc, @FloSch62, and @hyposcaler-bot - exchanged 900 messages in Discord while delivering a custom WSL distro that elevates the WSL experience to the sky.