diff --git a/404.html b/404.html index 4d3ef1e..c516e0b 100644 --- a/404.html +++ b/404.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

404

This page could not be found.

\ No newline at end of file + }

404

This page could not be found.

\ No newline at end of file diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/gpus-for-github-actions.json b/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/gpus-for-github-actions.json deleted file mode 100644 index e9a5793..0000000 --- a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/gpus-for-github-actions.json +++ /dev/null @@ -1 +0,0 @@ -{"pageProps":{"post":{"slug":"gpus-for-github-actions","fileName":"2024-03-12-gpus-for-github-actions.md","contentHtml":"

With the surge of interest in AI and machine learning models, it's not hard to think of reasons why people want GPUs in their workstations and production environments. They make building & training, fine-tuning and serving (inference) from a machine learning model just that much quicker than running with a CPU alone.

\n

So if you build and test code for CPUs in CI pipelines like GitHub Actions, why wouldn't you do the same with code built for GPUs? Why exercise only a portion of your codebase?

\n

GPUs for GitHub Actions

\n

One of our earliest customers moved all their GitHub Actions to actuated for a team of around 30 people, but since Firecracker has no support for GPUs, they had to keep a few self-hosted runners around for testing their models. Their second-hand Dell servers were racked in their own datacentre, with 8x 3090 GPUs in each machine.

\n

Their request for GPU support in actuated predated the hype around OpenAI, and was the catalyst for us doing this work.

\n

I'll tell you a bit more about it: how to build your own workstation with commodity hardware, and where to rent a powerful bare-metal host with a capable GPU for less than 200 USD / mo that you can use with actuated.

\n

Available in early access

\n

So today, we're announcing early access to actuated for GPUs. Whether your machine has one GPU, two, or ten, you can allocate them directly to a microVM for a CI job, giving strong isolation, and the same ephemeral environment that you're used to with GitHub's hosted runners.

\n

\"Our

\n
\n

Our test rig has 2x 3060s and is available for customer demos and early testing.

\n
\n

We've compiled a list of vendors that provide access to fast, bare-metal compute, but at the moment, there are only a few options for bare-metal with GPUs.

\n\n

We have a full bill of materials available for anyone who wants to build a workstation with 2x Nvidia 3060 graphics cards, giving 24GB of usable RAM at a relatively low maximum power consumption of 170W. It's ideal for CI and end to end testing.

\n

If you'd like to go even more premium, the Nvidia RTX 4000 card comes with 20GB of RAM, so two of those would give you 40GB of RAM available for Large Language Models (LLMs).

\n

For Hetzner, you can get started with an i5 bare-metal host with 14 cores, 64GB RAM and a dedicated Nvidia RTX 4000 for around 184 EUR / mo (less than 200 USD / mo). If that sounds like ridiculously good value, it's because it is.

\n

What does the build look like?

\n

Once you've installed the actuated agent, it's the same process as a regular bare-metal host.

\n

It'll show up on your actuated dashboard, and you can start sending jobs to it immediately.

\n

\"The

\n
\n

The server with 2x GPUs showing up in the dashboard

\n
\n

Here's how we install the Nvidia driver for a consumer-grade card. The process is very similar for the datacenter range of GPUs found in enterprise servers.

\n
name: nvidia-smi\n\njobs:\n    nvidia-smi:\n        name: nvidia-smi\n        runs-on: [actuated-8cpu-16gb, gpu]\n        steps:\n        - uses: actions/checkout@v1\n        - name: Download Nvidia install package\n          run: |\n           curl -s -S -L -O https://us.download.nvidia.com/XFree86/Linux-x86_64/525.60.11/NVIDIA-Linux-x86_64-525.60.11.run \\\n              && chmod +x ./NVIDIA-Linux-x86_64-525.60.11.run\n        - name: Install Nvidia driver and Kernel module\n          run: |\n              sudo ./NVIDIA-Linux-x86_64-525.60.11.run \\\n                  --accept-license \\\n                  --ui=none \\\n                  --no-questions \\\n                  --no-x-check \\\n                  --no-check-for-alternate-installs \\\n                  --no-nouveau-check\n        - name: Run nvidia-smi\n          run: |\n            nvidia-smi\n
\n

This is a very similar approach to installing a driver on your own machine, just without any interactive prompts. It took around 38s, which is not long considering how much time AI and ML operations can run for during end to end testing. The process installs binaries like nvidia-smi and compiles a Kernel module to load the graphics driver; these could easily be cached with GitHub Actions' built-in caching mechanism.

\n

For convenience, we created a composite action that reduces the duplication if you have lots of workflows with the Nvidia driver installed.

\n
name: gpu-job\n\njobs:\n    gpu-job:\n        name: gpu-job\n        runs-on: [actuated-8cpu-16gb, gpu]\n        steps:\n        - uses: actions/checkout@v1\n        - uses: self-actuated/nvidia-run@master\n        - name: Run nvidia-smi\n          run: |\n            nvidia-smi\n
\n

Of course, if you have an AMD graphics card, or even an ML accelerator like a PCIe Google Coral, that can also be passed through into a VM in a dedicated way.

\n

The mechanism being used is called VFIO, which allows a VM to take full, dedicated, isolated control over a PCI device.

\n

A quick example to get started

\n

To show the difference between using a GPU and CPU, I ran OpenAI's Whisper project, which transcribes audio or video to a text file.

\n

With the following demo video of Actuated's SSH gateway, running with the tiny model.

\n\n

That's over 2x quicker, for a 5:34 minute video. If you process a lot of clips, or much longer ones, the difference may be even more marked.

\n

The tiny model is really designed for demos; in production you'd use the medium or large model, which is much more resource intensive.

\n

Here's a screenshot showing what this looks like with the medium model, which is much larger and more accurate:

\n

Medium model running on a GPU via actuated

\n
\n

Medium model running on a GPU via actuated

\n
\n

With a CPU, even with 16 vCPUs, all of them get pinned at 100%, and processing takes significantly longer.

\n

\"You

\n
\n

You can run the medium model on CPU, but would you want to?

\n
\n

With the medium model:

\n\n

The GPU increased the speed by 9x. Imagine how much quicker it'd be if you used an Nvidia 3090, 4090, or even an RTX 4000.

\n

If you want to just explore the system, and run commands interactively, you can use actuated's SSH feature to get a shell. Once you know the commands you want to run, you can copy them into your workflow YAML file for GitHub Actions.

\n

The technical details

\n

Since launch, actuated powered by Firecracker has securely isolated over 220k CI jobs for GitHub Actions users. Whilst it's a complex project to integrate, it has been very reliable in production.

\n

Now in order to bring GPUs to actuated, we needed to add support for a second Virtual Machine Manager (VMM), and we picked cloud-hypervisor.

\n

cloud-hypervisor was originally a fork of Firecracker and shares a significant amount of code. One place it diverged was in adding support for PCI devices, such as GPUs. Through VFIO, cloud-hypervisor allows a GPU to be passed through to a VM in a dedicated way, so it can be used in isolation.

\n

Here's the first demo that I ran when we had everything working, showing the output from nvidia-smi:

\n

\"The

\n
\n

The first run of nvidia-smi

\n
\n

Reach out for more

\n

In a relatively short period of time, we were able to update our codebase to support both Firecracker and cloud-hypervisor, and to enable consumer-grade GPUs to be passed through to VMs in isolation.

\n

You can rent a really powerful and capable machine from Hetzner for under 200 USD / mo, or build your own workstation with dual graphics cards like our demo rig for less than 2000 USD. Then you own the hardware and can use it as much as you want, plugged in under your desk or left in a cabinet in your office.

\n

A quick recap on use-cases

\n

Let's say you want to run end to end tests for an application that uses a GPU, perhaps one that runs on Kubernetes? You can do that.

\n

Do you want to fine-tune, train, or run a batch of inferences on a model? You can do that. GitHub Actions has a 6 hour timeout, which is plenty for many tasks.

\n

Would it make sense to run Stable Diffusion in the background, with different versions, different inputs, across a matrix? GitHub Actions makes that easy, and actuated can manage the GPU allocations for you.

\n

Do you run inference from OpenFaaS functions? We have a tutorial on OpenAI Whisper within a function with GPU acceleration here and a separate one on how to serve Server Sent Events (SSE) from OpenAI or self-hosted models, which is popular for chat-style interfaces to AI models.

\n

If you're interested in GPU support for GitHub Actions, then reach out to talk to us with this form.

","title":"Accelerate GitHub Actions with dedicated GPUs","description":"You can now accelerate GitHub Actions with dedicated GPUs for machine learning and AI use-cases.","tags":["ai","ml","githubactions","openai","transcription","machinelearning"],"author_img":"alex","image":"/images/2024-03-gpus/background.png","date":"2024-03-12"}},"__N_SSG":true} \ No newline at end of file diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/amd-zenbleed-update-now.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/amd-zenbleed-update-now.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/amd-zenbleed-update-now.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/amd-zenbleed-update-now.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/arm-ci-cncf-ampere.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/arm-ci-cncf-ampere.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/arm-ci-cncf-ampere.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/arm-ci-cncf-ampere.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/automate-packer-qemu-image-builds.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/automate-packer-qemu-image-builds.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/automate-packer-qemu-image-builds.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/automate-packer-qemu-image-builds.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/blazing-fast-ci-with-microvms.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/blazing-fast-ci-with-microvms.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/blazing-fast-ci-with-microvms.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/blazing-fast-ci-with-microvms.json diff --git 
a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/caching-in-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/caching-in-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/caching-in-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/caching-in-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/calyptia-case-study-arm.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/calyptia-case-study-arm.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/calyptia-case-study-arm.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/calyptia-case-study-arm.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/case-study-bring-your-own-bare-metal-to-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/case-study-bring-your-own-bare-metal-to-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/case-study-bring-your-own-bare-metal-to-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/case-study-bring-your-own-bare-metal-to-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/cncf-arm-march-update.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/cncf-arm-march-update.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/cncf-arm-march-update.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/cncf-arm-march-update.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/custom-sizes-bpf-kvm.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/custom-sizes-bpf-kvm.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/custom-sizes-bpf-kvm.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/custom-sizes-bpf-kvm.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/develop-a-great-go-cli.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/develop-a-great-go-cli.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/develop-a-great-go-cli.json rename to 
_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/develop-a-great-go-cli.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/faster-nix-builds.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/faster-nix-builds.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/faster-nix-builds.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/faster-nix-builds.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/faster-self-hosted-cache.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/faster-self-hosted-cache.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/faster-self-hosted-cache.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/faster-self-hosted-cache.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/firecracker-container-lab.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/firecracker-container-lab.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/firecracker-container-lab.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/firecracker-container-lab.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/github-actions-usage-cli.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/github-actions-usage-cli.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/github-actions-usage-cli.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/github-actions-usage-cli.json diff --git a/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/gpus-for-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/gpus-for-github-actions.json new file mode 100644 index 0000000..b6a6c4d --- /dev/null +++ b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/gpus-for-github-actions.json @@ -0,0 +1 @@ +{"pageProps":{"post":{"slug":"gpus-for-github-actions","fileName":"2024-03-12-gpus-for-github-actions.md","contentHtml":"

With the surge of interest in AI and machine learning models, it's not hard to think of reasons why people want GPUs in their workstations and production environments. They make building & training, fine-tuning and serving (inference) from a machine learning model just that much quicker than running with a CPU alone.

\n

So if you build and test code for CPUs in CI pipelines like GitHub Actions, why wouldn't you do the same with code built for GPUs? Why exercise only a portion of your codebase?

\n

GPUs for GitHub Actions

\n

One of our earliest customers moved all their GitHub Actions to actuated for a team of around 30 people, but since Firecracker has no support for GPUs, they had to keep a few self-hosted runners around for testing their models. Their second-hand Dell servers were racked in their own datacentre, with 8x 3090 GPUs in each machine.

\n

Their request for GPU support in actuated predated the hype around OpenAI, and was the catalyst for us doing this work.

\n

They told us how many issues they had keeping drivers in sync when trying to use self-hosted runners, and the security issues they ran into with mounting Docker sockets or running privileged containers in Kubernetes.

\n

With a microVM and actuated, you'll be able to test out different versions of drivers as you see fit, and know there will never be side-effects between builds. You can read more in our FAQ on how actuated differs from other solutions which rely on the poor isolation afforded by containers. Actuated is the closest you can get to a hosted runner, whilst having full access to your own hardware.

\n

I'll tell you a bit more about it: how to build your own workstation with commodity hardware, and where to rent a powerful bare-metal host with a capable GPU for less than 200 USD / mo that you can use with actuated.

\n

Available in early access

\n

So today, we're announcing early access to actuated for GPUs. Whether your machine has one GPU, two, or ten, you can allocate them directly to a microVM for a CI job, giving strong isolation, and the same ephemeral environment that you're used to with GitHub's hosted runners.

\n

\"Our

\n
\n

Our test rig has 2x 3060s and is available for customer demos and early testing.

\n
\n

We've compiled a list of vendors that provide access to fast, bare-metal compute, but at the moment, there are only a few options for bare-metal with GPUs.

\n\n

We have a full bill of materials available for anyone who wants to build a workstation with 2x Nvidia 3060 graphics cards, giving 24GB of usable RAM at a relatively low maximum power consumption of 170W. It's ideal for CI and end to end testing.

\n

If you'd like to go even more premium, the Nvidia RTX 4000 card comes with 20GB of RAM, so two of those would give you 40GB of RAM available for Large Language Models (LLMs).

\n

For Hetzner, you can get started with an i5 bare-metal host with 14 cores, 64GB RAM and a dedicated Nvidia RTX 4000 for around 184 EUR / mo (less than 200 USD / mo). If that sounds like ridiculously good value, it's because it is.

\n

What does the build look like?

\n

Once you've installed the actuated agent, it's the same process as a regular bare-metal host.

\n

It'll show up on your actuated dashboard, and you can start sending jobs to it immediately.

\n

\"The

\n
\n

The server with 2x GPUs showing up in the dashboard

\n
\n

Here's how we install the Nvidia driver for a consumer-grade card. The process is very similar for the datacenter range of GPUs found in enterprise servers.

\n
name: nvidia-smi\n\njobs:\n    nvidia-smi:\n        name: nvidia-smi\n        runs-on: [actuated-8cpu-16gb, gpu]\n        steps:\n        - uses: actions/checkout@v1\n        - name: Download Nvidia install package\n          run: |\n           curl -s -S -L -O https://us.download.nvidia.com/XFree86/Linux-x86_64/525.60.11/NVIDIA-Linux-x86_64-525.60.11.run \\\n              && chmod +x ./NVIDIA-Linux-x86_64-525.60.11.run\n        - name: Install Nvidia driver and Kernel module\n          run: |\n              sudo ./NVIDIA-Linux-x86_64-525.60.11.run \\\n                  --accept-license \\\n                  --ui=none \\\n                  --no-questions \\\n                  --no-x-check \\\n                  --no-check-for-alternate-installs \\\n                  --no-nouveau-check\n        - name: Run nvidia-smi\n          run: |\n            nvidia-smi\n
\n

This is a very similar approach to installing a driver on your own machine, just without any interactive prompts. It took around 38s, which is not long considering how much time AI and ML operations can run for during end to end testing. The process installs binaries like nvidia-smi and compiles a Kernel module to load the graphics driver; these could easily be cached with GitHub Actions' built-in caching mechanism.

\n
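If driver install time matters for your pipelines, the downloaded installer could be cached between runs. Here's a hedged sketch using actions/cache — the path, cache key, and version pin below are illustrative assumptions, not taken from actuated's setup:

```yaml
    steps:
      - uses: actions/checkout@v1
      # Cache the Nvidia installer so repeat jobs skip the download.
      # The key pins the driver version; bump it when upgrading.
      - name: Cache Nvidia installer
        uses: actions/cache@v3
        with:
          path: NVIDIA-Linux-x86_64-525.60.11.run
          key: nvidia-driver-525.60.11
      - name: Download Nvidia install package
        run: |
          if [ ! -f NVIDIA-Linux-x86_64-525.60.11.run ]; then
            curl -s -S -L -O https://us.download.nvidia.com/XFree86/Linux-x86_64/525.60.11/NVIDIA-Linux-x86_64-525.60.11.run
          fi
          chmod +x ./NVIDIA-Linux-x86_64-525.60.11.run
```

Note that the compiled Kernel module is tied to the running Kernel version, so it's usually simplest to cache only the installer and rebuild the module each run.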

For convenience, we created a composite action that reduces the duplication if you have lots of workflows with the Nvidia driver installed.

\n
name: gpu-job\n\njobs:\n    gpu-job:\n        name: gpu-job\n        runs-on: [actuated-8cpu-16gb, gpu]\n        steps:\n        - uses: actions/checkout@v1\n        - uses: self-actuated/nvidia-run@master\n        - name: Run nvidia-smi\n          run: |\n            nvidia-smi\n
\n

Of course, if you have an AMD graphics card, or even an ML accelerator like a PCIe Google Coral, that can also be passed through into a VM in a dedicated way.

\n

The mechanism being used is called VFIO, which allows a VM to take full, dedicated, isolated control over a PCI device.

\n
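To make that concrete, host-side VFIO passthrough generally means unbinding the GPU from its host driver and handing it to the vfio-pci driver before the VMM starts. The PCI address and device IDs below are illustrative assumptions of how this is typically done on Linux, not actuated's exact setup — actuated automates all of this for you:

```shell
# Find the GPU's PCI address and vendor:device IDs (e.g. 0000:01:00.0, 10de:2503)
lspci -nn | grep -i nvidia

# Detach the device from its host driver and bind it to vfio-pci
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 10de 2503 | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

# cloud-hypervisor can then pass the whole device through to the guest
cloud-hypervisor --kernel vmlinux --device path=/sys/bus/pci/devices/0000:01:00.0/
```

Because the guest owns the real device, the stock Nvidia driver works inside the microVM with no paravirtualised translation layer.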

A quick example to get started

\n

To show the difference between using a GPU and CPU, I ran OpenAI's Whisper project, which transcribes audio or video to a text file.

\n

With the following demo video of Actuated's SSH gateway, running with the tiny model.
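As a rough sketch, you can reproduce the CPU vs GPU comparison with Whisper's CLI — the file name here is a placeholder, and Whisper picks up the GPU automatically via PyTorch when CUDA is available:

```shell
pip install -U openai-whisper

# Transcribe with the tiny model; add --device cpu to force a CPU-only run
time whisper ssh-gateway-demo.mp4 --model tiny --language en --output_format txt
```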

\n\n

That's over 2x quicker, for a 5:34 minute video. If you process a lot of clips, or much longer ones, the difference may be even more marked.

\n

The tiny model is really designed for demos; in production you'd use the medium or large model, which is much more resource intensive.

\n

Here's a screenshot showing what this looks like with the medium model, which is much larger and more accurate:

\n

Medium model running on a GPU via actuated

\n
\n

Medium model running on a GPU via actuated

\n
\n

With a CPU, even with 16 vCPUs, all of them get pinned at 100%, and processing takes significantly longer.

\n

\"You

\n
\n

You can run the medium model on CPU, but would you want to?

\n
\n

With the medium model:

\n\n

The GPU increased the speed by 9x. Imagine how much quicker it'd be if you used an Nvidia 3090, 4090, or even an RTX 4000.

\n

If you want to just explore the system, and run commands interactively, you can use actuated's SSH feature to get a shell. Once you know the commands you want to run, you can copy them into your workflow YAML file for GitHub Actions.

\n

The technical details

\n

Since launch, actuated powered by Firecracker has securely isolated over 220k CI jobs for GitHub Actions users. Whilst it's a complex project to integrate, it has been very reliable in production.

\n

Now in order to bring GPUs to actuated, we needed to add support for a second Virtual Machine Manager (VMM), and we picked cloud-hypervisor.

\n

cloud-hypervisor was originally a fork of Firecracker and shares a significant amount of code. One place it diverged was in adding support for PCI devices, such as GPUs. Through VFIO, cloud-hypervisor allows a GPU to be passed through to a VM in a dedicated way, so it can be used in isolation.

\n

Here's the first demo that I ran when we had everything working, showing the output from nvidia-smi:

\n

\"The

\n
\n

The first run of nvidia-smi

\n
\n

Reach out for more

\n

In a relatively short period of time, we were able to update our codebase to support both Firecracker and cloud-hypervisor, and to enable consumer-grade GPUs to be passed through to VMs in isolation.

\n

You can rent a really powerful and capable machine from Hetzner for under 200 USD / mo, or build your own workstation with dual graphics cards like our demo rig for less than 2000 USD. Then you own the hardware and can use it as much as you want, plugged in under your desk or left in a cabinet in your office.

\n

A quick recap on use-cases

\n

Let's say you want to run end to end tests for an application that uses a GPU, perhaps one that runs on Kubernetes? You can do that.

\n

Do you want to fine-tune, train, or run a batch of inferences on a model? You can do that. GitHub Actions has a 6 hour timeout, which is plenty for many tasks.

\n

Would it make sense to run Stable Diffusion in the background, with different versions, different inputs, across a matrix? GitHub Actions makes that easy, and actuated can manage the GPU allocations for you.

\n

Do you run inference from OpenFaaS functions? We have a tutorial on OpenAI Whisper within a function with GPU acceleration here and a separate one on how to serve Server Sent Events (SSE) from OpenAI or self-hosted models, which is popular for chat-style interfaces to AI models.

\n

If you're interested in GPU support for GitHub Actions, then reach out to talk to us with this form.

","title":"Accelerate GitHub Actions with dedicated GPUs","description":"You can now accelerate GitHub Actions with dedicated GPUs for machine learning and AI use-cases.","tags":["ai","ml","githubactions","openai","transcription","machinelearning"],"author_img":"alex","image":"/images/2024-03-gpus/background.png","date":"2024-03-12"}},"__N_SSG":true} \ No newline at end of file diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/how-to-run-multi-arch-builds-natively.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/how-to-run-multi-arch-builds-natively.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/how-to-run-multi-arch-builds-natively.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/how-to-run-multi-arch-builds-natively.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/is-the-self-hosted-runner-safe-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/is-the-self-hosted-runner-safe-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/is-the-self-hosted-runner-safe-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/is-the-self-hosted-runner-safe-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/kvm-in-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/kvm-in-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/kvm-in-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/kvm-in-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/local-caching-for-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/local-caching-for-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/local-caching-for-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/local-caching-for-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/managing-github-actions.json 
b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/managing-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/managing-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/managing-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/multi-arch-docker-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/multi-arch-docker-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/multi-arch-docker-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/multi-arch-docker-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/native-arm64-for-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/native-arm64-for-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/native-arm64-for-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/native-arm64-for-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/oidc-proxy-for-openfaas.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/oidc-proxy-for-openfaas.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/oidc-proxy-for-openfaas.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/oidc-proxy-for-openfaas.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/right-sizing-vms-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/right-sizing-vms-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/right-sizing-vms-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/right-sizing-vms-github-actions.json diff --git a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/sbom-in-github-actions.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/sbom-in-github-actions.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/sbom-in-github-actions.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/sbom-in-github-actions.json diff --git 
a/_next/data/iSl-M_e6KXaBNyqaBwsgE/blog/secure-microvm-ci-gitlab.json b/_next/data/tO_ZSfSw7smnr0wB1nKwH/blog/secure-microvm-ci-gitlab.json similarity index 100% rename from _next/data/iSl-M_e6KXaBNyqaBwsgE/blog/secure-microvm-ci-gitlab.json rename to _next/data/tO_ZSfSw7smnr0wB1nKwH/blog/secure-microvm-ci-gitlab.json diff --git a/_next/static/iSl-M_e6KXaBNyqaBwsgE/_buildManifest.js b/_next/static/tO_ZSfSw7smnr0wB1nKwH/_buildManifest.js similarity index 100% rename from _next/static/iSl-M_e6KXaBNyqaBwsgE/_buildManifest.js rename to _next/static/tO_ZSfSw7smnr0wB1nKwH/_buildManifest.js diff --git a/_next/static/iSl-M_e6KXaBNyqaBwsgE/_ssgManifest.js b/_next/static/tO_ZSfSw7smnr0wB1nKwH/_ssgManifest.js similarity index 100% rename from _next/static/iSl-M_e6KXaBNyqaBwsgE/_ssgManifest.js rename to _next/static/tO_ZSfSw7smnr0wB1nKwH/_ssgManifest.js diff --git a/blog.html b/blog.html index 05aeeef..a3ddf47 100644 --- a/blog.html +++ b/blog.html @@ -4,4 +4,4 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Actuated Blog

The latest news, tutorials, case-studies, and announcements.

\ No newline at end of file +

Actuated Blog

The latest news, tutorials, case-studies, and announcements.

\ No newline at end of file diff --git a/blog/amd-zenbleed-update-now.html b/blog/amd-zenbleed-update-now.html index a11bf6d..c3bd6a7 100644 --- a/blog/amd-zenbleed-update-now.html +++ b/blog/amd-zenbleed-update-now.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Update your AMD hosts now to mitigate the Zenbleed exploit

Learn how to update the microcode on your AMD CPU to avoid the Zenbleed exploit.

On 24th July 2023, The Register covered a new exploit for certain AMD CPUs based upon the Zen architecture. The exploit, dubbed Zenbleed, allows an attacker to read arbitrary physical memory locations on the host system. It works by allowing memory to be read after it's been set to be freed up aka "use-after-free". This is a serious vulnerability, and you should update your AMD hosts as soon as possible.

+

Update your AMD hosts now to mitigate the Zenbleed exploit

Learn how to update the microcode on your AMD CPU to avoid the Zenbleed exploit.

On 24th July 2023, The Register covered a new exploit for certain AMD CPUs based upon the Zen architecture. The exploit, dubbed Zenbleed, allows an attacker to read arbitrary physical memory locations on the host system. It works by allowing memory to be read after it's been set to be freed up aka "use-after-free". This is a serious vulnerability, and you should update your AMD hosts as soon as possible.

The Register made the claim that "any level of emulation such as QEMU" would prevent the exploit from working. This is misleading because QEMU only makes sense in production when used with hardware acceleration (KVM). We were able to run the exploit with a GitHub Action using actuated on an AMD Epyc server from Equinix Metal using Firecracker and KVM.

"If you stick any emulation layer in between, such as Qemu, then the exploit understandably fails."

@@ -100,4 +100,4 @@

Verifying the mitigation

Wrapping up

Actuated uses Firecracker, an open source Virtual Machine Manager (VMM) that works with Linux KVM to run isolated systems on a host. We have verified that the exploit works on Firecracker, and that the mitigation works too. So whilst VM-level isolation and an immutable filesystem is much more appropriate than a container for CI, this is an example of why we must still be vigilant and ready to respond to security vulnerabilities.

-

This is an unfortunate, and serious vulnerability. It affects bare-metal, VMs and containers, which is why it's important to update your systems as soon as possible.

\ No newline at end of file +

This is an unfortunate, and serious vulnerability. It affects bare-metal, VMs and containers, which is why it's important to update your systems as soon as possible.

\ No newline at end of file diff --git a/blog/arm-ci-cncf-ampere.html b/blog/arm-ci-cncf-ampere.html index 16bc604..873df4b 100644 --- a/blog/arm-ci-cncf-ampere.html +++ b/blog/arm-ci-cncf-ampere.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Announcing managed Arm CI for CNCF projects

Ampere Computing and The Cloud Native Computing Foundation are sponsoring a pilot of actuated's managed Arm CI for CNCF projects.

In this post, we'll cover why Ampere Computing and The Cloud Native Computing Foundation (CNCF) are sponsoring a pilot of actuated for open source projects, and how you can get involved.

+

Announcing managed Arm CI for CNCF projects

Ampere Computing and The Cloud Native Computing Foundation are sponsoring a pilot of actuated's managed Arm CI for CNCF projects.

In this post, we'll cover why Ampere Computing and The Cloud Native Computing Foundation (CNCF) are sponsoring a pilot of actuated for open source projects, and how you can get involved.

We'll also give you a quick one year recap on actuated, if you haven't checked in with us for a while.

Managed Arm CI for CNCF projects

At KubeCon EU, I spoke to Chris Aniszczyk, CTO at the Cloud Native Computing Foundation (CNCF), and told him about some of the results we'd been seeing with actuated customers, including Fluent Bit, which is a CNCF project. Chris told me that many teams were either putting off Arm support altogether, suffering from the slow builds that come with using QEMU, or managing their own infrastructure, which was underutilized.

@@ -88,4 +88,4 @@

Learn more about actuated

Did you know? Actuated for GitLab CI is now in technical preview, watch a demo here.

-
\ No newline at end of file +
\ No newline at end of file diff --git a/blog/automate-packer-qemu-image-builds.html b/blog/automate-packer-qemu-image-builds.html index 730e375..9a4ae1f 100644 --- a/blog/automate-packer-qemu-image-builds.html +++ b/blog/automate-packer-qemu-image-builds.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Automate Packer Images with QEMU and Actuated

Learn how to automate Packer images using QEMU and nested virtualisation through actuated.

One of the most popular tools for creating images for virtual machines is Packer by Hashicorp. Packer automates the process of building images for a variety of platforms from a single source configuration. Different builders can be used to create machines and generate images from those machines.

+

Automate Packer Images with QEMU and Actuated

Learn how to automate Packer images using QEMU and nested virtualisation through actuated.

One of the most popular tools for creating images for virtual machines is Packer by Hashicorp. Packer automates the process of building images for a variety of platforms from a single source configuration. Different builders can be used to create machines and generate images from those machines.

In this tutorial we will use the QEMU builder to create a KVM virtual machine image.

We will see how the Packer build can be completely automated by integrating Packer into a continuous integration (CI) pipeline with GitHub Actions. The workflow will automatically trigger image builds on changes and publish the resulting images as GitHub release artifacts.

Actuated supports nested virtualisation, where a VM can make use of KVM to launch additional VMs within a GitHub Action. This makes it possible to run the Packer QEMU builder in GitHub Action workflows, something that is not possible with GitHub's default hosted runners.
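Before invoking the QEMU builder in a workflow step, it's worth checking that KVM is actually exposed to the runner. A quick sketch:

```shell
# Verify that hardware acceleration is available to the job.
# Without /dev/kvm, the QEMU builder falls back to slow software emulation.
if [ -e /dev/kvm ]; then
  echo "KVM is available"
else
  echo "No /dev/kvm - expect very slow builds"
fi
```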

@@ -236,4 +236,4 @@

Taking it further

We exported the image in qcow2 format but you might need a different image format. The QEMU builder also supports outputting images in raw format. In our Packer template the output format can be changed by setting the format variable.

Additional tools like the qemu disk image utility can also be used to convert images between different formats. A post-processor would be the ideal place for these kinds of extra processing steps.
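As a sketch of such a conversion step (the file names here are placeholders, not from the tutorial), qemu-img can turn a qcow2 build output into a raw image:

```shell
# Convert a qcow2 image to raw with qemu-img; guarded so the snippet
# is a no-op on machines without qemu-utils installed
if command -v qemu-img >/dev/null; then
  qemu-img create -f qcow2 image.qcow2 64M   # stand-in for a Packer output
  qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
  qemu-img info image.raw
fi
```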

AWS also supports importing VM images and converting them to an AMI so they can be used to launch EC2 instances. See: Create an AMI from a VM image

-

If you'd like to know more about nested virtualisation support, check out: How to run KVM guests in your GitHub Actions

\ No newline at end of file +

If you'd like to know more about nested virtualisation support, check out: How to run KVM guests in your GitHub Actions

\ No newline at end of file diff --git a/blog/blazing-fast-ci-with-microvms.html b/blog/blazing-fast-ci-with-microvms.html index a5d1a5b..66b1090 100644 --- a/blog/blazing-fast-ci-with-microvms.html +++ b/blog/blazing-fast-ci-with-microvms.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Blazing fast CI with MicroVMs

Alex Ellis

I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

Around 6-8 months ago I started exploring MicroVMs out of curiosity. Around the same time, I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

+

Blazing fast CI with MicroVMs

Alex Ellis

I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

Around 6-8 months ago I started exploring MicroVMs out of curiosity. Around the same time, I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

There's three parts to this post:

  1. A quick debrief on Firecracker and MicroVMs vs legacy solutions
  2. @@ -172,4 +172,4 @@

    What are people saying about actuated?

    "This is awesome!" (After reducing Parca build time from 33.5 minutes to 1 minute 26s)

    Frederic Branczyk, Co-founder, Polar Signals

    -
\ No newline at end of file +
\ No newline at end of file diff --git a/blog/caching-in-github-actions.html b/blog/caching-in-github-actions.html index 59d2251..e75949b 100644 --- a/blog/caching-in-github-actions.html +++ b/blog/caching-in-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Make your builds run faster with Caching for GitHub Actions

Han Verstraete

Learn how we made a Golang project build 4x faster using GitHub's built-in caching mechanism.

GitHub provides a cache action that allows caching dependencies and build outputs to improve workflow execution time.

+

Make your builds run faster with Caching for GitHub Actions

Han Verstraete

Learn how we made a Golang project build 4x faster using GitHub's built-in caching mechanism.

GitHub provides a cache action that allows caching dependencies and build outputs to improve workflow execution time.

A common use case would be to cache packages and dependencies from tools such as npm, pip, or Gradle. If you are using Go, caching Go modules and the build cache can save you a significant amount of build time, as we will see in the next section.

Caching can be configured manually, but a lot of setup actions already use actions/cache under the hood and provide a configuration option to enable caching.

We use the actions cache to speed up workflows for building the Actuated base images. As part of those workflows we build a kernel and then a rootfs. Since the kernel's configuration changes infrequently, it makes sense to cache that output.
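As a sketch, a manual cache step for a Go project might look like the following; the paths and key shown are typical defaults rather than taken from our workflow:

```yaml
- name: Cache Go modules and build cache
  uses: actions/cache@v3
  with:
    path: |
      ~/go/pkg/mod
      ~/.cache/go-build
    key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
    restore-keys: |
      ${{ runner.os }}-go-
```

Recent versions of actions/setup-go can also enable this caching for you via a `cache` input, using actions/cache under the hood.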

@@ -106,4 +106,4 @@

Conclusion

Want to learn more about Go and GitHub Actions?

Alex's eBook Everyday Golang has a chapter dedicated to building Go programs with Docker and GitHub Actions.

-
\ No newline at end of file +
\ No newline at end of file diff --git a/blog/calyptia-case-study-arm.html b/blog/calyptia-case-study-arm.html index 0311d47..8af63bd 100644 --- a/blog/calyptia-case-study-arm.html +++ b/blog/calyptia-case-study-arm.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

How Calyptia fixed its Arm builds whilst saving money

Learn how Calyptia fixed its failing Arm builds for open-source Fluent Bit and accelerated our commercial development by adopting Actuated and bare-metal runners.

This is a case-study, and guest article by Patrick Stephens, Tech Lead of Infrastructure at Calyptia.

+

How Calyptia fixed its Arm builds whilst saving money

Learn how Calyptia fixed its failing Arm builds for open-source Fluent Bit and accelerated our commercial development by adopting Actuated and bare-metal runners.

This is a case-study, and guest article by Patrick Stephens, Tech Lead of Infrastructure at Calyptia.

Introduction

Different architecture builds can be slow using the GitHub Actions hosted runners due to emulation of the non-native architecture for the build. This blog shows a simple way to make use of self-hosted runners for dedicated builds, in a secure and easy-to-maintain fashion.

Calyptia maintains the OSS and Cloud Native Computing Foundation (CNCF) graduated Fluent projects including Fluent Bit. We then add value to the open-source core by providing commercial services and enterprise-level features.

@@ -118,4 +118,4 @@

Actuated support

Conclusion

Alex Ellis: We've learned a lot working with Patrick and Calyptia and are pleased to see that they were able to save money, whilst getting much quicker, and safer Open Source and commercial builds.

We value getting feedback and suggestions from customers, and Patrick continues to provide plenty of them.

-

If you'd like to learn more about actuated, reach out to speak to our team by clicking "Sign-up" and filling out the form. We'll be in touch to arrange a call.

\ No newline at end of file +

If you'd like to learn more about actuated, reach out to speak to our team by clicking "Sign-up" and filling out the form. We'll be in touch to arrange a call.

\ No newline at end of file diff --git a/blog/case-study-bring-your-own-bare-metal-to-actions.html b/blog/case-study-bring-your-own-bare-metal-to-actions.html index e729c52..966889d 100644 --- a/blog/case-study-bring-your-own-bare-metal-to-actions.html +++ b/blog/case-study-bring-your-own-bare-metal-to-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Bring Your Own Metal Case Study with GitHub Actions

Alex Ellis

See how BYO bare-metal made a 6 hour GitHub Actions build complete 25x faster.

I'm going to show you how both a regular x86_64 build and an Arm build were made dramatically faster by using Bring Your Own (BYO) bare-metal servers.

+

Bring Your Own Metal Case Study with GitHub Actions

Alex Ellis

See how BYO bare-metal made a 6 hour GitHub Actions build complete 25x faster.

I'm going to show you how both a regular x86_64 build and an Arm build were made dramatically faster by using Bring Your Own (BYO) bare-metal servers.

At the early stage of a project, GitHub's standard runners with 2x cores, 8GB RAM, and a little free disk space are perfect because they're free for public repos. For private repos they come in at a modest cost, if you keep your usage low.

What's not to love?

Well, Ed Warnicke, Distinguished Engineer at Cisco contacted me a few weeks ago and told me about the VPP project, and some of the problems he was running into trying to build it with hosted runners.

@@ -99,4 +99,4 @@

Want to work with us?

  • The efficient way to publish multi-arch containers from GitHub Actions
  • Is the GitHub Actions self-hosted runner safe for Open Source?
  • How to make GitHub Actions 22x faster with bare-metal Arm
  • -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/cncf-arm-march-update.html b/blog/cncf-arm-march-update.html index 5296633..d01b40c 100644 --- a/blog/cncf-arm-march-update.html +++ b/blog/cncf-arm-march-update.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    The state of Arm CI for the CNCF

    After running over 70k build minutes for top-tier CNCF projects, we give an update on the sponsored Arm CI program.

    It's now been 4 months since we kicked off the sponsored program with the Cloud Native Computing Foundation (CNCF) and Ampere to manage CI for the foundation's open source projects. But even before that, Calyptia, the maintainer of Fluent, approached us to run Arm CI for the open source fluent repos, so we've been running CI for CNCF projects since June 2023.

    +

    The state of Arm CI for the CNCF

    After running over 70k build minutes for top-tier CNCF projects, we give an update on the sponsored Arm CI program.

    It's now been 4 months since we kicked off the sponsored program with the Cloud Native Computing Foundation (CNCF) and Ampere to manage CI for the foundation's open source projects. But even before that, Calyptia, the maintainer of Fluent, approached us to run Arm CI for the open source fluent repos, so we've been running CI for CNCF projects since June 2023.

    Over that time, we've got to work directly with some really bright, friendly, and helpful maintainers, who wanted a safe, fast, and secure way to create release artifacts, test PRs, and run end-to-end tests. Until this point, their alternative was either to go against GitHub's own advice and run an unsafe, self-hosted runner on an open source repo, or to use QEMU, which in the case of Fluent meant a 5-minute build took over 6 hours before failing.

    You can find out more about why we put this program together in the original announcement: Announcing managed Arm CI for CNCF projects

    Measuring the impact

    @@ -123,4 +123,4 @@

    Wrapping up

    Summing up the program so far

    Through the sponsored program, actuated has now run over 70k build minutes for around 10 CNCF projects, and we've heard from a growing number of projects who would like access.

    We've secured the supply chain by removing unsafe runners that GitHub says should definitely not be used for open source repositories, and we've lessened the burden of server management on already busy maintainers.

    -

    Whilst the original pilot program is now full, we have the capacity to onboard many other projects and would love to work with you. We are happy to offer a discounted subscription if the employer that sponsors your time on the CNCF project will pay for it. Otherwise, contact us anyway, and we'll put you into email contact with Chris Aniszczyk so you can let him know how this would help you.

    \ No newline at end of file +

    Whilst the original pilot program is now full, we have the capacity to onboard many other projects and would love to work with you. We are happy to offer a discounted subscription if the employer that sponsors your time on the CNCF project will pay for it. Otherwise, contact us anyway, and we'll put you into email contact with Chris Aniszczyk so you can let him know how this would help you.

    \ No newline at end of file diff --git a/blog/custom-sizes-bpf-kvm.html b/blog/custom-sizes-bpf-kvm.html index d5bc462..9151f59 100644 --- a/blog/custom-sizes-bpf-kvm.html +++ b/blog/custom-sizes-bpf-kvm.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    December Boost: Custom Job Sizes, eBPF Support & KVM Acceleration

    You can now request custom amounts of RAM and vCPU for jobs, run eBPF within jobs, and use KVM acceleration.

    In this December update, we've got three new updates to the platform that we think you'll benefit from. From requesting custom vCPU and RAM per job, to eBPF features, to spreading your plan across multiple machines dynamically, it's all new and makes actuated better value.

    +

    December Boost: Custom Job Sizes, eBPF Support & KVM Acceleration

    You can now request custom amounts of RAM and vCPU for jobs, run eBPF within jobs, and use KVM acceleration.

    In this December update, we've got three new updates to the platform that we think you'll benefit from. From requesting custom vCPU and RAM per job, to eBPF features, to spreading your plan across multiple machines dynamically, it's all new and makes actuated better value.

    New eBPF support and 5.10.201 Kernel

    And as part of our work to provide hosted Arm CI for CNCF projects, including Tetragon and Cilium, we've now enabled eBPF and BTF features within the Kernel.

    @@ -68,4 +68,4 @@

    Wrapping up

    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/develop-a-great-go-cli.html b/blog/develop-a-great-go-cli.html index 68bb95a..b0d45a0 100644 --- a/blog/develop-a-great-go-cli.html +++ b/blog/develop-a-great-go-cli.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    How to develop a great CLI with Go

    Alex shares his insights from building half a dozen popular Go CLIs. Which can you apply to your projects?

    Is your project's CLI growing with you? I'll cover some of the lessons learned writing the OpenFaaS, actuated, actions-usage, arkade and k3sup CLIs, going as far back as 2016. I hope you'll find some ideas or inspiration for your own projects - either to start them off, or to improve them as you go along.

    +

    How to develop a great CLI with Go

    Alex shares his insights from building half a dozen popular Go CLIs. Which can you apply to your projects?

    Is your project's CLI growing with you? I'll cover some of the lessons learned writing the OpenFaaS, actuated, actions-usage, arkade and k3sup CLIs, going as far back as 2016. I hope you'll find some ideas or inspiration for your own projects - either to start them off, or to improve them as you go along.

    Just starting your journey, or want to go deeper?

    You can master the fundamentals of Go (also called Golang) with my eBook Everyday Golang, which includes chapters on Go routines, HTTP clients and servers, text templates, unit testing and crafting a CLI. If you're on a budget, I would also recommend checking out the official Go tour.

    @@ -228,4 +228,4 @@

    Wrapping up

  • faas-cli - CLI client for OpenFaaS
  • k3sup - Install K3s over SSH
  • arkade - Download CLI tools from GitHub releases
  • -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/faster-nix-builds.html b/blog/faster-nix-builds.html index 8cb11c6..02895e8 100644 --- a/blog/faster-nix-builds.html +++ b/blog/faster-nix-builds.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Faster Nix builds with GitHub Actions and actuated

    Speed up your Nix project builds on GitHub Actions with runners powered by Firecracker.

    faasd is a lightweight and portable version of OpenFaaS that was created to run on a single host. In my spare time I maintain faasd-nix, a project that packages faasd and exposes a NixOS module so it can be run with NixOS.

    +

    Faster Nix builds with GitHub Actions and actuated

    Speed up your Nix project builds on GitHub Actions with runners powered by Firecracker.

    faasd is a lightweight and portable version of OpenFaaS that was created to run on a single host. In my spare time I maintain faasd-nix, a project that packages faasd and exposes a NixOS module so it can be run with NixOS.

    The module itself depends on faasd, containerd and the CNI plugins and all of these binaries are built in CI with Nix and then cached using Cachix to save time on subsequent builds.

    I often deploy faasd with NixOS on a Raspberry Pi and to the cloud, so I build binaries for both x86_64 and aarch64. The build usually runs on the default GitHub hosted action runners. Because GitHub doesn't currently offer Arm runners, I use QEMU to emulate them. The drawback of this approach is that builds can sometimes be several times slower.

    @@ -107,4 +107,4 @@

    Wrapping up

    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/faster-self-hosted-cache.html b/blog/faster-self-hosted-cache.html index ece48ec..0486bf5 100644 --- a/blog/faster-self-hosted-cache.html +++ b/blog/faster-self-hosted-cache.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Fixing the cache latency for self-hosted GitHub Actions

    The cache for GitHub Actions can speed up CI/CD pipelines. But what about when it slows you down?

    In some of our builds for actuated we cache things like the Linux Kernel, so we don't needlessly rebuild it when we update packages in our base images. It can shave minutes off every build meaning our servers can be used more efficiently. Most customers we've seen so far only make light to modest use of GitHub's hosted cache, so haven't noticed much of a latency problem.

    +

    Fixing the cache latency for self-hosted GitHub Actions

    The cache for GitHub Actions can speed up CI/CD pipelines. But what about when it slows you down?

    In some of our builds for actuated we cache things like the Linux Kernel, so we don't needlessly rebuild it when we update packages in our base images. It can shave minutes off every build meaning our servers can be used more efficiently. Most customers we've seen so far only make light to modest use of GitHub's hosted cache, so haven't noticed much of a latency problem.

    But you don't have to spend too long on the issue tracker for GitHub Actions to find people complaining about the cache being slow, or locking up completely, for self-hosted runners.

    Go, Rust, Python and other languages don't tend to make heavy use of caches, and Docker has some of its own mechanisms, like building cached steps into published images (aka inline caching). But for the Node.js ecosystem, the node_modules folder and yarn cache can become huge and take a long time to download. That's one place where you may start to see tension between the speed of self-hosted runners and the latency of the cache. If your repository is a monorepo or has lots of large artifacts, you may get a speed boost by caching those too.

    So why is GitHub's cache so fast for hosted runners, and (sometimes) so slow for self-hosted runners?

    @@ -184,4 +184,4 @@

    Conclusion

    You can find out how to configure a container mirror for the Docker Hub using actuated here: Set up a registry mirror. When testing builds for the Discourse team, there was a 2.5GB container image used for UI testing, with various browsers preinstalled within it. We found that we could shave a few minutes off the build time by using the local mirror. Imagine 10x of those builds running at once, needlessly downloading 250GB of data.
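On the Docker side, pointing a host at a pull-through mirror is a small daemon configuration change. The address below is a placeholder for your own mirror:

```json
{
  "registry-mirrors": ["http://192.168.128.1:5000"]
}
```

Saved as /etc/docker/daemon.json, this takes effect after a restart of the Docker daemon.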

    What if you're not an actuated customer? Can you still benefit from a faster cache? You could try out a hosted service like AWS S3 or Google Cloud Storage, provisioned in a region closer to your runners. The speed probably won't quite be as good, but it should still be a lot faster than reaching over the Internet to GitHub's cache.

    If you'd like to try out actuated for your team, reach out to us to find out more.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/firecracker-container-lab.html b/blog/firecracker-container-lab.html index 17fe133..8aa7045 100644 --- a/blog/firecracker-container-lab.html +++ b/blog/firecracker-container-lab.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Grab your lab coat - we're building a microVM from a container

    No more broken tutorials, build a microVM from a container, boot it, access the Internet

    When I started learning Firecracker, I ran into frustration after frustration with broken tutorials that were popular in their day, but just hadn't been kept up to date. Almost nothing worked, or was far too complex for the level of interest I had at the time. Most recently, one of the Firecracker maintainers, in an effort to make the quickstart better, made it even harder to use. (You can still get a copy of the original Firecracker quickstart in our tutorial on nested virtualisation)

    +

    Grab your lab coat - we're building a microVM from a container

    No more broken tutorials, build a microVM from a container, boot it, access the Internet

    When I started learning Firecracker, I ran into frustration after frustration with broken tutorials that were popular in their day, but just hadn't been kept up to date. Almost nothing worked, or was far too complex for the level of interest I had at the time. Most recently, one of the Firecracker maintainers, in an effort to make the quickstart better, made it even harder to use. (You can still get a copy of the original Firecracker quickstart in our tutorial on nested virtualisation)

    So I wrote a lab that takes a container image and converts it to a microVM. You'll get your hands dirty, you'll run a microVM, and you'll be able to use curl and ssh, even expose an HTTP server to the Internet via inlets, if (like me) you find that kind of thing fun.

    Why would you want to explore Firecracker? A friend of mine, Ivan Velichko is a prolific writer on containers, and Docker. He is one of the biggest independent evangelists for containers and Kubernetes that I know.

    So when he wanted to build an online labs and training environment, why did he pick Firecracker instead? Simply put, he told us that containers don't cut it. He needed something that would mirror the type of machine that you'd encounter in production, when you provision an EC2 instance or a GCP VM. Running Docker and Kubernetes is hard to do securely within a container, and he knew that was important for his students.

    @@ -208,4 +208,4 @@

    Wrapping up

    Did you enjoy the lab? Have you got a use-case for Firecracker? Let me know on Twitter @alexellisuk

    If you'd like to see how we've applied Firecracker to bring fast and secure CI to teams, check out our product actuated.dev

    Here's a quick demo of our control-plane, scheduler and bare-metal agent in action:

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/github-actions-usage-cli.html b/blog/github-actions-usage-cli.html index a9c2159..65e1c7c 100644 --- a/blog/github-actions-usage-cli.html +++ b/blog/github-actions-usage-cli.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Understand your usage of GitHub Actions

    Learn how you or your team is using GitHub Actions across your personal account or organisation.

    Whenever GitHub Actions users get in touch with us to ask about actuated, we ask them a number of questions. What do you build? What pain points have you been running into? What are you currently spending? And then - how many minutes are you using?

    +

    Understand your usage of GitHub Actions

    Learn how you or your team is using GitHub Actions across your personal account or organisation.

    Whenever GitHub Actions users get in touch with us to ask about actuated, we ask them a number of questions. What do you build? What pain points have you been running into? What are you currently spending? And then - how many minutes are you using?

    That final question is a hard one for many to answer because the GitHub UI and API will only show billable minutes. Why is that a problem? Some teams only use open-source repositories with free runners. Others may have a large free allowance of credit for one reason or another and so also don't really know what they're using. Then you have people who already use some form of self-hosted runners - they are also excluded from what GitHub shows you.

    So we built an Open Source CLI tool called actions-usage to generate a report of your total minutes by querying GitHub's REST API.

    And over time, we had requests to break the usage down per day - so for our customers in the Middle East like Kubiya, it's common to see a very busy day on Sunday, and not a lot of action on Friday. Given that some teams use mono-repos, we also added the ability to break the numbers down per repository - so you can see which ones are the most active. And finally, we added the ability to see hot-spots of usage, like the longest running repo or the most active day.
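A typical run looks something like the sketch below; the flag names are assumptions rather than confirmed, so check the self-actuated/actions-usage README for the current interface:

```shell
# Hypothetical invocation of the actions-usage CLI -- the flags shown
# are assumptions; see self-actuated/actions-usage for the real set.
# Guarded so the snippet is a no-op where the tool isn't installed.
if command -v actions-usage >/dev/null; then
  actions-usage --org my-org --token-file ~/pat.txt
else
  echo "actions-usage is not installed"
fi
```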

    @@ -156,4 +156,4 @@

    Wrapping up

    Check out self-actuated/actions-usage on GitHub

    I wrote an eBook writing CLIs like this in Go and keep it up to date on a regular basis - adding new examples and features of Go.

    Why not check out what people are saying about it on Gumroad?

    -

    Everyday Go - The Fast Track

    \ No newline at end of file +

    Everyday Go - The Fast Track

    \ No newline at end of file diff --git a/blog/gpus-for-github-actions.html b/blog/gpus-for-github-actions.html index ccbeafd..8e9d6b7 100644 --- a/blog/gpus-for-github-actions.html +++ b/blog/gpus-for-github-actions.html @@ -4,11 +4,13 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Accelerate GitHub Actions with dedicated GPUs

    You can now accelerate GitHub Actions with dedicated GPUs for machine learning and AI use-cases.

    With the surge of interest in AI and machine learning models, it's not hard to think of reasons why people want GPUs in their workstations and production environments. They make building & training, fine-tuning and serving (inference) from a machine learning model just that much quicker than running with a CPU alone.

    +

    Accelerate GitHub Actions with dedicated GPUs

    You can now accelerate GitHub Actions with dedicated GPUs for machine learning and AI use-cases.

    With the surge of interest in AI and machine learning models, it's not hard to think of reasons why people want GPUs in their workstations and production environments. They make building & training, fine-tuning and serving (inference) from a machine learning model just that much quicker than running with a CPU alone.

    So if you build and test code for CPUs in CI pipelines like GitHub Actions, why wouldn't you do the same with code built for GPUs? Why exercise only a portion of your codebase?

    GPUs for GitHub Actions

    One of our earliest customers moved all their GitHub Actions to actuated for a team of around 30 people, but since Firecracker has no support for GPUs, they had to keep a few self-hosted runners around for testing their models. Their second-hand Dell servers were racked in their own datacentre, with 8x 3090 GPUs in each machine.

    Their request for GPU support in actuated predated the hype around OpenAI, and was the catalyst for us doing this work.

    +

    They told us how many issues they had keeping drivers in sync when trying to use self-hosted runners, and the security issues they ran into with mounting Docker sockets or running privileged containers in Kubernetes.

    +

    With a microVM and actuated, you'll be able to test out different versions of drivers as you see fit, and know there will never be side-effects between builds. You can read more in our FAQ on how actuated differs from other solutions which rely on the poor isolation afforded by containers. Actuated is the closest you can get to a hosted runner, whilst having full access to your own hardware.

    I'll tell you a bit more about it: how to build your own workstation with commodity hardware, or where to rent a powerful bare-metal host with a capable GPU for less than 200 USD / mo that you can use with actuated.

    Available in early access

    So today, we're announcing early access to actuated for GPUs. Whether your machine has one GPU, two, or ten, you can allocate them directly to a microVM for a CI job, giving strong isolation, and the same ephemeral environment that you're used to with GitHub's hosted runners.
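In a workflow, that could look something like the following; the runner label syntax here is hypothetical, so check the actuated docs for the exact form:

```yaml
jobs:
  test-model:
    # Hypothetical label: CPU/RAM sizing plus one dedicated GPU
    runs-on: [actuated-8cpu-16gb-1gpu]
    steps:
      - uses: actions/checkout@v4
      - name: Confirm the GPU is visible inside the microVM
        run: nvidia-smi
```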

    @@ -117,4 +119,4 @@

    Reach out for more

    Do you want to fine-tune, train, or run a batch of inferences on a model? You can do that. GitHub Actions has a 6 hour timeout, which is plenty for many tasks.

    Would it make sense to run Stable Diffusion in the background, with different versions, different inputs, across a matrix? GitHub Actions makes that easy, and actuated can manage the GPU allocations for you.

    Do you run inference from OpenFaaS functions? We have a tutorial on OpenAI Whisper within a function with GPU acceleration here and a separate one on how to serve Server Sent Events (SSE) from OpenAI or self-hosted models, which is popular for chat-style interfaces to AI models.


    If you're interested in GPU support for GitHub Actions, then reach out to talk to us with this form.


    How to split up multi-arch Docker builds to run natively

    Alex Ellis

    QEMU is a convenient way to publish containers for multiple architectures, but it can be incredibly slow. Native is much faster.

    In two previous articles, we covered huge improvements in performance for the Parca project and VPP (Network Service Mesh) simply by switching to actuated with Arm64 runners instead of using QEMU and hosted runners.


    In the first case, using QEMU took over 33 minutes, and bare-metal Arm showed a 22x improvement at only 1 minute 26 seconds. For Network Service Mesh, VPP couldn't even complete a build in 6 hours using QEMU - and I got it down to 9 minutes flat using a bare-metal Ampere Altra server.

    What are we going to see and why is it better?

    In this article, I'll show you how to run multi-arch builds natively on bare-metal hardware using GitHub Actions and actuated.
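As a preview of the approach, here's a minimal sketch: a matrix sends each architecture to a native runner, then a final job merges the per-arch images into one multi-arch manifest. The runner labels and image name are illustrative:

```yaml
jobs:
  build:
    strategy:
      matrix:
        include:
          - arch: amd64
            runner: actuated-4cpu-8gb
          - arch: arm64
            runner: actuated-arm64-4cpu-8gb
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - name: Build and push a per-arch image natively, without QEMU
        run: |
          docker buildx build --platform linux/${{ matrix.arch }} \
            -t ghcr.io/example/app:${{ github.sha }}-${{ matrix.arch }} \
            --push .

  manifest:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Combine the per-arch images into one multi-arch tag
        run: |
          docker buildx imagetools create \
            -t ghcr.io/example/app:${{ github.sha }} \
            ghcr.io/example/app:${{ github.sha }}-amd64 \
            ghcr.io/example/app:${{ github.sha }}-arm64
```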


    Wrapping up

    Want to talk to us about your CI/CD needs? We're happy to help.


    Is the GitHub Actions self-hosted runner safe for Open Source?

    Alex Ellis

    GitHub warns against using self-hosted Actions runners for public repositories - but why? And are there alternatives?

    First of all, why would someone working on an open source project need a self-hosted runner?


Having contributed to dozens of open source projects, and gotten to know many different maintainers, the primary reason tends to be out of necessity. They face an 18-minute build time upon every commit or Pull Request revision, and want to make the best of what little time they can give over to Open Source.

    Having faster builds also lowers friction for contributors, and since many contributors are unpaid and rely on their own internal drive and enthusiasm, a fast build time can be the difference between them fixing a broken test or waiting another few days.

    To sum up, there are probably just a few main reasons:


    Get involved today

  • Find a server for your builds
  • Register for actuated

    Follow us on Twitter - selfactuated


    How to run KVM guests in your GitHub Actions

    Han Verstraete

    From building cloud images, to running NixOS tests and the android emulator, we look at how and why you'd want to run a VM in GitHub Actions.

GitHub's hosted runners do not support nested virtualization. This means some frequently used tools that require KVM, such as Packer or the Android emulator, cannot be used in GitHub Actions CI pipelines.
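You can see the limitation for yourself with a step that checks for the KVM device node - a minimal sketch:

```yaml
jobs:
  check-kvm:
    runs-on: ubuntu-latest
    steps:
      - name: Check for hardware virtualisation support
        run: |
          # /dev/kvm only exists when hardware-accelerated
          # virtualisation is exposed to the runner
          if [ -e /dev/kvm ]; then
            echo "KVM is available"
          else
            echo "KVM is NOT available - KVM-based tools will fall back to slow emulation or fail"
          fi
```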


    We noticed there are quite a few issues for people requesting KVM support for GitHub Actions:

    Want to see a demo or talk to our team? Contact us here


    Just want to try it out instead? Register your GitHub Organisation and set-up a subscription


    Testing the impact of a local cache for building Discourse

    We compare the impact of switching Discourse's GitHub Actions from self-hosted runners and a hosted cache, to a local cache with S3.

    We heard from the Discourse project last year because they were looking to speed up their builds. After trying out a couple of solutions that automated self-hosted runners, they found out that whilst faster CPUs were nice, reliability was a problem and the cache hosted on GitHub's network became the new bottleneck. We ran some tests to compare the hosted cache with hosted runners, to self-hosted with a local cache running with S3. This post will cover what we found.
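For context, the hosted cache in question is the one used by the `actions/cache` action - every restore and save pays a network round-trip to GitHub's storage. A typical usage (the paths and keys here are illustrative for a Ruby project) looks like this:

```yaml
      - name: Cache Ruby gems
        uses: actions/cache@v4
        with:
          path: vendor/bundle
          # Key changes whenever the lockfile changes
          key: gems-${{ runner.os }}-${{ hashFiles('Gemfile.lock') }}
          restore-keys: |
            gems-${{ runner.os }}-
```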


    Discourse logo

    Discourse is the online home for your community. We offer a 100% open source community platform to those who want complete control over how and where their site is run.


    Conclusion

  • Make your builds run faster with Caching for GitHub Actions
  • Fixing the cache latency for self-hosted GitHub Actions
  • Is GitHub's self-hosted runner safe for open source?

    Lessons learned managing GitHub Actions and Firecracker

    Alex Ellis

    Alex shares lessons from building a managed service for GitHub Actions with Firecracker.

    Over the past few months, we've launched over 20,000 VMs for customers and have handled over 60,000 webhook messages from the GitHub API. We've learned a lot from every customer PoC and from our own usage in OpenFaaS.


    First of all - what is it we're offering? And how is it different to managed runners and the self-hosted runner that GitHub offers?

    Actuated replicates the hosted experience you get from paying for hosted runners, and brings it to hardware under your own control. That could be a bare-metal Arm server, or a regular Intel/AMD cloud VM that has nested virtualisation enabled.

    Just like managed runners - every time actuated starts up a runner, it's within a single-tenant virtual machine (VM), with an immutable filesystem.


    Wrapping up

    Whilst there was a significant amount of very technical work at the beginning of actuated, most of our time now is spent on customer support, education, and improving the onboarding experience.

    If you'd like to know how actuated compares to hosted runners or managing the self-hosted runner on your own, we'd encourage checking out the blog and FAQ.

    Are your builds slowing the team down? Do you need better organisation-level insights and reporting? Or do you need Arm support? Are you frustrated with managing self-hosted runners?


Reach out to us to take part in the pilot


    The efficient way to publish multi-arch containers from GitHub Actions

    Alex Ellis

    Learn how to publish container images for both Arm and Intel machines from GitHub Actions.

In 2017, I wrote an article on multi-stage builds with Docker, and it's now part of the Docker Documentation. In my opinion, multi-arch builds were the next step in the evolution of container images.


    What's multi-arch and why should you care?

    If you want users to be able to use your containers on different types of computer, then you'll often need to build different versions of your binaries and containers.

    The faas-cli tool is how users interact with OpenFaaS.
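The standard pattern, which the rest of this post walks through in detail, combines QEMU with Buildx so one job can emit images for several platforms at once. A minimal sketch (the image name is a placeholder):

```yaml
    steps:
      - uses: actions/checkout@v4
      # Register QEMU binfmt handlers so non-native platforms can be emulated
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/example/app:latest
```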


    Putting it all together

    A word of caution

QEMU can be incredibly slow at times when using a hosted runner, where a build that takes 1-2 minutes can extend to over half an hour. If you do run into that, one option is to check out actuated or another solution, which can build directly on an Arm server with a securely isolated Virtual Machine.

    In How to make GitHub Actions 22x faster with bare-metal Arm, we showed how we decreased the build time of an open-source Go project from 30.5 mins to 1.5 mins. If this is the direction you go in, you can use a matrix-build instead of a QEMU-based multi-arch build.


    See also: Recommended bare-metal Arm servers


    How to make GitHub Actions 22x faster with bare-metal Arm

    Alex Ellis

    GitHub doesn't provide hosted Arm runners, so how can you use native Arm runners safely & securely?

    GitHub Actions is a modern, fast and efficient way to build and test software, with free runners available. We use the free runners for various open source projects and are generally very pleased with them, after all, who can argue with good enough and free? But one of the main caveats is that GitHub's hosted runners don't yet support the Arm architecture.


    So many people turn to software-based emulation using QEMU. QEMU is tricky to set up, and requires specific code and tricks if you want to use software in a standard way, without modifying it. But QEMU is great when it runs with hardware acceleration. Unfortunately, the hosted runners on GitHub do not have KVM available, so builds tend to be incredibly slow, and I mean so slow that it's going to distract you and your team from your work.

    This was even more evident when Frederic Branczyk tweeted about his experience with QEMU on GitHub Actions for his open source observability project named Parca.


    Wrapping up

  • Actuated docs
  • FAQ & comparison to other solutions
  • Follow actuated on Twitter

    Keyless deployment to OpenFaaS with OIDC and GitHub Actions

    Alex Ellis

    We're announcing a new OIDC proxy for OpenFaaS for keyless deployments from GitHub Actions.

In 2021, GitHub released OpenID Connect (OIDC) support for CI jobs running under GitHub Actions. This was a huge step forward for security, meaning that any GitHub Action could mint an OIDC token and use it to securely federate into another system without having to store long-lived credentials in the repository.


I wrote a prototype for OpenFaaS shortly after the announcement and a deep dive explaining how it works. I used inlets to set up an HTTPS tunnel, and sent the token to my machine for inspection. Various individuals and technical teams have used my content as a reference guide when working with GitHub Actions and OIDC.

    See the article: Deploy without credentials with GitHub Actions and OIDC

    Since then, custom actions for GCP, AWS and Azure have been created which allow an OIDC token from a GitHub Action to be exchanged for a short-lived access token for their API - meaning you can manage cloud resources securely. For example, see: Configuring OpenID Connect in Amazon Web Services - we have actuated customers who use this approach to deploy to ECR from their self-hosted runners without having to store long-lived credentials in their repositories.
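The exchange hinges on the `id-token: write` permission in the workflow; for example, AWS's official action trades the job's OIDC token for short-lived credentials (the role ARN and region below are placeholders):

```yaml
permissions:
  id-token: write   # allow the job to mint an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role
          aws-region: eu-west-1
      # Confirms the federated identity - no stored AWS keys anywhere
      - run: aws sts get-caller-identity
```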


    Wrapping up


    Right sizing VMs for GitHub Actions

    How do you pick the right VM size for your GitHub Actions runners? We wrote a custom tool to help you find out.

    When we onboarded the etcd project from the CNCF, they'd previously been using a self-hosted runner for their repositories on a bare-metal host. There are several drawbacks to this approach, including potential security issues, especially when using Docker.


Actuated VM sizes can be configured with a label, and you can pick any combination of vCPU and RAM; there's no need to pick from pre-defined sizes.
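For example, a job requests a size directly through its `runs-on` label (the values shown are illustrative):

```yaml
jobs:
  test:
    # vCPU count and RAM are encoded in the label - any combination works
    runs-on: actuated-8cpu-12gb
    steps:
      - uses: actions/checkout@v4
      - run: make test
```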

    At the same time, it can be hard to know what size to pick, and if you make the VM size too large, then you won't be able to run as many jobs at once.

There are three main things to consider:


    Wrapping up

  • post.js communicates to vmmeter over a TCP port and tells it to dump its response to /tmp/vmmeter.log, then to exit
  • What if you're not using GitHub Actions?


You can run vmmeter with bash on your own system, and may also be able to use vmmeter in GitLab CI or Jenkins. We've got manual instructions for vmmeter here. You can even just start it up right now, do some work and then call the collect endpoint to see what was used over that period of time, a bit like a generic profiler.


    How to add a Software Bill of Materials (SBOM) to your containers with GitHub Actions

    Alex Ellis

    Learn how to add a Software Bill of Materials (SBOM) to your containers with GitHub Actions in a few easy steps.

    What is a Software Bill of Materials (SBOM)?


    In April 2022 Justin Cormack, CTO of Docker announced that Docker was adding support to generate a Software Bill of Materials (SBOM) for container images.

An SBOM is an inventory of the components that make up a software application, including the version of each component. The version is important because it can be cross-referenced with a vulnerability database to determine whether a component has any known vulnerabilities.

Many organisations are also required to comply with certain Open Source Software (OSS) licenses. So if SBOMs are included in the software they purchase or consume from vendors, they can be used to determine whether the software is compliant with their specific license requirements, lowering legal and compliance risk.
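One straightforward way to produce an SBOM in CI is Anchore's sbom-action, which wraps their Syft tool (the image name below is a placeholder):

```yaml
      - name: Generate an SPDX SBOM for the image
        uses: anchore/sbom-action@v0
        with:
          image: ghcr.io/example/app:latest
          format: spdx-json
          output-file: sbom.spdx.json
```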


    Wrapping up

  • Announcing Docker SBOM: A step towards more visibility into Docker images - Justin Cormack
  • Generating SBOMs for Your Image with BuildKit - Justin Chadwell
  • Anchore on GitHub

    Secure CI for GitLab with Firecracker microVMs

    Learn how actuated for GitLab CI can help you secure your CI/CD pipelines with Firecracker.

    We started building actuated for GitHub Actions because we at OpenFaaS Ltd had a need for: unmetered CI minutes, faster & more powerful x86 machines, native Arm builds and low maintenance CI builds.


    And most importantly, we needed it to be low-maintenance, and securely isolated.

    None of the solutions at the time could satisfy all of those requirements, and even today with GitHub adopting the community-based Kubernetes controller to run CI in Pods, there is still a lot lacking.

As we've gained more experience with customers who largely had the same needs as we did for GitHub Actions, we started to hear more and more from GitLab CI users. From large enterprise companies who are concerned about the security risks of running CI with privileged Docker containers, Docker socket binding (from the host!) or the flaky nature and slow speed of VFS with Docker In Docker (DIND).
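For reference, the risky pattern in question looks like this in a `.gitlab-ci.yml` - Docker-in-Docker via a service container, which only works when the runner itself is registered with `privileged = true`:

```yaml
# The commonly-used (and risky) Docker-in-Docker pattern
build:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t example/app .
```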


    Wrapping up

  • Browse the docs and FAQ for actuated for GitHub Actions
  • Read customer testimonials

    You can follow @selfactuated on Twitter, or find me there too to keep an eye on what we're building.

The team collaborating on a project

Fast and secure GitHub Actions on your own infrastructure.

    Standard hosted runners are slow and expensive, larger runners cost even more.

    Actuated runs on your own infrastructure with a predictable cost, low management and secure isolation with Firecracker.

    Trusted by a growing number of companies

    Riskfuel
    OpenFaaS Ltd
    Observability, simplified
    Swiss National Data and Service Center
    Kubiya.ai
    Cloud Native Computing Foundation (CNCF)
    Toolpath
    Waffle Labs
    Tuple

    After one week actuated has been a huge improvement! Even our relatively fast Go build seems to be running 30%+ faster and being able to commit to several repos and only wait 10 minutes instead of an hour has been transformative.

    Steven Byrnes
    VP Engineering, Waffle Labs
    SettleMint

    Finally our self-hosted GitHub Actions Runners are on fire 🔥. Thanks to Alex for all the help onboarding our monorepo with NX, Docker, x86 and Arm on bare-metal.

    Our worst case build/deploy went down from 1h to 20-30mins, for a repo with 200k lines-of-code.

    Roderik van der Veer
    Founder & CTO at SettleMint

    Keep in touch with us
    For news and announcements

    We care about your data. Read our privacy policy.

    Increase speed, security and efficiency.

    Learn what friction actuated solves and how it compares to other solutions: Read the announcement

    Secure CI that feels like hosted.

    Every job runs in an immutable microVM, just like hosted runners, but on your own servers.

    That means you can run sudo, Docker and Kubernetes directly, just like you do during development, there's no need for Docker-in-Docker (DIND), Kaniko or complex user-namespaces.

    What about management and scheduling? Provision a server with a minimal Operating System, then install the agent. We'll do the rest.

    Arm / M1 from Dev to Production.

Apple Silicon is a new favourite for developer workstations. Did you know that when your team runs Docker, they're using an Arm VM?

    With actuated, those same builds can be performed on Arm servers during CI, and even deployed to production on more efficient Ampere or AWS Graviton servers.

    Run directly within your datacenter.

    If you work with large container images or datasets, then you'll benefit from having your CI run with direct local network access to your internal network.

    This is not possible with hosted runners, and this is crucial for one of our customers who can regularly saturate a 10GbE link during GitHub Actions runs.

    In contrast, VPNs are complex to manage, capped on speed, and incur costly egress charges.

    Debug live over SSH

    We heard from many teams that they missed CircleCI's "debug this job" button, so we built it for you.

    We realise you won't debug your jobs on a regular basis, but when you are stuck, and have to wait 15-20 minutes to get to the line you've changed in a job, debugging with a terminal can save you hours.

    "How cool!!! you don't know how many hours I have lost on GitHub Actions without this." - Ivan Subotic, Swiss National Data and Service Center (DaSCH)

    Lower your costs, and keep them there.

    Actuated is billed by the maximum amount of concurrent jobs you can run, so no matter how many build minutes your team requires, the charge will not increase with usage.

In a recent interview, a lead SRE at a UK-based scale-up told us that their bill had increased 5x over the past 6 months. They are now paying 5000 GBP / month and we worked out that we could make their builds faster and at least halve the costs.

    Lower management costs.

    Whenever your team has to manage self-hosted runners, or explain non-standard tools like Kaniko to developers, you're losing money.

    With actuated, you bring your own servers, install our agent, and we do the rest. Our VM image is built with automation and kept up to date, so you don't have to manage packages.

    Predictable billing.

    Most small teams we interviewed were spending at least 1000-1500 USD / mo for the standard, slower hosted runners.

    That cost would only increase with GitHub's faster runners, but with actuated the cost is flat-rate, no matter how many minutes you use.

    Get insights into your organisation

    When you have more than a few teammates and a dozen repositories, it's near impossible to get insights into patterns of usage.

    Inspired by Google Analytics, Actuated contrasts usage for the current period vs the previous period - for the whole organisation, each repository and each developer.

    Organisational level insights

    Learn about the actuated dashboard

    These are the results you've been waiting for

    “Our team loves actuated. It gave us a 3x speed increase on a Go repo that we build throughout the day. Alex and team are really responsive, and we're already happy customers of OpenFaaS and Inlets.”

    Shaked Askayo
    CTO, Kubiya.ai

“We just switched from Actions Runtime Controller to Actuated. It only took 30s to create 5x isolated VMs, run the jobs and tear them down again inside our on-prem environment (no Docker socket mounting shenanigans)! Pretty impressive stuff.”

    Addison van den Hoeven
    DevOps Lead, Riskfuel

    “From my time at Mirantis, I learned how slow and unreliable Docker In Docker can be. Compared to GitHub's 16 core runners, actuated is 2-3x faster for us.”

    Sergei Lukianov
    Founding Engineer, Githedgehog

    “This is great, perfect for jobs that take forever on normal GitHub runners. I love what Alex is doing here.”

    Richard Case
    Principal Engineer, SUSE

    “We needed to build public repos on Arm runners, but QEMU couldn't finish in 6 hours, so we had to turn the build off. Actuated now builds the same code in 4 minutes.”

    Patrick Stephens
    Tech Lead of Infrastructure, Calyptia/Fluent Bit.

    “We needed to build Arm containers for our customers to deploy to Graviton instances on AWS EC2. Hosted runners with QEMU failed to finish within the 6 hour limit. With actuated it takes 20 minutes - and thanks to Firecracker, each build is safely isolated for our open source repositories.”

    Ivan Subotic
    Head of Engineering, Swiss National Data and Service Center for the Humanities

    “One of our Julia builds was taking 5 hours of compute time to complete, and was costing us over 1500USD / mo. Alex's team got us a 3x improvement on speed and lowered our costs at the same time. Actuated is working great for us.”

    Justin Gray, Ph.D.
    CTO & Co-founder at Toolpath

    Frequently asked questions

    Workcation

“It's cheaper - and less frustrating for your developers - to pay more for better hardware to keep your team on track.

    The upfront cost for more CPU power pays off over time. And your developers will thank you.”

     

    Read the article: The hidden costs of waiting on slow build times (GitHub.com)

    Natalie Somersall
    GitHub Staff

    Ready to go faster?

    You can try out actuated on a monthly basis, with no commitment beyond that.

    Make your builds faster today
    Matrix build with k3sup
    \ No newline at end of file +
    The team working collaborating on a project

    Fast and secure GitHub Actionson your own infrastructure.

    Standard hosted runners are slow and expensive, larger runners cost even more.

    Actuated runs on your own infrastructure with a predictable cost, low management and secure isolation with Firecracker.

    Trusted by a growing number of companies

    Riskfuel
    OpenFaaS Ltd
    Observability, simplified
    Swiss National Data and Service Center
    Kubiya.ai
    Cloud Native Computing Foundation (CNCF)
    Toolpath
    Waffle Labs
    Tuple

    After one week actuated has been a huge improvement! Even our relatively fast Go build seems to be running 30%+ faster and being able to commit to several repos and only wait 10 minutes instead of an hour has been transformative.

    Steven Byrnes
    VP Engineering, Waffle Labs
    SettleMint

    Finally our self-hosted GitHub Actions Runners are on fire 🔥. Thanks to Alex for all the help onboarding our monorepo with NX, Docker, x86 and Arm on bare-metal.

    Our worst case build/deploy went down from 1h to 20-30mins, for a repo with 200k lines-of-code.

    Roderik van der Veer
    Founder & CTO at SettleMint

    Keep in touch with us
    For news and announcements

    We care about your data. Read our privacy policy.

    Increase speed, security and efficiency.

    Learn what friction actuated solves and how it compares to other solutions: Read the announcement

    Secure CI that feels like hosted.

    Every job runs in an immutable microVM, just like hosted runners, but on your own servers.

    That means you can run sudo, Docker and Kubernetes directly, just like you do during development, there's no need for Docker-in-Docker (DIND), Kaniko or complex user-namespaces.
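
As a minimal sketch of what this looks like in practice (the workflow shape follows GitHub Actions syntax; the `actuated-4cpu-8gb` runner label and image name are illustrative assumptions, not taken from this page):

```yaml
name: docker-build

on: [push]

jobs:
  build:
    # The job runs inside an ephemeral Firecracker microVM,
    # so sudo and a real Docker daemon are available directly -
    # no Docker-in-Docker sidecar or Kaniko required.
    runs-on: actuated-4cpu-8gb
    steps:
      - uses: actions/checkout@v4
      - name: Build an image against the VM's own Docker daemon
        run: docker build -t example:latest .
```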

    What about management and scheduling? Provision a server with a minimal Operating System, then install the agent. We'll do the rest.

    Arm / M1 from Dev to Production.

Apple Silicon is a new favourite for developer workstations. Did you know that when your team runs Docker, they're using an Arm VM?

    With actuated, those same builds can be performed on Arm servers during CI, and even deployed to production on more efficient Ampere or AWS Graviton servers.
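
The same workflow can target both architectures with a build matrix, running natively on each instead of emulating with QEMU (a sketch in GitHub Actions syntax; the runner labels are illustrative assumptions):

```yaml
name: multi-arch

on: [push]

jobs:
  build:
    strategy:
      matrix:
        # One native runner per architecture - no QEMU emulation
        runs-on: [actuated-4cpu-8gb, actuated-arm64-4cpu-8gb]
    runs-on: ${{ matrix.runs-on }}
    steps:
      - uses: actions/checkout@v4
      - name: Build natively for this architecture
        run: docker build -t example:${{ runner.arch }} .
```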

    Run directly within your datacenter.

    If you work with large container images or datasets, then you'll benefit from having your CI run with direct local network access to your internal network.

This is not possible with hosted runners, and it is crucial for one of our customers, who can regularly saturate a 10GbE link during GitHub Actions runs.

    In contrast, VPNs are complex to manage, capped on speed, and incur costly egress charges.

    Debug live over SSH

    We heard from many teams that they missed CircleCI's "debug this job" button, so we built it for you.

We realise you won't debug your jobs every day, but when you're stuck and have to wait 15-20 minutes to get to the line you've changed in a job, debugging with a terminal can save you hours.

    "How cool!!! you don't know how many hours I have lost on GitHub Actions without this." - Ivan Subotic, Swiss National Data and Service Center (DaSCH)

    Lower your costs, and keep them there.

Actuated is billed by the maximum number of concurrent jobs you can run, so no matter how many build minutes your team requires, the charge will not increase with usage.

In a recent interview, a lead SRE at a UK-based scale-up told us that their bill had increased 5x over the past 6 months. They are now paying 5,000 GBP / month, and we worked out that we could make their builds faster and at least halve their costs.

    Lower management costs.

    Whenever your team has to manage self-hosted runners, or explain non-standard tools like Kaniko to developers, you're losing money.

    With actuated, you bring your own servers, install our agent, and we do the rest. Our VM image is built with automation and kept up to date, so you don't have to manage packages.

    Predictable billing.

    Most small teams we interviewed were spending at least 1000-1500 USD / mo for the standard, slower hosted runners.

    That cost would only increase with GitHub's faster runners, but with actuated the cost is flat-rate, no matter how many minutes you use.

    Get insights into your organisation

When you have more than a few teammates and a dozen repositories, it's nearly impossible to get insights into patterns of usage.

    Inspired by Google Analytics, Actuated contrasts usage for the current period vs the previous period - for the whole organisation, each repository and each developer.

    Organisational level insights

    Learn about the actuated dashboard

    These are the results you've been waiting for

    “Our team loves actuated. It gave us a 3x speed increase on a Go repo that we build throughout the day. Alex and team are really responsive, and we're already happy customers of OpenFaaS and Inlets.”

    Shaked Askayo
    CTO, Kubiya.ai

“We just switched from Actions Runtime Controller to Actuated. It only took 30s to create 5x isolated VMs, run the jobs and tear them down again inside our on-prem environment (no Docker socket mounting shenanigans)! Pretty impressive stuff.”

    Addison van den Hoeven
    DevOps Lead, Riskfuel

    “From my time at Mirantis, I learned how slow and unreliable Docker In Docker can be. Compared to GitHub's 16 core runners, actuated is 2-3x faster for us.”

    Sergei Lukianov
    Founding Engineer, Githedgehog

    “This is great, perfect for jobs that take forever on normal GitHub runners. I love what Alex is doing here.”

    Richard Case
    Principal Engineer, SUSE

    “We needed to build public repos on Arm runners, but QEMU couldn't finish in 6 hours, so we had to turn the build off. Actuated now builds the same code in 4 minutes.”

    Patrick Stephens
    Tech Lead of Infrastructure, Calyptia/Fluent Bit.

    “We needed to build Arm containers for our customers to deploy to Graviton instances on AWS EC2. Hosted runners with QEMU failed to finish within the 6 hour limit. With actuated it takes 20 minutes - and thanks to Firecracker, each build is safely isolated for our open source repositories.”

    Ivan Subotic
    Head of Engineering, Swiss National Data and Service Center for the Humanities

    “One of our Julia builds was taking 5 hours of compute time to complete, and was costing us over 1500USD / mo. Alex's team got us a 3x improvement on speed and lowered our costs at the same time. Actuated is working great for us.”

    Justin Gray, Ph.D.
    CTO & Co-founder at Toolpath

    Frequently asked questions

    Workcation

    “It's cheaper - and less frustrating for your developers — to pay more for better hardware to keep your team on track.

    The upfront cost for more CPU power pays off over time. And your developers will thank you.”

     

    Read the article: The hidden costs of waiting on slow build times (GitHub.com)

    Natalie Somersall
    GitHub Staff

    Ready to go faster?

    You can try out actuated on a monthly basis, with no commitment beyond that.

    Make your builds faster today
    Matrix build with k3sup

    Plans & Pricing

    You can get started with us the same day as your call.

      Simple, flat-rate pricing

      Actuated plans are based upon the total number of concurrent builds needed for your team. All plans include unmetered minutes, with no additional charges.

      The higher the concurrency level for your plan, the more benefits you'll receive.

      The onboarding process is quick and easy: learn more.

      In all tiers

      • Unlimited build minutes
      • Bring your own server
      • Free onboarding call with our engineers
      • Linux x86_64 and 64-bit Arm
      • Reporting and usage metrics
      • Support via email

      With higher concurrency

      • Debug tricky builds with SSH
      • Enhanced SLA
      • Private Prometheus metrics
      • Private Slack channel
      • Recommendations to improve build speed

      Early access

      • Self-hosted GitLab CI
      • GitHub Enterprise Server (GHES)
      • GPU pass-through

      Pay monthly, for up to 3x faster CI

$100 USD / concurrent build

      Talk to us

      You'll meet with our founder to discuss your current challenges and answer any questions you may have.


    GPUs for GitHub Actions

One of our earliest customers moved all their GitHub Actions to actuated for a team of around 30 people, but since Firecracker has no support for GPUs, they had to keep a few self-hosted runners around for testing their models. Their second-hand Dell servers were racked in their own datacentre, with 8x 3090 GPUs in each machine.

    Their request for GPU support in actuated predated the hype around OpenAI, and was the catalyst for us doing this work.


    They told us how many issues they had keeping drivers in sync when trying to use self-hosted runners, and the security issues they ran into with mounting Docker sockets or running privileged containers in Kubernetes.


With a microVM and actuated, you'll be able to test out different versions of drivers as you see fit, and know there will never be side-effects between builds. You can read more in our FAQ on how actuated differs from other solutions that rely on the poor isolation afforded by containers. Actuated is the closest you can get to a hosted runner, whilst having full access to your own hardware.

I'll tell you a bit more about it: how to build your own workstation with commodity hardware, or where to rent a powerful bare-metal host with a capable GPU for less than 200 USD / mo that you can use with actuated.

    Available in early access

    So today, we're announcing early access to actuated for GPUs. Whether your machine has one GPU, two, or ten, you can allocate them directly to a microVM for a CI job, giving strong isolation, and the same ephemeral environment that you're used to with GitHub's hosted runners.