diff --git a/404.html b/404.html index cc35dda..1079885 100644 --- a/404.html +++ b/404.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

404

This page could not be found.

\ No newline at end of file + }

404

This page could not be found.

\ No newline at end of file diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/amd-zenbleed-update-now.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/amd-zenbleed-update-now.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/amd-zenbleed-update-now.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/amd-zenbleed-update-now.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/arm-ci-cncf-ampere.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/arm-ci-cncf-ampere.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/arm-ci-cncf-ampere.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/arm-ci-cncf-ampere.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/automate-packer-qemu-image-builds.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/automate-packer-qemu-image-builds.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/automate-packer-qemu-image-builds.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/automate-packer-qemu-image-builds.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/blazing-fast-ci-with-microvms.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/blazing-fast-ci-with-microvms.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/blazing-fast-ci-with-microvms.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/blazing-fast-ci-with-microvms.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/caching-in-github-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/caching-in-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/caching-in-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/caching-in-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/calyptia-case-study-arm.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/calyptia-case-study-arm.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/calyptia-case-study-arm.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/calyptia-case-study-arm.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/case-study-bring-your-own-bare-metal-to-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/case-study-bring-your-own-bare-metal-to-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/case-study-bring-your-own-bare-metal-to-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/case-study-bring-your-own-bare-metal-to-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/custom-sizes-bpf-kvm.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/custom-sizes-bpf-kvm.json similarity index 72% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/custom-sizes-bpf-kvm.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/custom-sizes-bpf-kvm.json index 602113f..86fc9c1 100644 --- a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/custom-sizes-bpf-kvm.json +++ b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/custom-sizes-bpf-kvm.json @@ -1 +1 @@ -{"pageProps":{"post":{"slug":"custom-sizes-bpf-kvm","fileName":"2023-12-04-custom-sizes-bpf-kvm.md","contentHtml":"

In this December update, we've got three new updates to the platform that we think you'll benefit from. From requesting custom vCPU and RAM per job, to eBPF features, to spreading your plan across multiple machines dynamically, it's all new and makes actuated better value.

\n

New eBPF support and 5.10.201 Kernel

\n

As part of our work to provide hosted Arm CI for CNCF projects, including Tetragon and Cilium, we've now enabled eBPF and BTF features within the Kernel.

\n
\n

Berkeley Packet Filter (BPF) is an advanced way to integrate with the Kernel for observability, security and networking. You'll see it included in various CNCF projects like Cilium, Falco, Kepler, and others.

\n
\n

Whilst BPF is powerful, it's also a very fast-moving space, and it was particularly complicated to patch into Firecracker's minimal Kernel configuration. We want to say thank you to Mahé Tardy, who maintains Tetragon, and to Duffie Cooley, both from Isovalent, for pointers and collaboration.

\n

We've made a big jump in the supported Kernel version from 5.10.77 up to 5.10.201, with newer revisions being made available on a continual basis.

\n

To update your servers, log in via SSH and edit /etc/default/actuated.

\n

For amd64:

\n
IMAGE_REF=\"ghcr.io/openfaasltd/actuated-ubuntu22.04:x86_64-latest\"\nKERNEL_REF=\"ghcr.io/openfaasltd/actuated-kernel:x86_64-latest\"\n
\n

For arm64:

\n
IMAGE_REF=\"ghcr.io/openfaasltd/actuated-ubuntu22.04:aarch64-latest\"\nKERNEL_REF=\"ghcr.io/openfaasltd/actuated-kernel:aarch64-latest\"\n
\n

Once you have the new images in place, reboot the server. Updates to the Kernel and root filesystem will be delivered Over The Air (OTA) automatically by our team.

\n

Request custom vCPU and RAM per job

\n

Our initial version of actuated aimed to set a specific vCPU and RAM value for each build, designed to slice up a machine equally for the best mix of performance and concurrency. We would recommend it to teams during their onboarding call, then mostly leave it as it was. For a machine with 128GB RAM and 32 threads, you may have set it up for 8 jobs with 4x vCPU and 16GB RAM each, or 4 jobs with 8x vCPU and 32GB RAM.

\n

However, whilst working with Justin Gray, CTO at Toolpath, we found that their build needed increasing amounts of RAM to avoid an Out Of Memory (OOM) crash, and so we implemented custom labels.

\n

These labels do not have any predetermined values, so you can change each one to any value you like, independently. You're not locked into a fixed set of combinations.

\n

Small tasks, automation, publishing Helm charts?

\n
runs-on: actuated-2cpu-8gb\n
\n

Building a large application, or training an AI model?

\n
runs-on: actuated-32cpu-128gb\n
\n

Spreading your plan across all available hosts

\n

Previously, if you had a plan with 10 concurrent builds and both an Arm server and an amd64 server, we'd split your plan statically 50/50. So you could run a maximum of 5 Arm and 5 amd64 builds at the same time.

\n

Now we've made this dynamic: all 10 of your builds can start on either the Arm or the amd64 server. Or 1 could start on the Arm server and 9 on the amd64 server, and so on.

\n

The change makes the product better value for money, and we had always wanted it to work this way.

\n

Thanks to Patrick Stephens at Fluent/Calyptia for the suggestion and for helping us test it out.

\n

KVM acceleration aka running VMs in your CI pipeline

\n

When we started actuated over 12 months ago, there was no support for KVM acceleration, i.e. running a VM within a GitHub Actions job, on GitHub's infrastructure. We made it available for our customers first, with a custom Kernel configuration for x86_64 servers. Arm support for launching VMs within VMs is not available in the current generation of Ampere servers, but may arrive with the next generation of chips and Kernels.

\n

We have several tutorials including how to run Firecracker itself within a CI job, Packer, nix and more.

\n

When you run Packer in a local VM, instead of with one of the cloud builders, you save time and costs by not having to fire up cloud resources on AWS, GCP, Azure, and so forth. You can build the image in the local VM, then convert it to an AMI or another format.

\n

One of our customers has started exploring launching a VM during a CI job in order to test air-gapped support for enterprise customers. This is a great example of how you can use nested virtualisation to test your own product in a repeatable way.

\n

Nix benefits particularly from being able to create a clean, isolated environment within a CI pipeline, to get a repeatable build. Graham Christensen from Determinate Systems reached out to collaborate on testing their Nix installer in actuated.

\n

He didn't expect it to run, but when it worked first time, he remarked: \"Nice, perfect!\"

\n
jobs:\n  specs:\n    name: ci\n    runs-on: [actuated-16cpu-32gb]\n    steps:\n      - uses: DeterminateSystems/nix-installer-action@main\n      - run: |\n            nix-build '<nixpkgs/nixos/tests/doas.nix>'\n
\n\n

Wrapping up

\n

We've now released eBPF/BTF support as part of onboarding CNCF projects, updated to the latest Kernel revision, made scheduling better value for money & easier to customise, and have added a range of tutorials for getting the most out of nested virtualisation.

\n

If you'd like to try out actuated, you can get started same day.

\n

Talk to us.

\n

You may also like:

\n","title":"December Boost: Custom Job Sizes, eBPF Support & KVM Acceleration","description":"You can now request custom amounts of RAM and vCPU for jobs, run eBPF within jobs, and use KVM acceleration.","tags":["ebpf","cloudnative","opensource"],"author_img":"alex","image":"/images/2023-12-scheduling-bpf/background-bpf.png","date":"2023-12-04"}},"__N_SSG":true} \ No newline at end of file +{"pageProps":{"post":{"slug":"custom-sizes-bpf-kvm","fileName":"2023-12-04-custom-sizes-bpf-kvm.md","contentHtml":"

In this December update, we've got three new updates to the platform that we think you'll benefit from. From requesting custom vCPU and RAM per job, to eBPF features, to spreading your plan across multiple machines dynamically, it's all new and makes actuated better value.

\n

New eBPF support and 5.10.201 Kernel

\n

As part of our work to provide hosted Arm CI for CNCF projects, including Tetragon and Cilium, we've now enabled eBPF and BTF features within the Kernel.

\n
\n

Berkeley Packet Filter (BPF) is an advanced way to integrate with the Kernel for observability, security and networking. You'll see it included in various CNCF projects like Cilium, Falco, Kepler, and others.

\n
\n

Whilst BPF is powerful, it's also a very fast-moving space, and it was particularly complicated to patch into Firecracker's minimal Kernel configuration. We want to say thank you to Mahé Tardy, who maintains Tetragon, and to Duffie Cooley, both from Isovalent, for pointers and collaboration.

\n

We've made a big jump in the supported Kernel version from 5.10.77 up to 5.10.201, with newer revisions being made available on a continual basis.

\n

To update your servers, log in via SSH and edit /etc/default/actuated.

\n

For amd64:

\n
IMAGE_REF=\"ghcr.io/openfaasltd/actuated-ubuntu22.04:x86_64-latest\"\nKERNEL_REF=\"ghcr.io/openfaasltd/actuated-kernel:x86_64-latest\"\n
\n

For arm64:

\n
IMAGE_REF=\"ghcr.io/openfaasltd/actuated-ubuntu22.04:aarch64-latest\"\nKERNEL_REF=\"ghcr.io/openfaasltd/actuated-kernel:aarch64-latest\"\n
\n

Once you have the new images in place, reboot the server. Updates to the Kernel and root filesystem will be delivered Over The Air (OTA) automatically by our team.

\n

Request custom vCPU and RAM per job

\n

Our initial version of actuated aimed to set a specific vCPU and RAM value for each build, designed to slice up a machine equally for the best mix of performance and concurrency. We would recommend it to teams during their onboarding call, then mostly leave it as it was. For a machine with 128GB RAM and 32 threads, you may have set it up for 8 jobs with 4x vCPU and 16GB RAM each, or 4 jobs with 8x vCPU and 32GB RAM.

\n

However, whilst working with Justin Gray, CTO at Toolpath, we found that their build needed increasing amounts of RAM to avoid an Out Of Memory (OOM) crash, and so we implemented custom labels.

\n

These labels do not have any predetermined values, so you can change each one to any value you like, independently. You're not locked into a fixed set of combinations.

\n

Small tasks, automation, publishing Helm charts?

\n
runs-on: actuated-2cpu-8gb\n
\n

Building a large application, or training an AI model?

\n
runs-on: actuated-32cpu-128gb\n
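Here's a minimal sketch of how one of these labels fits into a complete workflow. The size label, repository contents and build command are illustrative, so swap them for values that suit your own project:

name: build

on: [push]

jobs:
  build:
    # Request 8 vCPU and 16GB of RAM for this job via the runs-on label
    runs-on: actuated-8cpu-16gb
    steps:
      - uses: actions/checkout@v4
      # Replace with your own build or test steps
      - run: make all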
\n

Spreading your plan across all available hosts

\n

Previously, if you had a plan with 10 concurrent builds and both an Arm server and an amd64 server, we'd split your plan statically 50/50. So you could run a maximum of 5 Arm and 5 amd64 builds at the same time.

\n

Now we've made this dynamic: all 10 of your builds can start on either the Arm or the amd64 server. Or 1 could start on the Arm server and 9 on the amd64 server, and so on.

\n

The change makes the product better value for money, and we had always wanted it to work this way.

\n

Thanks to Patrick Stephens at Fluent/Calyptia for the suggestion and for helping us test it out.

\n

KVM acceleration aka running VMs in your CI pipeline

\n

When we started actuated over 12 months ago, there was no support for KVM acceleration, i.e. running a VM within a GitHub Actions job, on GitHub's infrastructure. We made it available for our customers first, with a custom Kernel configuration for x86_64 servers. Arm support for launching VMs within VMs is not available in the current generation of Ampere servers, but may arrive with the next generation of chips and Kernels.

\n

We have several tutorials including how to run Firecracker itself within a CI job, Packer, nix and more.

\n

When you run Packer in a local VM, instead of with one of the cloud builders, you save time and costs by not having to fire up cloud resources on AWS, GCP, Azure, and so forth. You can build the image in the local VM, then convert it to an AMI or another format.

\n

One of our customers has started exploring launching a VM during a CI job in order to test air-gapped support for enterprise customers. This is a great example of how you can use nested virtualisation to test your own product in a repeatable way.

\n

Nix benefits particularly from being able to create a clean, isolated environment within a CI pipeline, to get a repeatable build. Graham Christensen from Determinate Systems reached out to collaborate on testing their Nix installer in actuated.

\n

He didn't expect it to run, but when it worked first time, he remarked: \"Perfect! I'm impressed and happy that our action works out of the box.\"

\n
jobs:\n  specs:\n    name: ci\n    runs-on: [actuated-16cpu-32gb]\n    steps:\n      - uses: DeterminateSystems/nix-installer-action@main\n      - run: |\n            nix-build '<nixpkgs/nixos/tests/doas.nix>'\n
\n\n

Wrapping up

\n

We've now released eBPF/BTF support as part of onboarding CNCF projects, updated to the latest Kernel revision, made scheduling better value for money & easier to customise, and have added a range of tutorials for getting the most out of nested virtualisation.

\n

If you'd like to try out actuated, you can get started same day.

\n

Talk to us.

\n

You may also like:

\n","title":"December Boost: Custom Job Sizes, eBPF Support & KVM Acceleration","description":"You can now request custom amounts of RAM and vCPU for jobs, run eBPF within jobs, and use KVM acceleration.","tags":["ebpf","cloudnative","opensource"],"author_img":"alex","image":"/images/2023-12-scheduling-bpf/background-bpf.png","date":"2023-12-04"}},"__N_SSG":true} \ No newline at end of file diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/develop-a-great-go-cli.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/develop-a-great-go-cli.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/develop-a-great-go-cli.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/develop-a-great-go-cli.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/faster-nix-builds.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/faster-nix-builds.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/faster-nix-builds.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/faster-nix-builds.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/faster-self-hosted-cache.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/faster-self-hosted-cache.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/faster-self-hosted-cache.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/faster-self-hosted-cache.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/firecracker-container-lab.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/firecracker-container-lab.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/firecracker-container-lab.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/firecracker-container-lab.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/github-actions-usage-cli.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/github-actions-usage-cli.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/github-actions-usage-cli.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/github-actions-usage-cli.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/how-to-run-multi-arch-builds-natively.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/how-to-run-multi-arch-builds-natively.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/how-to-run-multi-arch-builds-natively.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/how-to-run-multi-arch-builds-natively.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/is-the-self-hosted-runner-safe-github-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/is-the-self-hosted-runner-safe-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/is-the-self-hosted-runner-safe-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/is-the-self-hosted-runner-safe-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/kvm-in-github-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/kvm-in-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/kvm-in-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/kvm-in-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/managing-github-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/managing-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/managing-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/managing-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/multi-arch-docker-github-actions.json 
b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/multi-arch-docker-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/multi-arch-docker-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/multi-arch-docker-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/native-arm64-for-github-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/native-arm64-for-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/native-arm64-for-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/native-arm64-for-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/oidc-proxy-for-openfaas.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/oidc-proxy-for-openfaas.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/oidc-proxy-for-openfaas.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/oidc-proxy-for-openfaas.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/sbom-in-github-actions.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/sbom-in-github-actions.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/sbom-in-github-actions.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/sbom-in-github-actions.json diff --git a/_next/data/sWZ02q12KXUNYpz9v5jwh/blog/secure-microvm-ci-gitlab.json b/_next/data/wXUtKbTt0GdMTEEVlueRm/blog/secure-microvm-ci-gitlab.json similarity index 100% rename from _next/data/sWZ02q12KXUNYpz9v5jwh/blog/secure-microvm-ci-gitlab.json rename to _next/data/wXUtKbTt0GdMTEEVlueRm/blog/secure-microvm-ci-gitlab.json diff --git a/_next/static/sWZ02q12KXUNYpz9v5jwh/_buildManifest.js b/_next/static/wXUtKbTt0GdMTEEVlueRm/_buildManifest.js similarity index 100% rename from _next/static/sWZ02q12KXUNYpz9v5jwh/_buildManifest.js rename to _next/static/wXUtKbTt0GdMTEEVlueRm/_buildManifest.js diff --git a/_next/static/sWZ02q12KXUNYpz9v5jwh/_ssgManifest.js b/_next/static/wXUtKbTt0GdMTEEVlueRm/_ssgManifest.js similarity index 100% rename from _next/static/sWZ02q12KXUNYpz9v5jwh/_ssgManifest.js rename to _next/static/wXUtKbTt0GdMTEEVlueRm/_ssgManifest.js diff --git a/blog.html b/blog.html index b010d57..e4247d1 100644 --- a/blog.html +++ b/blog.html @@ -4,4 +4,4 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Actuated Blog

The latest news, tutorials, case-studies, and announcements.

\ No newline at end of file +

Actuated Blog

The latest news, tutorials, case-studies, and announcements.

\ No newline at end of file diff --git a/blog/amd-zenbleed-update-now.html b/blog/amd-zenbleed-update-now.html index 9b980a2..cf3d520 100644 --- a/blog/amd-zenbleed-update-now.html +++ b/blog/amd-zenbleed-update-now.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Update your AMD hosts now to mitigate the Zenbleed exploit

Learn how to update the microcode on your AMD CPU to avoid the Zenbleed exploit.

On 24th July 2023, The Register covered a new exploit for certain AMD CPUs based upon the Zen architecture. The exploit, dubbed Zenbleed, allows an attacker to read arbitrary physical memory locations on the host system. It works by allowing memory to be read after it has been freed, aka "use-after-free". This is a serious vulnerability, and you should update your AMD hosts as soon as possible.

+

Update your AMD hosts now to mitigate the Zenbleed exploit

Learn how to update the microcode on your AMD CPU to avoid the Zenbleed exploit.

On 24th July 2023, The Register covered a new exploit for certain AMD CPUs based upon the Zen architecture. The exploit, dubbed Zenbleed, allows an attacker to read arbitrary physical memory locations on the host system. It works by allowing memory to be read after it has been freed, aka "use-after-free". This is a serious vulnerability, and you should update your AMD hosts as soon as possible.

The Register made the claim that "any level of emulation such as QEMU" would prevent the exploit from working. This is misleading because QEMU only makes sense in production when used with hardware acceleration (KVM). We were able to run the exploit with a GitHub Action using actuated on an AMD Epyc server from Equinix Metal using Firecracker and KVM.

"If you stick any emulation layer in between, such as Qemu, then the exploit understandably fails."

@@ -100,4 +100,4 @@

Verifying the mitigation

Wrapping up

Actuated uses Firecracker, an open source Virtual Machine Manager (VMM) that works with Linux KVM to run isolated systems on a host. We have verified that the exploit works on Firecracker, and that the mitigation works too. So whilst VM-level isolation and an immutable filesystem are much more appropriate than a container for CI, this is an example of why we must still be vigilant and ready to respond to security vulnerabilities.

-

This is an unfortunate and serious vulnerability. It affects bare-metal, VMs and containers, which is why it's important to update your systems as soon as possible.

\ No newline at end of file +

This is an unfortunate and serious vulnerability. It affects bare-metal, VMs and containers, which is why it's important to update your systems as soon as possible.

\ No newline at end of file diff --git a/blog/arm-ci-cncf-ampere.html b/blog/arm-ci-cncf-ampere.html index 97afc5f..8a893a7 100644 --- a/blog/arm-ci-cncf-ampere.html +++ b/blog/arm-ci-cncf-ampere.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Announcing managed Arm CI for CNCF projects

Ampere Computing and The Cloud Native Computing Foundation are sponsoring a pilot of actuated's managed Arm CI for CNCF projects.

In this post, we'll cover why Ampere Computing and The Cloud Native Computing Foundation (CNCF) are sponsoring a pilot of actuated for open source projects, and how you can get involved.

+

Announcing managed Arm CI for CNCF projects

Ampere Computing and The Cloud Native Computing Foundation are sponsoring a pilot of actuated's managed Arm CI for CNCF projects.

In this post, we'll cover why Ampere Computing and The Cloud Native Computing Foundation (CNCF) are sponsoring a pilot of actuated for open source projects, and how you can get involved.

We'll also give you a quick one year recap on actuated, if you haven't checked in with us for a while.

Managed Arm CI for CNCF projects

At KubeCon EU, I spoke to Chris Aniszczyk, CTO at the CNCF, and told him about some of the results we'd been seeing with actuated customers, including Fluent Bit, which is a CNCF project. Chris told me that many teams were either putting off Arm support altogether, suffering with the slow builds that come from using QEMU, or managing their own infrastructure which was underutilized.

@@ -79,4 +79,4 @@

Learn more about actuated

Did you know? Actuated for GitLab CI is now in technical preview, watch a demo here.

-
\ No newline at end of file +
\ No newline at end of file diff --git a/blog/automate-packer-qemu-image-builds.html b/blog/automate-packer-qemu-image-builds.html index b1329f8..84097b2 100644 --- a/blog/automate-packer-qemu-image-builds.html +++ b/blog/automate-packer-qemu-image-builds.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Automate Packer Images with QEMU and Actuated

Learn how to automate Packer images using QEMU and nested virtualisation through actuated.

One of the most popular tools for creating images for virtual machines is Packer by Hashicorp. Packer automates the process of building images for a variety of platforms from a single source configuration. Different builders can be used to create machines and generate images from those machines.

+

Automate Packer Images with QEMU and Actuated

Learn how to automate Packer images using QEMU and nested virtualisation through actuated.

One of the most popular tools for creating images for virtual machines is Packer by Hashicorp. Packer automates the process of building images for a variety of platforms from a single source configuration. Different builders can be used to create machines and generate images from those machines.

In this tutorial we will use the QEMU builder to create a KVM virtual machine image.

We will see how the Packer build can be completely automated by integrating Packer into a continuous integration (CI) pipeline with GitHub Actions. The workflow will automatically trigger image builds on changes and publish the resulting images as GitHub release artifacts.

Actuated supports nested virtualisation, where a VM can make use of KVM to launch additional VMs within a GitHub Action. This makes it possible to run the Packer QEMU builder in GitHub Action workflows, something that is not possible with GitHub's default hosted runners.
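As a rough sketch, a workflow that runs the Packer QEMU builder on an actuated runner might look like the following. The runner label and template filename are assumptions for illustration rather than values from this tutorial:

name: packer-build

on: [push]

jobs:
  build-image:
    # A KVM-enabled actuated runner is assumed; pick a size to suit the build
    runs-on: actuated-8cpu-16gb
    steps:
      - uses: actions/checkout@v4
      # Install Packer with HashiCorp's setup action
      - uses: hashicorp/setup-packer@main
      # Initialise plugins, then run the QEMU builder against an example template
      - run: packer init .
      - run: packer build ubuntu.pkr.hcl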

@@ -236,4 +236,4 @@

Taking it further

We exported the image in qcow2 format but you might need a different image format. The QEMU builder also supports outputting images in raw format. In our Packer template the output format can be changed by setting the format variable.

Additional tools like the qemu disk image utility can also be used to convert images between different formats. A post-processor would be the ideal place for these kinds of extra processing steps.

AWS also supports importing VM images and converting them to an AMI so they can be used to launch EC2 instances. See: Create an AMI from a VM image

-

If you'd like to know more about nested virtualisation support, check out: How to run KVM guests in your GitHub Actions

\ No newline at end of file +

If you'd like to know more about nested virtualisation support, check out: How to run KVM guests in your GitHub Actions

\ No newline at end of file diff --git a/blog/blazing-fast-ci-with-microvms.html b/blog/blazing-fast-ci-with-microvms.html index b014e61..06feef1 100644 --- a/blog/blazing-fast-ci-with-microvms.html +++ b/blog/blazing-fast-ci-with-microvms.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Blazing fast CI with MicroVMs

Alex Ellis

I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

Around 6-8 months ago I started exploring MicroVMs out of curiosity. Around the same time, I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

+

Blazing fast CI with MicroVMs

Alex Ellis

I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

Around 6-8 months ago I started exploring MicroVMs out of curiosity. Around the same time, I saw an opportunity to fix self-hosted runners for GitHub Actions. Actuated is now in pilot and aims to solve most if not all of the friction.

There's three parts to this post:

  1. A quick debrief on Firecracker and MicroVMs vs legacy solutions
  2. @@ -172,4 +172,4 @@

    What are people saying about actuated?

"This is awesome!" (After reducing Parca build time from 33.5 minutes to 1 minute 26s)

    Frederic Branczyk, Co-founder, Polar Signals

    -
\ No newline at end of file +
\ No newline at end of file diff --git a/blog/caching-in-github-actions.html b/blog/caching-in-github-actions.html index 2618a5f..274dfd1 100644 --- a/blog/caching-in-github-actions.html +++ b/blog/caching-in-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Make your builds run faster with Caching for GitHub Actions

Han Verstraete

Learn how we made a Golang project build 4x faster using GitHub's built-in caching mechanism.

GitHub provides a cache action that allows caching dependencies and build outputs to improve workflow execution time.

+

Make your builds run faster with Caching for GitHub Actions

Han Verstraete

Learn how we made a Golang project build 4x faster using GitHub's built-in caching mechanism.

GitHub provides a cache action that allows caching dependencies and build outputs to improve workflow execution time.

A common use case would be to cache packages and dependencies from tools such as npm, pip, Gradle, and so on. If you are using Go, caching Go modules and the build cache can save you a significant amount of build time, as we will see in the next section.

Caching can be configured manually, but a lot of setup actions already use actions/cache under the hood and provide a configuration option to enable caching.
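As an illustration of that second approach, actions/setup-go can manage the Go module and build caches for you; the Go version and job layout below are only an example:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.21'
          # setup-go uses actions/cache under the hood when this is enabled
          cache: true
      - run: go build ./...

The equivalent manual setup points actions/cache at ~/go/pkg/mod and ~/.cache/go-build with a key derived from go.sum.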

We use the actions cache to speed up workflows for building the Actuated base images. As part of those workflows we build a kernel and then a rootfs. Since the kernel’s configuration is changed infrequently it makes sense to cache that output.

@@ -106,4 +106,4 @@

Conclusion

Want to learn more about Go and GitHub Actions?

Alex's eBook Everyday Golang has a chapter dedicated to building Go programs with Docker and GitHub Actions.

-
\ No newline at end of file +
\ No newline at end of file diff --git a/blog/calyptia-case-study-arm.html b/blog/calyptia-case-study-arm.html index 39ec0ea..7ee3287 100644 --- a/blog/calyptia-case-study-arm.html +++ b/blog/calyptia-case-study-arm.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

How Calyptia fixed its Arm builds whilst saving money

Learn how Calyptia fixed its failing Arm builds for open-source Fluent Bit and accelerated our commercial development by adopting Actuated and bare-metal runners.

This is a case-study, and guest article by Patrick Stephens, Tech Lead of Infrastructure at Calyptia.

+

How Calyptia fixed its Arm builds whilst saving money

Learn how Calyptia fixed its failing Arm builds for open-source Fluent Bit and accelerated our commercial development by adopting Actuated and bare-metal runners.

This is a case-study, and guest article by Patrick Stephens, Tech Lead of Infrastructure at Calyptia.

Introduction

Different architecture builds can be slow on the GitHub Actions hosted runners due to emulation of the non-native architecture. This blog shows a simple way to make use of self-hosted runners for dedicated builds, in a secure and easy-to-maintain fashion.

Calyptia maintains the OSS and Cloud Native Computing Foundation (CNCF) graduated Fluent projects including Fluent Bit. We then add value to the open-source core by providing commercial services and enterprise-level features.

@@ -118,4 +118,4 @@

Actuated support

Conclusion

Alex Ellis: We've learned a lot working with Patrick and Calyptia and are pleased to see that they were able to save money, whilst getting much quicker and safer Open Source and commercial builds.

We value getting feedback and suggestions from customers, and Patrick continues to provide plenty of them.

-

If you'd like to learn more about actuated, reach out to speak to our team by clicking "Sign-up" and filling out the form. We'll be in touch to arrange a call.

\ No newline at end of file +

If you'd like to learn more about actuated, reach out to speak to our team by clicking "Sign-up" and filling out the form. We'll be in touch to arrange a call.

\ No newline at end of file diff --git a/blog/case-study-bring-your-own-bare-metal-to-actions.html b/blog/case-study-bring-your-own-bare-metal-to-actions.html index 04047da..736524a 100644 --- a/blog/case-study-bring-your-own-bare-metal-to-actions.html +++ b/blog/case-study-bring-your-own-bare-metal-to-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

Bring Your Own Metal Case Study with GitHub Actions

Alex Ellis

See how BYO bare-metal made a 6 hour GitHub Actions build complete 25x faster.

I'm going to show you how both a regular x86_64 build and an Arm build were made dramatically faster by using Bring Your Own (BYO) bare-metal servers.

+

Bring Your Own Metal Case Study with GitHub Actions

Alex Ellis

See how BYO bare-metal made a 6 hour GitHub Actions build complete 25x faster.

I'm going to show you how both a regular x86_64 build and an Arm build were made dramatically faster by using Bring Your Own (BYO) bare-metal servers.

At the early stage of a project, GitHub's standard runners with 2x cores, 8GB RAM, and a little free disk space are perfect because they're free for public repos. For private repos they come in at a modest cost, if you keep your usage low.

What's not to love?

Well, Ed Warnicke, Distinguished Engineer at Cisco contacted me a few weeks ago and told me about the VPP project, and some of the problems he was running into trying to build it with hosted runners.

@@ -99,4 +99,4 @@

Want to work with us?

  • The efficient way to publish multi-arch containers from GitHub Actions
  • Is the GitHub Actions self-hosted runner safe for Open Source?
  • How to make GitHub Actions 22x faster with bare-metal Arm
  • -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/custom-sizes-bpf-kvm.html b/blog/custom-sizes-bpf-kvm.html index e787e72..c51572f 100644 --- a/blog/custom-sizes-bpf-kvm.html +++ b/blog/custom-sizes-bpf-kvm.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    December Boost: Custom Job Sizes, eBPF Support & KVM Acceleration

    You can now request custom amounts of RAM and vCPU for jobs, run eBPF within jobs, and use KVM acceleration.

    In this December update, we've got three new updates to the platform that we think you'll benefit from. From requesting custom vCPU and RAM per job, to eBPF features, to spreading your plan across multiple machines dynamically, it's all new and makes actuated better value.

    +

    December Boost: Custom Job Sizes, eBPF Support & KVM Acceleration

    You can now request custom amounts of RAM and vCPU for jobs, run eBPF within jobs, and use KVM acceleration.

    In this December update, we've got three new updates to the platform that we think you'll benefit from. From requesting custom vCPU and RAM per job, to eBPF features, to spreading your plan across multiple machines dynamically, it's all new and makes actuated better value.

    New eBPF support and 5.10.201 Kernel

As part of our work to provide hosted Arm CI for CNCF projects, including Tetragon and Cilium, we've now enabled eBPF and BTF features within the Kernel.

    @@ -44,7 +44,7 @@

    KVM acceleration aka running VMs in your CI pipeline

When you run Packer in a local VM, instead of with one of the cloud builders, you save time and costs by not having to fire up cloud resources on AWS, GCP, Azure, and so forth. You can build the image in the local VM, then convert it to an AMI or another format.

    One of our customers has started exploring launching a VM during a CI job in order to test air-gapped support for enterprise customers. This is a great example of how you can use nested virtualisation to test your own product in a repeatable way.

    Nix benefits particularly from being able to create a clean, isolated environment within a CI pipeline, to get a repeatable build. Graham Christensen from Determinate Systems reached out to collaborate on testing their Nix installer in actuated.

    -

    He didn't expect it to run, but when it worked first time, he remarked: "Nice, perfect!"

    +

    He didn't expect it to run, but when it worked first time, he remarked: "Perfect! I'm impressed and happy that our action works out of the box."

    jobs:
       specs:
         name: ci
    @@ -68,4 +68,4 @@ 

    Wrapping up

    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/develop-a-great-go-cli.html b/blog/develop-a-great-go-cli.html index 5096aad..6e7e270 100644 --- a/blog/develop-a-great-go-cli.html +++ b/blog/develop-a-great-go-cli.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    How to develop a great CLI with Go

    Alex shares his insights from building half a dozen popular Go CLIs. Which can you apply to your projects?

    Is your project's CLI growing with you? I'll cover some of the lessons learned writing the OpenFaaS, actuated, actions-usage, arkade and k3sup CLIs, going as far back as 2016. I hope you'll find some ideas or inspiration for your own projects - either to start them off, or to improve them as you go along.

    +

    How to develop a great CLI with Go

    Alex shares his insights from building half a dozen popular Go CLIs. Which can you apply to your projects?

    Is your project's CLI growing with you? I'll cover some of the lessons learned writing the OpenFaaS, actuated, actions-usage, arkade and k3sup CLIs, going as far back as 2016. I hope you'll find some ideas or inspiration for your own projects - either to start them off, or to improve them as you go along.

    Just starting your journey, or want to go deeper?

You can master the fundamentals of Go (also called Golang) with my eBook Everyday Golang, which includes chapters on Go routines, HTTP clients and servers, text templates, unit testing and crafting a CLI. If you're on a budget, I would recommend checking out the official Go tour, too.

    @@ -228,4 +228,4 @@

    Wrapping up

  • faas-cli - CLI client for OpenFaaS
  • k3sup - Install K3s over SSH
  • arkade - Download CLI tools from GitHub releases
  • -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/faster-nix-builds.html b/blog/faster-nix-builds.html index e1d8a1a..47cc842 100644 --- a/blog/faster-nix-builds.html +++ b/blog/faster-nix-builds.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Faster Nix builds with GitHub Actions and actuated

    Speed up your Nix project builds on GitHub Actions with runners powered by Firecracker.

    faasd is a lightweight and portable version of OpenFaaS that was created to run on a single host. In my spare time I maintain faasd-nix, a project that packages faasd and exposes a NixOS module so it can be run with NixOS.

    +

    Faster Nix builds with GitHub Actions and actuated

    Speed up your Nix project builds on GitHub Actions with runners powered by Firecracker.

    faasd is a lightweight and portable version of OpenFaaS that was created to run on a single host. In my spare time I maintain faasd-nix, a project that packages faasd and exposes a NixOS module so it can be run with NixOS.

    The module itself depends on faasd, containerd and the CNI plugins and all of these binaries are built in CI with Nix and then cached using Cachix to save time on subsequent builds.
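As a rough sketch, that kind of CI setup typically installs Nix, configures Cachix and runs the build in a workflow like the one below. The runner label, cache name and build target are placeholders for illustration:

jobs:
  build:
    runs-on: actuated-8cpu-16gb
    steps:
      - uses: actions/checkout@v4
      # Install Nix inside the runner VM
      - uses: cachix/install-nix-action@v24
      # Push and pull store paths from a Cachix binary cache (the name is a placeholder)
      - uses: cachix/cachix-action@v13
        with:
          name: example-cache
          authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
      - run: nix-build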

I often deploy faasd with NixOS on a Raspberry Pi and to the cloud, so I build binaries for both x86_64 and aarch64. The build usually runs on the default GitHub hosted action runners. Because GitHub currently doesn't offer hosted Arm runners, I use QEMU to emulate them instead. The drawback of this approach is that builds can sometimes be several times slower.

    @@ -107,4 +107,4 @@

    Wrapping up

    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/faster-self-hosted-cache.html b/blog/faster-self-hosted-cache.html index 651d504..fdc386a 100644 --- a/blog/faster-self-hosted-cache.html +++ b/blog/faster-self-hosted-cache.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Fixing the cache latency for self-hosted GitHub Actions

    The cache for GitHub Actions can speed up CI/CD pipelines. But what about when it slows you down?

    In some of our builds for actuated we cache things like the Linux Kernel, so we don't needlessly rebuild it when we update packages in our base images. It can shave minutes off every build meaning our servers can be used more efficiently. Most customers we've seen so far only make light to modest use of GitHub's hosted cache, so haven't noticed much of a latency problem.

    +

    Fixing the cache latency for self-hosted GitHub Actions

    The cache for GitHub Actions can speed up CI/CD pipelines. But what about when it slows you down?

    In some of our builds for actuated we cache things like the Linux Kernel, so we don't needlessly rebuild it when we update packages in our base images. It can shave minutes off every build meaning our servers can be used more efficiently. Most customers we've seen so far only make light to modest use of GitHub's hosted cache, so haven't noticed much of a latency problem.

But you don't have to spend too long on the issue tracker for GitHub Actions to find people complaining about the cache being slow or locking up completely for self-hosted runners.

    Go, Rust, Python and other languages don't tend to make heavy use of caches, and Docker has some of its own mechanisms like building cached steps into published images aka inline caching. But for the Node.js ecosystem, the node_modules folder and yarn cache can become huge and take a long time to download. That's one place where you may start to see tension between the speed of self-hosted runners and the latency of the cache. If your repository is a monorepo or has lots of large artifacts, you may get a speed boost by caching that too.

So why is GitHub's cache so fast for hosted runners, and (sometimes) so slow for self-hosted runners?

    @@ -184,4 +184,4 @@

    Conclusion

You can find out how to configure a container mirror for the Docker Hub using actuated here: Set up a registry mirror. When testing builds for the Discourse team, there was a 2.5GB container image used for UI testing with various browsers preinstalled within it. We found that we could shave a few minutes off the build time by using the local mirror. Imagine 10x of those builds running at once, needlessly downloading 250GB of data.

    What if you're not an actuated customer? Can you still benefit from a faster cache? You could try out a hosted service like AWS S3 or Google Cloud Storage, provisioned in a region closer to your runners. The speed probably won't quite be as good, but it should still be a lot faster than reaching over the Internet to GitHub's cache.

    If you'd like to try out actuated for your team, reach out to us to find out more.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/firecracker-container-lab.html b/blog/firecracker-container-lab.html index b963ad1..ce61467 100644 --- a/blog/firecracker-container-lab.html +++ b/blog/firecracker-container-lab.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Grab your lab coat - we're building a microVM from a container

    No more broken tutorials, build a microVM from a container, boot it, access the Internet

When I started learning Firecracker, I ran into frustration after frustration with broken tutorials that were popular in their day, but just hadn't been kept up to date. Almost nothing worked, or was far too complex for the level of interest I had at the time. Most recently, one of the Firecracker maintainers, in an effort to make the quickstart better, made it even harder to use. (You can still get a copy of the original Firecracker quickstart in our tutorial on nested virtualisation)

    +

    Grab your lab coat - we're building a microVM from a container

    No more broken tutorials, build a microVM from a container, boot it, access the Internet

When I started learning Firecracker, I ran into frustration after frustration with broken tutorials that were popular in their day, but just hadn't been kept up to date. Almost nothing worked, or was far too complex for the level of interest I had at the time. Most recently, one of the Firecracker maintainers, in an effort to make the quickstart better, made it even harder to use. (You can still get a copy of the original Firecracker quickstart in our tutorial on nested virtualisation)

    So I wrote a lab that takes a container image and converts it to a microVM. You'll get your hands dirty, you'll run a microVM, you'll be able to use curl and ssh, even expose a HTTP server to the Internet via inlets, if (like me), you find that kind of thing fun.

Why would you want to explore Firecracker? A friend of mine, Ivan Velichko, is a prolific writer on containers and Docker. He is one of the biggest independent evangelists for containers and Kubernetes that I know.

So when he wanted to build an online lab and training environment, why did he pick Firecracker instead? Simply put, he told us that containers don't cut it. He needed something that would mirror the type of machine that you'd encounter in production, when you provision an EC2 instance or a GCP VM. Running Docker and Kubernetes is hard to do securely within a container, and he knew that was important for his students.

    @@ -208,4 +208,4 @@

    Wrapping up

    Did you enjoy the lab? Have you got a use-case for Firecracker? Let me know on Twitter @alexellisuk

    If you'd like to see how we've applied Firecracker to bring fast and secure CI to teams, check out our product actuated.dev

    Here's a quick demo of our control-plane, scheduler and bare-metal agent in action:

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/github-actions-usage-cli.html b/blog/github-actions-usage-cli.html index 1c52611..75d2f5f 100644 --- a/blog/github-actions-usage-cli.html +++ b/blog/github-actions-usage-cli.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Understand your usage of GitHub Actions

    Learn how you or your team is using GitHub Actions across your personal account or organisation.

    Whenever GitHub Actions users get in touch with us to ask about actuated, we ask them a number of questions. What do you build? What pain points have you been running into? What are you currently spending? And then - how many minutes are you using?

    +

    Understand your usage of GitHub Actions

    Learn how you or your team is using GitHub Actions across your personal account or organisation.

    Whenever GitHub Actions users get in touch with us to ask about actuated, we ask them a number of questions. What do you build? What pain points have you been running into? What are you currently spending? And then - how many minutes are you using?

    That final question is a hard one for many to answer because the GitHub UI and API will only show billable minutes. Why is that a problem? Some teams only use open-source repositories with free runners. Others may have a large free allowance of credit for one reason or another and so also don't really know what they're using. Then you have people who already use some form of self-hosted runners - they are also excluded from what GitHub shows you.

    So we built an Open Source CLI tool called actions-usage to generate a report of your total minutes by querying GitHub's REST API.

And over time, we had requests to break down usage per day - so for our customers in the Middle East like Kubiya, it's common to see a very busy day on Sunday, and not a lot of action on Friday. Given that some teams use mono-repos, we also added the ability to break usage down per repository - so you can see which ones are the most active. And finally, we added the ability to see hot-spots of usage like the longest running repo or the most active day.

    @@ -156,4 +156,4 @@

    Wrapping up

    Check out self-actuated/actions-usage on GitHub

I wrote an eBook about writing CLIs like this in Go and keep it up to date on a regular basis - adding new examples and features of Go.

    Why not check out what people are saying about it on Gumroad?

    -

    Everyday Go - The Fast Track

    \ No newline at end of file +

    Everyday Go - The Fast Track

    \ No newline at end of file diff --git a/blog/how-to-run-multi-arch-builds-natively.html b/blog/how-to-run-multi-arch-builds-natively.html index df4d81c..64fb35e 100644 --- a/blog/how-to-run-multi-arch-builds-natively.html +++ b/blog/how-to-run-multi-arch-builds-natively.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    How to split up multi-arch Docker builds to run natively

    Alex Ellis

    QEMU is a convenient way to publish containers for multiple architectures, but it can be incredibly slow. Native is much faster.

    In two previous articles, we covered huge improvements in performance for the Parca project and VPP (Network Service Mesh) simply by switching to actuated with Arm64 runners instead of using QEMU and hosted runners.

    +

    How to split up multi-arch Docker builds to run natively

    Alex Ellis

    QEMU is a convenient way to publish containers for multiple architectures, but it can be incredibly slow. Native is much faster.

    In two previous articles, we covered huge improvements in performance for the Parca project and VPP (Network Service Mesh) simply by switching to actuated with Arm64 runners instead of using QEMU and hosted runners.

    In the first case, using QEMU took over 33 minutes, and bare-metal Arm showed a 22x improvement at only 1 minute 26 seconds. For Network Service Mesh, VPP couldn't even complete a build in 6 hours using QEMU - and I got it down to 9 minutes flat using a bare-metal Ampere Altra server.

    What are we going to see and why is it better?

    In this article, I'll show you how to run multi-arch builds natively on bare-metal hardware using GitHub Actions and actuated.
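As a minimal sketch, the native approach uses a matrix so that each architecture builds on its own runner instead of being emulated. The runner labels and image name below are assumptions for illustration:

jobs:
  publish:
    strategy:
      matrix:
        include:
          # Pair each architecture with a native runner label
          - arch: amd64
            runner: actuated-4cpu-8gb
          - arch: arm64
            runner: actuated-arm64-4cpu-8gb
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      # Build and push one image per architecture, to be joined into a manifest later
      - run: |
          docker build -t ghcr.io/example/app:${{ matrix.arch }} .
          docker push ghcr.io/example/app:${{ matrix.arch }}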

    @@ -252,4 +252,4 @@

    Wrapping up

    Want to talk to us about your CI/CD needs? We're happy to help.

    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/is-the-self-hosted-runner-safe-github-actions.html b/blog/is-the-self-hosted-runner-safe-github-actions.html index 7ec437c..10eef80 100644 --- a/blog/is-the-self-hosted-runner-safe-github-actions.html +++ b/blog/is-the-self-hosted-runner-safe-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Is the GitHub Actions self-hosted runner safe for Open Source?

    Alex Ellis

    GitHub warns against using self-hosted Actions runners for public repositories - but why? And are there alternatives?

    First of all, why would someone working on an open source project need a self-hosted runner?

    +

    Is the GitHub Actions self-hosted runner safe for Open Source?

    Alex Ellis

    GitHub warns against using self-hosted Actions runners for public repositories - but why? And are there alternatives?

    First of all, why would someone working on an open source project need a self-hosted runner?

    Having contributed to dozens of open source projects, and gotten to know many different maintainers, the primary reason tends to be out of necessity. They face an 18 minute build time upon every commit or Pull Request revision, and want to make the best of what little time they can give over to Open Source.

Having faster builds also lowers friction for contributors, and since many contributors are unpaid and rely on their own internal drive and enthusiasm, a fast build time can be the difference between them fixing a broken test and waiting another few days.

    To sum up, there are probably just a few main reasons:

    @@ -121,4 +121,4 @@

    Get involved today

  • Find a server for your builds
  • Register for actuated
  • -

    Follow us on Twitter - selfactuated

    \ No newline at end of file +

    Follow us on Twitter - selfactuated

    \ No newline at end of file diff --git a/blog/kvm-in-github-actions.html b/blog/kvm-in-github-actions.html index b532f3a..3b2479d 100644 --- a/blog/kvm-in-github-actions.html +++ b/blog/kvm-in-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    How to run KVM guests in your GitHub Actions

    Han Verstraete

    From building cloud images, to running NixOS tests and the android emulator, we look at how and why you'd want to run a VM in GitHub Actions.

GitHub's hosted runners do not support nested virtualization. This means some frequently used tools that require KVM, like Packer or the Android emulator, cannot be used in GitHub Actions CI pipelines.

    +

    How to run KVM guests in your GitHub Actions

    Han Verstraete

    From building cloud images, to running NixOS tests and the android emulator, we look at how and why you'd want to run a VM in GitHub Actions.

GitHub's hosted runners do not support nested virtualization. This means some frequently used tools that require KVM, like Packer or the Android emulator, cannot be used in GitHub Actions CI pipelines.
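A quick way to check whether a runner can launch guests is to look for the /dev/kvm device from within a job. This is a sketch that assumes an actuated runner with KVM guest support enabled; the size label is illustrative:

jobs:
  kvm-check:
    runs-on: actuated-4cpu-8gb
    steps:
      # /dev/kvm is only present when hardware virtualisation is exposed to the runner
      - run: ls -l /dev/kvm
      # kvm-ok from the cpu-checker package reports whether KVM acceleration can be used
      - run: |
          sudo apt-get update -qq
          sudo apt-get install -y -qq cpu-checker
          sudo kvm-ok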

    We noticed there are quite a few issues for people requesting KVM support for GitHub Actions:

    Want to see a demo or talk to our team? Contact us here

    -

    Just want to try it out instead? Register your GitHub Organisation and set-up a subscription

    \ No newline at end of file +

    Just want to try it out instead? Register your GitHub Organisation and set-up a subscription

    \ No newline at end of file diff --git a/blog/managing-github-actions.html b/blog/managing-github-actions.html index 320aebf..b4bd8e8 100644 --- a/blog/managing-github-actions.html +++ b/blog/managing-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Lessons learned managing GitHub Actions and Firecracker

    Alex Ellis

    Alex shares lessons from building a managed service for GitHub Actions with Firecracker.

    Over the past few months, we've launched over 20,000 VMs for customers and have handled over 60,000 webhook messages from the GitHub API. We've learned a lot from every customer PoC and from our own usage in OpenFaaS.

    +

    Lessons learned managing GitHub Actions and Firecracker

    Alex Ellis

    Alex shares lessons from building a managed service for GitHub Actions with Firecracker.

    Over the past few months, we've launched over 20,000 VMs for customers and have handled over 60,000 webhook messages from the GitHub API. We've learned a lot from every customer PoC and from our own usage in OpenFaaS.

    First of all - what is it we're offering? And how is it different to managed runners and the self-hosted runner that GitHub offers?

    Actuated replicates the hosted experience you get from paying for hosted runners, and brings it to hardware under your own control. That could be a bare-metal Arm server, or a regular Intel/AMD cloud VM that has nested virtualisation enabled.

    Just like managed runners - every time actuated starts up a runner, it's within a single-tenant virtual machine (VM), with an immutable filesystem.
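    For illustration only, and assuming a simple runner label (the exact label depends on how your actuated installation is configured), targeting one of these microVM-backed runners is just a one-line change in a workflow:

    jobs:
      build:
        # Hypothetical label - substitute the label configured for your organisation
        runs-on: actuated
        steps:
          - uses: actions/checkout@v4
          - run: make test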

    @@ -132,4 +132,4 @@

    Wrapping up

    Whilst there was a significant amount of very technical work at the beginning of actuated, most of our time now is spent on customer support, education, and improving the onboarding experience.

    If you'd like to know how actuated compares to hosted runners or managing the self-hosted runner on your own, we'd encourage checking out the blog and FAQ.

    Are your builds slowing the team down? Do you need better organisation-level insights and reporting? Or do you need Arm support? Are you frustrated with managing self-hosted runners?

    -

    Reach out to us to take part in the pilot

    \ No newline at end of file +

    Reach out to us to take part in the pilot

    \ No newline at end of file diff --git a/blog/multi-arch-docker-github-actions.html b/blog/multi-arch-docker-github-actions.html index 2deae15..c2a14bc 100644 --- a/blog/multi-arch-docker-github-actions.html +++ b/blog/multi-arch-docker-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    The efficient way to publish multi-arch containers from GitHub Actions

    Alex Ellis

    Learn how to publish container images for both Arm and Intel machines from GitHub Actions.

    In 2017, I wrote an article on multi-stage builds with Docker, and it's now part of the Docker Documentation. In my opinion, multi-arch builds were the next step in the evolution of container images.

    +

    The efficient way to publish multi-arch containers from GitHub Actions

    Alex Ellis

    Learn how to publish container images for both Arm and Intel machines from GitHub Actions.

    In 2017, I wrote an article on multi-stage builds with Docker, and it's now part of the Docker Documentation. In my opinion, multi-arch builds were the next step in the evolution of container images.

    What's multi-arch and why should you care?

    If you want users to be able to use your containers on different types of computers, then you'll often need to build different versions of your binaries and containers.
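    As a hedged sketch of the conventional approach (the image name and registry are placeholders), a QEMU-backed buildx job that publishes a single manifest for both Intel and Arm typically looks like this:

    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Register QEMU emulators so non-native platforms can be built
          - uses: docker/setup-qemu-action@v3
          - uses: docker/setup-buildx-action@v3
          - uses: docker/login-action@v3
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          - uses: docker/build-push-action@v5
            with:
              push: true
              platforms: linux/amd64,linux/arm64
              tags: ghcr.io/example/app:latest   # placeholder image name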

    The faas-cli tool is how users interact with OpenFaaS.

    @@ -222,4 +222,4 @@

    Putting it all together

    A word of caution

    QEMU can be incredibly slow at times when using a hosted runner, where a build that takes 1-2 minutes can extend to over half an hour. If you do run into that, one option is to check out actuated or another solution, which can build directly on an Arm server with a securely isolated Virtual Machine.

    In How to make GitHub Actions 22x faster with bare-metal Arm, we showed how we decreased the build time of an open-source Go project from 30.5 mins to 1.5 mins. If this is the direction you go in, you can use a matrix-build instead of a QEMU-based multi-arch build.
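    A minimal sketch of that matrix approach, assuming a native Arm runner is available alongside a hosted x86_64 runner (the self-hosted label below is a placeholder):

    jobs:
      build:
        strategy:
          matrix:
            # One native x86_64 runner and one native Arm runner - no emulation
            runner: [ubuntu-latest, self-hosted-arm64]   # placeholder labels
        runs-on: ${{ matrix.runner }}
        steps:
          - uses: actions/checkout@v4
          - run: docker build -t app:${{ runner.arch }} .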

    -

    See also: Recommended bare-metal Arm servers

    \ No newline at end of file +

    See also: Recommended bare-metal Arm servers

    \ No newline at end of file diff --git a/blog/native-arm64-for-github-actions.html b/blog/native-arm64-for-github-actions.html index d6ab8c1..b119b1a 100644 --- a/blog/native-arm64-for-github-actions.html +++ b/blog/native-arm64-for-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    How to make GitHub Actions 22x faster with bare-metal Arm

    Alex Ellis

    GitHub doesn't provide hosted Arm runners, so how can you use native Arm runners safely & securely?

    GitHub Actions is a modern, fast and efficient way to build and test software, with free runners available. We use the free runners for various open source projects and are generally very pleased with them; after all, who can argue with good enough and free? But one of the main caveats is that GitHub's hosted runners don't yet support the Arm architecture.

    +

    How to make GitHub Actions 22x faster with bare-metal Arm

    Alex Ellis

    GitHub doesn't provide hosted Arm runners, so how can you use native Arm runners safely & securely?

    GitHub Actions is a modern, fast and efficient way to build and test software, with free runners available. We use the free runners for various open source projects and are generally very pleased with them; after all, who can argue with good enough and free? But one of the main caveats is that GitHub's hosted runners don't yet support the Arm architecture.

    So many people turn to software-based emulation using QEMU. QEMU is tricky to set up, and requires specific code and tricks if you want to use software in a standard way, without modifying it. But QEMU is great when it runs with hardware acceleration. Unfortunately, the hosted runners on GitHub do not have KVM available, so builds tend to be incredibly slow, and I mean so slow that it's going to distract you and your team from your work.

    This was even more evident when Frederic Branczyk tweeted about his experience with QEMU on GitHub Actions for his open source observability project named Parca.

    @@ -131,4 +131,4 @@

    Wrapping up

  • Actuated docs
  • FAQ & comparison to other solutions
  • Follow actuated on Twitter
  • -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/oidc-proxy-for-openfaas.html b/blog/oidc-proxy-for-openfaas.html index d2b6187..f7910bc 100644 --- a/blog/oidc-proxy-for-openfaas.html +++ b/blog/oidc-proxy-for-openfaas.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Keyless deployment to OpenFaaS with OIDC and GitHub Actions

    Alex Ellis

    We're announcing a new OIDC proxy for OpenFaaS for keyless deployments from GitHub Actions.

    In 2021, GitHub released OpenID Connect (OIDC) support for CI jobs running under GitHub Actions. This was a huge step forward for security, meaning that any GitHub Action could mint an OIDC token and use it to securely federate into another system without having to store long-lived credentials in the repository.

    +

    Keyless deployment to OpenFaaS with OIDC and GitHub Actions

    Alex Ellis

    We're announcing a new OIDC proxy for OpenFaaS for keyless deployments from GitHub Actions.

    In 2021, GitHub released OpenID Connect (OIDC) support for CI jobs running under GitHub Actions. This was a huge step forward for security, meaning that any GitHub Action could mint an OIDC token and use it to securely federate into another system without having to store long-lived credentials in the repository.

    I wrote a prototype for OpenFaaS shortly after the announcement and a deep dive explaining how it works. I used inlets to set up an HTTPS tunnel, and sent the token to my machine for inspection. Various individuals and technical teams have used my content as a reference guide when working with GitHub Actions and OIDC.

    See the article: Deploy without credentials with GitHub Actions and OIDC

    Since then, custom actions for GCP, AWS and Azure have been created which allow an OIDC token from a GitHub Action to be exchanged for a short-lived access token for their API - meaning you can manage cloud resources securely. For example, see: Configuring OpenID Connect in Amazon Web Services - we have actuated customers who use this approach to deploy to ECR from their self-hosted runners without having to store long-lived credentials in their repositories.
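    As a brief, hedged example of that cloud-exchange pattern (the role ARN and region are placeholders), a job grants itself permission to request an OIDC token, then exchanges it for short-lived AWS credentials:

    permissions:
      id-token: write   # allow the job to request an OIDC token
      contents: read

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: aws-actions/configure-aws-credentials@v4
            with:
              role-to-assume: arn:aws:iam::123456789012:role/example-deploy   # placeholder
              aws-region: eu-west-1
          - name: Verify the federated identity
            run: aws sts get-caller-identity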

    @@ -177,4 +177,4 @@

    Wrapping up

    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/sbom-in-github-actions.html b/blog/sbom-in-github-actions.html index 8c88b19..500a6f6 100644 --- a/blog/sbom-in-github-actions.html +++ b/blog/sbom-in-github-actions.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    How to add a Software Bill of Materials (SBOM) to your containers with GitHub Actions

    Alex Ellis

    Learn how to add a Software Bill of Materials (SBOM) to your containers with GitHub Actions in a few easy steps.

    What is a Software Bill of Materials (SBOM)?

    +

    How to add a Software Bill of Materials (SBOM) to your containers with GitHub Actions

    Alex Ellis

    Learn how to add a Software Bill of Materials (SBOM) to your containers with GitHub Actions in a few easy steps.

    What is a Software Bill of Materials (SBOM)?

    In April 2022, Justin Cormack, CTO of Docker, announced that Docker was adding support to generate a Software Bill of Materials (SBOM) for container images.

    An SBOM is an inventory of the components that make up a software application, including the version of each component. The version is important because it can be cross-referenced with a vulnerability database to determine if the component has any known vulnerabilities.

    Many organisations are also required to comply with certain Open Source Software (OSS) licenses. So if SBOMs are included in the software they purchase or consume from vendors, they can be used to determine whether the software is compliant with their specific license requirements, lowering legal and compliance risk.
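    As a short sketch (the image name is a placeholder, and the action shown is just one of several ways to do this), Anchore's Syft-based action can generate an SPDX document for a published image:

    jobs:
      sbom:
        runs-on: ubuntu-latest
        steps:
          # Generates an SBOM with Syft and uploads it as a workflow artifact
          - uses: anchore/sbom-action@v0
            with:
              image: ghcr.io/example/app:latest   # placeholder image
              format: spdx-json
              output-file: sbom.spdx.json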

    @@ -204,4 +204,4 @@

    Wrapping up

  • Announcing Docker SBOM: A step towards more visibility into Docker images - Justin Cormack
  • Generating SBOMs for Your Image with BuildKit - Justin Chadwell
  • Anchore on GitHub
  • -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/blog/secure-microvm-ci-gitlab.html b/blog/secure-microvm-ci-gitlab.html index 7535b07..d515193 100644 --- a/blog/secure-microvm-ci-gitlab.html +++ b/blog/secure-microvm-ci-gitlab.html @@ -4,7 +4,7 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Secure CI for GitLab with Firecracker microVMs

    Learn how actuated for GitLab CI can help you secure your CI/CD pipelines with Firecracker.

    We started building actuated for GitHub Actions because we at OpenFaaS Ltd had a need for: unmetered CI minutes, faster & more powerful x86 machines, native Arm builds and low maintenance CI builds.

    +

    Secure CI for GitLab with Firecracker microVMs

    Learn how actuated for GitLab CI can help you secure your CI/CD pipelines with Firecracker.

    We started building actuated for GitHub Actions because we at OpenFaaS Ltd had a need for: unmetered CI minutes, faster & more powerful x86 machines, native Arm builds and low maintenance CI builds.

    And most importantly, we needed it to be low-maintenance, and securely isolated.

    None of the solutions at the time could satisfy all of those requirements, and even today with GitHub adopting the community-based Kubernetes controller to run CI in Pods, there is still a lot lacking.

    As we've gained more experience with customers who largely had the same needs as we did for GitHub Actions, we started to hear more and more from GitLab CI users: large enterprise companies concerned about the security risks of running CI with privileged Docker containers, Docker socket binding (from the host!), or the flaky nature and slow speed of VFS with Docker In Docker (DIND).

    @@ -50,4 +50,4 @@

    Wrapping up

  • Browse the docs and FAQ for actuated for GitHub Actions
  • Read customer testimonials
  • -

    You can follow @selfactuated on Twitter, or find me there too to keep an eye on what we're building.

    \ No newline at end of file +

    You can follow @selfactuated on Twitter, or find me there too to keep an eye on what we're building.

    \ No newline at end of file diff --git a/index.html b/index.html index 6c0077f..fbd888c 100644 --- a/index.html +++ b/index.html @@ -4,4 +4,4 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -
    The team collaborating on a project

    Fast and secure GitHub Actions on your own infrastructure.

    Standard hosted runners are slow and expensive, larger runners cost even more.

    Actuated runs on your own infrastructure with a predictable cost, low management and secure isolation with Firecracker.

    Trusted by a growing number of companies

    Riskfuel
    OpenFaaS Ltd
    Observability, simplified
    Swiss National Data and Service Center
    Kubiya.ai
    Settlemint
    Toolpath
    Waffle Labs
    Tuple

    After one week actuated has been a huge improvement! Even our relatively fast Go build seems to be running 30%+ faster and being able to commit to several repos and only wait 10 minutes instead of an hour has been transformative.

    Steven Byrnes
    VP Engineering, Waffle Labs
    SettleMint

    Finally our self-hosted GitHub Actions Runners are on fire 🔥. Thanks to Alex for all the help onboarding our monorepo with NX, Docker, x86 and Arm on bare-metal.

    Our worst case build/deploy went down from 1h to 20-30mins, for a repo with 200k lines-of-code.

    Roderik van der Veer
    Founder & CTO at SettleMint

    Keep in touch with us
    For news and announcements

    We care about your data. Read our privacy policy.

    Increase speed, security and efficiency.

    Learn what friction actuated solves and how it compares to other solutions: Read the announcement

    Secure CI that feels like hosted.

    Every job runs in an immutable microVM, just like hosted runners, but on your own servers.

    That means you can run sudo, Docker and Kubernetes directly, just like you do during development, there's no need for Docker-in-Docker (DIND), Kaniko or complex user-namespaces.

    What about management and scheduling? Provision a server with a minimal Operating System, then install the agent. We'll do the rest.

    Arm / M1 from Dev to Production.

    Apple Silicon is a new favourite for developer workstations. Did you know that when your team run Docker, they're using an Arm VM?

    With actuated, those same builds can be performed on Arm servers during CI, and even deployed to production on more efficient Ampere or AWS Graviton servers.

    Run directly within your datacenter.

    If you work with large container images or datasets, then you'll benefit from having your CI run with direct local network access to your internal network.

    This is not possible with hosted runners, and this is crucial for one of our customers who can regularly saturate a 10GbE link during GitHub Actions runs.

    In contrast, VPNs are complex to manage, capped on speed, and incur costly egress charges.

    Debug live over SSH

    We heard from many teams that they missed CircleCI's "debug this job" button, so we built it for you.

    We realise you won't debug your jobs on a regular basis, but when you are stuck, and have to wait 15-20 minutes to get to the line you've changed in a job, debugging with a terminal can save you hours.

    "How cool!!! you don't know how many hours I have lost on GitHub Actions without this." - Ivan Subotic, Swiss National Data and Service Center (DaSCH)

    Lower your costs, and keep them there.

    Actuated is billed by the maximum number of concurrent jobs you can run, so no matter how many build minutes your team requires, the charge will not increase with usage.

    In a recent interview, a lead SRE at a UK-based scale-up told us that their bill had increased 5x over the past 6 months. They are now paying 5,000 GBP / month, and we worked out that we could make their builds faster and at least halve their costs.

    Lower management costs.

    Whenever your team has to manage self-hosted runners, or explain non-standard tools like Kaniko to developers, you're losing money.

    With actuated, you bring your own servers, install our agent, and we do the rest. Our VM image is built with automation and kept up to date, so you don't have to manage packages.

    Predictable billing.

    Most small teams we interviewed were spending at least 1000-1500 USD / mo for the standard, slower hosted runners.

    That cost would only increase with GitHub's faster runners, but with actuated the cost is flat-rate, no matter how many minutes you use.

    Get insights into your organisation

    When you have more than a few teammates and a dozen repositories, it's near impossible to get insights into patterns of usage.

    Inspired by Google Analytics, Actuated contrasts usage for the current period vs the previous period - for the whole organisation, each repository and each developer.

    Organisational level insights

    Learn about the actuated dashboard

    These are the results you've been waiting for

    “Our team loves actuated. It gave us a 3x speed increase on a Go repo that we build throughout the day. Alex and team are really responsive, and we're already happy customers of OpenFaaS and Inlets.”

    Shaked Askayo
    CTO, Kubiya.ai

    “We just switched from Actions Runtime Controller to Actuated. It only took 30s to create 5x isolated VMs, run the jobs and tear them down again inside our on-prem environment (no Docker socket mounting shenanigans)! Pretty impressive stuff.”

    Addison van den Hoeven
    DevOps Lead, Riskfuel

    “From my time at Mirantis, I learned how slow and unreliable Docker In Docker can be. Compared to GitHub's 16 core runners, actuated is 2-3x faster for us.”

    Sergei Lukianov
    Founding Engineer, Githedgehog

    “This is great, perfect for jobs that take forever on normal GitHub runners. I love what Alex is doing here.”

    Richard Case
    Principal Engineer, SUSE

    “We needed to build public repos on Arm runners, but QEMU couldn't finish in 6 hours, so we had to turn the build off. Actuated now builds the same code in 4 minutes.”

    Patrick Stephens
    Tech Lead of Infrastructure, Calyptia/Fluent Bit.

    “We needed to build Arm containers for our customers to deploy to Graviton instances on AWS EC2. Hosted runners with QEMU failed to finish within the 6 hour limit. With actuated it takes 20 minutes - and thanks to Firecracker, each build is safely isolated for our open source repositories.”

    Ivan Subotic
    Head of Engineering, Swiss National Data and Service Center for the Humanities

    “One of our Julia builds was taking 5 hours of compute time to complete, and was costing us over 1500USD / mo. Alex's team got us a 3x improvement on speed and lowered our costs at the same time. Actuated is working great for us.”

    Justin Gray, Ph.D.
    CTO & Co-founder at Toolpath

    Frequently asked questions

    Workcation

    “It's cheaper - and less frustrating for your developers — to pay more for better hardware to keep your team on track.

    The upfront cost for more CPU power pays off over time. And your developers will thank you.”

     

    Read the article: The hidden costs of waiting on slow build times (GitHub.com)

    Natalie Somersall
    GitHub Staff

    Ready to go faster?

    You can try out actuated on a monthly basis, with no commitment beyond that.

    Make your builds faster today
    Matrix build with k3sup
    \ No newline at end of file +
    The team collaborating on a project

    Fast and secure GitHub Actions on your own infrastructure.

    Standard hosted runners are slow and expensive, larger runners cost even more.

    Actuated runs on your own infrastructure with a predictable cost, low management and secure isolation with Firecracker.

    Trusted by a growing number of companies

    Riskfuel
    OpenFaaS Ltd
    Observability, simplified
    Swiss National Data and Service Center
    Kubiya.ai
    Settlemint
    Toolpath
    Waffle Labs
    Tuple

    After one week actuated has been a huge improvement! Even our relatively fast Go build seems to be running 30%+ faster and being able to commit to several repos and only wait 10 minutes instead of an hour has been transformative.

    Steven Byrnes
    VP Engineering, Waffle Labs
    SettleMint

    Finally our self-hosted GitHub Actions Runners are on fire 🔥. Thanks to Alex for all the help onboarding our monorepo with NX, Docker, x86 and Arm on bare-metal.

    Our worst case build/deploy went down from 1h to 20-30mins, for a repo with 200k lines-of-code.

    Roderik van der Veer
    Founder & CTO at SettleMint

    Keep in touch with us
    For news and announcements

    We care about your data. Read our privacy policy.

    Increase speed, security and efficiency.

    Learn what friction actuated solves and how it compares to other solutions: Read the announcement

    Secure CI that feels like hosted.

    Every job runs in an immutable microVM, just like hosted runners, but on your own servers.

    That means you can run sudo, Docker and Kubernetes directly, just like you do during development, there's no need for Docker-in-Docker (DIND), Kaniko or complex user-namespaces.

    What about management and scheduling? Provision a server with a minimal Operating System, then install the agent. We'll do the rest.

    Arm / M1 from Dev to Production.

    Apple Silicon is a new favourite for developer workstations. Did you know that when your team run Docker, they're using an Arm VM?

    With actuated, those same builds can be performed on Arm servers during CI, and even deployed to production on more efficient Ampere or AWS Graviton servers.

    Run directly within your datacenter.

    If you work with large container images or datasets, then you'll benefit from having your CI run with direct local network access to your internal network.

    This is not possible with hosted runners, and this is crucial for one of our customers who can regularly saturate a 10GbE link during GitHub Actions runs.

    In contrast, VPNs are complex to manage, capped on speed, and incur costly egress charges.

    Debug live over SSH

    We heard from many teams that they missed CircleCI's "debug this job" button, so we built it for you.

    We realise you won't debug your jobs on a regular basis, but when you are stuck, and have to wait 15-20 minutes to get to the line you've changed in a job, debugging with a terminal can save you hours.

    "How cool!!! you don't know how many hours I have lost on GitHub Actions without this." - Ivan Subotic, Swiss National Data and Service Center (DaSCH)

    Lower your costs, and keep them there.

    Actuated is billed by the maximum number of concurrent jobs you can run, so no matter how many build minutes your team requires, the charge will not increase with usage.

    In a recent interview, a lead SRE at a UK-based scale-up told us that their bill had increased 5x over the past 6 months. They are now paying 5,000 GBP / month, and we worked out that we could make their builds faster and at least halve their costs.

    Lower management costs.

    Whenever your team has to manage self-hosted runners, or explain non-standard tools like Kaniko to developers, you're losing money.

    With actuated, you bring your own servers, install our agent, and we do the rest. Our VM image is built with automation and kept up to date, so you don't have to manage packages.

    Predictable billing.

    Most small teams we interviewed were spending at least 1000-1500 USD / mo for the standard, slower hosted runners.

    That cost would only increase with GitHub's faster runners, but with actuated the cost is flat-rate, no matter how many minutes you use.

    Get insights into your organisation

    When you have more than a few teammates and a dozen repositories, it's near impossible to get insights into patterns of usage.

    Inspired by Google Analytics, Actuated contrasts usage for the current period vs the previous period - for the whole organisation, each repository and each developer.

    Organisational level insights

    Learn about the actuated dashboard

    These are the results you've been waiting for

    “Our team loves actuated. It gave us a 3x speed increase on a Go repo that we build throughout the day. Alex and team are really responsive, and we're already happy customers of OpenFaaS and Inlets.”

    Shaked Askayo
    CTO, Kubiya.ai

    “We just switched from Actions Runtime Controller to Actuated. It only took 30s to create 5x isolated VMs, run the jobs and tear them down again inside our on-prem environment (no Docker socket mounting shenanigans)! Pretty impressive stuff.”

    Addison van den Hoeven
    DevOps Lead, Riskfuel

    “From my time at Mirantis, I learned how slow and unreliable Docker In Docker can be. Compared to GitHub's 16 core runners, actuated is 2-3x faster for us.”

    Sergei Lukianov
    Founding Engineer, Githedgehog

    “This is great, perfect for jobs that take forever on normal GitHub runners. I love what Alex is doing here.”

    Richard Case
    Principal Engineer, SUSE

    “We needed to build public repos on Arm runners, but QEMU couldn't finish in 6 hours, so we had to turn the build off. Actuated now builds the same code in 4 minutes.”

    Patrick Stephens
    Tech Lead of Infrastructure, Calyptia/Fluent Bit.

    “We needed to build Arm containers for our customers to deploy to Graviton instances on AWS EC2. Hosted runners with QEMU failed to finish within the 6 hour limit. With actuated it takes 20 minutes - and thanks to Firecracker, each build is safely isolated for our open source repositories.”

    Ivan Subotic
    Head of Engineering, Swiss National Data and Service Center for the Humanities

    “One of our Julia builds was taking 5 hours of compute time to complete, and was costing us over 1500USD / mo. Alex's team got us a 3x improvement on speed and lowered our costs at the same time. Actuated is working great for us.”

    Justin Gray, Ph.D.
    CTO & Co-founder at Toolpath

    Frequently asked questions

    Workcation

    “It's cheaper - and less frustrating for your developers — to pay more for better hardware to keep your team on track.

    The upfront cost for more CPU power pays off over time. And your developers will thank you.”

     

    Read the article: The hidden costs of waiting on slow build times (GitHub.com)

    Natalie Somersall
    GitHub Staff

    Ready to go faster?

    You can try out actuated on a monthly basis, with no commitment beyond that.

    Make your builds faster today
    Matrix build with k3sup
    \ No newline at end of file diff --git a/pricing.html b/pricing.html index 9cb5dfe..4a7c6ba 100644 --- a/pricing.html +++ b/pricing.html @@ -4,4 +4,4 @@ gtag('js', new Date()); gtag('config', 'G-M5YNDNX7VT'); -

    Plans & Pricing

    You can get started with us the same day as your call.

      Simple, flat-rate pricing

      Actuated plans are based upon the total number of concurrent builds needed for your team. All plans include unmetered minutes, with no additional charges.

      The higher the concurrency level for your plan, the more benefits you'll receive.

      The onboarding process is quick and easy: learn more.

      In all tiers

      • Unlimited build minutes
      • Bring your own server
      • Free onboarding call with our engineers
      • Linux x86_64 and 64-bit Arm
      • Reporting and usage metrics
      • Support via email

      With higher concurrency

      • Debug tricky builds with SSH
      • Enhanced SLA
      • Private Prometheus metrics
      • Private Slack channel
      • Recommendations to improve build speed

      Early access

      • Self-hosted GitLab CI
      • GitHub Enterprise Server (GHES)
      • GPU pass-through

      Pay monthly, for up to 3x faster CI

      $100 USD / concurrent build

      Talk to us

      You'll meet with our founder to discuss your current challenges and answer any questions you may have.

    \ No newline at end of file +

    Plans & Pricing

    You can get started with us the same day as your call.

      Simple, flat-rate pricing

      Actuated plans are based upon the total number of concurrent builds needed for your team. All plans include unmetered minutes, with no additional charges.

      The higher the concurrency level for your plan, the more benefits you'll receive.

      The onboarding process is quick and easy: learn more.

      In all tiers

      • Unlimited build minutes
      • Bring your own server
      • Free onboarding call with our engineers
      • Linux x86_64 and 64-bit Arm
      • Reporting and usage metrics
      • Support via email

      With higher concurrency

      • Debug tricky builds with SSH
      • Enhanced SLA
      • Private Prometheus metrics
      • Private Slack channel
      • Recommendations to improve build speed

      Early access

      • Self-hosted GitLab CI
      • GitHub Enterprise Server (GHES)
      • GPU pass-through

      Pay monthly, for up to 3x faster CI

      $100 USD / concurrent build

      Talk to us

      You'll meet with our founder to discuss your current challenges and answer any questions you may have.

    \ No newline at end of file diff --git a/rss.xml b/rss.xml index 8a90015..d0dd951 100644 --- a/rss.xml +++ b/rss.xml @@ -4,7 +4,7 @@ Actuated - blog https://actuated.dev Keep your team productive & focused with blazing fast CI - Mon, 04 Dec 2023 09:04:25 GMT + Mon, 04 Dec 2023 09:06:47 GMT https://validator.w3.org/feed/docs/rss2.html https://github.com/jpmonette/feed @@ -58,7 +58,7 @@ KERNEL_REF="ghcr.io/openfaasltd/actuated-kernel:aarch64-latest"

    When you run Packer in a VM, instead of with one of the cloud drivers, you save on time and costs, by not having to fire up cloud resources on AWS, GCP, Azure, and so forth. Instead, you can run a local VM to build the image, then convert it to an AMI or another format.
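    A rough sketch of what that can look like in a workflow, assuming a runner with /dev/kvm available and a placeholder template that uses Packer's QEMU builder:

    jobs:
      build-image:
        runs-on: ubuntu-latest   # placeholder - use a runner where /dev/kvm is exposed
        steps:
          - uses: actions/checkout@v4
          - uses: hashicorp/setup-packer@main
          - name: Build the image in a local VM
            run: |
              packer init .
              packer build ubuntu.pkr.hcl   # placeholder template using the qemu builder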

    One of our customers has started exploring launching a VM during a CI job in order to test air-gapped support for enterprise customers. This is a great example of how you can use nested virtualisation to test your own product in a repeatable way.

    Nix benefits particularly from being able to create a clean, isolated environment within a CI pipeline, to get a repeatable build. Graham Christensen from Determinate Systems reached out to collaborate on testing their Nix installer in actuated.

    -

    He didn't expect it to run, but when it worked first time, he remarked: "Nice, perfect!"

    +

    He didn't expect it to run, but when it worked first time, he remarked: "Perfect! I'm impressed and happy that our action works out of the box."

    jobs:
       specs:
         name: ci