
content: Add attested build environments level requirements #1051

102 changes: 102 additions & 0 deletions docs/spec/v1.1/levels.md
@@ -26,6 +26,7 @@ tracks without invalidating previous levels.
| [Build L1] | Provenance showing how the package was built | Mistakes, documentation
| [Build L2] | Signed provenance, generated by a hosted build platform | Tampering after the build
| [Build L3] | Hardened build platform | Tampering during the build
| [Build L4] | Attested build environment | Tampering before the build

<!-- For comparison: a future Build L4's focus might be reproducibility or
hermeticity or completeness of provenance -->
@@ -231,14 +232,115 @@ All of [Build L2], plus:
</dl>
</section>

<section id="build-l4">

### Build L4: Attested build environment

<dl class="as-table">
<dt>Summary<dd>

Compromising a build requires exploiting a vulnerability in the build
platform's supply chain or physical access to the underlying compute
platform.

In practice, this means that builds are configured to run on a build platform
that supports hardware capabilities for attesting to the state of the
compute environment executing builds.

<dt>Intended for<dd>

Software releases needing assurances about the integrity of the environment
used to create the release (e.g., specific compute platform, pre-build
tamper detection).
Build L4 usually requires significant changes to existing build platforms.
Member:

There are no prior examples to be able to assess the truthfulness of this statement. Significant changes could be required for any increased level.

Contributor Author:

Fair point. It's true that L3 already has a very similar statement, and we still wanted to be explicit about the fact that this L4 would require additional significant changes on top of L3. We can be more precise in what we mean here: for example, one of the significant requirements is hardware with very particular features (e.g., TPM or TEE support). Would that be more helpful here?

Contributor:

I'm sympathetic to the view that we can just repeat the language from L3 (unless we also want to rewrite the L3 'intended for' section). Maybe with the caveat "significant changes to existing L3 build platforms"?

I think the requirements below do a fine job of getting into the details.

Member:

How about:

Suggested change:
-Build L4 usually requires significant changes to existing build platforms.
+Build L4 may require significant changes to existing build platforms.

and then list some of the requirements?


<dt>Requirements<dd>

All of [Build L3], plus:

- Software producer:
- MUST run builds on a hosted build platform that meets Build L4
requirements.
- SHOULD verify the build platform's attestations prior to a build and
produce a [verification summary] about the check.
Member:

What would this look like? I would expect that the build system would create a build environment attestation on the artifact. I feel like it is a lot to ask for a producer to verify it and publish the VSA. Or does this not have to be a human/manual process?

Contributor Author:

The main goal of the proposed requirements is to make tampering prior to a build detectable. That is, at any point from when a build image is itself created/released to the point where a VM is deployed and waiting for a new build job to come in. This means the build platform will actually generate its attestations before and independently of the artifact. We have a figure showing this sequence which I think would be helpful in clarifying things.

I'll note that the point of using hardware-based integrity measurements and attestation for this is to reduce the amount of manual self-attestation and verification that needs to happen on the part of the build platform and producer. I'll also note that this requirement is a SHOULD, so producers aren't strictly required to check these attestations.
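For concreteness, the producer-side SHOULD could be satisfied by emitting a minimal verification summary after checking the platform's attestations. The sketch below shows what such a statement might look like; the field values are illustrative and only a subset of VSA fields is shown, so consult the VSA specification for the authoritative schema:

```python
import hashlib
import json

def make_verification_summary(subject_sha256: str, passed: bool) -> dict:
    # Minimal in-toto Statement carrying a SLSA Verification Summary (VSA).
    # Illustrative subset of fields only; see the VSA spec for the full schema.
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "predicateType": "https://slsa.dev/verification_summary/v1",
        "subject": [{"digest": {"sha256": subject_sha256}}],
        "predicate": {
            "verificationResult": "PASSED" if passed else "FAILED",
            "verifiedLevels": ["SLSA_BUILD_LEVEL_4"] if passed else [],
        },
    }

# Hypothetical digest of the attested build-environment state being summarized.
digest = hashlib.sha256(b"attested build environment state").hexdigest()
vsa = make_verification_summary(digest, passed=True)
print(json.dumps(vsa, indent=2))
```

Because the check is over machine-readable hardware attestations, generating such a summary can be fully automated rather than a manual step for the producer.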


- Build platform:
- Each build image (i.e., VM or container) made available to software
Member:

The VMs must be built on a SLSA Build L3+ platform as well? What does this mean in practice?

marcelamelara (Contributor Author), May 15, 2024:

Yes, we're using the creation process of the VM image to bootstrap a first degree of trust in the build environment. Today we're already placing trust in GHA to configure an L3 platform, so, say GHA VM images were themselves built on GHA, this requirement would give us assurances that the VM images were built with the same L3 integrity. It's a bit recursive, much like building gcc with gcc is, but the guarantees provided by the rest of the requirements become significantly weakened if we didn't have integrity for the build environment's build process. This is because the Provenance of the VM image gives us the known-good value that can later be checked against when the VM is deployed.

marcelamelara (Contributor Author), May 15, 2024:

I should also add that the conditions for meeting this requirement are meant to be binary. That is, L4 is achieved iff the VM image is built on a SLSA Build L3+ platform. We believe this is necessary to get around the issue of resolving what the transitive SLSA level would be.

Contributor:

This text also presupposes that all builders run in VMs. Are we categorically rejecting remote attestations from physical TPMs on non-virtual machines?

marcelamelara (Contributor Author), Aug 26, 2024:

@deeglaze Can you please clarify, when you say non-VM, do you mean that a build could be running inside a container (backed by a physical TPM), or a bare metal environment, or either? There are a few reasons why we're preferring VM-based environments, but I'd like to track the discussion around other settings.

Contributor:

I mean a task that is launched by a trusted orchestration daemon that establishes some resource limitations and filesystem access restrictions as Linux allows, but not necessarily following an OCI container description. At Google we have build servers that only run builds but still run on Borg. All the node software is accounted for with Titan TPMs. There's no strong reason for us to require builds to run in VMs since we have everything measured and can account for the machine access by the identity and access management system and (measured) software-enforced ACLs. The build environment is managed by this production identity system before getting to the build job that then measures the inputs for SLSA L3, but without incorporating the measurement into PCRs.

The hardware attestation of prodID and software attestation by Borg ID (BCID) protect our build environment integrity in an auditable manner, but without incorporating the entirety of our production ecosystem in the build attestation. The new SLSA level should allow for ecosystem measurements to be held back if the operational security and physical security of the servers can be attested to implicitly with the SLSA signing key release mechanism.

Now if you’re saying that the build ecosystem integrity is only a means to an end of requiring the build to be run within confidential computing technologies, I’d say that is not the most important goal to reach. If you want the build ecosystem to satisfy holistic properties like “no humans may access the environment” then you have to talk about more security commitments of the software, not just its measurement.

Contributor Author:

Thanks for the detailed explanation!

Member:

I recommend removing these requirements. In my mind, the requirement should be relatively simple: there is a hardware-backed attestation (SEV-SNP, TDX, or equivalent) attesting to:

- the entire initial state of the build environment (bootloader, kernel, filesystem, etc.)
- all of the inputs to the build

The properties of those inputs, such as the VM, seem outside the scope. There are many inputs to the build, and I would assume that we should verify each of those separately as part of some "transitive SLSA" verification.

Contributor Author:

Agreed, I realized looking at this again more recently that this list really enumerates low-level mechanisms for achieving the high-level properties, rather than stating those high-level properties. My thinking is that a lot of the content currently here would ultimately move to the requirements.md description.

> The properties of those inputs, such as the VM, seem outside the scope. There are many inputs to the build, and I would assume that we should verify each of those separately as part of some "transitive SLSA" verification.

I generally agree with the notion that inputs/dependencies to the build should be verified separately. At the same time, the properties of the VM specifically are needed to check the initial state of the build environment. That is, in order for the platform or a strict producer to verify the initial state of the build environment it needs to have known-good reference values to check against. The SLSA L3 Provenance for the build image's build provides this known-good value, with strong integrity assurances, that can then be bootstrapped to gain trust in the integrity of the initial state of the build environment.

marcelamelara (Contributor Author), May 20, 2024:

> there is a hardware-backed attestation (SEV-SNP, TDX, or equivalent) attesting to ... all of the inputs to the build

@MarkLodato Thinking about this some more as I reformulate the requirements. What do you mean by "inputs to the build"? Is this about the completeness of external parameters in the Provenance? If so, one way to accomplish this requirement is to sign the Provenance itself with the hardware root of trust. Is this what you have in mind?

producers MUST be built on a SLSA Build L3+ platform. The generated
SLSA Provenance MUST be distributed to allow for independent
verification.
Member:

The provenance for what must be distributed: the build image's? This is already a requirement for Build L1-3: https://slsa.dev/spec/v1.0/requirements#distribute-provenance . Is the intention for this provenance to be verified somehow?

- Distribution of SLSA Provenance for pre-installed software within the
build image MAY be best-effort.
Member:

How would this distribution happen? Are we trying to add recursive SLSA into this requirement? Is that too much to do?

marcelamelara (Contributor Author), May 15, 2024:

We aren't trying to add recursive SLSA in this requirement. The intent here is to clarify that while SLSA Provenance is required for the build image creation, SLSA Provenance for pre-installed software (e.g., the Linux kernel, packages etc) is not expected of the build platform.

- The boot process of each build environment MUST be measured and
attested using a [TCG-compliant measured boot] mechanism. The
attestation MUST be authenticated and distributed for independent
verification.
Member:

These changes are proposed to be part of the build track but we are now adding additional required attestations into the mix (this attestation + the producer's VSA mentioned earlier). I know that we do not have a 1:1 relationship between attestations and tracks, but this seems to be muddling the simplicity of the build track. Would this proposal be better as its own track?

Contributor Author:

You raise a valid point about preserving the simplicity of the build track, and the types of attestations being a part of that. The intent here is to actually encapsulate the hardware-based attestations inside an in-toto attestation, to at least ensure we remain within the SLSA attestation model. So at Build L4 there would be two types of in-toto attestations that would be generated: SLSA Provenance and SCAI encapsulating the hardware-based attestations. We think this approach helps us keep the attestation types limited to a select few. I'll add a TODO item to include examples of these attestations to illustrate what we would expect, and that'll hopefully clarify some of these questions.

On the question of converting this workstream into its own track, our original proposal went down that path. But we got some very convincing feedback from some in the community that that would actually introduce more complexity than is necessary in SLSA overall when the Build Track already pertains to properties of the build platform.
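For readers unfamiliar with the mechanism behind the measured-boot requirement discussed above, the core TCG operation is a hash-chained "extend" into a PCR, which makes the attested value depend on every boot stage and its order. A minimal sketch, with illustrative component names standing in for real firmware and kernel measurements:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new value = SHA-256(old PCR value || measurement).
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start zeroed at power-on; each boot stage is measured before
# control transfers to it, so later stages cannot erase earlier entries.
pcr = bytes(32)
event_log = [b"firmware", b"bootloader", b"kernel", b"kernel-cmdline"]
for component in event_log:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

# An independent verifier replays the distributed event log and compares
# the result against the PCR value in the signed quote.
```

The extend chain is order-sensitive: replaying the same event log reproduces the PCR value, while any reordering or substitution of a component yields a different digest, which is what makes pre-build tampering detectable.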

- The initial state of the build environment's disk image MUST be
integrity measured and attested. The attestation MUST be
authenticated and distributed for independent verification.
- Read-write block devices or file system paths MUST be encrypted
with a key that is only accessible within the build image.
Member:

Does this need to be included in the attestation?

Can you clarify the presence of the key? I assume that you are not trying to say that the encryption key should be present in the image used to run the VM/container as then it would be reused for multiple running instances of the build.

- Before launching a new environment based on a build image (i.e., VM
or container instance), its SLSA Provenance MUST be verified.
Member:

What are the criteria for verification here? Is it just the identity or is there more verification needed than that? Are we supposed to verify that it has met some SLSA level?

marcelamelara (Contributor Author), May 23, 2024:

The main aspect to verify here is the identity/hash of the build image against what's recorded in the Provenance. Checking the SLSA level of the Provenance here gives extra assurances about the trustworthiness of the hash. In practice, this check would follow the standard verification flow. I'll revise this requirement to make the intent clearer.
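In code, the identity/hash check described here reduces to comparing the build image's digest against the subject digest recorded in the in-toto Statement carrying the Provenance. A minimal sketch, assuming a hypothetical, truncated Statement (signature verification of the enclosing DSSE envelope would happen before this comparison):

```python
import hashlib

def image_matches_provenance(image_bytes: bytes, statement: dict) -> bool:
    # Compare the build image's SHA-256 against the subject digests
    # recorded in the in-toto Statement wrapping the SLSA Provenance.
    digest = hashlib.sha256(image_bytes).hexdigest()
    return any(
        subject.get("digest", {}).get("sha256") == digest
        for subject in statement.get("subject", [])
    )

# Illustrative image contents and a matching minimal Statement.
image = b"example build image contents"
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "predicateType": "https://slsa.dev/provenance/v1",
    "subject": [
        {"name": "build-image",
         "digest": {"sha256": hashlib.sha256(image).hexdigest()}},
    ],
}
```

A full verification flow would also check the builder identity and SLSA level claimed in the Provenance, per the standard verification procedure.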

- Before making a build environment available for a build request:
- The boot process and state of the disk image MUST be verified, and
Member:

This verification feels different than the previously mentioned one. Is it?

cryptographically bound to the build image's valid SLSA provenance
to establish a verifiable integrity chain between a build image and
a build environment.
- A unique immutable build environment identifier (e.g.,
cryptographic keypair) MUST be generated and cryptographically bound
to the build environment's integrity chain.
- These bindings MUST be authenticated and distributed for
independent verification.
- Before executing a tenant's build request (e.g., GHA build job):
- The build environment's integrity chain and uniqueness of its
immutable identifier MUST be verified.
- A unique immutable build request identifier (e.g., GHA build job
ID) MUST be generated and cryptographically bound to a valid build
environment integrity chain.
- All build platform generated attestations and cryptographic bindings
MUST be backed by a hardware root of trust (e.g., [TPM] or [trusted
execution environment]). Note: Virtual hardware (e.g., vTPM) MAY be
used to meet this requirement.
- Runtime changes to the build environment's disk image SHOULD be
observable at runtime by the executing build request.
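One way to read the chain of MUSTs above is as a sequence of cryptographic bindings, each layered on the last: image provenance, boot measurement, environment identifier, then build request. A stdlib-only sketch, with illustrative inputs; a real platform would use hardware-rooted signatures (e.g., TPM quotes) rather than bare hashes:

```python
import hashlib
import secrets

def bind(*parts: bytes) -> bytes:
    # Illustrative "binding": hash the parts together in order. A real
    # implementation would sign these values with a hardware root of trust.
    h = hashlib.sha256()
    for part in parts:
        h.update(hashlib.sha256(part).digest())
    return h.digest()

# Hypothetical stand-ins for the real artifacts.
provenance_digest = hashlib.sha256(b"build-image SLSA Provenance").digest()
boot_measurement = hashlib.sha256(b"measured-boot event log").digest()

# Integrity chain: ties the booted environment state to the image provenance.
integrity_chain = bind(provenance_digest, boot_measurement)

# Unique immutable environment identifier, bound to the integrity chain.
env_id = secrets.token_bytes(32)
env_binding = bind(integrity_chain, env_id)

# Unique build request identifier, bound to a verified environment binding.
request_id = b"build-job-1234"
request_binding = bind(env_binding, request_id)
```

Each binding is deterministic given its inputs, so an independent verifier holding the distributed attestations can recompute the chain and detect substitution of any link.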

<dt>Benefits<dd>
Member:

An alternate framing would be that it removes almost all of the build platform from the root of trust. All that's left is the hardware vendor and physical access.

Contributor Author:

At a high level, this is true, and we should probably keep this section more concise than it currently is. We did want to capture some of the nuance of doing this practice, though. For example, you actually need to still place a fair amount of trust in the cloud provider that's hosting the VMs running the builds to not tamper with components like the host OS or the vTPM implementation being used to check the integrity of the build environment. The other part we wanted to emphasize is the machine-checkable aspect of relying on the attestable hardware, compared to the expectations for verification at L3. Maybe this nuance doesn't need to be covered in such detail here?

Contributor:

I see the importance of capturing the nuance. I think what Mark might be getting at is that the benefits as currently enumerated are a bit abstract.

E.g. "Greatly reduces trust in a hosted build platform by increasing observability into the level of integrity of the build environment."

Could this instead be something like "Greatly reduces the TCB of a hosted build platform by preventing tampering with the build execution environment, leaving only the OS and vTPM in scope" or something like that?


All of [Build L3], plus:

- Greatly reduces trust in a hosted build platform by increasing
observability into the level of integrity of the build environment.
- Provides machine-checkable information about the build image’s provenance
beyond self-attestation by the build platform operator.
- Provides cryptographic, hardware-rooted evidence about:
- The initial state of the compute environment executing a build, such
as its boot settings (e.g., measured boot enabled), the bootloader and
kernel image.
- The initial state of the environment's disk image and its
configuration.
- The build environment that executed a particular run of a build.
- The use of [confidential computing] to meet these
requirements provides builds with additional data and code
confidentiality and tamper-evidence properties during build execution.

</dl>
</section>

<!-- Link definitions -->

[build l0]: #build-l0
[build l1]: #build-l1
[build l2]: #build-l2
[build l3]: #build-l3
[build l4]: #build-l4
[confidential computing]: https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/Common-Terminology-for-Confidential-Computing.pdf
[future versions]: future-directions.md
[hosted]: requirements.md#isolation-strength
[previous version]: ../v0.1/levels.md
[provenance]: terminology.md
[TCG-compliant measured boot]: https://trustedcomputinggroup.org/resource/tcg-efi-platform-specification/
[TPM]: https://trustedcomputinggroup.org/resource/tpm-library-specification/
[trusted execution environment]: https://csrc.nist.gov/glossary/term/trusted_execution_environment
[verification]: verifying-artifacts.md
[verification summary]: verification_summary.md
32 changes: 32 additions & 0 deletions docs/spec/v1.1/terminology.md
@@ -125,6 +125,38 @@ of build types](/provenance/v1#index-of-build-types).

</details>

### Build environment model

Following the [build model](#build-model), we model a *build environment*
as a single instance of an execution context that runs a tenant's build
on a hosted build platform. Specifically, a build environment comprises
resources from the *compute platform* and a *build image*.

A typical build environment will go through the following lifecycle:

1. A hosted build platform creates different build images through a separate
build process. For SLSA Build L4, the build platform outputs provenance
describing this process.
2. The hosted build platform provisions resources on the compute platform
   and a build image to launch a new build environment. For SLSA Build L4, the
build platform validates the *measurement* of the *boot process*.
3. When a new *build request* arrives at the hosted build platform, the
platform assigns the request to a pre-provisioned build environment.
For SLSA Build L4, the tenant may validate the measurement of the build
environment.
4. Finally, the build environment executes the tenant's build steps.
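The four lifecycle steps above can be modeled as a small state machine. The state names below are illustrative, not spec terminology:

```python
from enum import Enum, auto

class EnvState(Enum):
    IMAGE_BUILT = auto()   # 1. build image created; provenance emitted (L4)
    PROVISIONED = auto()   # 2. environment launched; boot measurement validated (L4)
    DISPATCHED = auto()    # 3. build request assigned; tenant may verify measurement
    EXECUTING = auto()     # 4. tenant's build steps run

# Allowed forward transitions; the lifecycle moves strictly in order.
NEXT = {
    EnvState.IMAGE_BUILT: EnvState.PROVISIONED,
    EnvState.PROVISIONED: EnvState.DISPATCHED,
    EnvState.DISPATCHED: EnvState.EXECUTING,
}

def advance(state: EnvState) -> EnvState:
    if state not in NEXT:
        raise ValueError(f"{state.name} is a terminal state")
    return NEXT[state]
```

The strict ordering matters for L4: each stage's verification (provenance, boot measurement, dispatch binding) assumes the previous stage's attestation already exists and has been checked.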

| Primary Term | Description
| --- | ---
| Build image | The run-time context within a build environment, such as the VM or container image. Individual components of a build image are provided by both the hosted build platform and tenant, and include the build executor, platform-provided pre-installed guest OS and packages, and the tenant’s build steps and external parameters.
Contributor:

This definition of build image as including parts from a tenant seems contrary to prior uses of the term to mean specifically the image of the build environment.

I would advise checking prior instances of "build image" and disambiguate them with "build environment image" and "tenant build image". A build platform will attest to itself, but it may also mount a container including a tenant's build toolchain for executing a build request, so that's an important distinction to make.

| Build executor | The platform-provided program dedicated to executing the tenant’s build definition, i.e., running the build, within the build image.
Contributor:

Clarify that the build executor must be a measured component of the build environment image.

| Build request | The process of assigning a build to a pre-provisioned build environment on a hosted build process.
Contributor:

I always think of requests as messages. Perhaps,

"A user-provided message to the build platform that is used to assign a build to a pre-provisioned build environment on a hosted build process."

But this could be splitting hairs.

Contributor Author:

Fair point. This isn't really about the exchanged messages, but about the action of assigning and dispatching a tenant's build process to a pre-deployed build environment. More recently, I've been using the term "build environment dispatch" for this step in the build environment's lifecycle, which is hopefully a little clearer.

| Compute platform | The compute system and infrastructure, i.e., the host system (hypervisor and/or OS) and hardware, underlying a build platform. In practice, the compute platform and the build platform may be managed by the same or distinct organizations.
| Boot process | In the context of builds, the process of loading and executing the layers of firmware and software needed to start up a build environment.
| Measurement | The cryptographic hash of some system state in the build environment, including software binaries, configuration, or initialized run-time data. Software layers that are commonly measured include the bootloader, kernel, and kernel cmdline.

TODO: Disambiguate similar terms (e.g., build job, build runner)

### Package model

Software is distributed in identifiable units called <dfn>packages</dfn>
Expand Down