
Checking attestors for duplicates #332

Closed

Conversation

ChaosInTheCRD
Collaborator

@ChaosInTheCRD ChaosInTheCRD commented Dec 15, 2023

Do Not Merge until after: #331

This PR checks whether a default attestor has been added as an argument, or whether the same attestor has been added multiple times. It uses in-toto/go-witness#104, so it must only be merged once those changes are released.
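For illustration, here is a minimal sketch of the kind of duplicate check described above, assuming attestors are identified by string names. The dedupeAttestors helper and the names in main are hypothetical, not witness's actual code:

```go
package main

import "fmt"

// dedupeAttestors is a hypothetical helper illustrating the behaviour this
// PR describes: default or already-declared attestor names are skipped with
// a warning instead of being registered twice.
func dedupeAttestors(requested, defaults []string) []string {
	seen := make(map[string]struct{}, len(defaults))
	for _, d := range defaults {
		seen[d] = struct{}{}
	}

	var out []string
	for _, name := range requested {
		if _, dup := seen[name]; dup {
			fmt.Printf("WARN: attestor %q already declared, skipping\n", name)
			continue
		}
		seen[name] = struct{}{}
		out = append(out, name)
	}
	return out
}

func main() {
	// "environment" duplicates a default and the second "git" repeats the
	// first, so both are skipped with a warning; only [git] survives.
	fmt.Println(dedupeAttestors([]string{"git", "environment", "git"}, []string{"environment"}))
}
```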

@ChaosInTheCRD ChaosInTheCRD reopened this Dec 15, 2023
@ChaosInTheCRD ChaosInTheCRD marked this pull request as draft December 15, 2023 13:35
Signed-off-by: Tom Meadows <tom@tmlabs.co.uk>
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
@ChaosInTheCRD ChaosInTheCRD marked this pull request as ready for review December 18, 2023 17:46
@ChaosInTheCRD ChaosInTheCRD changed the title WIP: Checking attestors for duplicates Checking attestors for duplicates Dec 18, 2023
} else {
	attestor, err := attestation.AddAttestor(a)
	if err != nil {
		return fmt.Errorf("failed to create attestor: %w", err)
	}
}
Collaborator

Do we want to fail at the first attestor, or would it be better to keep trying to add the subsequent attestors and give the user one error listing all the failed ones?
My question is: since we already only warn the user about duplicated attestors, is there any reason to fail at the first broken one?

Example: let's say the user sends attestors a, b, c, d, e, f:
a and c are duplicated,
d returns an error,
b, e, f are good.

We will warn the user about a, include b, warn the user about c, and then fail at d, never reaching e and f.

Feel free to ignore if I'm missing some context here :)
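A sketch of the aggregate-then-fail alternative suggested above, using errors.Join from the Go standard library. addAttestor here is a hypothetical stand-in for the real registration call, not witness's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// addAttestor is a hypothetical stand-in for the real attestor registration.
func addAttestor(name string) error {
	if name == "d" {
		return fmt.Errorf("attestor %q: not found", name)
	}
	return nil
}

func main() {
	var errs []error
	for _, name := range []string{"a", "b", "c", "d", "e", "f"} {
		if err := addAttestor(name); err != nil {
			// Keep going so the user sees every failure at once.
			errs = append(errs, err)
		}
	}
	// errors.Join returns nil when errs is empty, so this only fails
	// after all attestors have been attempted.
	if err := errors.Join(errs...); err != nil {
		fmt.Println("failed to create attestors:", err)
	}
}
```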

Collaborator Author

Hmmm, I don't feel strongly either way, but I suppose you're asking: if creating an attestor fails, should the whole run fail?

If the AddAttestor function fails, I think we should stop. I don't think a witness run should continue if not all of the expected attestations are going to be generated, so I'm confident the command should fail on failed attestor creation.

Going back to the original problem this PR addresses: I tackled it by warning the user that they have repeated themselves / declared an attestor that has already been declared (e.g., by default). It could be better practice to just fail in this situation too; after all, it may be simpler to say "you've duplicated an attestor definition, please fix the invocation".

Collaborator Author

@jkjell @mikhailswift, any thoughts?

Member

To throw a 🔧 into the conversation, I created #340 to capture thoughts around a future idea that may run counter to this one.

Collaborator Author

@jkjell I think what you mention in #340 makes sense, but I don't think it conflicts much with this PR, because (at least for now) there is no reason to run an attestor more than once. I think we should merge this as-is, and we can change the behaviour if/when we implement #340.

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Signed-off-by: Tom Meadows <tom@tmlabs.co.uk>
# - run: |
#     echo "Run, Build Application using script"
#     ./location_of_script_within_repo/buildscript.sh
# NOTE: Removed autobuild step as it was leading to hanging in GitHub Actions
Collaborator Author

I have removed the autobuild step from here (as the comments instructed), as it led to behaviour that left the step hanging indefinitely (see here). In my opinion, such wasteful use of compute time should be avoided at all costs.

I think it would be nice to get back to using the autobuild scripting; however, it introduces some guesswork and ambiguity that can be eliminated by simply pointing the workflow at make or make build. I also don't believe this change sacrifices any functionality.

@jkjell @kairoaraujo, it would be good to get opinions here. I also recognise that this PR may not be the place for such a change; if you want, I can move it to a separate PR.
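For illustration, the explicit build step being discussed might look roughly like this in the workflow; the step name and make target are assumptions, not the repository's actual configuration:

```yaml
# Hypothetical replacement for the removed autobuild step.
- name: Build
  run: make build
```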

Member

I'm not sure if this was from the job or because we're an OSS project, but the billable time was 0s for the failure: https://github.com/in-toto/witness/actions/runs/7251758704/usage. It looks like the only thing autobuild did was call make, so that makes (😂) me think we could hit the same issue with a direct make command too.

Member

But, yeah, if we want this change, ➕ to a separate PR.

Collaborator Author

Yeah, I see what you mean about us being an OSS project; it still gives me the heebie-jeebies.

As for what caused it, I'm still not totally sure, but the autobuild logic actually runs make (which obviously fails) and continues anyway:

2023/12/18 17:47:27 Running /usr/bin/make failed, continuing anyway: exit status 2
2023/12/18 17:47:27 Build failed, continuing to install dependencies.
2023/12/18 17:47:27 Installing dependencies using `go get -v ./...` in `.`.

Whatever the cause, I feel it's more deterministic to do it this way, so I reckon we should split this out to a separate PR regardless. I don't think we should trust any automation we don't strictly need; if it wants to know how we build, we can just tell it 😄.

Signed-off-by: Tom Meadows <tom@tmlabs.co.uk>