Enable e2e GHA test #434
Conversation
kn client version 1.10.2 not available
Reduce serving component resources to fit in the GHA runner
@naveenrajm7: The label(s) … In response to this: …
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @naveenrajm7. Thanks for your PR. I'm waiting for a knative-sandbox member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Addressed comments and fixed the failing e2e test. Currently the list of deployments and the amount of resources are hard-coded (see the sketch after the diff below); if there are other ways to explore, let me know. /retest
@@ -219,6 +223,21 @@ func retryingApply(path string) error {
	return err
}

func reduceResources() error {
	var err error
	err = runCommand(exec.Command("kubectl", "set", "resources", "deployment", "activator", "--requests=cpu=150m,memory=30Mi", "--limits=cpu=500m,memory=300Mi", "-n", "knative-serving"))
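On the hard-coding concern above, here is a minimal, hypothetical sketch of a table-driven variant — not the PR's code. The deployment names are the upstream Serving components, the request/limit values are illustrative, runCommand stands in for the quickstart's existing helper, and the package name is assumed:

```go
package install

import (
	"fmt"
	"os/exec"
)

// runCommand is a stand-in for the quickstart's existing helper.
func runCommand(cmd *exec.Cmd) error {
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w", string(out), err)
	}
	return nil
}

// reduceResources lowers requests/limits for each Serving deployment so the
// components fit on a resource-constrained GHA runner. Deployment names and
// resource values here are illustrative assumptions.
func reduceResources() error {
	for _, d := range []string{"activator", "autoscaler", "controller", "webhook"} {
		err := runCommand(exec.Command("kubectl", "set", "resources", "deployment", d,
			"--requests=cpu=150m,memory=30Mi", "--limits=cpu=500m,memory=300Mi",
			"-n", "knative-serving"))
		if err != nil {
			return fmt.Errorf("reducing resources for %s: %w", d, err)
		}
	}
	return nil
}
```

A loop over a slice keeps the per-deployment values in one place, so adding or retuning a component is a one-line change rather than a new hard-coded call.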
Kind nodes seem to share the whole host CPU and memory, so given that the sum of the Serving deployments' CPU requests (including Kourier) is roughly 1 CPU, one thing to explore is limiting Kind's resource allocation, assuming we want to use one worker node (because of the quickstart). On the Serving side we use kind-worker-count=4 and set up Kind with chainguard-dev/actions/setup-kind@main (which comes with MetalLB, a local registry, etc., though). Otherwise I don't see many options other than adjusting our deployment resources.
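For context, a minimal sketch of one way to shrink what the kubelet reports as allocatable on a Kind node, using system-reserved in a kubeadm patch; the reserved values are illustrative assumptions, not tested settings (on Linux, Kind still shares the host's actual resources — this only reduces the scheduler's view):

```yaml
# kind-config.yaml — hypothetical sketch; reserved values are illustrative.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: cpu=500m,memory=1Gi
```

Usage would be `kind create cluster --config kind-config.yaml`.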
Thank you for this valuable suggestion. If I can limit resource usage from the Kind side itself, then I can avoid a hard reduction of the deployment resources. I will try this.
Test failure is legit ... we don't have a matrix in this action.
I didn't change this part of the workflow; I'll check why the matrix keyword was used in the first place.
No matrix is used in the e2e test.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: naveenrajm7. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
if err := reduceResources(); err != nil {
	return fmt.Errorf("reduce: %w", err)
}
Should we also run this for eventing? Otherwise, I think this is looking pretty good.
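A hypothetical sketch of extending the same pattern to eventing, assuming the upstream deployment names eventing-controller and eventing-webhook, with illustrative values and reusing the runCommand helper from the earlier sketch — this is not code from the PR:

```go
// Hypothetical: reduce eventing deployment resources the same way.
// Deployment names are the upstream defaults; values are illustrative.
func reduceEventingResources() error {
	for _, d := range []string{"eventing-controller", "eventing-webhook"} {
		err := runCommand(exec.Command("kubectl", "set", "resources", "deployment", d,
			"--requests=cpu=100m,memory=30Mi", "--limits=cpu=300m,memory=200Mi",
			"-n", "knative-eventing"))
		if err != nil {
			return fmt.Errorf("reducing resources for %s: %w", d, err)
		}
	}
	return nil
}
```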
This Pull Request is stale because it has been open for 90 days with no activity.
Changes
/kind
Fixes #392
Release Note
Docs