Enable e2e GHA test #434
Changes from all commits
9c115b8
9378df0
d215913
488102d
6033fdc
4471751
001af99
925e183
81d29e2
10abb5d
6fe33e2
88fb81c
3b34b41
5e1475f
@@ -126,6 +126,10 @@ func Serving() error {
 		return fmt.Errorf("wait: %w", err)
 	}
 
+	if err := reduceResources(); err != nil {
+		return fmt.Errorf("reduce: %w", err)
+	}
+
 	if err := waitForPodsReady("knative-serving"); err != nil {
 		return fmt.Errorf("core: %w", err)
 	}
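The hunk above wires the new reduceResources step into Serving() just before waitForPodsReady("knative-serving"). waitForPodsReady itself is defined elsewhere in this file and is not part of this diff; as a rough, hypothetical sketch of what such a helper looks like in this style (an assumption, not the repository's actual implementation), it would shell out to kubectl wait like the other helpers:

// Hypothetical sketch of waitForPodsReady; the real helper in this file may
// differ. It reuses the file's runCommand/exec.Command pattern to block until
// every pod in the namespace reports Ready, or time out.
func waitForPodsReady(namespace string) error {
	return runCommand(exec.Command(
		"kubectl", "wait", "pod",
		"--for=condition=Ready", "--all",
		"-n", namespace, "--timeout=10m",
	))
}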
@@ -219,6 +223,21 @@ func retryingApply(path string) error {
 	return err
 }
 
+func reduceResources() error {
+	var err error
+	err = runCommand(exec.Command("kubectl", "set", "resources", "deployment", "activator", "--requests=cpu=150m,memory=30Mi", "--limits=cpu=500m,memory=300Mi", "-n", "knative-serving"))

Review comment (on the line above):
The Kind node seems to share the whole host CPU and memory, so given that the sum of the Serving deployments' CPU requests (including Kourier) is roughly 1 CPU, one thing to explore is limiting Kind's resource allocation, assuming we want to use one worker node (because of the quickstart). At the Serving side we use
(One way to limit the Kind cluster's resources is sketched at the end of this section.)

Author reply:
Thank you for this valuable suggestion. If I can limit resource usage from the Kind side itself, then I can avoid a hard reduction of the deployment resources. I will try this.
+
+	deployments := []string{"autoscaler", "controller", "webhook"}
+	for i, deploy := range deployments {
+		err = runCommand(exec.Command("kubectl", "set", "resources", "deployment", deploy, "--requests=cpu=50m,memory=50Mi", "--limits=cpu=500m,memory=500Mi", "-n", "knative-serving"))
+		if err != nil {
+			fmt.Printf("Error %d\n", i)
+		}
+	}
+
+	return err
+}
+
 // waitForCRDsEstablished waits for all CRDs to be established.
 func waitForCRDsEstablished() error {
 	return runCommand(exec.Command("kubectl", "wait", "--for=condition=Established", "--all", "crd"))
Review comment:
Should we also run this for eventing? Otherwise, I think this is looking pretty good.
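On the eventing question above: a minimal, hypothetical sketch of what a matching step for the knative-eventing namespace could look like, reusing the runCommand helper from this file. The deployment names and request/limit values below are illustrative assumptions, not values taken from this PR.

// reduceEventingResources is a hypothetical sketch, not part of this PR.
// Deployment names and resource values are illustrative placeholders.
func reduceEventingResources() error {
	var err error
	deployments := []string{"eventing-controller", "eventing-webhook"}
	for _, deploy := range deployments {
		err = runCommand(exec.Command(
			"kubectl", "set", "resources", "deployment", deploy,
			"--requests=cpu=50m,memory=50Mi",
			"--limits=cpu=500m,memory=500Mi",
			"-n", "knative-eventing",
		))
		if err != nil {
			fmt.Printf("failed to set resources for %s\n", deploy)
		}
	}
	return err
}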
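On the earlier review suggestion to limit the Kind cluster's resource allocation instead of patching deployment resources: one hypothetical way to do this, assuming a single-node cluster, is to reserve part of the host's CPU and memory for the system via kubeadmConfigPatches when the cluster is created, so the node advertises less allocatable capacity. The file name, reservation values, and the helper below are assumptions for illustration (os, os/exec, and fmt are expected to be imported); this is not something the PR itself does.

// createResourceLimitedCluster is a hypothetical sketch, not part of this PR.
// It writes a Kind config that reserves host CPU/memory for the system, then
// creates the cluster from it. Values and the file name are illustrative.
const kindConfig = `kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            system-reserved: cpu=1,memory=2Gi
`

func createResourceLimitedCluster() error {
	if err := os.WriteFile("kind-config.yaml", []byte(kindConfig), 0o644); err != nil {
		return fmt.Errorf("write kind config: %w", err)
	}
	return runCommand(exec.Command("kind", "create", "cluster", "--config", "kind-config.yaml"))
}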