Add docs on running tests, do not wait for all providers in hosted test
Signed-off-by: Kyle Squizzato <ksquizzato@mirantis.com>
squizzi committed Sep 10, 2024
1 parent 5596db9 commit 705dc1b
Showing 4 changed files with 71 additions and 10 deletions.
30 changes: 28 additions & 2 deletions docs/aws/hosted-control-plane.md
@@ -19,7 +19,12 @@ reused with a management cluster.
If you deployed your AWS Kubernetes cluster using Cluster API Provider AWS (CAPA)
you can obtain all the necessary data with the commands below or use the
template found below in the
[HMC ManagedCluster manifest generation](#hmc-managed-cluster-manifest-generation) section.
[HMC ManagedCluster manifest
generation](#hmc-managed-cluster-manifest-generation) section.

If using the `aws-standalone-cp` template to deploy a hosted cluster, it is
recommended to use a `t3.large` or larger instance type, as the `hmc-controller`
and other provider controllers need a large amount of resources to run.

**VPC ID**

@@ -89,7 +94,7 @@ Grab the following `ManagedCluster` manifest template and save it to a file name
apiVersion: hmc.mirantis.com/v1alpha1
kind: ManagedCluster
metadata:
name: aws-hosted-cp
name: aws-hosted
spec:
template: aws-hosted-cp
config:
@@ -109,3 +114,24 @@ Then run the following command to create the `managedcluster.yaml`:
```
kubectl get awscluster cluster -o go-template="$(cat managedcluster.yaml.tpl)" > managedcluster.yaml
```
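Once the manifest has been rendered, it can be reviewed and applied to the
management cluster. This is a minimal sketch assuming `kubectl` is pointed at
the HMC management cluster:

```
# Review the rendered ManagedCluster manifest, then apply it so the
# hmc-controller can begin provisioning the hosted control plane.
cat managedcluster.yaml
kubectl apply -f managedcluster.yaml
```
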
## Deployment Tips
* Ensure the HMC templates and the controller image are hosted somewhere public
  and fetchable.
* To install the HMC charts and templates from a custom repository, load the
  `kubeconfig` from the cluster and run the following commands:

```
KUBECONFIG=kubeconfig IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" make dev-apply
KUBECONFIG=kubeconfig make dev-templates
```
* The infrastructure will need to be manually marked `Ready` for the
  `MachineDeployment` to scale up. You can patch the `AWSCluster` kind using
  the command below (a quick verification sketch follows this list):
```
KUBECONFIG=kubeconfig kubectl patch AWSCluster <hosted-cluster-name> --type=merge --subresource status --patch 'status: {ready: true}' -n hmc-system
```
For additional information on why this is required, see the [k0smotron documentation](https://docs.k0smotron.io/stable/capi-aws/#:~:text=As%20we%20are%20using%20self%2Dmanaged%20infrastructure%20we%20need%20to%20manually%20mark%20the%20infrastructure%20ready.%20This%20can%20be%20accomplished%20using%20the%20following%20command).
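As a quick sanity check after patching, the status can be read back. This is a
minimal sketch assuming the `AWSCluster` lives in the `hmc-system` namespace,
as in the patch command above:

```
# Confirm the AWSCluster now reports ready; the MachineDeployment should
# start scaling up once this prints "true".
KUBECONFIG=kubeconfig kubectl get awscluster <hosted-cluster-name> -n hmc-system -o jsonpath='{.status.ready}'
```
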
23 changes: 23 additions & 0 deletions docs/dev.md
@@ -83,3 +83,26 @@ export KUBECONFIG=~/.kube/config
kubectl --kubeconfig ~/.kube/config get secret -n hmc-system <managedcluster-name>-kubeconfig -o=jsonpath={.data.value} | base64 -d > kubeconfig
```
## Running E2E tests locally
E2E tests can be run locally via the `make test-e2e` target. To deploy properly,
as CI does, a non-local registry will need to be used, and the Helm charts and
the hmc-controller image will need to exist in that registry; for example, using
GHCR:
```
IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" \
make test-e2e
```
Optionally, the `NO_CLEANUP=1` env var can be used to prevent the `After` nodes
from running within some specs. This lets users debug tests by re-running them
without waiting for a new infrastructure deployment to occur.
For subsequent runs, the `MANAGED_CLUSTER_NAME=<cluster name>` env var should be
passed to tell the test which cluster name to use so that it does not try to
generate a new name and deploy a new cluster.
Tests that run locally use autogenerated names like `12345678-e2e-test`, while
tests that run in CI use names such as `ci-1234567890-e2e-test`. You can always
pass `MANAGED_CLUSTER_NAME=` from the start to customize the name used by the
test.
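For example, a local debugging loop might look like the sketch below; the
cluster name and registry values are illustrative and should match what your
first run produced or was given:

```
# First run: deploy the infrastructure and skip the After cleanup nodes.
NO_CLEANUP=1 \
IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" \
make test-e2e

# Subsequent runs: reuse the existing cluster instead of deploying a new one.
NO_CLEANUP=1 MANAGED_CLUSTER_NAME=12345678-e2e-test \
IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" \
make test-e2e
```
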
19 changes: 13 additions & 6 deletions test/e2e/controller.go
@@ -14,16 +14,23 @@ const (
hmcControllerLabel = "app.kubernetes.io/name=hmc"
)

func verifyControllersUp(kc *kubeclient.KubeClient) error {
// verifyControllersUp validates that controllers for the given providers list
// are running and ready. Optionally specify providers to check for rather than
// waiting for all providers to be ready.
func verifyControllersUp(kc *kubeclient.KubeClient, providers ...managedcluster.ProviderType) error {
if err := validateController(kc, hmcControllerLabel, "hmc-controller-manager"); err != nil {
return err
}

for _, provider := range []managedcluster.ProviderType{
managedcluster.ProviderCAPI,
managedcluster.ProviderAWS,
managedcluster.ProviderAzure,
} {
if providers == nil {
providers = []managedcluster.ProviderType{
managedcluster.ProviderCAPI,
managedcluster.ProviderAWS,
managedcluster.ProviderAzure,
}
}

for _, provider := range providers {
// Ensure only one controller pod is running.
if err := validateController(kc, managedcluster.GetProviderLabel(provider), string(provider)); err != nil {
return err
9 changes: 7 additions & 2 deletions test/e2e/e2e_test.go
@@ -22,6 +22,7 @@ import (
"os"
"os/exec"
"path/filepath"
"strings"
"time"

. "github.com/onsi/ginkgo/v2"
@@ -157,7 +158,7 @@ var _ = Describe("controller", Ordered, func() {

templateBy(managedcluster.TemplateAWSHostedCP, "validating that the controller is ready")
Eventually(func() error {
err := verifyControllersUp(standaloneClient)
err := verifyControllersUp(standaloneClient, managedcluster.ProviderCAPI, managedcluster.ProviderAWS)
if err != nil {
_, _ = fmt.Fprintf(
GinkgoWriter, "[%s] controller validation failed: %v\n",
@@ -233,7 +234,7 @@ func collectLogArtifacts(kc *kubeclient.KubeClient, clusterName string, provider
if err != nil {
utils.WarnError(fmt.Errorf("failed to parse host from kubeconfig: %w", err))
} else {
host = hostURL.Host
host = strings.ReplaceAll(hostURL.Host, ":", "_")
}

for _, providerType := range providerTypes {
@@ -287,5 +288,9 @@

func noCleanup() bool {
noCleanup := os.Getenv(managedcluster.EnvVarNoCleanup)
if noCleanup != "" {
By(fmt.Sprintf("skipping After nodes as %s is set", managedcluster.EnvVarNoCleanup))
}

return noCleanup != ""
}
