Flaky unit test - fails to update DRCluster manifest work #936

Merged 1 commit on Jun 23, 2023
20 changes: 13 additions & 7 deletions controllers/drcluster_controller_test.go
@@ -22,6 +22,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
+	"k8s.io/client-go/util/retry"
ctrl "sigs.k8s.io/controller-runtime"
)

@@ -220,17 +221,22 @@ func updateDRClusterManifestWorkStatus(clusterNamespace string) {
},
}

-	mw.Status = DRClusterStatusConditions
+	retryErr := retry.RetryOnConflict(retry.DefaultBackoff, func() error {
+		var err error
+
+		err = apiReader.Get(context.TODO(), manifestLookupKey, mw)
+		if err != nil {
+			return err
+		}

-	err := k8sClient.Status().Update(context.TODO(), mw)
-	if err != nil {
-		// try again
-		Expect(apiReader.Get(context.TODO(), manifestLookupKey, mw)).NotTo(HaveOccurred())
		mw.Status = DRClusterStatusConditions

		err = k8sClient.Status().Update(context.TODO(), mw)
-	}
+
+		return err
+	})

-	Expect(err).NotTo(HaveOccurred())
+	Expect(retryErr).NotTo(HaveOccurred())
}

var _ = Describe("DRClusterController", func() {
24 changes: 21 additions & 3 deletions controllers/util/mw_util.go
@@ -20,6 +20,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
+	"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"

dto "github.com/prometheus/client_model/go"
@@ -502,11 +503,28 @@ func (mwu *MWUtil) createOrUpdateManifestWork(
}

	if !reflect.DeepEqual(foundMW.Spec, mw.Spec) {
-		mw.Spec.DeepCopyInto(&foundMW.Spec)
-
		mwu.Log.Info("ManifestWork exists.", "name", mw.Name, "namespace", foundMW.Namespace)

-		return mwu.Client.Update(mwu.Ctx, foundMW)
+		retryErr := retry.RetryOnConflict(retry.DefaultBackoff, func() error {
+			var err error
+
+			err = mwu.Client.Get(mwu.Ctx,
+				types.NamespacedName{Name: mw.Name, Namespace: managedClusternamespace},
+				foundMW)
+			if err != nil {
Member

What is the purpose of the Get operation here?

Also, if the error we got was a conflict error, should we not retry the update after getting it?

Member Author

Inside the RetryOnConflict function we need to do three steps:

  1. Reload the manifest work resource.
  2. Update its status (deep copy).
  3. Update the manifest work resource.

The Get call cannot hit a conflict, so if it fails there is no point in proceeding: it is a real error. The Update call's return value is inspected by RetryOnConflict itself, which decides whether to exit or retry.
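The three steps above can be sketched without client-go. This is a minimal, stdlib-only illustration of the semantics of `retry.RetryOnConflict` as described in this thread; `retryOnConflict`, `errConflict`, and `maxSteps` are hypothetical stand-ins, not the real client-go API, which takes a `wait.Backoff` and inspects Kubernetes status errors.

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for a Kubernetes 409 Conflict error.
var errConflict = errors.New("conflict: object was modified")

// retryOnConflict mimics the shape of client-go's retry.RetryOnConflict:
// it re-runs fn only while fn reports a conflict, up to maxSteps attempts.
// Any other error (e.g. a failed Get) stops the loop immediately.
func retryOnConflict(maxSteps int, fn func() error) error {
	var err error
	for i := 0; i < maxSteps; i++ {
		err = fn()
		if !errors.Is(err, errConflict) {
			return err // success, or a real (non-conflict) error
		}
	}
	return err
}

func main() {
	attempts := 0
	err := retryOnConflict(5, func() error {
		attempts++
		// Step 1: reload the resource (a real caller would Get here).
		// Step 2: apply the desired change to the fresh copy.
		// Step 3: write it back; the first two writes race and conflict.
		if attempts < 3 {
			return errConflict
		}
		return nil
	})
	fmt.Println(attempts, err) // prints: 3 <nil>
}
```

Note how a non-conflict error returned from inside the closure exits the loop at once, which is why the author says a failed Get is "a real error" with no need to proceed.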

Member

In my opinion, implementing this pattern is a good fit for the ManifestWork, because it can be modified by an external entity at runtime and we have no control over it.

+				return err
+			}
+
+			mw.Spec.DeepCopyInto(&foundMW.Spec)
+
+			err = mwu.Client.Update(mwu.Ctx, foundMW)
+
+			return err
+		})
+
+		if retryErr != nil {
+			return retryErr
+		}
	}

return nil