
feat: redis-restore-by-keys #8129

Merged

Conversation

Chiwency (Contributor)

Resolves #8128

Support Redis restore by multiple key patterns.

We can perform the key restoration with:

kbcli cluster restore --backup "backup_name" --restore-keys  "a*,b*"

"a*" and "b*" represent the patterns of keys we want to restore, and split by comma.
for the details of the key pattern, refer to Redis/KEYS
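For reference, "a*,b*" restores every key that matches either pattern; the syntax is the glob style accepted by Redis KEYS/SCAN. A few illustrative examples (the key names are made up, not taken from this PR):

```bash
# Glob-style patterns as understood by Redis KEYS / SCAN MATCH:
redis-cli KEYS 'a*'        # every key starting with "a", e.g. "app:1", "auth"
redis-cli KEYS 'user:?'    # "user:1", "user:2", ... (single-character wildcard)
redis-cli KEYS 'h[ae]llo'  # "hello" and "hallo"
```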

Implementation

  1. We inject the key patterns into the restore pod as the environment variable DP_RESTORE_KEY_PATTERNS.
  2. The prepareData job pulls the full backup data into a separate directory.
  3. In the postReady job, we start a local Redis instance from that full data, SCAN keys by pattern, and then MIGRATE the selected keys to the target Redis instance (see the sketch below).
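To make step 3 concrete, here is a rough shell sketch of the postReady phase under this design. It is not the actual restore script from this PR; the port, host, and variable names other than DP_RESTORE_KEY_PATTERNS are placeholders:

```bash
#!/bin/bash
# Sketch only: scan the local Redis (started from the full backup) for keys
# matching each user-supplied pattern and move them to the target instance.
set -euo pipefail

LOCAL_PORT=6399              # placeholder: local Redis loaded from the full data
TARGET_HOST="target-redis"   # placeholder: address of the Redis being restored into
TARGET_PORT=6379

# DP_RESTORE_KEY_PATTERNS is the injected env var, e.g. "a*,b*"
IFS=',' read -ra PATTERNS <<< "${DP_RESTORE_KEY_PATTERNS:-*}"

for pattern in "${PATTERNS[@]}"; do
  # "redis-cli --scan" iterates with SCAN, so it does not block the server like KEYS would.
  redis-cli -p "$LOCAL_PORT" --scan --pattern "$pattern" | while read -r key; do
    # MIGRATE transfers the key (value and TTL) to the target instance, db 0, 5s timeout.
    redis-cli -p "$LOCAL_PORT" MIGRATE "$TARGET_HOST" "$TARGET_PORT" "$key" 0 5000 REPLACE
  done
done
```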

github-actions bot added the size/XL label (denotes a PR that changes 500-999 lines) on Sep 11, 2024
apecloud-bot added the pre-approve (Fork PR Pre Approve Test) label on Sep 11, 2024
@@ -360,6 +363,9 @@ func (r *RestoreManager) initFromAnnotation(synthesizedComponent *component.Synt
if doReadyRestoreAfterClusterRunning == "true" {
	r.doReadyRestoreAfterClusterRunning = true
}
if env := backupSource[constant.EnvForRestore]; env != "" {
	json.Unmarshal([]byte(env), &r.env)
Contributor:

Return the error when unmarshal fails. It should rarely happen.

Contributor Author (Chiwency):

If an error occurs, r.env will remain empty; do we still need to handle this error?

Contributor:

If the env cannot be deserialized, we may not be able to restore the data as expected by the user (e.g., ignoring user-specified restore keys). The result of the restoration is unpredictable, so it is reasonable to fail the restoration directly.

Contributor Author (Chiwency):

Got it.
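For context, here is a minimal sketch of the change being requested: propagate the json.Unmarshal error instead of discarding it. This assumes initFromAnnotation can return an error; the actual fix in the PR may differ in detail:

```go
if env := backupSource[constant.EnvForRestore]; env != "" {
	if err := json.Unmarshal([]byte(env), &r.env); err != nil {
		// Failing fast is safer than restoring with an empty env,
		// which could silently ignore user-specified restore keys.
		return err
	}
}
```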


zjx20 commented Sep 11, 2024

Please run make manifests and then commit the changes to pass the CI check:
https://github.com/apecloud/kubeblocks/actions/runs/10807254887/job/29977670789?pr=8129

apecloud-bot added and removed the pre-approve (Fork PR Pre Approve Test) label on Sep 11, 2024
Chiwency closed this on Sep 11, 2024
Chiwency reopened this on Sep 11, 2024
github-actions bot added this to the Release 0.9.2 milestone on Sep 11, 2024

zjx20 commented Sep 11, 2024

@wangyelei @ldming Please take a look.

wangyelei merged commit a3d0a32 into apecloud:release-0.9 on Sep 19, 2024 (22 checks passed)

zjx20 commented Sep 19, 2024

@Chiwency Thanks for the contribution! Please open another PR to merge the changes into the main branch.

Chiwency added a commit to Chiwency/kubeblocks that referenced this pull request Sep 19, 2024