Backport PR #2702 to release/v1.7 for feat: Implement delete expired index job #2722

Merged

Conversation

@vdaas-ci (Collaborator) commented Oct 26, 2024

Description

Delete expired index from Agent using k8s Job.
The user sets the index_id in config and deletes the index.

This feature is currently being implemented.

Related Issue

close: #2647

Versions

  • Vald Version: v1.7.13
  • Go Version: v1.23.2
  • Rust Version: v1.81.0
  • Docker Version: v27.3.1
  • Kubernetes Version: v1.31.1
  • Helm Version: v3.16.2
  • NGT Version: v2.2.4
  • Faiss Version: v1.9.0

Checklist

Special notes for your reviewer

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a new index deletion job and associated configurations, including a Kubernetes CronJob and ConfigMap for managing deletion operations.
    • Added new GitHub Actions workflows for building and scanning Docker images related to index deletion.
  • Improvements

    • Enhanced spell-checking configuration by ignoring specific paths.
    • Improved Docker image build process with new targets and configurations.
    • Added comprehensive configuration details for the index deletion service, including logging, server settings, and observability options.
  • Bug Fixes

    • Minor formatting corrections in workflow files for consistency.

* feat: Implement delete expired index job

* fix: Replace agent Remove RPC to Vald Remove RPC

* fix: service name

* fix: typo

* fix: log method

* fix: variable name

* fix: use internal package

* fix: Change struct field name

cloudflare-workers-and-pages bot commented Oct 26, 2024

Deploying vald with Cloudflare Pages

Latest commit: bb8e3b9
Status: ✅  Deploy successful!
Preview URL: https://94ed9aec.vald.pages.dev
Branch Preview URL: https://backport-release-v1-7-feat-i.vald.pages.dev

View logs


coderabbitai bot commented Oct 26, 2024

📝 Walkthrough

The pull request introduces several changes focused on enhancing the index deletion functionality within the Vald system. Key modifications include the addition of new GitHub Actions workflows for managing Docker images related to index deletion, the implementation of a Kubernetes CronJob for periodic index deletion, and the creation of configuration files and service structures to support the deletion process. Additionally, updates to spell-check configurations and Makefile entries are included to accommodate these new features.

Changes

File Path | Change Summary
.cspell.json | Added path to ignorePaths: "**/cmd/index/job/deletion/index-deletion"
.github/workflows/dockers-index-deletion-image.yaml | New workflow for building Docker image related to index deletion.
.github/workflows/dockers-image-scan.yaml | Added job index-deletion for scanning Docker images.
.github/workflows/dockers-index-deletion.yaml | New workflow for building Docker image with job build for index deletion.
.github/workflows/dockers-release-branch-image.yaml | Added job index-deletion dependent on dump-contexts-to-log.
Makefile | Added variable INDEX_DELETION_IMAGE = $(NAME)-index-deletion.
Makefile.d/build.mk | Added target cmd/index/job/deletion/index-deletion and artifact artifacts/vald-index-deletion-$(GOOS)-$(GOARCH).zip.
Makefile.d/docker.mk | Added phony targets docker/name/index-deletion and docker/build/index-deletion.
cmd/index/job/deletion/main.go | New file implementing the main entry point for the index deletion job.
cmd/index/job/deletion/sample.yaml | New YAML configuration file for index deletion job settings.
dockers/index/job/deletion/Dockerfile | New multi-stage Dockerfile for the index-deletion application.
hack/cspell/main.go | Added entry to mandatoryIgnorePaths: "**/cmd/index/job/creation/index-deletion".
internal/config/index_deleter.go | New file defining the IndexDeleter struct for deletion configurations.
k8s/index/job/deletion/configmap.yaml | New ConfigMap vald-index-deletion-config for managing deletion job settings in Kubernetes.
k8s/index/job/deletion/cronjob.yaml | New CronJob vald-index-deletion for periodic index deletion.
pkg/index/job/deletion/config/config.go | New configuration management structure for the deletion job.
pkg/index/job/deletion/service/deleter.go | New file defining the Deleter interface and its implementation for index deletion.
pkg/index/job/deletion/service/options.go | New file with functional options for configuring the index deletion service.
pkg/index/job/deletion/usecase/deletion.go | New file managing the deletion job lifecycle with methods for start and stop operations.

Assessment against linked issues

Objective Addressed Explanation
Allow old indexes to be deleted via a Kubernetes Job (2647)
Create a Kubernetes Job to execute the deletion RPC (2647)

Possibly related PRs

Suggested labels

priority/low, type/ci, actions/e2e-deploy, actions/fossa, actions/e2e-chaos, actions/e2e-profiling, actions/e2e-max-dim, actions/backport/release/v1.7

Suggested reviewers

  • kpango
  • datelier


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 24

🧹 Outside diff range and nitpick comments (17)
pkg/index/job/deletion/service/options.go (1)

1-48: Consider implementing validation helper package.

Given the similar validation patterns needed across options, consider creating a shared validation package that can be reused across the codebase. This would ensure consistent validation rules and error messages.

Additionally, consider adding:

  1. Constants for default values
  2. Package-level documentation explaining the overall purpose and usage
  3. Examples in documentation showing common usage patterns
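Building on the points above, a rough sketch of what such a shared validation helper could look like. Every name below (the package, its constants, and its functions) is hypothetical and does not exist in the Vald codebase; it only illustrates the kind of reusable checks the options could call.

// Package validation is a hypothetical shared helper; every name below is
// illustrative only and does not exist in the Vald codebase.
package validation

import "fmt"

// DefaultConcurrency is an example of a package-level default value.
const DefaultConcurrency = 1

// PositiveInt rejects non-positive values for the named option.
func PositiveInt(name string, v int) error {
	if v <= 0 {
		return fmt.Errorf("invalid option %q: %d must be greater than zero", name, v)
	}
	return nil
}

// NonEmpty rejects empty strings for the named option.
func NonEmpty(name, s string) error {
	if s == "" {
		return fmt.Errorf("invalid option %q: value must not be empty", name)
	}
	return nil
}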
cmd/index/job/deletion/main.go (1)

34-59: Well-structured implementation aligned with PR objectives.

The implementation provides a solid foundation for the index deletion job with proper separation of concerns:

  • Configuration management via config.NewConfig
  • Business logic isolation in usecase.New
  • Error handling and recovery mechanisms

Consider documenting the following aspects in the PR description or README:

  1. Expected configuration format and options
  2. Kubernetes Job/CronJob scheduling recommendations
  3. Monitoring and alerting considerations for failed deletions
pkg/index/job/deletion/config/config.go (2)

25-37: Consider enhancing struct documentation.

The Data struct could benefit from more detailed documentation:

  1. Add examples of when/how this configuration is used
  2. Document the relationship between these configuration components
  3. Add validation rules or constraints for each field

Example enhancement:

// Data represents the configuration for the index deletion job.
// It combines global application settings, server configurations,
// observability settings, and deletion-specific configurations
// required to run the index deletion service.

24-37: Consider adding scheduling configuration for periodic deletion.

Based on the PR objectives mentioning periodic execution, consider adding configuration fields for:

  1. Schedule parameters (cron expression)
  2. One-time vs periodic execution mode
  3. Retry policies and timeout settings

This would align better with the requirement to support both periodic and one-time deletion tasks.
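A rough sketch of the kind of fields this could add. The struct and field names below are hypothetical, not part of the current configuration, and only illustrate the settings being asked for:

// DeletionSchedule illustrates the kind of scheduling fields the comment
// above asks for; none of these names exist in the current configuration.
type DeletionSchedule struct {
	// Schedule is a cron expression evaluated by the CronJob controller.
	Schedule string `json:"schedule" yaml:"schedule"`
	// OneShot runs the deletion exactly once instead of periodically.
	OneShot bool `json:"one_shot" yaml:"one_shot"`
	// RetryLimit bounds how many times a failed run is retried.
	RetryLimit int `json:"retry_limit" yaml:"retry_limit"`
	// Timeout bounds the duration of a single run, e.g. "30m".
	Timeout string `json:"timeout" yaml:"timeout"`
}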

internal/config/index_deleter.go (2)

39-40: Fix incorrect field comment.

The comment for DeletionPoolSize incorrectly mentions "indexing" instead of "deletion".

-	// DeletionPoolSize represents batch pool size for indexing.
+	// DeletionPoolSize represents batch pool size for deletion operations.

17-47: Consider adding expiration-related configuration fields.

To better support the PR's objective of managing expired indexes, consider adding fields for:

  1. Expiration criteria (e.g., age-based, usage-based)
  2. Schedule configuration for periodic deletion
  3. Dry-run mode for safety

Example fields:

type IndexDeleter struct {
    // ... existing fields ...
    
    // ExpirationAge defines how old (in hours) an index must be to be considered expired
    ExpirationAge int64 `json:"expiration_age" yaml:"expiration_age"`
    
    // LastAccessThreshold defines the threshold (in hours) of last access time for considering an index expired
    LastAccessThreshold int64 `json:"last_access_threshold" yaml:"last_access_threshold"`
    
    // DryRun if true, only logs what would be deleted without actually deleting
    DryRun bool `json:"dry_run" yaml:"dry_run"`
}
.github/workflows/dockers-index-deletion.yaml (1)

76-80: Consider pinning the reusable workflow version.

While the current configuration is correct, consider pinning the reusable workflow to a specific SHA for better stability and predictability.

  build:
-   uses: ./.github/workflows/_docker-image.yaml
+   uses: ./.github/workflows/_docker-image.yaml@v1.7.0
    with:
      target: index-deletion
    secrets: inherit
cmd/index/job/deletion/sample.yaml (1)

16-17: Consider making version and timezone configurable

  • The version v0.0.0 appears to be a placeholder. Consider using a templated value that matches the actual release version.
  • Hardcoding timezone to JST might cause issues for deployments in different regions. Consider making it configurable through environment variables.
-version: v0.0.0
-time_zone: JST
+version: ${VALD_VERSION}
+time_zone: ${TIMEZONE:-UTC}
Makefile.d/docker.mk (1)

352-355: Consider adding documentation for the new image.

While the implementation is correct, consider adding documentation about the index-deletion image's purpose and configuration in the project's documentation, similar to other index-related jobs.

k8s/index/job/deletion/configmap.yaml (2)

32-35: Consider using INFO level logging for production.

Debug level logging can impact performance and generate excessive logs in production environments.

   logging:
     format: raw
-    level: debug
+    level: info
     logger: glg

202-207: Remove TLS paths when TLS is disabled.

Since TLS is disabled (enabled: false), having paths configured could be misleading for maintenance.

   tls:
-    ca: /path/to/ca
-    cert: /path/to/cert
     enabled: false
     insecure_skip_verify: false
-    key: /path/to/key
dockers/index/job/deletion/Dockerfile (1)

19-19: Remove unused build argument UPX_OPTIONS

The build argument UPX_OPTIONS is declared but not utilized in the Dockerfile. Removing unused arguments helps keep the Dockerfile clean and maintainable.

Apply this diff to remove the unused argument:

-ARG UPX_OPTIONS=-9
pkg/index/job/deletion/usecase/deletion.go (3)

73-79: Add a comment explaining the reversal of addrs in OnDiscoverFunc.

The purpose of reversing the addrs slice in the OnDiscoverFunc is not immediately clear. Adding a comment to explain why the addresses are reversed would improve code readability and maintainability.
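For illustration, the existing loop with such a comment could read as follows. The stated rationale is only an assumption that the author should confirm; the loop body itself is unchanged from the code under review:

discoverer.WithOnDiscoverFunc(func(_ context.Context, _ discoverer.Client, addrs []string) error {
	// Reverse the discovered agent addresses in place so that processing
	// starts from the tail of the discoverer's default ordering.
	// NOTE: the actual motivation for this ordering should be confirmed
	// and documented by the author; this comment is only a placeholder.
	last := len(addrs) - 1
	for i := 0; i < len(addrs)/2; i++ {
		addrs[i], addrs[last-i] = addrs[last-i], addrs[i]
	}
	return nil
}),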


139-189: Handle potential nil channels in the select statement.

In the Start method, if r.observability is nil, the oech channel remains nil. Including a nil channel in a select statement can potentially lead to blocking behavior. While Go's select ignores cases with nil channels, explicitly handling this can improve code clarity.

Consider modifying the select statement:

 for {
 	select {
 	case <-ctx.Done():
 		return ctx.Err()
-	case err = <-oech:
+	case err = <-oech:
+		if oech != nil {
+			err = <-oech
+		}
 	case err = <-sech:
 	case err = <-cech:
 	}

Alternatively, initialize oech with a closed channel when r.observability is nil to avoid the nil channel issue.
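For reference, a select case whose channel is nil simply never fires, so leaving oech nil is already safe. A small standalone Go example of that language behavior (not project code):

package main

import (
	"fmt"
	"time"
)

func main() {
	var oech chan error // nil: this case can never be selected
	sech := make(chan error, 1)
	sech <- fmt.Errorf("server stopped")

	select {
	case err := <-oech:
		// Unreachable: receiving from a nil channel blocks forever.
		fmt.Println("observability:", err)
	case err := <-sech:
		fmt.Println("server:", err)
	case <-time.After(time.Second):
		fmt.Println("timeout")
	}
}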


193-214: Remove unnecessary methods PreStop and PostStop if not used.

The methods PreStop and PostStop currently do nothing and may not be necessary. If they are not required by the runner.Runner interface, consider removing them to simplify the code.

If these methods are required by the interface but intentionally left empty, consider adding comments to explain this.

// PreStop is called before Stop. Currently, no pre-stop actions are required.
func (*run) PreStop(_ context.Context) error {
	return nil
}

// PostStop is called after Stop. Currently, no post-stop actions are required.
func (*run) PostStop(_ context.Context) error {
	return nil
}
pkg/index/job/deletion/service/deleter.go (2)

63-69: Improve variable naming for clarity in delDuplicateAddrs.

The variable exist used to track addresses can be renamed to seen for better readability and to align with Go naming conventions. This makes the purpose of the variable clearer to other developers.

Consider renaming exist to seen:

func delDuplicateAddrs(targetAddrs []string) []string {
    addrs := make([]string, 0, len(targetAddrs))
-   exist := make(map[string]bool)
+   seen := make(map[string]bool)

    for _, addr := range targetAddrs {
-       if !exist[addr] {
+       if !seen[addr] {
            addrs = append(addrs, addr)
-           exist[addr] = true
+           seen[addr] = true
        }
    }
    return addrs
}

79-85: Simplify the deferred span ending in Start method.

Since span is initialized just before the defer statement and is guaranteed to be non-nil, the nil check is unnecessary. Removing it simplifies the code.

Simplify the deferred function:

func (idx *index) Start(ctx context.Context) error {
    ctx, span := trace.StartSpan(ctx, apiName+"/service/index.Delete")
    defer func() {
-       if span != nil {
-           span.End()
-       }
+       span.End()
    }()
    // ...
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed between the base of the PR (1dd771f) and bb8e3b9.

⛔ Files ignored due to path filters (2)
  • hack/actions/gen/main.go is excluded by !**/gen/**
  • hack/docker/gen/main.go is excluded by !**/gen/**
📒 Files selected for processing (21)
  • .cspell.json (1 hunks)
  • .gitfiles (1 hunks)
  • .github/actions/detect-docker-image-tags/action.yaml (1 hunks)
  • .github/workflows/dockers-image-scan.yaml (1 hunks)
  • .github/workflows/dockers-index-deletion-image.yaml (1 hunks)
  • .github/workflows/dockers-index-deletion.yaml (1 hunks)
  • .github/workflows/dockers-release-branch-image.yaml (1 hunks)
  • Makefile (1 hunks)
  • Makefile.d/build.mk (4 hunks)
  • Makefile.d/docker.mk (3 hunks)
  • cmd/index/job/deletion/main.go (1 hunks)
  • cmd/index/job/deletion/sample.yaml (1 hunks)
  • dockers/index/job/deletion/Dockerfile (1 hunks)
  • hack/cspell/main.go (1 hunks)
  • internal/config/index_deleter.go (1 hunks)
  • k8s/index/job/deletion/configmap.yaml (1 hunks)
  • k8s/index/job/deletion/cronjob.yaml (1 hunks)
  • pkg/index/job/deletion/config/config.go (1 hunks)
  • pkg/index/job/deletion/service/deleter.go (1 hunks)
  • pkg/index/job/deletion/service/options.go (1 hunks)
  • pkg/index/job/deletion/usecase/deletion.go (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • .cspell.json
  • hack/cspell/main.go
🔇 Additional comments (29)
pkg/index/job/deletion/service/options.go (3)

1-7: LGTM! Package structure and imports are well-organized.


10-19: LGTM! Robust error handling for discoverer client.

The implementation properly validates the client and uses appropriate error types.


8-9: Verify the index type definition.

The index type used in the Option function signature is not visible in this file. Let's verify its definition.

✅ Verification successful

Since the file under review is pkg/index/job/deletion/service/options.go, and we found the corresponding index type definition in pkg/index/job/deletion/service/deleter.go within the same package, I can now generate the final response:

The index type is properly defined within the same package

The index type referenced in the Option function is defined in deleter.go within the same package. It's a struct containing client configuration, target addresses, and concurrency settings, which is appropriate for the deletion service context.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for index type definition in the package
ast-grep --pattern 'type index struct {
  $$$
}'

Length of output: 3094

cmd/index/job/deletion/main.go (3)

14-26: LGTM! Well-organized imports.

The package structure and imports are clean and include all necessary dependencies for the index deletion job implementation.


40-58: LGTM! Robust error handling implementation.

The implementation includes:

  • Proper error wrapping with context
  • Type assertion checks
  • Appropriate fatal logging for unrecoverable errors

28-32: Verify version compatibility with the backport.

The maxVersion is set to "v0.0.10" which seems unusually low for a backport to release/v1.7. Please verify if this version range is appropriate.

✅ Verification successful

Based on the gathered information, I can now provide a final response about the version compatibility concern:

The version range is appropriate for this component.

The maxVersion = "v0.0.10" in the index deletion job is correct because:

  1. From internal/info/info.go, we can see that this is a standalone component with its own versioning scheme, separate from the main Vald version.

  2. The version check is handled by internal/version/version.go which uses semantic versioning to ensure version compatibility between components.

  3. The version range (v0.0.0 to v0.0.10) is intentionally kept at 0.0.x level to indicate this is an internal component that follows its own versioning scheme independent of the main Vald release version.

  4. The version constraints are properly validated using hashicorp/go-version library to ensure semantic version compatibility.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check version compatibility with the target branch
# Test: Get the version from the target branch. Expect: Version should be compatible with maxVersion

# Get the current branch version
git branch --show-current

# Check version tags in the repository
git tag | grep -E "v1\.7\."

Length of output: 73


Script:

#!/bin/bash
# Let's try a different approach to understand the version context

# Check the repository's version information in common locations
echo "=== Checking version in go.mod ==="
cat go.mod

echo -e "\n=== Checking README for version info ==="
cat README.md

echo -e "\n=== Looking for version-related constants in the codebase ==="
rg "const.*version" -i --type go -A 3

echo -e "\n=== Checking other files in the same directory for version context ==="
fd . "cmd/index/job/deletion" --type f --exec cat {}

Length of output: 70806


Script:

#!/bin/bash
# Let's check the version information in the package and configuration

# Check version information in pkg/index/job/deletion
echo "=== Checking version info in pkg/index/job/deletion ==="
fd . "pkg/index/job/deletion" --type f --exec rg "version" -i {}

# Check version information in internal/info
echo -e "\n=== Checking version info in internal/info ==="
fd . "internal/info" --type f --exec cat {}

# Check version information in internal/version
echo -e "\n=== Checking version info in internal/version ==="
fd . "internal/version" --type f --exec cat {}

Length of output: 53959

pkg/index/job/deletion/config/config.go (2)

1-20: LGTM! Clean package structure and imports.

The package declaration and imports are well-organized, following Go best practices with appropriate copyright headers.


39-71: 🛠️ Refactor suggestion

Improve error handling and add configuration validation.

Several improvements could enhance the robustness of the configuration loading:

  1. The Bind() errors are currently discarded. Consider logging or handling these errors.
  2. Add validation for actual configuration values before returning.
  3. Consider sanitizing the file path input for security.

Let's verify if there are any existing validation patterns in the codebase:

Consider applying this enhancement:

 func NewConfig(path string) (cfg *Data, err error) {
+	if err := validatePath(path); err != nil {
+		return nil, err
+	}
 	cfg = new(Data)
 
 	if err = config.Read(path, &cfg); err != nil {
 		return nil, err
 	}
 
 	if cfg != nil {
-		_ = cfg.GlobalConfig.Bind()
+		if err := cfg.GlobalConfig.Bind(); err != nil {
+			return nil, fmt.Errorf("failed to bind global config: %w", err)
+		}
 	} else {
 		return nil, errors.ErrInvalidConfig
 	}
+
+	if err := cfg.validate(); err != nil {
+		return nil, err
+	}
 	return cfg, nil
 }
+
+func (d *Data) validate() error {
+	// Add validation logic here
+	return nil
+}
.github/workflows/dockers-index-deletion-image.yaml (3)

76-80: LGTM: Proper reusable workflow implementation

The job configuration correctly:

  • Uses a reusable workflow template
  • Specifies the target as 'index-deletion'
  • Inherits secrets from the calling context

32-74: Verify path patterns completeness

The path patterns appear comprehensive but should be verified against the actual index deletion feature requirements.

Let's verify the path patterns:

#!/bin/bash
# Description: Verify if all relevant paths for the index deletion feature are included
# Expected: No additional critical paths found

# Check for potential missing paths related to index deletion
echo "Searching for files related to index deletion..."
fd -t f . | rg -i "index.*deletion" | rg -v -f <(rg '^    - ' .github/workflows/dockers-index-deletion-image.yaml | sed 's/    - //')

# Check for potential configuration files
echo "Checking for config files..."
fd -t f "config.*yaml" | rg -i "index.*deletion"

31-74: Security consideration: Validate PR trigger configuration

The workflow is triggered on both pull_request and pull_request_target events with identical path filters. The pull_request_target event runs in the context of the base repository and has access to secrets, which could pose a security risk if not properly configured.

Let's verify the security implications:

Consider:

  1. Using only pull_request event if secrets access isn't required
  2. If pull_request_target is necessary, implement additional security checks in the shared workflow
✅ Verification successful

I need to examine the shared workflow and action files to understand how they handle secrets and PR triggers before making a final determination about the security implications.


PR trigger configuration is secure with proper controls

The workflow implementation is secure based on the following evidence:

  1. The shared workflow (_docker-image.yaml) has explicit security controls:

    • For forked repositories, pull_request_target only runs if the PR has the ci/approved label
    • For non-forked repositories, it uses regular pull_request event
    • The build job has a strict conditional check combining these rules:
    (github.event.pull_request.head.repo.fork == true &&
     github.event_name == 'pull_request_target' &&
     contains(github.event.pull_request.labels.*.name, 'ci/approved'))
  2. Additional security measures:

    • Docker builds from forks are limited to linux/amd64 platform only
    • Extra tags are not added for PR builds
    • Secrets are only used in the build job which has the security conditionals
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if other Docker image workflows follow the same pattern
# Description: Looking for potential security precedents or inconsistencies
# Expected: Similar dual trigger patterns in other Docker workflows

echo "Checking other Docker workflow files for PR trigger patterns..."
rg -l 'pull_request_target:' .github/workflows/dockers-*-image.yaml

echo "Checking for secret usage in the shared workflow..."
rg 'secrets:' .github/workflows/_docker-image.yaml

Length of output: 461


Script:

#!/bin/bash
# Let's check the shared workflow file to understand how it handles secrets and PR triggers
echo "Content of shared workflow file:"
cat .github/workflows/_docker-image.yaml

# Also check the docker-build action for additional context
echo -e "\nContent of docker-build action:"
cat .github/actions/docker-build/action.yaml

# Check if there are any explicit security checks for PR_target
echo -e "\nChecking for PR security measures:"
rg -A 5 'if: github.event_name == .pull_request_target.' .github/workflows/

Length of output: 10205

.github/workflows/dockers-image-scan.yaml (1)

73-76: LGTM! Good security practice including the new image in scanning workflow.

The addition of the index-deletion job follows the established pattern and ensures the new Docker image will undergo the same security scanning as other images. This is crucial for maintaining the security posture of the system.

.github/workflows/dockers-release-branch-image.yaml (1)

91-96: LGTM! Well-integrated job definition.

The new index-deletion job follows the established pattern and is properly integrated with the existing workflow structure. It correctly depends on the dump-contexts-to-log job and uses the shared workflow file, which maintains consistency with other jobs in the pipeline.

k8s/index/job/deletion/cronjob.yaml (2)

54-77: ⚠️ Potential issue

Improve init containers reliability and configurability

The init containers have several areas for improvement:

  1. Hardcoded namespace default.svc.cluster.local limits deployment flexibility
  2. Infinite loop without timeout could cause stuck pods
  3. Consider using kubernetes.io/startup-probe instead of custom HTTP checks

Suggested improvements:

            - name: wait-for-agent
              image: busybox:stable
              imagePullPolicy: Always
              command:
                - /bin/sh
                - -e
                - -c
                - |
+                 timeout=300
+                 start_time=$(date +%s)
                  until [ "$(wget --server-response --spider --quiet \
-                   http://vald-agent.default.svc.cluster.local:3001/readiness \
+                   http://vald-agent.${MY_POD_NAMESPACE}.svc.cluster.local:3001/readiness \
                    2>&1 | awk 'NR==1{print $2}')" == "200" ]; do
+                   current_time=$(date +%s)
+                   elapsed=$((current_time - start_time))
+                   if [ $elapsed -gt $timeout ]; then
+                     echo "Timeout waiting for agent"
+                     exit 1
+                   fi
                    echo "waiting for agent to be ready..."
                    sleep 2;
                  done

Apply similar changes to the discoverer init container.

#!/bin/bash
# Check if the namespace is referenced elsewhere in the codebase
rg -l "default[.]svc[.]cluster[.]local" 

142-146: Add ConfigMap validation

The job depends on the vald-index-deletion-config ConfigMap but there's no guarantee it exists when the job runs.

Consider:

  1. Adding documentation about the required ConfigMap
  2. Using Helm dependencies to ensure the ConfigMap is created first
  3. Adding validation in the init container to check for ConfigMap existence:
          initContainers:
+           - name: validate-config
+             image: busybox:stable
+             command:
+               - /bin/sh
+               - -c
+               - |
+                 if [ ! -f /etc/server/config.yaml ]; then
+                   echo "ConfigMap not mounted properly"
+                   exit 1
+                 fi
+             volumeMounts:
+               - name: vald-index-deletion-config
+                 mountPath: /etc/server/
cmd/index/job/deletion/sample.yaml (2)

200-230: Enable and configure observability for production readiness

  1. Observability is disabled by default (enabled: false), which might hinder monitoring in production.
  2. Pod and node attributes contain placeholder values that need to be replaced.

Let's verify the observability configuration in other components:

#!/bin/bash
# Check observability settings across configurations
rg -g '*.{yaml,yml}' 'observability:\s*\n\s*enabled:'

Recommendations:

  1. Enable observability by default for production environments
  2. Document the required environment variables for the placeholder values
  3. Consider adding alerts for critical metrics
observability:
-  enabled: false
+  enabled: true
   otlp:
     attribute:
-      namespace: "_MY_POD_NAMESPACE_"
-      pod_name: "_MY_POD_NAME_"
-      node_name: "_MY_NODE_NAME_"
+      namespace: ${POD_NAMESPACE}
+      pod_name: ${POD_NAME}
+      node_name: ${NODE_NAME}

71-79: Security and configuration concerns in deleter section

  1. Using the "default" namespace is not recommended for production deployments.
  2. The index_id is a critical field but lacks validation or documentation about its format.
  3. The concurrency setting might need documentation about its impact on performance and resource usage.

Let's verify the namespace usage in other configuration files:

Consider:

  • Using a dedicated namespace for the deletion job
  • Adding validation for the index_id format
  • Documenting the impact of concurrency settings on resource usage
Makefile.d/build.mk (2)

27-27: LGTM! Build configuration follows best practices.

The build configuration for the index deletion binary is correctly implemented:

  • Uses static linking (CGO_ENABLED=0)
  • Follows consistent pattern with other index jobs
  • Properly integrated into the build system

Also applies to: 84-87


136-136: LGTM! Zip artifact configuration is properly implemented.

The zip artifact configuration for the index deletion binary:

  • Follows the established pattern for artifact creation
  • Maintains consistency with other index job artifacts
  • Correctly handles directory creation and zip file generation

Also applies to: 196-199

Makefile.d/docker.mk (2)

39-39: LGTM! Consistent integration with existing build targets.

The index-deletion target is properly integrated into both the main build list and xpanes build list, maintaining alphabetical ordering.

Also applies to: 67-67


346-355: LGTM! Verify Dockerfile existence.

The new targets follow the established pattern and are properly defined. However, let's verify the existence of the referenced Dockerfile.

✅ Verification successful

Dockerfile exists at the expected location - good to go!

The new Makefile targets are properly defined and the referenced Dockerfile exists at the expected path dockers/index/job/deletion/Dockerfile, confirming that the implementation is complete and correct.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify the existence of the Dockerfile
if [ -f "dockers/index/job/deletion/Dockerfile" ]; then
    echo "✓ Dockerfile exists at dockers/index/job/deletion/Dockerfile"
else
    echo "✗ Missing Dockerfile at dockers/index/job/deletion/Dockerfile"
    exit 1
fi

Length of output: 181

k8s/index/job/deletion/configmap.yaml (3)

16-26: LGTM! Metadata follows Kubernetes best practices.

The ConfigMap metadata includes appropriate labels for Kubernetes and Helm management.


208-238: Consider enabling observability for monitoring deletion operations.

Since this job performs critical data deletion operations, having observability enabled would help in monitoring and debugging the deletion process.

   observability:
-    enabled: false
+    enabled: true
     otlp:
       collector_endpoint: ""
     trace:
-      enabled: false
+      enabled: true

Run this script to verify if OpenTelemetry collector is deployed in the cluster:

✅ Verification successful

Let me check the collector configuration and documentation to provide a more informed response.


Based on the documentation and configuration files, I can now provide the final response:

OpenTelemetry collector is available, enabling observability is recommended

The OpenTelemetry collector is properly configured in the cluster with support for both metrics and traces. The collector is set up as a DaemonSet to collect telemetry data from all nodes, and exports to:

  • Jaeger for distributed tracing
  • Prometheus for metrics visualization

To enable observability, update the configuration as suggested and set the collector endpoint to:

otlp:
  collector_endpoint: "opentelemetry-collector-collector.default.svc.cluster.local:4317"
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if OpenTelemetry collector exists in the cluster
# Expected: Find references to otel-collector deployment or service

rg -l 'otel.*collector|opentelemetry.*collector'

Length of output: 681


Script:

#!/bin/bash
# Check collector configuration and documentation
cat k8s/metrics/otel/collector.yaml
cat k8s/metrics/otel/README.md
cat docs/user-guides/observability-configuration.md

Length of output: 11653


239-248: ⚠️ Potential issue

Update sample index_id and consider increasing concurrency.

  1. The index_id is set to "sample" which needs to be updated with a real value or made configurable.
  2. Consider increasing concurrency for better performance, especially when dealing with multiple indexes.
   deleter:
-    index_id: "sample"
+    index_id: ${INDEX_ID}
-    concurrency: 1
+    concurrency: 5

Let's verify if this index_id is referenced elsewhere:

Makefile (1)

44-44: LGTM: Variable definition follows conventions.

The addition of INDEX_DELETION_IMAGE follows the established naming pattern and is correctly placed with other image-related variables.

Let's verify the variable usage in the codebase:

✅ Verification successful

Variable is properly defined and utilized in build system

The INDEX_DELETION_IMAGE variable is correctly used in docker.mk for:

  • Image name generation via echo command
  • Docker build target configuration

The implementation follows the established pattern used for other Docker images in the build system.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for references to the new index deletion image variable
# Test: Search for INDEX_DELETION_IMAGE usage in Makefile.d/docker.mk and other build files
rg "INDEX_DELETION_IMAGE" Makefile.d/

Length of output: 153


Script:

#!/bin/bash
# Let's check the context of how INDEX_DELETION_IMAGE is used in docker.mk
rg -A 3 "INDEX_DELETION_IMAGE" Makefile.d/docker.mk

Length of output: 252

.gitfiles (1)

99-99: LGTM: Addition of index deletion workflow file.

The addition of .github/workflows/dockers-index-deletion-image.yaml aligns with the PR objectives to implement index deletion functionality and follows the repository's established pattern for Docker image workflow files.

pkg/index/job/deletion/service/deleter.go (3)

197-202: Ensure consistency in error handling when skipping certain errors.

When the gRPC status code is FailedPrecondition or NotFound, the function logs a warning and returns nil, effectively skipping the error. Confirm that this behavior is intentional and that these errors should not be propagated.

If skipping these errors is intended, consider documenting this behavior to make it clear to future maintainers. If not, adjust the error handling to propagate these errors appropriately.
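If the skip is intentional, a hedged sketch of how the intent could be made explicit. This uses the standard google.golang.org/grpc status and codes packages rather than Vald's internal wrappers, and the function name, addr parameter, and logger are placeholders, not the actual implementation:

import (
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// handleRemoveError reports whether a per-agent Remove error should stop
// the job. Sketch only: names and logging are illustrative.
func handleRemoveError(addr string, err error) error {
	if st, ok := status.FromError(err); ok {
		switch st.Code() {
		case codes.FailedPrecondition, codes.NotFound:
			// The target index is already gone or not deletable on this
			// agent, so treat it as a no-op instead of failing the job.
			log.Printf("skipping non-fatal deletion error from %s: %v", addr, err)
			return nil
		}
	}
	return err
}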


215-218: Review error aggregation logic for completeness.

The doDeleteIndex method returns errors.Join(err, errs) to aggregate errors from different sources. Ensure that this aggregation correctly represents all encountered errors without omitting any critical information.

Confirm that errors.Join(err, errs) effectively combines the error from OrderedRangeConcurrent and the accumulated errs. If necessary, consider using an error type that preserves individual errors for detailed reporting.
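For reference, assuming the internal errors.Join behaves like the standard library's, every joined error remains inspectable afterwards. A small standalone check:

package main

import (
	"errors"
	"fmt"
)

func main() {
	errA := errors.New("agent-0: remove failed")
	errB := errors.New("agent-1: remove failed")

	// Join drops nil members and keeps the rest.
	joined := errors.Join(nil, errA, errB)

	fmt.Println(joined)                  // both messages, one per line
	fmt.Println(errors.Is(joined, errA)) // true
	fmt.Println(errors.Is(joined, errB)) // true
}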


154-214: ⚠️ Potential issue

Potential data race when accumulating errors concurrently.

In the doDeleteIndex method, the errs variable is accessed and modified by multiple goroutines. Although a mutex emu is used, ensure that all accesses to errs are properly synchronized to prevent data races.

Verify that there are no unsynchronized accesses to errs. Alternatively, consider using an errgroup.Group to manage concurrency and error collection more safely.

To confirm, you can check for data races using Go's race detector during testing.
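A minimal standalone sketch of the mutex-guarded accumulation pattern in question, using only the standard library (the real code uses Vald's internal errgroup and errors packages, so this is an illustration, not the project's implementation):

package main

import (
	"errors"
	"fmt"
	"sync"
)

func main() {
	addrs := []string{"agent-0", "agent-1", "agent-2"}

	var (
		mu   sync.Mutex
		errs error
		wg   sync.WaitGroup
	)

	for _, addr := range addrs {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			if err := deleteOn(addr); err != nil {
				// Every write to the shared errs value goes through mu,
				// so concurrent goroutines cannot race on it.
				mu.Lock()
				errs = errors.Join(errs, err)
				mu.Unlock()
			}
		}(addr)
	}
	wg.Wait()
	fmt.Println(errs)
}

// deleteOn stands in for the per-agent Remove RPC.
func deleteOn(addr string) error {
	if addr == "agent-1" {
		return fmt.Errorf("%s: remove failed", addr)
	}
	return nil
}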

Comment on lines +21 to +30
// WithIndexingConcurrency returns Option that sets indexing concurrency.
func WithIndexingConcurrency(num int) Option {
	return func(idx *index) error {
		if num <= 0 {
			return errors.NewErrInvalidOption("indexingConcurrency", num)
		}
		idx.concurrency = num
		return nil
	}
}

🛠️ Refactor suggestion

Consider adding an upper bound check for concurrency.

While the function correctly validates for non-positive values, consider adding an upper bound check to prevent potential resource exhaustion.

 func WithIndexingConcurrency(num int) Option {
 	return func(idx *index) error {
 		if num <= 0 {
 			return errors.NewErrInvalidOption("indexingConcurrency", num)
 		}
+		// Consider system resources and practical limits
+		const maxConcurrency = 1000 // adjust based on your requirements
+		if num > maxConcurrency {
+			return errors.NewErrInvalidOption("indexingConcurrency exceeds maximum", num)
+		}
 		idx.concurrency = num
 		return nil
 	}
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// WithIndexingConcurrency returns Option that sets indexing concurrency.
func WithIndexingConcurrency(num int) Option {
	return func(idx *index) error {
		if num <= 0 {
			return errors.NewErrInvalidOption("indexingConcurrency", num)
		}
		idx.concurrency = num
		return nil
	}
}

// WithIndexingConcurrency returns Option that sets indexing concurrency.
func WithIndexingConcurrency(num int) Option {
	return func(idx *index) error {
		if num <= 0 {
			return errors.NewErrInvalidOption("indexingConcurrency", num)
		}
		// Consider system resources and practical limits
		const maxConcurrency = 1000 // adjust based on your requirements
		if num > maxConcurrency {
			return errors.NewErrInvalidOption("indexingConcurrency exceeds maximum", num)
		}
		idx.concurrency = num
		return nil
	}
}

Comment on lines +32 to +40
// WithTargetAddrs returns Option that sets indexing target addresses.
func WithTargetAddrs(addrs ...string) Option {
	return func(idx *index) error {
		if len(addrs) != 0 {
			idx.targetAddrs = append(idx.targetAddrs, addrs...)
		}
		return nil
	}
}

🛠️ Refactor suggestion

Add address validation and deduplication.

The current implementation could benefit from:

  1. Validating address format
  2. Preventing duplicate addresses
  3. Checking for empty strings
 func WithTargetAddrs(addrs ...string) Option {
 	return func(idx *index) error {
 		if len(addrs) != 0 {
+			seen := make(map[string]struct{}, len(idx.targetAddrs))
+			for _, addr := range idx.targetAddrs {
+				seen[addr] = struct{}{}
+			}
+			for _, addr := range addrs {
+				if addr == "" {
+					return errors.NewErrInvalidOption("empty address", addr)
+				}
+				// Add basic address format validation
+				if !isValidAddr(addr) {
+					return errors.NewErrInvalidOption("invalid address format", addr)
+				}
+				if _, exists := seen[addr]; !exists {
+					idx.targetAddrs = append(idx.targetAddrs, addr)
+					seen[addr] = struct{}{}
+				}
+			}
-			idx.targetAddrs = append(idx.targetAddrs, addrs...)
 		}
 		return nil
 	}
 }

Committable suggestion was skipped due to low confidence.
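Note that the diff above references an isValidAddr helper that is not defined anywhere in this change. A purely illustrative sketch using net.SplitHostPort from the standard library might look like this; the real validation rules would need to match what the discoverer and gRPC client actually accept:

import (
	"net"
	"strconv"
)

// isValidAddr reports whether addr looks like a usable "host:port" target.
// Hypothetical helper for illustration only.
func isValidAddr(addr string) bool {
	host, port, err := net.SplitHostPort(addr)
	if err != nil || host == "" {
		return false
	}
	p, err := strconv.Atoi(port)
	return err == nil && p > 0 && p < 65536
}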

Comment on lines +42 to +48
// WithTargetIndexID returns Option that sets target deleting index ID.
func WithTargetIndexID(indexID string) Option {
	return func(idx *index) error {
		idx.targetIndexID = indexID
		return nil
	}
}

🛠️ Refactor suggestion

Add index ID validation and enhance documentation.

The function should validate the index ID format and provide clear documentation about the expected format.

-// WithTargetIndexID returns Option that sets target deleting index ID.
+// WithTargetIndexID returns Option that sets target deleting index ID.
+// The index ID must be a non-empty string matching the format: ^[a-zA-Z0-9_-]+$
 func WithTargetIndexID(indexID string) Option {
 	return func(idx *index) error {
+		if indexID == "" {
+			return errors.NewErrInvalidOption("empty index ID", indexID)
+		}
+		// Add validation for index ID format
+		if !isValidIndexID(indexID) {
+			return errors.NewErrInvalidOption("invalid index ID format", indexID)
+		}
 		idx.targetIndexID = indexID
 		return nil
 	}
 }

Committable suggestion was skipped due to low confidence.
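Likewise, the isValidIndexID helper in this diff is undefined. A minimal illustrative sketch matching the ^[a-zA-Z0-9_-]+$ format mentioned in the proposed doc comment (the actual constraint, if any, should come from the maintainers):

import "regexp"

// indexIDPattern mirrors the format assumed in the comment above.
var indexIDPattern = regexp.MustCompile(`^[a-zA-Z0-9_-]+$`)

// isValidIndexID reports whether indexID is non-empty and matches the
// assumed character set. Hypothetical helper for illustration only.
func isValidIndexID(indexID string) bool {
	return indexIDPattern.MatchString(indexID)
}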

Comment on lines +37 to +38
context.Background(),
runner.WithName(name),

🛠️ Refactor suggestion

Consider adding a context timeout.

Using context.Background() without a timeout could potentially allow the deletion job to run indefinitely. Consider adding a reasonable timeout to ensure the job completes within expected boundaries.

-		return runner.Do(
-			context.Background(),
+		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
+		defer cancel()
+		return runner.Do(
+			ctx,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
context.Background(),
runner.WithName(name),
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
defer cancel()
return runner.Do(
ctx,
runner.WithName(name),

Comment on lines +35 to +36
// Deletion represents auto indexing service configurations.
Deletion *config.IndexDeleter `json:"deleter" yaml:"deleter"`

⚠️ Potential issue

Fix incorrect documentation comment.

The comment for the Deletion field incorrectly refers to "auto indexing service" instead of "deletion service".

Apply this fix:

-	// Deletion represents auto indexing service configurations.
+	// Deletion represents index deletion service configurations.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Deletion represents auto indexing service configurations.
Deletion *config.IndexDeleter `json:"deleter" yaml:"deleter"`
// Deletion represents index deletion service configurations.
Deletion *config.IndexDeleter `json:"deleter" yaml:"deleter"`

Comment on lines +73 to +79
discoverer.WithOnDiscoverFunc(func(ctx context.Context, c discoverer.Client, addrs []string) error {
	last := len(addrs) - 1
	for i := 0; i < len(addrs)/2; i++ {
		addrs[i], addrs[last-i] = addrs[last-i], addrs[i]
	}
	return nil
}),

🛠️ Refactor suggestion

Address unused parameters in OnDiscoverFunc.

The parameters ctx and c are not used within the OnDiscoverFunc. To avoid potential confusion or linting warnings, consider renaming them to _ to indicate they are intentionally unused.

Apply this diff to address the unused parameters:

 discoverer.WithOnDiscoverFunc(func(ctx context.Context, c discoverer.Client, addrs []string) error {
+	// ctx and c are unused in this function
+	// Reverse the order of addrs to prioritize certain addresses
+	
-	last := len(addrs) - 1
+	last := len(addrs) - 1
 	for i := 0; i < len(addrs)/2; i++ {
 		addrs[i], addrs[last-i] = addrs[last-i], addrs[i]
 	}
 	return nil
 }),

Committable suggestion was skipped due to low confidence.

Comment on lines +198 to +208
if r.observability != nil {
	if err := r.observability.Stop(ctx); err != nil {
		errs = errors.Join(errs, err)
	}
}
if r.server != nil {
	if err := r.server.Shutdown(ctx); err != nil {
		errs = errors.Join(errs, err)
	}
}
return errs

⚠️ Potential issue

Ensure all resources are properly closed in Stop method.

Currently, the Stop method checks and stops r.observability and r.server but does not explicitly stop r.indexer. If r.indexer holds resources or goroutines, they should be properly closed or cancelled to prevent resource leaks.

Apply this diff to stop the indexer:

 func (r *run) Stop(ctx context.Context) (errs error) {
+	if err := r.indexer.Stop(ctx); err != nil {
+		errs = errors.Join(errs, err)
+	}
 	if r.observability != nil {
 		if err := r.observability.Stop(ctx); err != nil {
 			errs = errors.Join(errs, err)
 		}
 	}
 	if r.server != nil {
 		if err := r.server.Shutdown(ctx); err != nil {
 			errs = errors.Join(errs, err)
 		}
 	}
 	return errs
 }

Ensure that the service.Deleter interface includes a Stop method to properly handle shutdown.

Committable suggestion was skipped due to low confidence.

Comment on lines +152 to +167
r.eg.Go(safety.RecoverFunc(func() (err error) {
	defer func() {
		p, err := os.FindProcess(os.Getpid())
		if err != nil {
			// using Fatal to avoid this process to be zombie
			// skipcq: RVV-A0003
			log.Fatalf("failed to find my pid to kill %v", err)
			return
		}
		log.Info("sending SIGTERM to myself to stop this job")
		if err := p.Signal(syscall.SIGTERM); err != nil {
			log.Error(err)
		}
	}()
	return r.indexer.Start(ctx)
}))

🛠️ Refactor suggestion

Consider more graceful shutdown instead of sending SIGTERM to self.

Sending a SIGTERM to the process within the deferred function might not be the most graceful way to signal shutdown. This approach can potentially lead to abrupt termination without proper cleanup. It would be better to implement a cancellation mechanism using contexts or error handling to control the lifecycle more gracefully.

Consider refactoring the code to use context cancellation:

 r.eg.Go(safety.RecoverFunc(func() (err error) {
-	defer func() {
-		p, err := os.FindProcess(os.Getpid())
-		if err != nil {
-			// using Fatal to avoid this process to be zombie
-			log.Fatalf("failed to find my pid to kill %v", err)
-			return
-		}
-		log.Info("sending SIGTERM to myself to stop this job")
-		if err := p.Signal(syscall.SIGTERM); err != nil {
-			log.Error(err)
-		}
-	}()
 	return r.indexer.Start(ctx)
 }))
 
+// In the main function or where context is managed, consider cancelling the context when r.indexer.Start completes.

This change avoids sending signals to the process and allows for a controlled shutdown using context cancellation.

Committable suggestion was skipped due to low confidence.

Comment on lines +96 to +125
if err != nil {
	var attrs trace.Attributes
	switch {
	case errors.Is(err, errors.ErrGRPCClientConnNotFound("*")):
		err = status.WrapWithInternal(
			vald.RemoveRPCName+" API connection not found", err,
		)
		attrs = trace.StatusCodeInternal(err.Error())
	case errors.Is(err, errors.ErrGRPCTargetAddrNotFound):
		err = status.WrapWithInternal(
			vald.RemoveRPCName+" API connection target address \""+strings.Join(idx.targetAddrs, ",")+"\" not found", err,
		)
		attrs = trace.StatusCodeInternal(err.Error())
	default:
		var (
			st  *status.Status
			msg string
		)
		st, msg, err = status.ParseError(err, codes.Internal,
			"failed to parse "+vald.RemoveRPCName+" gRPC error response",
		)
		attrs = trace.FromGRPCStatus(st.Code(), msg)
	}
	log.Warn(err)
	if span != nil {
		span.RecordError(err)
		span.SetAttributes(attrs...)
		span.SetStatus(trace.StatusError, err.Error())
	}
	return err

🛠️ Refactor suggestion

Refactor error handling for maintainability in Start method.

The error handling logic in the Start method is complex and repetitive. Refactoring this logic into a separate helper function would improve readability and maintainability.

Create a helper function to handle error wrapping and attribute assignment:

func (idx *index) handleStartError(ctx context.Context, span trace.Span, err error) error {
    var attrs trace.Attributes
    switch {
    case errors.Is(err, errors.ErrGRPCClientConnNotFound("*")):
        err = status.WrapWithInternal(
            vald.RemoveRPCName+" API connection not found", err,
        )
        attrs = trace.StatusCodeInternal(err.Error())
    case errors.Is(err, errors.ErrGRPCTargetAddrNotFound):
        err = status.WrapWithInternal(
            vald.RemoveRPCName+" API connection target address \""+strings.Join(idx.targetAddrs, ",")+"\" not found", err,
        )
        attrs = trace.StatusCodeInternal(err.Error())
    default:
        var (
            st  *status.Status
            msg string
        )
        st, msg, err = status.ParseError(err, codes.Internal,
            "failed to parse "+vald.RemoveRPCName+" gRPC error response",
        )
        attrs = trace.FromGRPCStatus(st.Code(), msg)
    }
    log.Warn(err)
    if span != nil {
        span.RecordError(err)
        span.SetAttributes(attrs...)
        span.SetStatus(trace.StatusError, err.Error())
    }
    return err
}

Then update the Start method:

err := idx.doDeleteIndex(ctx, /*...*/)
if err != nil {
-   // Existing error handling logic
+   return idx.handleStartError(ctx, span, err)
}
return nil

Comment on lines +49 to +52
e := &errors.ErrCriticalOption{}
if errors.As(oerr, &e) {
log.Error(err)
return nil, oerr

⚠️ Potential issue

Possible misuse of error variables in error handling.

In the New function, the error variable err is used in log.Error(err), but it might be more appropriate to use oerr, which contains the wrapped error with additional context. This ensures that the logged error message includes all relevant information.

Consider modifying the error logging to use oerr:

if errors.As(oerr, &e) {
-    log.Error(err)
+    log.Error(oerr)
    return nil, oerr
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
e := &errors.ErrCriticalOption{}
if errors.As(oerr, &e) {
log.Error(err)
return nil, oerr
e := &errors.ErrCriticalOption{}
if errors.As(oerr, &e) {
log.Error(oerr)
return nil, oerr

@vdaas-ci (Collaborator) commented:

[WARNING:INTCFG] Changes in internal/config may require you to change Helm charts. Please check.

FROM ghcr.io/vdaas/vald/vald-buildbase:nightly AS builder
LABEL maintainer="vdaas.org vald team <vald@vdaas.org>"
# skipcq: DOK-DL3002
USER root:root

⚠️ [hadolint] <DL3002> reported by reviewdog 🐶
Last USER should not be root

WORKDIR ${GOPATH}/src/github.com/${ORG}/${REPO}
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
#skipcq: DOK-W1001, DOK-SC2046, DOK-SC2086, DOK-DL3008
RUN --mount=type=bind,target=.,rw \

⚠️ [hadolint] <DL3008> reported by reviewdog 🐶
Pin versions in apt get install. Instead of apt-get install <package> use apt-get install <package>=<version>

@vdaas-ci (Collaborator) commented:

[CHATOPS:HELP] ChatOps commands.

  • 🙆‍♀️ /approve - approve
  • 🍱 /format - format codes and add licenses
  • /gen-test - generate test codes
  • 🏷️ /label - add labels
  • 🔚 2️⃣ 🔚 /label actions/e2e-deploy - run E2E deploy & integration test


codecov bot commented Oct 26, 2024

Codecov Report

Attention: Patch coverage is 0% with 371 lines in your changes missing coverage. Please review.

Please upload report for BASE (release/v1.7@1dd771f). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
pkg/index/job/deletion/service/deleter.go | 0.00% | 146 Missing ⚠️
pkg/index/job/deletion/usecase/deletion.go | 0.00% | 131 Missing ⚠️
pkg/index/job/deletion/config/config.go | 0.00% | 27 Missing ⚠️
pkg/index/job/deletion/service/options.go | 0.00% | 25 Missing ⚠️
cmd/index/job/deletion/main.go | 0.00% | 22 Missing ⚠️
internal/config/index_deleter.go | 0.00% | 12 Missing ⚠️
hack/actions/gen/main.go | 0.00% | 4 Missing ⚠️
hack/docker/gen/main.go | 0.00% | 4 Missing ⚠️
Additional details and impacted files
@@               Coverage Diff               @@
##             release/v1.7    #2722   +/-   ##
===============================================
  Coverage                ?   23.95%           
===============================================
  Files                   ?      545           
  Lines                   ?    54324           
  Branches                ?        0           
===============================================
  Hits                    ?    13015           
  Misses                  ?    40535           
  Partials                ?      774           

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@kpango merged commit 6faa127 into release/v1.7 on Oct 27, 2024 (202 of 205 checks passed).
@kpango deleted the backport/release/v1.7/feat/implement-delete-expired-index branch on October 27, 2024 at 01:10.
@coderabbitai bot mentioned this pull request on Nov 20, 2024.
This was referenced Dec 3, 2024