Only enforce scheduler_key unique index on first execution #6
Currently, if a job is both periodically scheduled and long-running (spanning multiple workloads), part of its execution gets skipped. This is because we have a unique index on scheduler_key, which gets copied over to each subsequent workload. As a result, if the next scheduled run is already enqueued (no matter how far in the future), any new workload from the currently executing job gets swallowed by the conflict on the unique index.
This is particularly an issue when using job-iteration, which relies on interrupting and re-enqueueing many workloads for the same active job over a long period of time.
The proposed change is to scope the unique index so that it only applies to workloads whose executions count is 0 (see the sketch below). This is acceptable because jobs will still respect the execution_concurrency_key and enqueue_concurrency_key unique indexes, and retry policies can be configured on the jobs themselves if needed.
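
A minimal sketch of what this could look like as a Rails migration, assuming a PostgreSQL-backed `workloads` table with `scheduler_key` and `executions` columns (the table, column, and index names here are illustrative, not taken from the diff):

```ruby
class ScopeSchedulerKeyIndexToFirstExecution < ActiveRecord::Migration[7.1]
  def change
    # Drop the unconditional unique index on scheduler_key...
    remove_index :workloads, :scheduler_key

    # ...and replace it with a partial unique index that only covers
    # workloads that have never executed. Workloads re-enqueued by a
    # running job (executions > 0) no longer conflict with the next
    # scheduled run, while duplicate first enqueues are still rejected.
    add_index :workloads, :scheduler_key,
              unique: true,
              where: "executions = 0",
              name: "index_workloads_on_scheduler_key_first_execution"
  end
end
```

The `where:` option produces a partial index, so the database only enforces uniqueness on rows matching the predicate; all other rows are free to share a scheduler_key.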