
add maxbackoff to limit the interval between retries #1529

Open
wants to merge 1 commit into main
Conversation

jseparovic

No description provided.

@mnaberez
Member

#1528 (comment)

When using supervisord to restart a process that is exiting upon start, it seems the backoff procedure just adds 1 second to the backoff forever.
Is it possible to add a config option such as maxbackoff=30 or similar, so that we can force a restart attempt at least every 30 seconds even when the process keeps failing to start?
It might be some external system that needs to be fixed before the process can start, and this could take some time, but supervisor should keep retrying so the process comes back up in a timely manner once the external system is fixed.
If the external system is down for a day, the backoff grows so large that supervisor becomes very slow to bring the process back up after the fix.

Would like to use something like:
autorestart=true
startretries=2147483647
maxbackoff=30

So that supervisor will retry at least every 30 seconds.
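To make the requested behaviour concrete, here is a rough illustration of the retry-delay growth being discussed. This is an illustrative sketch, not supervisor's actual code: retry_delays is a hypothetical helper, and maxbackoff is the proposed option (with 0 meaning "no cap", i.e. today's behaviour).

# Illustrative sketch of the retry-delay growth being discussed
# (not supervisor's actual code; maxbackoff is the proposed option).

def retry_delays(failures, maxbackoff=0):
    """Yield the delay in seconds before each start attempt.

    Today the delay grows by roughly one second per failed start,
    without bound.  A non-zero maxbackoff would cap it.
    """
    for backoff in range(1, failures + 1):
        delay = backoff
        if maxbackoff > 0:
            delay = min(delay, maxbackoff)
        yield delay

print(list(retry_delays(5)))                 # [1, 2, 3, 4, 5]
print(list(retry_delays(5, maxbackoff=3)))   # [1, 2, 3, 3, 3]

With maxbackoff=30, a process that has already failed hundreds of times would still be retried every 30 seconds, so a fix to the external system is picked up within roughly 30 seconds instead of after an ever-growing wait.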

@warrenspence

I think this config option makes sense and I'd like to use it myself.

An issue with the PR is that you've set the default to 60 seconds, which effectively changes the existing behaviour for users who currently provide a large startretries value.

I think if you set the default to 0, and in supervisor/process.py only enforce the limit check when maxbackoff is non-zero, then the change would be safe to merge.
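Something along these lines is presumably what is meant. This is an illustrative sketch written as a standalone helper rather than a verbatim patch: apply_backoff is a hypothetical function, the backoff/delay bookkeeping is assumed to mirror what supervisor/process.py does, and maxbackoff is the proposed option with a do-nothing default of 0.

# Illustrative sketch of the suggested guard, as a standalone helper
# rather than a verbatim patch to supervisor/process.py.
import time

def apply_backoff(backoff, maxbackoff=0, now=None):
    """Bump the consecutive-failure counter and return (backoff, delay).

    maxbackoff == 0 means "no cap", preserving the current
    unbounded-growth behaviour for existing configurations.
    """
    if now is None:
        now = time.time()
    backoff += 1
    if maxbackoff and backoff > maxbackoff:
        backoff = maxbackoff   # only enforce the limit when non-zero
    return backoff, now + backoff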
