Add heroku scaling commands to publication process #1295

Merged · 1 commit merged into main on Mar 26, 2024

Conversation

@JPrevost (Member) commented Mar 26, 2024

Why are these changes being introduced:

  • The preservation job uses a lot of memory. Scaling the worker dyno, where preservation runs, to a dyno type with additional memory will give us a bit more headroom

Relevant ticket(s):

  • https://mitlibraries.atlassian.net/browse/ETD-598
How does this address that need:

  • Provides instructions on how and why to scale the worker dyno before and after processing the output queue for publishing jobs

Document any side effects to this change:

  • This doesn't really solve the problem, but masks it for our common thesis sizes. To fully address this problem we'll need to revisit how we create the preservation zip files
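The before-and-after scaling workflow the description refers to could be sketched with the standard Heroku CLI. The app name and dyno sizes below are placeholders, not values from this PR:

```shell
# Before working the publishing output queue: move the worker process to a
# larger dyno type. "thesis-submit" and "performance-m" are placeholders --
# substitute the real app name and whichever dyno size gives the
# preservation job enough memory.
heroku ps:resize worker=performance-m --app thesis-submit

# Confirm the worker is now running on the larger dyno type.
heroku ps --app thesis-submit

# After the queue drains: scale back down to the usual size to control cost.
heroku ps:resize worker=standard-1x --app thesis-submit
```

Note that `ps:resize` changes the dyno *type* (and therefore its memory ceiling) while leaving the dyno *count* alone; `heroku ps:scale` is the separate command for changing counts.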

Developer

  • All new ENV is documented in README
  • All new ENV has been added to Heroku Pipeline, Staging and Prod
  • ANDI or Wave has been run in accordance with our guide, and all issues introduced by these changes have been resolved or opened as new issues (link to those issues in the Pull Request details above)
  • Stakeholder approval has been confirmed (or is not needed)

Code Reviewer

  • The commit message is clear and follows our guidelines
    (not just this pull request message)
  • There are appropriate tests covering any new functionality
  • The documentation has been updated or is unnecessary
  • The changes have been verified
  • New dependencies are appropriate or there were no changes

Requires database migrations?

NO

Includes new or updated dependencies?

NO

@mitlib mitlib temporarily deployed to thesis-submit-pr-1295 March 26, 2024 17:00 Inactive
@jazairi jazairi self-assigned this Mar 26, 2024
@coveralls

Coverage Status: 98.313%; remained the same when pulling c041bcd on etd-598-publishing-memory-spikes into 81d1964 on main.

@matt-bernhardt (Member) left a comment:

This seems pretty straightforward and easy to follow.

@JPrevost JPrevost merged commit e34910d into main Mar 26, 2024
3 checks passed
@JPrevost JPrevost deleted the etd-598-publishing-memory-spikes branch March 26, 2024 17:27
5 participants