Preservation Catalog

Welcome to the Preservation Catalog wiki! Preservation Catalog, or "PresCat," is a Rails application that tracks, audits, and replicates archival artifacts associated with objects deposited into the Stanford Digital Repository.

See the sidebar to drill down and find the documentation you seek.

Note to wiki editors: if you add a page, please remember to add a link into the sidebar for easier discovery and navigation! Please also update the troubleshooting guide below if an added page is especially relevant to triaging production problems.

Note to readers: if you don't see what you're looking for in the sidebar's Table of Contents, browse/search GitHub's auto-generated ToC in the "Pages" section above the handmade ToC (it links to all pages in the wiki and is collapsed by default).

Help! I don't know what I'm looking for, but I have to troubleshoot something.

  1. Look at the Troubleshooting section in the sidebar -->
  2. Sorry, you'll have to keep reading (or search the wiki pages)

Asynchronous Job Failures

A job throws an error while executing.

How you might notice this problem

  • Honeybadger alerts about an uncaught error in code from app/jobs/
  • entries in a Resque web console failure queue -- e.g. zipmaker_failed, *_delivery_failed (e.g. s3_us_west_2_delivery_failed), zip_endpoint_events_failed, etc. (these queues can also be inspected from a Rails console, as sketched below)
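
If more detail is needed than the web console shows, the failure queues can also be inspected from a Rails console on one of the app VMs. A minimal sketch using the standard Resque failure API (with a multi-queue failure backend, most of these calls can also be scoped to a single failure queue name):

```ruby
# Rails console sketch, using the standard Resque failure API.
Resque::Failure.queues          # names of the failure queues (e.g. zipmaker_failed)
Resque::Failure.count           # number of failed jobs
Resque::Failure.all(0, 5)       # first five failure payloads: job class, args, exception, backtrace

# With the multi-queue failure backend, scope to one queue:
Resque::Failure.count('zipmaker_failed')
Resque::Failure.all(0, 5, 'zipmaker_failed')
```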

Useful troubleshooting links

Audit Failures

An audit job (scheduled or manually triggered) examines an on-prem or cloud copy and detects possibly missing or corrupt data.

Note: Here we mean detection of a problem with content, where the job doing the audit work completes execution successfully from an ActiveJob perspective.

How you might notice this problem

  • Honeybadger sends an alert stating that an audit of a particular druid (e.g. checksum validation, part replication audit) determined that there is a problem with the Moab and/or its expected cloud copies.
  • An audit is run manually, and when the job completes, the status of the relevant CompleteMoabs or ZipParts is queried and found not to be ok (see the console sketch after this list).
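
For reference, a minimal Rails console sketch of that kind of status check; the model, association, and status names below reflect the schema at the time of writing and should be verified against the current code:

```ruby
# Rails console sketch -- model/association/status names are assumptions, verify against the schema.
po = PreservedObject.find_by!(druid: 'bj102hs9687')   # example druid

# On-prem Moab copies: any status other than 'ok' is an audit finding (e.g. 'invalid_checksum')
CompleteMoab.where(preserved_object: po).pluck(:version, :status)

# Catalog-wide sweep for anything the audits have flagged
CompleteMoab.where.not(status: 'ok').group(:status).count
ZipPart.where.not(status: 'ok').group(:status).count
```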

Useful troubleshooting links

Something is wrong with the job system (unexpected worker count, resque-pool stability problems, etc)

Too many workers

How you might notice this problem

  • nagios alert that worker count is too high ("feature-worker-count: FAILED TOO MANY WORKERS")
  • notice that worker count is too high when manually visiting the Resque web console or the okcomputer status page (/status/all)

Useful troubleshooting links

More than the expected number of Resque workers are running / too many resque workers / worker count high
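
One quick cross-check from a Rails console is to compare the registered worker count, in total and per VM, against what the environment's resque-pool config expects. A minimal sketch using the standard Resque worker API:

```ruby
# Rails console sketch: registered Resque workers, total and per host.
# Worker IDs have the form "hostname:pid:queues", so the host can be parsed from to_s.
workers = Resque.workers
puts "total registered workers: #{workers.size}"
workers.group_by { |w| w.to_s.split(':').first }
       .each { |host, ws| puts "#{host}: #{ws.size}" }
```

Note that a crashed worker process can leave a stale registration behind, so an inflated count does not always mean extra processes are actually running on the box.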

Too few workers

  • is resque-pool currently running on all of the worker boxes? (the sketch after this list shows one way to narrow this down from a Rails console)
  • at times we've under-resourced QA and stage relative to the number of workers running, causing worker processes and/or resque-pool to crash intermittently (especially when restarting while workers have jobs in progress). Consider allocating fewer workers or more computing resources, depending on whether all workers are actually being utilized.
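
One way to approach the first question from a Rails console is to see which hosts currently have registered workers (registrations are only a proxy for the resque-pool master process being up, but they narrow down which boxes to ssh into). A sketch with hypothetical hostnames:

```ruby
# Rails console sketch: which worker VMs have registered Resque workers right now?
# The expected host list is hypothetical -- substitute the environment's actual worker VM names.
expected_hosts = %w[pres-worker-01 pres-worker-02]

active_hosts = Resque.workers.map { |w| w.to_s.split(':').first }.uniq.sort
missing = expected_hosts - active_hosts
puts "hosts with workers: #{active_hosts.join(', ')}"
puts "hosts with none (is resque-pool running there?): #{missing.join(', ')}" if missing.any?
```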

resque-pool seems to be crashing periodically, not restarting correctly, etc

How you might notice this problem

  • only a fraction (e.g. 2/3) of the expected workers are up
  • worker counts fluctuate every few minutes or hours by the number of workers expected to be running on one VM
  • resque-pool hotswap is invoked (either directly, or by way of deployment) and the worker count looks fine, but some minutes or hours later you notice that one or more worker VMs no longer have resque-pool running (the sketch after this list can help narrow down when and where)
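
When the count keeps fluctuating, it can also help to look at when each registered worker started; a batch of workers on one host that all started a few minutes ago suggests the pool on that VM restarted recently. A sketch using the standard Resque worker API:

```ruby
# Rails console sketch: when did each registered worker start, grouped by host?
Resque.workers
      .group_by { |w| w.to_s.split(':').first }   # worker IDs are "hostname:pid:queues"
      .each do |host, workers|
        puts host
        workers.each { |w| puts "  pid #{w.to_s.split(':')[1]} started #{w.started}" }
      end
```
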
Useful troubleshooting links

Something is wrong with preservation storage I/O

E.g. reads or writes against the preservation storage roots hang indefinitely, even for small Moabs; or many file read attempts against preservation content fail immediately, e.g. with unexpected permission or file metadata errors.

See IO against Ceph backed preservation storage is hanging indefinitely (steps to address IO problems, and follow on cleanup)
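
As a first triage step, a quick read test against each storage root can show whether reads hang or fail immediately. A minimal Rails console sketch, assuming MoabStorageRoot records with name and storage_location attributes (the 30 second cutoff is arbitrary, and Timeout may not be able to interrupt a truly stuck filesystem call):

```ruby
require 'timeout'

# Rails console sketch: smoke-test metadata reads against each on-prem storage root.
# Attribute names (name, storage_location) are assumptions -- verify against the schema.
MoabStorageRoot.find_each do |root|
  begin
    Timeout.timeout(30) do
      entry = Dir.children(root.storage_location).first      # a directory listing exercises metadata I/O
      File.stat(File.join(root.storage_location, entry)) if entry
    end
    puts "#{root.name}: ok"
  rescue Timeout::Error
    puts "#{root.name}: read did not complete within 30s"
  rescue SystemCallError => e
    puts "#{root.name}: immediate error (#{e.class}: #{e.message})"
  end
end
```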
