Releases: timgit/pg-boss
10.0.2
What's Changed
- Update README.md - update example with create queue by @tlebeitsuk in #468
- feat(Typescript): Generic getJobById by @danilofuchs in #471
New Contributors
- @tlebeitsuk made their first contribution in #468
- @danilofuchs made their first contribution in #471
Full Changelog: 10.0.1...10.0.2
10.0.1
- `start()` resolves the instance after it has started
Full Changelog: 10.0.0...10.0.1
10.0.0
- MAJOR: Node 20 is now the minimum required version
- MAJOR: PostgreSQL 13 is now the minimum required version. The dependency on the pgcrypto extension has been removed.
- MAJOR: Automatic migration from v9 or lower is not currently available. A new partitioned job table was created, as well as new queue storage policies that make it difficult to honor the previous unique constraints in an upgrade scenario. The safest option is to manually move jobs from pg-boss v9 to v10 via the API, or at least use the API to prepare for a manual migration via `INSERT ... SELECT`.
- MAJOR: Job retries are now opt-out instead of opt-in. The default `retryLimit` is now 2 retries. This will cause an issue for any job handlers that aren't idempotent. Consider setting `retryLimit=0` on these queues if needed.
- MAJOR: Queues must now be created before the API or a direct SQL INSERT will work. See migration notes below. Each queue has a storage policy (see below) and represents a child table in a partitioning hierarchy. Additionally, queues store default retry and retention policies that will be auto-applied to all new jobs. See the docs for more on queue operations such as `createQueue()`. A sketch of creating policy queues follows the list below.
  - `standard` (default): Standard queues are the default queue policy, which supports all existing features. This will provision a dedicated job partition for all jobs with this name.
  - `short`: Short queues only allow 1 item to be queued (in `created` state), which replaces the previous `sendSingleton()` and `sendOnce()` functions.
  - `singleton`: Singleton queues only allow 1 item to be active, which replaces the previous `fetch()` option `enforceSingletonQueueActiveLimit`.
  - `stately`: Stately queues are a combination of `short` and `singleton`, only allowing 1 job to be queued and 1 job active.
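  A minimal sketch of creating policy queues, assuming the `policy` option on `createQueue()` (queue names and connection string are placeholders):

  ```js
  // ESM module, Node 20+
  import PgBoss from 'pg-boss'

  const boss = new PgBoss('postgres://user:pass@localhost/app') // placeholder connection string
  await boss.start()

  // default 'standard' policy: dedicated partition, all features supported
  await boss.createQueue('email-welcome')

  // 'short': only 1 job allowed in the created state at a time
  await boss.createQueue('cache-rebuild', { policy: 'short' })

  // 'singleton': only 1 job allowed in the active state at a time
  await boss.createQueue('report-export', { policy: 'singleton' })

  // 'stately': short + singleton, 1 queued and 1 active
  await boss.createQueue('nightly-sync', { policy: 'stately' })
  ```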
- MAJOR: The handler function in `work()` was standardized to always receive an array of jobs. One simple way to migrate a single job handler (batchSize=1) is a destructuring assignment like the following.

  ```js
  // v9
  await boss.work(queue, (job) => handler(job))

  // v10
  await boss.work(queue, ([ job ]) => handler(job))
  ```
- MAJOR: `teamSize`, `teamConcurrency`, and `teamRefill` were removed from `work()` to simplify worker polling use cases. As noted above, `enforceSingletonQueueActiveLimit` was also removed.
- MAJOR: Dead letter queues replace completion jobs. Failed jobs will be added to optional dead letter queues after exhausting all retries. This is preferred over completion jobs to gain retry support via `work()`. Additionally, dead letter queues only make a copy of the job if it fails, instead of filling up the job table with numerous, mostly unneeded completion jobs. A sketch of the new options follows the list below.
  - The `onComplete` option in `send()` and `insert()` has been removed
  - `onComplete()`, `offComplete()`, and `fetchCompleted()` have been removed
  - A `deadLetter` option was added to `send()`, `insert()`, and `createQueue()`
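  A minimal sketch of the dead letter flow, assuming `boss` is a started PgBoss instance (queue names are placeholders):

  ```js
  // create the dead letter queue first, then reference it from the main queue
  await boss.createQueue('payment-dlq')
  await boss.createQueue('payment', { deadLetter: 'payment-dlq' })

  // deadLetter can also be set per job
  await boss.send('payment', { orderId: 1 }, { retryLimit: 3, deadLetter: 'payment-dlq' })

  // jobs that exhaust their retries are copied to the dead letter queue,
  // where they can be retried or inspected via work()
  await boss.work('payment-dlq', async ([job]) => {
    console.log('dead-lettered payment job', job.id, job.data)
  })
  ```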
- MAJOR: Dropped the following API functions in favor of policy queues
  - `sendOnce()`
  - `sendSingleton()`
- MAJOR: The following API functions now require name arguments
  - `complete(name, id, data)`
  - `fail(name, id, data)`
  - `cancel(name, id)`
  - `getJobById(name, id)`
- MAJOR: The contract for `getJobById()` and the `includeMetadata` option for `fetch()` and `work()` were standardized to the following.

  ```ts
  interface JobWithMetadata<T = object> {
    id: string;
    name: string;
    data: T;
    priority: number;
    state: 'created' | 'retry' | 'active' | 'completed' | 'cancelled' | 'failed';
    retryLimit: number;
    retryCount: number;
    retryDelay: number;
    retryBackoff: boolean;
    startAfter: Date;
    startedOn: Date;
    singletonKey: string | null;
    singletonOn: Date | null;
    expireIn: PostgresInterval;
    createdOn: Date;
    completedOn: Date | null;
    keepUntil: Date;
    deadLetter: string;
    policy: string;
    output: object;
  }
  ```
- MAJOR: The columns in the job and archive tables were renamed to standardize on snake case. A sample job table script showing these is below.

  ```sql
  CREATE TABLE pgboss.job (
    id uuid not null default gen_random_uuid(),
    name text not null,
    priority integer not null default(0),
    data jsonb,
    state pgboss.job_state not null default('created'),
    retry_limit integer not null default(0),
    retry_count integer not null default(0),
    retry_delay integer not null default(0),
    retry_backoff boolean not null default false,
    start_after timestamp with time zone not null default now(),
    started_on timestamp with time zone,
    singleton_key text,
    singleton_on timestamp without time zone,
    expire_in interval not null default interval '15 minutes',
    created_on timestamp with time zone not null default now(),
    completed_on timestamp with time zone,
    keep_until timestamp with time zone NOT NULL default now() + interval '14 days',
    output jsonb,
    dead_letter text,
    policy text,
    CONSTRAINT job_pkey PRIMARY KEY (name, id)
  ) PARTITION BY LIST (name)
  ```
- MAJOR: The `work()` options `newJobCheckInterval` and `newJobCheckIntervalSeconds` have been replaced by `pollingIntervalSeconds`. The minimum value is 0.5, i.e. 500ms is the smallest allowed polling interval.
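  For example, a sketch assuming `boss` is a started instance and the queue/handler names are placeholders:

  ```js
  // v9: await boss.work('email-welcome', { newJobCheckIntervalSeconds: 2 }, handler)
  // v10:
  await boss.work('email-welcome', { pollingIntervalSeconds: 2 }, async ([job]) => {
    await sendWelcomeEmail(job.data) // hypothetical handler
  })
  ```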
- MAJOR: The `stop()` option `destroy` was renamed to `close`. Previously, `destroy` defaulted to false, which left open the internal database connection created by `start()`. Now, `close` defaults to true.
- MAJOR: `noSupervisor` and `noScheduling` were renamed to a more intuitive naming convention (see the sketch below).
  - If using `noSupervisor: true` to disable maintenance, instead use `supervise: false`
  - If using `noScheduling: true` to disable scheduled cron jobs, use `schedule: false`
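  A constructor sketch of the renamed options (connection string is a placeholder):

  ```js
  const boss = new PgBoss({
    connectionString: 'postgres://user:pass@localhost/app',
    supervise: false, // previously noSupervisor: true
    schedule: false   // previously noScheduling: true
  })
  ```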
- MINOR: Added a new function, `deleteJob()`, to provide fetch -> delete semantics when job throttling and/or storage is not desired.
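  A sketch, assuming `deleteJob(name, id)` takes the same name + id arguments as the other job functions above, and that v10's `fetch()` returns an array of jobs:

  ```js
  const [job] = (await boss.fetch('email-welcome')) ?? []

  if (job) {
    await sendWelcomeEmail(job.data)              // hypothetical handler
    await boss.deleteJob('email-welcome', job.id) // delete instead of completing/archiving
  }
  ```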
- MINOR: `send()` and `insert()` cascade configuration from policy queues (if they exist) and then from global settings in the constructor. Use the following table to help identify which settings are inherited and when. A sketch of the cascade follows the table.

  | Setting | API | Queue | Constructor |
  | --- | --- | --- | --- |
  | retryLimit | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | retryDelay | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | retryBackoff | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | expireInSeconds | `send()`, `insert()`, `createQueue()` | ✔️ | ✔️ |
  | expireInMinutes | `send()`, `createQueue()` | ✔️ | ✔️ |
  | expireInHours | `send()`, `createQueue()` | ✔️ | ✔️ |
  | retentionSeconds | `send()`, `createQueue()` | ✔️ | ✔️ |
  | retentionMinutes | `send()`, `createQueue()` | ✔️ | ✔️ |
  | retentionHours | `send()`, `createQueue()` | ✔️ | ✔️ |
  | retentionDays | `send()`, `createQueue()` | ✔️ | ✔️ |
  | deadLetter | `send()`, `insert()`, `createQueue()` | ✔️ | |
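  For example, a sketch of how `retryLimit` might cascade (values and queue name are illustrative):

  ```js
  const boss = new PgBoss({
    connectionString: 'postgres://user:pass@localhost/app',
    retryLimit: 5 // global default from the constructor
  })
  await boss.start()

  // queue-level default overrides the constructor
  await boss.createQueue('billing', { retryLimit: 3 })

  // no option passed: the job inherits retryLimit 3 from the queue
  await boss.send('billing', { invoiceId: 42 })

  // an explicit option wins over both the queue and the constructor
  await boss.send('billing', { invoiceId: 43 }, { retryLimit: 0 })
  ```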
- MINOR: Added a primary key to the job archive to support replication use cases such as read replicas or high availability standbys.
- MINOR: Added a new constructor option, `migrate: false`, to block an instance from attempting to migrate to the latest database schema version. This is useful if the configured credentials don't have schema modification privileges, or if complete control over when and how migrations are run is required.
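  For example (connection string is a placeholder):

  ```js
  // this instance will not attempt schema migrations on start()
  const worker = new PgBoss({
    connectionString: 'postgres://app_user:pass@localhost/app',
    migrate: false
  })
  await worker.start()
  // schema upgrades are handled elsewhere, e.g. by a deploy step running with elevated privileges
  ```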
- MINOR: The `expired` failed state has been consolidated into `failed` for simplicity.
- MINOR: Added a `priority: false` option to `work()` and `fetch()` to opt out of priority sorting during job fetching. If a queue is very large and not using the priority feature, this may help job fetch performance.
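  A sketch, assuming `boss` is a started instance and the queue/handler are placeholders:

  ```js
  // skip priority ordering when fetching from a large queue that doesn't use priorities
  await boss.work('event-ingest', { priority: false, batchSize: 20 }, async (jobs) => {
    for (const job of jobs) {
      await ingestEvent(job.data) // hypothetical handler
    }
  })
  ```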
- MINOR: Added a maintenance function, `maintain()`, if needed for serverless and/or externally scheduled maintenance use cases.
- MINOR: Added functions `isInstalled()` and `schemaVersion()`
- MINOR: `stop()` will now wait for the default graceful stop timeout (30s) before resolving its promise. The `stopped` event will still be emitted. If you want the original behavior, set the new `wait` option to `false`.
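  For example:

  ```js
  // v10 default: resolves only after workers have finished (or the 30s timeout elapses)
  await boss.stop()

  // opt back into the previous behavior: resolve immediately, 'stopped' is emitted later
  await boss.stop({ wait: false })
  ```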
- MINOR: Added an `id` property as an option to `send()` for pre-assigning the job id. Previously, only `insert()` supported pre-assignment.
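  For example (queue name is a placeholder):

  ```js
  const { randomUUID } = require('node:crypto')

  // pre-assign the job id so it can be recorded before the job is sent
  const jobId = randomUUID()
  await boss.send('email-welcome', { userId: 7 }, { id: jobId })
  ```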
- MINOR: Removed internal usage of the md5() hashing function for those needing FIPS compliance.
Migration Notes
This section will contain notes and tips on different migration strategies from v9 and below to v10. Since auto-migration is not supported, there are a few manual options to get all of your jobs into v10 from v9.
API option
For each queue, use `createQueue()`, `fetch()`, and `insert()` to pull jobs from v9 and insert them into v10.
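A hedged sketch of this approach, assuming `v9Boss` is a started pg-boss 9 instance pointed at the old schema and `v10Boss` is a started pg-boss 10 instance pointed at the new one (queue names and batch size are illustrative):

```js
const queues = ['email-welcome', 'billing']

for (const queue of queues) {
  await v10Boss.createQueue(queue)

  let jobs
  // v9 fetch() returns an array of jobs when a batch size is passed
  while ((jobs = await v9Boss.fetch(queue, 100))?.length) {
    // v10 insert() accepts an array of job objects and supports pre-assigned ids
    await v10Boss.insert(jobs.map(job => ({ id: job.id, name: queue, data: job.data })))

    // complete the source jobs so they are not picked up again
    for (const job of jobs) {
      await v9Boss.complete(job.id)
    }
  }
}
```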
SQL option
Something along the lines of `INSERT INTO v10.job (...) SELECT ... FROM v9.job` is possible in SQL, but the queues have to be created first.
- Run `SELECT pgboss.create_queue(name, options)`.
- Insert records into `pgboss.job`.
Hybrid option
- Create all required queues using the API
- Now that queues are created, iterate through each source queue, and insert only queued items (state=created) in SQL.
  ```js
  const queues = await boss.getQueues();

  for (const queue of queues) {
    try {
      const sql = `
        INSERT INTO ${targetSchema}.job (
          id,
          name,
          priority,
          ...
  ```
9.0.3
- Ignore coverage files in npm package
Full Changelog: 9.0.2...9.0.3
9.0.2
What's Changed
- Update Typescript types for work() handler
- Update SKIP LOCKED link in README by @jleverenz in #397
- fix: add work overloads fix batchSize typing by @nomocas in #402
- GitHub Actions CI by @timgit in #407
- fix: start state handling by @simoneb in #406
New Contributors
- @jleverenz made their first contribution in #397
- @nomocas made their first contribution in #402
- @simoneb made their first contribution in #406
Full Changelog: 9.0.1...9.0.2
9.0.1
- Fixed Typescript type for work() async handler
Full Changelog: 9.0.0...9.0.1
9.0.0
What's Changed
- MAJOR: Limit 1 active singleton queue job by @adamhamlin in #368 (requires pg 11)
- MAJOR: Removed the `job.done()` callback to standardize on async functions for `work()`
- MINOR: All jobs returned in `work()` using the `batchSize` option are now auto-completed once the handler resolves
- MAJOR: Node 16 is now the minimum required version
- MAJOR: PostgreSQL 11 is now the minimum required version
Full Changelog: 8.4.2...9.0.0
8.4.2
What's Changed
- Prevent node from dying when DB unavailable by wrapping `getMaintenanceTime` by @shayneczyzewski in #374
- Meta monitor error handling by @timgit in #375
New Contributors
- @shayneczyzewski made their first contribution in #374
Full Changelog: 8.4.1...8.4.2
8.4.1
8.4.0
What's Changed
- Adds "archiveFailedAfterSeconds" option by @klesgidis in #364
- Failed archive config by @timgit in #366
Full Changelog: 8.3.1...8.4.0