The storage engine for Hypercore. Built on RocksDB.
npm install hypercore-storage
The following API is what Hypercore 11 binds to in order to do I/O.
const Storage = require('hypercore-storage')
Make a new storage engine.
Create a new core, returns a storage instance for that core.
Resume a previously made core. If it doesn't exist it returns `null`.
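A minimal sketch (inside an async context) of opening the engine and getting a per-core storage instance. The constructor argument, the `create`/`resume` method names and their options are assumptions based on the descriptions above, and `hypercore-crypto` is only used here as a convenient key generator:

``` js
const Storage = require('hypercore-storage')
const crypto = require('hypercore-crypto') // assumed helper, used only to derive keys

const store = new Storage('./cores') // storage path argument assumed

const keyPair = crypto.keyPair()
const discoveryKey = crypto.discoveryKey(keyPair.publicKey)

// assumed method names for the "create" and "resume" operations described above
const core = await store.create({ key: keyPair.publicKey, discoveryKey })
const reopened = await store.resume(discoveryKey) // null if no such core exists
```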
Primitive for making atomic batches across ops. See `core.atomize` below for how to use it.
When you want to flush your changes to the underlying storage, use `await atom.flush()`.
Internally, to "listen" for when that happens, you can add a sync hook with `atom.onflush(fn)`.
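A small sketch of the atom lifecycle; `store.createAtom()`, `atom.onflush(fn)` and `atom.flush()` come from the text above, everything else is illustrative:

``` js
const atom = store.createAtom()

atom.onflush(() => {
  // sync hook: runs once the atom's buffered changes reach the underlying storage
})

// ...queue work against the atom via core.atomize(atom), see below...

await atom.flush() // nothing is persisted until this resolves
```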
Check if a core exists.
List all cores. Stream data looks like `{ discoveryKey, core }`, where `core` contains the core header.
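An illustrative sketch of checking for and listing cores (reusing `discoveryKey` from the sketch above). The `store.has()` and `store.list()` names are assumptions inferred from the descriptions; only the `{ discoveryKey, core }` stream shape comes from the text:

``` js
// assumed method names for the "exists" and "list" operations above
if (await store.has(discoveryKey)) {
  for await (const entry of store.list()) {
    console.log('stored core:', entry.discoveryKey.toString('hex'), entry.core)
  }
}
```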
Close the storage instance.
Make a read batch on a core storage.
NOTE: a read batch DOES NOT flush until you call `rx.tryFlush()`.
Returns the auth data for a core.
Returns the head of the merkle tree.
Returns an array of all named sessions.
Returns the core this has a dependency on.
Returns the various storage/replication hints.
Returns a stored block.
Returns a stored tree node.
Returns a bitfield page.
Returns a stored user-provided buffer.
Flushes the read batch; none of the above promises will resolve until you call this.
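A sketch of the read-batch flow: request what you need, call `rx.tryFlush()`, then await the results. Only `rx.tryFlush()` is confirmed by the text; `core.read()` and the getter names are assumptions based on the descriptions above:

``` js
const rx = core.read() // assumed name for opening a read batch

// request everything up front; these promises stay pending...
const auth = rx.getAuth()
const head = rx.getHead()
const block = rx.getBlock(42)

rx.tryFlush() // ...until the batch is flushed

console.log(await auth, await head, await block)
```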
Make a write batch on a core storage.
NOTE: all the APIs below are sync, as they just buffer mutations until you flush them.
Set the auth data for a core.
Set the head of the merkle tree.
Set an array of all named sessions.
Set the core this has a dependency on.
Set the various storage/replication hints.
Put a block at a specific index.
Delete a block at a specific index.
Delete blocks between two indexes.
Put a tree node (at its described index).
Delete a tree node at a specific index.
Delete tree nodes between two tree indexes.
Put a bitfield page at its described index.
Delete a bitfield page.
Delete bitfield pages between two indexes.
Put a user-provided buffer at a user-provided key.
Delete a user-provided key.
Flushes the write batch.
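A sketch of a write batch; the `core.write()`, `put*`/`delete*` and `flush()` names are assumptions matching the operations listed above:

``` js
const tx = core.write() // assumed name for opening a write batch

// all of these are sync and only buffer mutations
tx.putBlock(0, Buffer.from('hello'))
tx.putBlock(1, Buffer.from('world'))
tx.deleteBlockRange(2, 10) // range bounds semantics assumed
tx.putUserData('name', Buffer.from('example'))

await tx.flush() // persists the buffered mutations
```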
Create a stream of all blocks.
Create a stream of all tree nodes.
Create a stream of all bitfield pages.
Create a stream of all user data.
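An illustrative pass over the per-core streams; the `create*Stream()` names are assumptions derived from the descriptions above:

``` js
// assumed stream creator names; each yields the stored entries for this core
for await (const node of core.createTreeNodeStream()) {
  console.log('tree node:', node)
}

for await (const data of core.createUserDataStream()) {
  console.log('user data:', data)
}
```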
Close the core storage engine.
Same as `store.createAtom()` but here again for convenience.
Atomize a core. Allows you to build up cross-core atomic batches and operations.
An atomized core will not flush its changes until you call `atom.flush()`, but you can still read your writes.
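A sketch of a cross-core atomic batch with read-your-writes, given two core storage instances `coreA` and `coreB`. `core.atomize(atom)`, `atom.flush()` and `rx.tryFlush()` come from the text; the batch method names are assumptions:

``` js
const atom = store.createAtom()

// both cores buffer into the same atom
const a = coreA.atomize(atom)
const b = coreB.atomize(atom)

const tx = a.write() // assumed batch API, see above
tx.putUserData('example', Buffer.from('hello'))
await tx.flush() // assumed to buffer into the atom rather than hit disk

const rx = a.read()
const value = rx.getUserData('example')
rx.tryFlush()
console.log(await value) // read-your-writes: visible before the atom flushes

// writes buffered on `b` would land in the same flush
await atom.flush() // now both cores' changes land atomically
```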
Create a named session on top of a core. A named session points back to the previous storage, but is otherwise independent and stored on disk, like a branch in git if you will.
Array containing the full list of dependencies for this core (i.e. the tree of named sessions).
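A hypothetical sketch of named sessions; the `createSession()` call and the `dependencies` accessor are assumed names for the operations described above:

``` js
// assumed API: branch off the current core under a name, like a git branch
const branch = await core.createSession('experiment')

// independent writes go to the session's own storage,
// while reads fall back through its dependency chain
console.log(branch.dependencies) // assumed accessor for the dependency list
```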
Apache-2.0