diff --git a/.changelog/config.toml b/.changelog/config.toml deleted file mode 100644 index de0fee50c2..0000000000 --- a/.changelog/config.toml +++ /dev/null @@ -1 +0,0 @@ -project_url = 'https://github.com/cometbft/cometbft' diff --git a/.changelog/epilogue.md b/.changelog/epilogue.md deleted file mode 100644 index 1e68f6b728..0000000000 --- a/.changelog/epilogue.md +++ /dev/null @@ -1,14 +0,0 @@ ---- - -CometBFT is a fork of [Tendermint -Core](https://github.com/tendermint/tendermint) as of late December 2022. - -## Bug bounty - -Friendly reminder, we have a [bug bounty program](https://hackerone.com/cosmos). - -## Previous changes - -For changes released before the creation of CometBFT, please refer to the -Tendermint Core -[CHANGELOG.md](https://github.com/tendermint/tendermint/blob/a9feb1c023e172b542c972605311af83b777855b/CHANGELOG.md). diff --git a/.changelog/unreleased/.gitkeep b/.changelog/unreleased/.gitkeep deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/.changelog/unreleased/improvements/1210-close-evidence-db.md b/.changelog/unreleased/improvements/1210-close-evidence-db.md deleted file mode 100644 index e32bc87dbe..0000000000 --- a/.changelog/unreleased/improvements/1210-close-evidence-db.md +++ /dev/null @@ -1 +0,0 @@ -- `[node]` Close evidence.db OnStop ([cometbft/cometbft\#1210](https://github.com/cometbft/cometbft/pull/1210): @chillyvee) diff --git a/.changelog/unreleased/improvements/857-make-handshake-cancelable.md b/.changelog/unreleased/improvements/857-make-handshake-cancelable.md deleted file mode 100644 index 16b447f6d2..0000000000 --- a/.changelog/unreleased/improvements/857-make-handshake-cancelable.md +++ /dev/null @@ -1 +0,0 @@ -- `[node]` Make handshake cancelable ([cometbft/cometbft\#857](https://github.com/cometbft/cometbft/pull/857)) diff --git a/.changelog/v0.34.27/breaking-changes/152-rename-binary-docker.md b/.changelog/v0.34.27/breaking-changes/152-rename-binary-docker.md deleted file mode 100644 index 3870f96f92..0000000000 --- a/.changelog/v0.34.27/breaking-changes/152-rename-binary-docker.md +++ /dev/null @@ -1,2 +0,0 @@ -- Rename binary to `cometbft` and Docker image to `cometbft/cometbft` - ([\#152](https://github.com/cometbft/cometbft/pull/152)) diff --git a/.changelog/v0.34.27/breaking-changes/211-deprecate-tmhome.md b/.changelog/v0.34.27/breaking-changes/211-deprecate-tmhome.md deleted file mode 100644 index d2bded0f27..0000000000 --- a/.changelog/v0.34.27/breaking-changes/211-deprecate-tmhome.md +++ /dev/null @@ -1,3 +0,0 @@ -- The `TMHOME` environment variable was renamed to `CMTHOME`, and all - environment variables starting with `TM_` are instead prefixed with `CMT_` - ([\#211](https://github.com/cometbft/cometbft/issues/211)) diff --git a/.changelog/v0.34.27/breaking-changes/360-update-to-go-119.md b/.changelog/v0.34.27/breaking-changes/360-update-to-go-119.md deleted file mode 100644 index 97fafda93b..0000000000 --- a/.changelog/v0.34.27/breaking-changes/360-update-to-go-119.md +++ /dev/null @@ -1,2 +0,0 @@ -- Use Go 1.19 to build CometBFT, since Go 1.18 has reached end-of-life. 
- ([\#360](https://github.com/cometbft/cometbft/issues/360)) diff --git a/.changelog/v0.34.27/bug-fixes/383-txindexer-fix-slash-parsing.md b/.changelog/v0.34.27/bug-fixes/383-txindexer-fix-slash-parsing.md deleted file mode 100644 index c08824da9d..0000000000 --- a/.changelog/v0.34.27/bug-fixes/383-txindexer-fix-slash-parsing.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[state/kvindexer]` Resolved crashes when event values contained slashes, - introduced after adding event sequences. - (\#[383](https://github.com/cometbft/cometbft/pull/383): @jmalicevic) diff --git a/.changelog/v0.34.27/bug-fixes/386-quick-fix-needproofblock.md b/.changelog/v0.34.27/bug-fixes/386-quick-fix-needproofblock.md deleted file mode 100644 index d3d2f5b738..0000000000 --- a/.changelog/v0.34.27/bug-fixes/386-quick-fix-needproofblock.md +++ /dev/null @@ -1,6 +0,0 @@ -- `[consensus]` Short-term fix for the case when `needProofBlock` cannot find - previous block meta by defaulting to the creation of a new proof block. - ([\#386](https://github.com/cometbft/cometbft/pull/386): @adizere) - - Special thanks to the [Vega.xyz](https://vega.xyz/) team, and in particular - to Zohar (@ze97286), for reporting the problem and working with us to get to - a fix. diff --git a/.changelog/v0.34.27/bug-fixes/4-busy-loop-send-block-part.md b/.changelog/v0.34.27/bug-fixes/4-busy-loop-send-block-part.md deleted file mode 100644 index 414ec44cb1..0000000000 --- a/.changelog/v0.34.27/bug-fixes/4-busy-loop-send-block-part.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[consensus]` Fixed a busy loop that happened when sending of a block part - failed by sleeping in case of error. - ([\#4](https://github.com/informalsystems/tendermint/pull/4)) diff --git a/.changelog/v0.34.27/bug-fixes/9936-p2p-fix-envelope-sending.md b/.changelog/v0.34.27/bug-fixes/9936-p2p-fix-envelope-sending.md deleted file mode 100644 index fd38b79b9f..0000000000 --- a/.changelog/v0.34.27/bug-fixes/9936-p2p-fix-envelope-sending.md +++ /dev/null @@ -1,5 +0,0 @@ -- `[p2p]` Correctly use non-blocking `TrySendEnvelope` method when attempting to - send messages, as opposed to the blocking `SendEnvelope` method. 
It is unclear - whether this has a meaningful impact on P2P performance, but this patch does - correct the underlying behaviour to what it should be - ([tendermint/tendermint\#9936](https://github.com/tendermint/tendermint/pull/9936)) diff --git a/.changelog/v0.34.27/dependencies/160-tmdb-to-cometbftdb.md b/.changelog/v0.34.27/dependencies/160-tmdb-to-cometbftdb.md deleted file mode 100644 index e4c1351312..0000000000 --- a/.changelog/v0.34.27/dependencies/160-tmdb-to-cometbftdb.md +++ /dev/null @@ -1,3 +0,0 @@ -- Replace [tm-db](https://github.com/tendermint/tm-db) with - [cometbft-db](https://github.com/cometbft/cometbft-db) - ([\#160](https://github.com/cometbft/cometbft/pull/160)) \ No newline at end of file diff --git a/.changelog/v0.34.27/dependencies/165-bump-tmloadtest.md b/.changelog/v0.34.27/dependencies/165-bump-tmloadtest.md deleted file mode 100644 index 175163ac00..0000000000 --- a/.changelog/v0.34.27/dependencies/165-bump-tmloadtest.md +++ /dev/null @@ -1,2 +0,0 @@ -- Bump tm-load-test to v1.3.0 to remove implicit dependency on Tendermint Core - ([\#165](https://github.com/cometbft/cometbft/pull/165)) \ No newline at end of file diff --git a/.changelog/v0.34.27/dependencies/9787-btcec-dep-update.md b/.changelog/v0.34.27/dependencies/9787-btcec-dep-update.md deleted file mode 100644 index d155748e0c..0000000000 --- a/.changelog/v0.34.27/dependencies/9787-btcec-dep-update.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[crypto]` Update to use btcec v2 and the latest btcutil - ([tendermint/tendermint\#9787](https://github.com/tendermint/tendermint/pull/9787): - @wcsiu) diff --git a/.changelog/v0.34.27/features/9759-kvindexer-match-event.md b/.changelog/v0.34.27/features/9759-kvindexer-match-event.md deleted file mode 100644 index 281f6cd1fb..0000000000 --- a/.changelog/v0.34.27/features/9759-kvindexer-match-event.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[rpc]` Add `match_event` query parameter to indicate to the RPC that it - should match events _within_ attributes, not only within a height - ([tendermint/tendermint\#9759](https://github.com/tendermint/tendermint/pull/9759)) diff --git a/.changelog/v0.34.27/improvements/136-remove-tm-signer-harness.md b/.changelog/v0.34.27/improvements/136-remove-tm-signer-harness.md deleted file mode 100644 index 6eb6c2158c..0000000000 --- a/.changelog/v0.34.27/improvements/136-remove-tm-signer-harness.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[tools/tm-signer-harness]` Remove the folder as it is unused - ([\#136](https://github.com/cometbft/cometbft/issues/136)) \ No newline at end of file diff --git a/.changelog/v0.34.27/improvements/204-version-commit-hash.md b/.changelog/v0.34.27/improvements/204-version-commit-hash.md deleted file mode 100644 index 675a1a2924..0000000000 --- a/.changelog/v0.34.27/improvements/204-version-commit-hash.md +++ /dev/null @@ -1,2 +0,0 @@ -- Append the commit hash to the version of CometBFT being built - ([\#204](https://github.com/cometbft/cometbft/pull/204)) \ No newline at end of file diff --git a/.changelog/v0.34.27/improvements/314-prio-mempool-badtxlog.md b/.changelog/v0.34.27/improvements/314-prio-mempool-badtxlog.md deleted file mode 100644 index ba4ac031e2..0000000000 --- a/.changelog/v0.34.27/improvements/314-prio-mempool-badtxlog.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[mempool/v1]` Suppress "rejected bad transaction" in priority mempool logs by - reducing log level from info to debug - ([\#314](https://github.com/cometbft/cometbft/pull/314): @JayT106) diff --git 
a/.changelog/v0.34.27/improvements/56-rpc-cache-rpc-responses.md b/.changelog/v0.34.27/improvements/56-rpc-cache-rpc-responses.md deleted file mode 100644 index 344b3df93b..0000000000 --- a/.changelog/v0.34.27/improvements/56-rpc-cache-rpc-responses.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[e2e]` Add functionality for uncoordinated (minor) upgrades - ([\#56](https://github.com/tendermint/tendermint/pull/56)) \ No newline at end of file diff --git a/.changelog/v0.34.27/improvements/9733-consensus-metrics.md b/.changelog/v0.34.27/improvements/9733-consensus-metrics.md deleted file mode 100644 index 77d8c743ec..0000000000 --- a/.changelog/v0.34.27/improvements/9733-consensus-metrics.md +++ /dev/null @@ -1,4 +0,0 @@ -- `[consensus]` Add `consensus_block_gossip_parts_received` and - `consensus_step_duration_seconds` metrics in order to aid in investigating the - impact of database compaction on consensus performance - ([tendermint/tendermint\#9733](https://github.com/tendermint/tendermint/pull/9733)) diff --git a/.changelog/v0.34.27/improvements/9759-kvindexer-match-event.md b/.changelog/v0.34.27/improvements/9759-kvindexer-match-event.md deleted file mode 100644 index 8b5757cb8e..0000000000 --- a/.changelog/v0.34.27/improvements/9759-kvindexer-match-event.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[state/kvindexer]` Add `match.event` keyword to support condition evaluation - based on the event the attributes belong to - ([tendermint/tendermint\#9759](https://github.com/tendermint/tendermint/pull/9759)) diff --git a/.changelog/v0.34.27/improvements/9764-p2p-fix-logspam.md b/.changelog/v0.34.27/improvements/9764-p2p-fix-logspam.md deleted file mode 100644 index 78fa6844fe..0000000000 --- a/.changelog/v0.34.27/improvements/9764-p2p-fix-logspam.md +++ /dev/null @@ -1,4 +0,0 @@ -- `[p2p]` Reduce log spam through reducing log level of "Dialing peer" and - "Added peer" messages from info to debug - ([tendermint/tendermint\#9764](https://github.com/tendermint/tendermint/pull/9764): - @faddat) diff --git a/.changelog/v0.34.27/improvements/9776-consensus-vote-bandwidth.md b/.changelog/v0.34.27/improvements/9776-consensus-vote-bandwidth.md deleted file mode 100644 index 2bfdd05acf..0000000000 --- a/.changelog/v0.34.27/improvements/9776-consensus-vote-bandwidth.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[consensus]` Reduce bandwidth consumption of consensus votes by roughly 50% - through fixing a small logic bug - ([tendermint/tendermint\#9776](https://github.com/tendermint/tendermint/pull/9776)) diff --git a/.changelog/v0.34.27/summary.md b/.changelog/v0.34.27/summary.md deleted file mode 100644 index e4a13db501..0000000000 --- a/.changelog/v0.34.27/summary.md +++ /dev/null @@ -1,17 +0,0 @@ -*Feb 27, 2023* - -This is the first official release of CometBFT - a fork of [Tendermint -Core](https://github.com/tendermint/tendermint). This particular release is -intended to be compatible with the Tendermint Core v0.34 release series. - -For details as to how to upgrade to CometBFT from Tendermint Core, please see -our [upgrading guidelines](./UPGRADING.md). - -If you have any questions, comments, concerns or feedback on this release, we -would love to hear from you! Please contact us via [GitHub -Discussions](https://github.com/cometbft/cometbft/discussions), -[Discord](https://discord.gg/cosmosnetwork) (in the `#cometbft` channel) or -[Telegram](https://t.me/CometBFT). - -Special thanks to @wcsiu, @ze97286, @faddat and @JayT106 for their contributions -to this release! 
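
The `match_event` RPC query parameter and the kvindexer `match.event` keyword in the entries above change how event attributes are matched during searches. As a rough, hedged illustration only (not taken from the CometBFT sources), the sketch below builds a `tx_search` request URL that sets `match_event=true`, assuming a local RPC endpoint at `localhost:26657`; the parameter name is taken from the changelog entry and the attribute names are invented for the example.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical example: build a tx_search request that opts into
	// matching events *within* attributes via match_event, as described
	// in the changelog entries above. Endpoint and behaviour are assumptions.
	base, err := url.Parse("http://localhost:26657/tx_search")
	if err != nil {
		panic(err)
	}

	q := url.Values{}
	// The two conditions are meant to match attributes of the same event,
	// not merely attributes that happen to share a block height.
	q.Set("query", `"transfer.sender='addr1' AND transfer.amount='100'"`)
	q.Set("match_event", "true")
	base.RawQuery = q.Encode()

	fmt.Println(base.String())
}
```
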
diff --git a/.changelog/v0.34.28/breaking-changes/558-tm10011.md b/.changelog/v0.34.28/breaking-changes/558-tm10011.md deleted file mode 100644 index d1b9fca4ab..0000000000 --- a/.changelog/v0.34.28/breaking-changes/558-tm10011.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[crypto/merkle]` Do not allow verification of Merkle Proofs against empty trees (`nil` root). `Proof.ComputeRootHash` now panics when it encounters an error, but `Proof.Verify` does not panic - ([\#558](https://github.com/cometbft/cometbft/issues/558)) diff --git a/.changelog/v0.34.28/bug-fixes/496-error-on-applyblock-should-panic.md b/.changelog/v0.34.28/bug-fixes/496-error-on-applyblock-should-panic.md deleted file mode 100644 index 55e9c874f8..0000000000 --- a/.changelog/v0.34.28/bug-fixes/496-error-on-applyblock-should-panic.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[consensus]` Unexpected error conditions in `ApplyBlock` are non-recoverable, so ignoring the error and carrying on is a bug. We replaced a `return` that disregarded the error by a `panic`. - ([\#496](https://github.com/cometbft/cometbft/pull/496)) \ No newline at end of file diff --git a/.changelog/v0.34.28/bug-fixes/524-rename-peerstate-tojson.md b/.changelog/v0.34.28/bug-fixes/524-rename-peerstate-tojson.md deleted file mode 100644 index b9a43b3ce4..0000000000 --- a/.changelog/v0.34.28/bug-fixes/524-rename-peerstate-tojson.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[consensus]` Rename `(*PeerState).ToJSON` to `MarshalJSON` to fix a logging data race - ([\#524](https://github.com/cometbft/cometbft/pull/524)) diff --git a/.changelog/v0.34.28/bug-fixes/575-fix-light-client-panic.md b/.changelog/v0.34.28/bug-fixes/575-fix-light-client-panic.md deleted file mode 100644 index 0ec55b923f..0000000000 --- a/.changelog/v0.34.28/bug-fixes/575-fix-light-client-panic.md +++ /dev/null @@ -1,6 +0,0 @@ -- `[light]` Fixed an edge case where a light client would panic when attempting - to query a node that (1) has started from a non-zero height and (2) does - not yet have any data. The light client will now, correctly, not panic - _and_ keep the node in its list of providers in the same way it would if - it queried a node starting from height zero that does not yet have data - ([\#575](https://github.com/cometbft/cometbft/issues/575)) \ No newline at end of file diff --git a/.changelog/v0.34.28/improvements/475-upgrade-go-schnorrkel.md b/.changelog/v0.34.28/improvements/475-upgrade-go-schnorrkel.md deleted file mode 100644 index bdaf96c14c..0000000000 --- a/.changelog/v0.34.28/improvements/475-upgrade-go-schnorrkel.md +++ /dev/null @@ -1 +0,0 @@ -- `[crypto/sr25519]` Upgrade to go-schnorrkel@v1.0.0 ([\#475](https://github.com/cometbft/cometbft/issues/475)) diff --git a/.changelog/v0.34.28/improvements/638-json-rpc-error-message.md b/.changelog/v0.34.28/improvements/638-json-rpc-error-message.md deleted file mode 100644 index 6922091fd2..0000000000 --- a/.changelog/v0.34.28/improvements/638-json-rpc-error-message.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[jsonrpc/client]` Improve the error message for client errors stemming from - bad HTTP responses. 
- ([cometbft/cometbft\#638](https://github.com/cometbft/cometbft/pull/638)) diff --git a/.changelog/v0.34.28/summary.md b/.changelog/v0.34.28/summary.md deleted file mode 100644 index ba3efa9d79..0000000000 --- a/.changelog/v0.34.28/summary.md +++ /dev/null @@ -1,6 +0,0 @@ -*April 26, 2023* - -This release fixes several bugs, and has had to introduce one small Go -API-breaking change in the `crypto/merkle` package in order to address what -could be a security issue for some users who directly and explicitly make use of -that code. diff --git a/.changelog/v0.34.29/bug-fixes/771-kvindexer-parsing-big-ints.md b/.changelog/v0.34.29/bug-fixes/771-kvindexer-parsing-big-ints.md deleted file mode 100644 index 4a0000db6d..0000000000 --- a/.changelog/v0.34.29/bug-fixes/771-kvindexer-parsing-big-ints.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[state/kvindex]` Querying event attributes that are bigger than int64 is now - enabled. ([\#771](https://github.com/cometbft/cometbft/pull/771)) diff --git a/.changelog/v0.34.29/bug-fixes/771-pubsub-parsing-big-ints.md b/.changelog/v0.34.29/bug-fixes/771-pubsub-parsing-big-ints.md deleted file mode 100644 index fc5f25a90f..0000000000 --- a/.changelog/v0.34.29/bug-fixes/771-pubsub-parsing-big-ints.md +++ /dev/null @@ -1,4 +0,0 @@ -- `[pubsub]` Pubsub queries are now able to parse big integers (larger than - int64). Very big floats are also properly parsed into very big integers - instead of being truncated to int64. - ([\#771](https://github.com/cometbft/cometbft/pull/771)) diff --git a/.changelog/v0.34.29/improvements/654-rpc-rm-response-data-logs.md b/.changelog/v0.34.29/improvements/654-rpc-rm-response-data-logs.md deleted file mode 100644 index 3fddfee8e7..0000000000 --- a/.changelog/v0.34.29/improvements/654-rpc-rm-response-data-logs.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[rpc]` Remove response data from response failure logs in order - to prevent large quantities of log data from being produced - ([\#654](https://github.com/cometbft/cometbft/issues/654)) \ No newline at end of file diff --git a/.changelog/v0.34.29/security-fixes/788-rpc-client-pw.md b/.changelog/v0.34.29/security-fixes/788-rpc-client-pw.md deleted file mode 100644 index 430b7b5ac4..0000000000 --- a/.changelog/v0.34.29/security-fixes/788-rpc-client-pw.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[rpc/jsonrpc/client]` **Low severity** - Prevent RPC - client credentials from being inadvertently dumped to logs - ([\#788](https://github.com/cometbft/cometbft/pull/788)) diff --git a/.changelog/v0.34.29/security-fixes/794-cli-debug-kill-unsafe-cast.md b/.changelog/v0.34.29/security-fixes/794-cli-debug-kill-unsafe-cast.md deleted file mode 100644 index 782eccd9d5..0000000000 --- a/.changelog/v0.34.29/security-fixes/794-cli-debug-kill-unsafe-cast.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[cmd/cometbft/commands/debug/kill]` **Low severity** - Fix unsafe int cast in - `debug kill` command ([\#794](https://github.com/cometbft/cometbft/pull/794)) diff --git a/.changelog/v0.34.29/security-fixes/865-fix-peerstate-marshaljson.md b/.changelog/v0.34.29/security-fixes/865-fix-peerstate-marshaljson.md deleted file mode 100644 index fdd9172c20..0000000000 --- a/.changelog/v0.34.29/security-fixes/865-fix-peerstate-marshaljson.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[consensus]` **Low severity** - Avoid recursive call after rename to - `(*PeerState).MarshalJSON` - ([\#863](https://github.com/cometbft/cometbft/pull/863)) diff --git a/.changelog/v0.34.29/security-fixes/890-mempool-fix-cache.md 
b/.changelog/v0.34.29/security-fixes/890-mempool-fix-cache.md deleted file mode 100644 index bad30efc7a..0000000000 --- a/.changelog/v0.34.29/security-fixes/890-mempool-fix-cache.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[mempool/clist_mempool]` **Low severity** - Prevent a transaction from - appearing twice in the mempool - ([\#890](https://github.com/cometbft/cometbft/pull/890): @otrack) diff --git a/.changelog/v0.34.29/summary.md b/.changelog/v0.34.29/summary.md deleted file mode 100644 index 7ecb273940..0000000000 --- a/.changelog/v0.34.29/summary.md +++ /dev/null @@ -1,4 +0,0 @@ -*June 14, 2023* - -Provides several minor bug fixes, as well as fixes for several low-severity -security issues. diff --git a/.changelog/v0.34.30/build/1351-bump-go-120.md b/.changelog/v0.34.30/build/1351-bump-go-120.md deleted file mode 100644 index 12091e3b61..0000000000 --- a/.changelog/v0.34.30/build/1351-bump-go-120.md +++ /dev/null @@ -1,2 +0,0 @@ -- Bump Go version used to v1.20 since v1.19 has reached EOL - ([\#1351](https://github.com/cometbft/cometbft/pull/1351)) \ No newline at end of file diff --git a/.changelog/v0.34.30/features/1512-metric-mempool-size-bytes.md b/.changelog/v0.34.30/features/1512-metric-mempool-size-bytes.md deleted file mode 100644 index b935dc4084..0000000000 --- a/.changelog/v0.34.30/features/1512-metric-mempool-size-bytes.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[metrics]` Add metric for mempool size in bytes `SizeBytes`. - ([\#1512](https://github.com/cometbft/cometbft/pull/1512)) \ No newline at end of file diff --git a/.changelog/v0.34.30/improvements/1210-close-evidence-db.md b/.changelog/v0.34.30/improvements/1210-close-evidence-db.md deleted file mode 100644 index e32bc87dbe..0000000000 --- a/.changelog/v0.34.30/improvements/1210-close-evidence-db.md +++ /dev/null @@ -1 +0,0 @@ -- `[node]` Close evidence.db OnStop ([cometbft/cometbft\#1210](https://github.com/cometbft/cometbft/pull/1210): @chillyvee) diff --git a/.changelog/v0.34.30/improvements/1558-experimental-gossip-limiting.md b/.changelog/v0.34.30/improvements/1558-experimental-gossip-limiting.md deleted file mode 100644 index c6606aa940..0000000000 --- a/.changelog/v0.34.30/improvements/1558-experimental-gossip-limiting.md +++ /dev/null @@ -1,9 +0,0 @@ -- `[mempool]` Add experimental feature to limit the number of persistent peers and non-persistent - peers to which the node gossip transactions (only for "v0" mempool). - ([\#1558](https://github.com/cometbft/cometbft/pull/1558), - ([\#1584](https://github.com/cometbft/cometbft/pull/1584)) -- `[config]` Add mempool parameters `experimental_max_gossip_connections_to_persistent_peers` and - `experimental_max_gossip_connections_to_non_persistent_peers` for limiting the number of peers to - which the node gossip transactions. 
- ([\#1558](https://github.com/cometbft/cometbft/pull/1558)) - ([\#1584](https://github.com/cometbft/cometbft/pull/1584)) diff --git a/.changelog/v0.34.30/improvements/857-make-handshake-cancelable.md b/.changelog/v0.34.30/improvements/857-make-handshake-cancelable.md deleted file mode 100644 index 16b447f6d2..0000000000 --- a/.changelog/v0.34.30/improvements/857-make-handshake-cancelable.md +++ /dev/null @@ -1 +0,0 @@ -- `[node]` Make handshake cancelable ([cometbft/cometbft\#857](https://github.com/cometbft/cometbft/pull/857)) diff --git a/.changelog/v0.34.30/summary.md b/.changelog/v0.34.30/summary.md deleted file mode 100644 index f1e5c7f755..0000000000 --- a/.changelog/v0.34.30/summary.md +++ /dev/null @@ -1,5 +0,0 @@ -*November 17, 2023* - -This release contains, among other things, an opt-in, experimental feature to -help reduce the bandwidth consumption associated with the mempool's transaction -gossip. diff --git a/.changelog/v0.34.31/bug-fixes/1654-semaphore-wait.md b/.changelog/v0.34.31/bug-fixes/1654-semaphore-wait.md deleted file mode 100644 index 9d0fb80adc..0000000000 --- a/.changelog/v0.34.31/bug-fixes/1654-semaphore-wait.md +++ /dev/null @@ -1,3 +0,0 @@ -- `[mempool]` Avoid infinite wait in transaction sending routine when - using experimental parameters to limiting transaction gossiping to peers - ([\#1654](https://github.com/cometbft/cometbft/pull/1654)) \ No newline at end of file diff --git a/.changelog/v0.34.31/summary.md b/.changelog/v0.34.31/summary.md deleted file mode 100644 index dbf3680044..0000000000 --- a/.changelog/v0.34.31/summary.md +++ /dev/null @@ -1,3 +0,0 @@ -*November 27, 2023* - -Fixes a small bug in the mempool for an experimental feature. diff --git a/.changelog/v0.34.32/bug-fixes/1749-light-client-attack-verify-all-sigs.md b/.changelog/v0.34.32/bug-fixes/1749-light-client-attack-verify-all-sigs.md deleted file mode 100644 index 1115c4d195..0000000000 --- a/.changelog/v0.34.32/bug-fixes/1749-light-client-attack-verify-all-sigs.md +++ /dev/null @@ -1,4 +0,0 @@ -- `[evidence]` When `VerifyCommitLight` & `VerifyCommitLightTrusting` are called as part - of evidence verification, all signatures present in the evidence must be verified - ([\#1749](https://github.com/cometbft/cometbft/pull/1749)) - diff --git a/.changelog/v0.34.32/improvements/1715-validate-validator-address.md b/.changelog/v0.34.32/improvements/1715-validate-validator-address.md deleted file mode 100644 index ec7f2c7da6..0000000000 --- a/.changelog/v0.34.32/improvements/1715-validate-validator-address.md +++ /dev/null @@ -1 +0,0 @@ -- `[types]` Validate `Validator#Address` in `ValidateBasic` ([\#1715](https://github.com/cometbft/cometbft/pull/1715)) diff --git a/.changelog/v0.34.32/improvements/1730-increase-abci-socket-message-size-limit.md b/.changelog/v0.34.32/improvements/1730-increase-abci-socket-message-size-limit.md deleted file mode 100644 index 5246eb57f0..0000000000 --- a/.changelog/v0.34.32/improvements/1730-increase-abci-socket-message-size-limit.md +++ /dev/null @@ -1 +0,0 @@ -- `[abci]` Increase ABCI socket message size limit to 2GB ([\#1730](https://github.com/cometbft/cometbft/pull/1730): @troykessler) diff --git a/.changelog/v0.34.32/improvements/2094-e2e-load-max-txs.md b/.changelog/v0.34.32/improvements/2094-e2e-load-max-txs.md deleted file mode 100644 index 31ca79cfe3..0000000000 --- a/.changelog/v0.34.32/improvements/2094-e2e-load-max-txs.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[e2e]` Add manifest option `load_max_txs` to limit the number of transactions generated by the 
- `load` command. ([\#2094](https://github.com/cometbft/cometbft/pull/2094)) diff --git a/.changelog/v0.34.32/improvements/2328-e2e-log-sent-txs.md b/.changelog/v0.34.32/improvements/2328-e2e-log-sent-txs.md deleted file mode 100644 index e1b69899f4..0000000000 --- a/.changelog/v0.34.32/improvements/2328-e2e-log-sent-txs.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[e2e]` Log the number of transactions that were sent successfully or failed. - ([\#2328](https://github.com/cometbft/cometbft/pull/2328)) \ No newline at end of file diff --git a/.changelog/v0.34.32/summary.md b/.changelog/v0.34.32/summary.md deleted file mode 100644 index 1017678765..0000000000 --- a/.changelog/v0.34.32/summary.md +++ /dev/null @@ -1,3 +0,0 @@ -*March 12, 2024* - -This release fixes a security bug in the light client. diff --git a/.changelog/v0.34.33/bug-fixes/2774-bitarray-unmarshal-json.md b/.changelog/v0.34.33/bug-fixes/2774-bitarray-unmarshal-json.md deleted file mode 100644 index 1c51af49d2..0000000000 --- a/.changelog/v0.34.33/bug-fixes/2774-bitarray-unmarshal-json.md +++ /dev/null @@ -1,2 +0,0 @@ -- [`bits`] prevent `BitArray.UnmarshalJSON` from crashing on 0 bits - ([\#2774](https://github.com/cometbft/cometbft/pull/2774)) diff --git a/.changelog/v0.34.33/dependencies/2783-update-cometbft-db.md b/.changelog/v0.34.33/dependencies/2783-update-cometbft-db.md deleted file mode 100644 index 7d1c67e078..0000000000 --- a/.changelog/v0.34.33/dependencies/2783-update-cometbft-db.md +++ /dev/null @@ -1,2 +0,0 @@ -- Bump cometbft-db version to v0.9.1, which brings support for RocksDB v8. - ([\#2783](https://github.com/cometbft/cometbft/pull/2783)) diff --git a/.changelog/v0.34.33/dependencies/2784-update-go.md b/.changelog/v0.34.33/dependencies/2784-update-go.md deleted file mode 100644 index 6185be4b61..0000000000 --- a/.changelog/v0.34.33/dependencies/2784-update-go.md +++ /dev/null @@ -1,2 +0,0 @@ -- Bump Go version used to v1.21 since v1.20 has reached EOL - ([\#2784](https://github.com/cometbft/cometbft/pull/2784)) diff --git a/.changelog/v0.34.33/summary.md b/.changelog/v0.34.33/summary.md deleted file mode 100644 index 7d173c605a..0000000000 --- a/.changelog/v0.34.33/summary.md +++ /dev/null @@ -1,3 +0,0 @@ -*April 26, 2024* - -This release bumps Go version to 1.21. 
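
The `bits` fix above concerns `BitArray.UnmarshalJSON` on zero-bit input. The sketch below is a minimal, hedged illustration of exercising that path, assuming the package path `github.com/cometbft/cometbft/libs/bits` and that an empty JSON string is one encoding of a zero-bit array; it is not the regression test from the referenced pull request.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/cometbft/cometbft/libs/bits"
)

func main() {
	// Decode a zero-length bit array from JSON. Before the fix referenced
	// above, a zero-bit value could crash the decoder; afterwards it should
	// simply yield an empty BitArray or a normal error.
	var ba bits.BitArray
	if err := json.Unmarshal([]byte(`""`), &ba); err != nil {
		fmt.Println("unmarshal error:", err)
		return
	}
	fmt.Println("decoded bit array:", ba.String())
}
```
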
diff --git a/.changelog/v0.34.34/bug-fixes/0016-abc-light-proposer-priorities.md b/.changelog/v0.34.34/bug-fixes/0016-abc-light-proposer-priorities.md deleted file mode 100644 index 6915b51db3..0000000000 --- a/.changelog/v0.34.34/bug-fixes/0016-abc-light-proposer-priorities.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[light]` Cross-check proposer priorities in retrieved validator sets - ([\#ASA-2024-009](https://github.com/cometbft/cometbft/security/advisories/GHSA-g5xx-c4hv-9ccc)) diff --git a/.changelog/v0.34.34/features/3760-remove-tools-package.md b/.changelog/v0.34.34/features/3760-remove-tools-package.md deleted file mode 100644 index 9c4f2de409..0000000000 --- a/.changelog/v0.34.34/features/3760-remove-tools-package.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[tools]` Remove tools package - [\#3760](https://github.com/cometbft/cometbft/pull/3760) diff --git a/.changelog/v0.34.34/improvements/0016-abc-types-validator-set.md b/.changelog/v0.34.34/improvements/0016-abc-types-validator-set.md deleted file mode 100644 index b8eb2d6579..0000000000 --- a/.changelog/v0.34.34/improvements/0016-abc-types-validator-set.md +++ /dev/null @@ -1,2 +0,0 @@ -- `[types]` Check that proposer is one of the validators in `ValidateBasic` - ([\#ASA-2024-009](https://github.com/cometbft/cometbft/security/advisories/GHSA-g5xx-c4hv-9ccc)) diff --git a/.changelog/v0.34.34/summary.md b/.changelog/v0.34.34/summary.md deleted file mode 100644 index 0ed31926ac..0000000000 --- a/.changelog/v0.34.34/summary.md +++ /dev/null @@ -1,4 +0,0 @@ -*September 3, 2024* - -This release includes a security fix for the light client and is recommended -for all users. diff --git a/.changelog/v0.34.35/dependencies/4045-update-image-pkg.md b/.changelog/v0.34.35/dependencies/4045-update-image-pkg.md deleted file mode 100644 index 7f51f47581..0000000000 --- a/.changelog/v0.34.35/dependencies/4045-update-image-pkg.md +++ /dev/null @@ -1,3 +0,0 @@ -- updated pkg gonum.org/v1/gonum to latest version unaffected by CVE- - 2024-24792, CVE-2023-29407, CVE-2023-29408, and CVE-2022-41727 - ([\#4045](https://github.com/cometbft/cometbft/pull/4045)) \ No newline at end of file diff --git a/.changelog/v0.34.35/dependencies/4053-update-python-requests-dep.md b/.changelog/v0.34.35/dependencies/4053-update-python-requests-dep.md deleted file mode 100644 index 4bb10b6ac1..0000000000 --- a/.changelog/v0.34.35/dependencies/4053-update-python-requests-dep.md +++ /dev/null @@ -1,2 +0,0 @@ -- updated python module "requests" to latest version unaffected by CVE-2023-32681 - and CVE-2024-35195 ([\#4053](https://github.com/cometbft/cometbft/pull/4053)) \ No newline at end of file diff --git a/.changelog/v0.34.35/dependencies/4059-update-cometbft-db.md b/.changelog/v0.34.35/dependencies/4059-update-cometbft-db.md deleted file mode 100644 index c101948597..0000000000 --- a/.changelog/v0.34.35/dependencies/4059-update-cometbft-db.md +++ /dev/null @@ -1,2 +0,0 @@ -- updated cometbft-db to v0.9.5 - ([\#4059](https://github.com/cometbft/cometbft/pull/4059)) \ No newline at end of file diff --git a/.changelog/v0.34.35/summary.md b/.changelog/v0.34.35/summary.md deleted file mode 100644 index 8bdfbd2b31..0000000000 --- a/.changelog/v0.34.35/summary.md +++ /dev/null @@ -1,3 +0,0 @@ -*September 16, 2024* - -This release bumps Go version to 1.22 and updates dependencies. 
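
The two ASA-2024-009 entries above tighten validation of retrieved validator sets, including a check that the recorded proposer is actually a member of the set. The sketch below is a simplified, self-contained illustration of that kind of membership check using hypothetical types; it is not the CometBFT `ValidateBasic` implementation, which also handles public keys, voting power, and proposer priorities.

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
)

// Hypothetical, simplified stand-ins for validator data.
type validator struct {
	Address []byte
}

type validatorSet struct {
	Validators []validator
	Proposer   *validator
}

// validateProposer mirrors the idea described above: the proposer recorded in
// a validator set must be one of the set's validators.
func (vs *validatorSet) validateProposer() error {
	if vs.Proposer == nil {
		return errors.New("validator set has no proposer")
	}
	for _, v := range vs.Validators {
		if bytes.Equal(v.Address, vs.Proposer.Address) {
			return nil
		}
	}
	return errors.New("proposer is not in the validator set")
}

func main() {
	vs := &validatorSet{
		Validators: []validator{{Address: []byte{0x01}}, {Address: []byte{0x02}}},
		Proposer:   &validator{Address: []byte{0x03}}, // not a member
	}
	fmt.Println(vs.validateProposer())
}
```
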
diff --git a/.github/workflows/check-generated.yml b/.github/workflows/check-generated.yml index ede1fa7938..628604fddb 100644 --- a/.github/workflows/check-generated.yml +++ b/.github/workflows/check-generated.yml @@ -41,13 +41,13 @@ jobs: check-proto: runs-on: ubuntu-latest steps: - - uses: actions/setup-go@v5 - with: - go-version: "1.23.1" - - uses: actions/checkout@v4 with: fetch-depth: 1 # we need a .git directory to run git diff + - uses: actions/setup-go@v5 + with: + go-version-file: "go.mod" + - name: "Check protobuf generated code" run: | diff --git a/.github/workflows/coverage.yml b/.github/workflows/coverage.yml index d33aff00a0..e7b9cd20b0 100644 --- a/.github/workflows/coverage.yml +++ b/.github/workflows/coverage.yml @@ -17,10 +17,10 @@ jobs: env: GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}" steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: "1.23.1" - - uses: actions/checkout@v3 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | @@ -36,10 +36,10 @@ jobs: strategy: fail-fast: true steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: "1.23.1" - - uses: actions/checkout@v3 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | diff --git a/.github/workflows/e2e-manual.yml b/.github/workflows/e2e-manual.yml index 5659814e2e..577720d72c 100644 --- a/.github/workflows/e2e-manual.yml +++ b/.github/workflows/e2e-manual.yml @@ -14,11 +14,11 @@ jobs: runs-on: ubuntu-latest timeout-minutes: 60 steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: '1.23.1' + go-version-file: "go.mod" - - uses: actions/checkout@v4 - name: Build working-directory: test/e2e diff --git a/.github/workflows/e2e-nightly-34x.yml b/.github/workflows/e2e-nightly-34x.yml index 0e842f364f..ca17fa6720 100644 --- a/.github/workflows/e2e-nightly-34x.yml +++ b/.github/workflows/e2e-nightly-34x.yml @@ -21,14 +21,14 @@ jobs: runs-on: ubuntu-latest timeout-minutes: 60 steps: - - uses: actions/setup-go@v5 - with: - go-version: '1.23.1' - - uses: actions/checkout@v4 with: ref: 'v0.34.x-celestia' + - uses: actions/setup-go@v5 + with: + go-version-file: "go.mod" + - name: Build working-directory: test/e2e # Run make jobs in parallel, since we can't run steps in parallel. 
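
The workflow changes above replace pinned `go-version: "1.23.1"` entries with `go-version-file: "go.mod"`, so `actions/setup-go` takes the toolchain version from the module file, which is also why the checkout step now runs before the setup step. As a rough illustration of what that resolution amounts to, the hedged sketch below reads the `go` directive from `go.mod` with `golang.org/x/mod/modfile`; it is not part of the workflows themselves.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/mod/modfile"
)

func main() {
	// Read and parse go.mod, then report the declared Go version,
	// which is what go-version-file points setup-go at.
	data, err := os.ReadFile("go.mod")
	if err != nil {
		log.Fatal(err)
	}
	f, err := modfile.Parse("go.mod", data, nil)
	if err != nil {
		log.Fatal(err)
	}
	if f.Go != nil {
		fmt.Println("go directive:", f.Go.Version)
	}
}
```
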
diff --git a/.github/workflows/e2e.yml b/.github/workflows/e2e.yml index e329dccb63..d4dbb14458 100644 --- a/.github/workflows/e2e.yml +++ b/.github/workflows/e2e.yml @@ -12,10 +12,10 @@ jobs: runs-on: ubuntu-latest timeout-minutes: 15 steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: '1.23.1' - - uses: actions/checkout@v3 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | diff --git a/.github/workflows/fuzz-nightly.yml b/.github/workflows/fuzz-nightly.yml index 9e8fe2c3e5..7f847dc1c8 100644 --- a/.github/workflows/fuzz-nightly.yml +++ b/.github/workflows/fuzz-nightly.yml @@ -9,11 +9,10 @@ jobs: fuzz-nightly-test: runs-on: ubuntu-latest steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: '1.23.1' - - - uses: actions/checkout@v4 + go-version-file: "go.mod" - name: Install go-fuzz working-directory: test/fuzz diff --git a/.github/workflows/govulncheck.yml b/.github/workflows/govulncheck.yml index e92f5c597a..a798d1920a 100644 --- a/.github/workflows/govulncheck.yml +++ b/.github/workflows/govulncheck.yml @@ -14,10 +14,10 @@ jobs: govulncheck: runs-on: ubuntu-latest steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v3 with: - go-version: "1.23.1" - - uses: actions/checkout@v3 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | diff --git a/.github/workflows/pre-release.yml b/.github/workflows/pre-release.yml index 61743c6191..217311ec83 100644 --- a/.github/workflows/pre-release.yml +++ b/.github/workflows/pre-release.yml @@ -18,7 +18,7 @@ jobs: - uses: actions/setup-go@v5 with: - go-version: '1.23.1' + go-version-file: "go.mod" # Similar check to ./release-version.yml, but enforces this when pushing # tags. 
The ./release-version.yml check can be bypassed and is mainly diff --git a/.github/workflows/release-version.yml b/.github/workflows/release-version.yml index b8fb1ebdda..cbc9929fb1 100644 --- a/.github/workflows/release-version.yml +++ b/.github/workflows/release-version.yml @@ -15,7 +15,7 @@ jobs: - uses: actions/setup-go@v5 with: - go-version: '1.23.1' + go-version-file: "go.mod" - name: Check version run: | diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 5b89163615..36e3395e71 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -16,7 +16,7 @@ jobs: - uses: actions/setup-go@v5 with: - go-version: '1.23.1' + go-version-file: "go.mod" - name: Generate release notes run: | diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index a65b925b50..3ce0eece9f 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -23,10 +23,10 @@ jobs: runs-on: ubuntu-latest timeout-minutes: 5 steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: "1.23.1" - - uses: actions/checkout@v3 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | @@ -55,10 +55,10 @@ jobs: needs: build timeout-minutes: 5 steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: "1.23.1" - - uses: actions/checkout@v4 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | @@ -87,10 +87,10 @@ jobs: needs: build timeout-minutes: 5 steps: + - uses: actions/checkout@v4 - uses: actions/setup-go@v5 with: - go-version: "1.23.1" - - uses: actions/checkout@v4 + go-version-file: "go.mod" - uses: technote-space/get-diff-action@v6 with: PATTERNS: | diff --git a/CHANGELOG.md b/CHANGELOG.md deleted file mode 100644 index 919cb7ce89..0000000000 --- a/CHANGELOG.md +++ /dev/null @@ -1,300 +0,0 @@ -# CHANGELOG - -## v0.34.35 - -*September 16, 2024* - -This release bumps Go version to 1.22 and updates dependencies. - -### DEPENDENCIES - -- updated pkg gonum.org/v1/gonum to latest version unaffected by CVE- - 2024-24792, CVE-2023-29407, CVE-2023-29408, and CVE-2022-41727 - ([\#4045](https://github.com/cometbft/cometbft/pull/4045)) -- updated python module "requests" to latest version unaffected by CVE-2023-32681 - and CVE-2024-35195 ([\#4053](https://github.com/cometbft/cometbft/pull/4053)) -- updated cometbft-db to v0.9.5 - ([\#4059](https://github.com/cometbft/cometbft/pull/4059)) - -## v0.34.34 - -*September 3, 2024* - -This release includes a security fix for the light client and is recommended -for all users. - -### BUG FIXES - -- `[light]` Cross-check proposer priorities in retrieved validator sets - ([\#ASA-2024-009](https://github.com/cometbft/cometbft/security/advisories/GHSA-g5xx-c4hv-9ccc)) - -### FEATURES - -- `[tools]` Remove tools package - [\#3760](https://github.com/cometbft/cometbft/pull/3760) - -### IMPROVEMENTS - -- `[types]` Check that proposer is one of the validators in `ValidateBasic` - ([\#ASA-2024-009](https://github.com/cometbft/cometbft/security/advisories/GHSA-g5xx-c4hv-9ccc)) - -## v0.34.33 - -*April 26, 2024* - -This release bumps Go version to 1.21. - -### BUG FIXES - -- [`bits`] prevent `BitArray.UnmarshalJSON` from crashing on 0 bits - ([\#2774](https://github.com/cometbft/cometbft/pull/2774)) - -### DEPENDENCIES - -- Bump cometbft-db version to v0.9.1, which brings support for RocksDB v8. 
- ([\#2783](https://github.com/cometbft/cometbft/pull/2783)) -- Bump Go version used to v1.21 since v1.20 has reached EOL - ([\#2784](https://github.com/cometbft/cometbft/pull/2784)) - -## v0.34.32 - -*March 12, 2024* - -This release fixes a security bug in the light client. - -### BUG FIXES - -- `[evidence]` When `VerifyCommitLight` & `VerifyCommitLightTrusting` are called as part - of evidence verification, all signatures present in the evidence must be verified - ([\#1749](https://github.com/cometbft/cometbft/pull/1749)) - -### IMPROVEMENTS - -- `[types]` Validate `Validator#Address` in `ValidateBasic` ([\#1715](https://github.com/cometbft/cometbft/pull/1715)) -- `[abci]` Increase ABCI socket message size limit to 2GB ([\#1730](https://github.com/cometbft/cometbft/pull/1730): @troykessler) -- `[e2e]` Add manifest option `load_max_txs` to limit the number of transactions generated by the - `load` command. ([\#2094](https://github.com/cometbft/cometbft/pull/2094)) -- `[e2e]` Log the number of transactions that were sent successfully or failed. - ([\#2328](https://github.com/cometbft/cometbft/pull/2328)) - -## v0.34.31 - -*November 27, 2023* - -Fixes a small bug in the mempool for an experimental feature. - -### BUG FIXES - -- `[mempool]` Avoid infinite wait in transaction sending routine when - using experimental parameters to limiting transaction gossiping to peers - ([\#1654](https://github.com/cometbft/cometbft/pull/1654)) - -## v0.34.30 - -*November 17, 2023* - -This release contains, among other things, an opt-in, experimental feature to -help reduce the bandwidth consumption associated with the mempool's transaction -gossip. - -### BUILD - -- Bump Go version used to v1.20 since v1.19 has reached EOL - ([\#1351](https://github.com/cometbft/cometbft/pull/1351)) - -### FEATURES - -- `[metrics]` Add metric for mempool size in bytes `SizeBytes`. - ([\#1512](https://github.com/cometbft/cometbft/pull/1512)) - -### IMPROVEMENTS - -- `[node]` Make handshake cancelable ([cometbft/cometbft\#857](https://github.com/cometbft/cometbft/pull/857)) -- `[node]` Close evidence.db OnStop ([cometbft/cometbft\#1210](https://github.com/cometbft/cometbft/pull/1210): @chillyvee) -- `[mempool]` Add experimental feature to limit the number of persistent peers and non-persistent - peers to which the node gossip transactions (only for "v0" mempool). - ([\#1558](https://github.com/cometbft/cometbft/pull/1558), - ([\#1584](https://github.com/cometbft/cometbft/pull/1584)) -- `[config]` Add mempool parameters `experimental_max_gossip_connections_to_persistent_peers` and - `experimental_max_gossip_connections_to_non_persistent_peers` for limiting the number of peers to - which the node gossip transactions. - ([\#1558](https://github.com/cometbft/cometbft/pull/1558)) - ([\#1584](https://github.com/cometbft/cometbft/pull/1584)) - -## v0.34.29 - -*June 14, 2023* - -Provides several minor bug fixes, as well as fixes for several low-severity -security issues. - -### BUG FIXES - -- `[state/kvindex]` Querying event attributes that are bigger than int64 is now - enabled. ([\#771](https://github.com/cometbft/cometbft/pull/771)) -- `[pubsub]` Pubsub queries are now able to parse big integers (larger than - int64). Very big floats are also properly parsed into very big integers - instead of being truncated to int64. 
- ([\#771](https://github.com/cometbft/cometbft/pull/771)) - -### IMPROVEMENTS - -- `[rpc]` Remove response data from response failure logs in order - to prevent large quantities of log data from being produced - ([\#654](https://github.com/cometbft/cometbft/issues/654)) - -### SECURITY FIXES - -- `[rpc/jsonrpc/client]` **Low severity** - Prevent RPC - client credentials from being inadvertently dumped to logs - ([\#788](https://github.com/cometbft/cometbft/pull/788)) -- `[cmd/cometbft/commands/debug/kill]` **Low severity** - Fix unsafe int cast in - `debug kill` command ([\#794](https://github.com/cometbft/cometbft/pull/794)) -- `[consensus]` **Low severity** - Avoid recursive call after rename to - `(*PeerState).MarshalJSON` - ([\#863](https://github.com/cometbft/cometbft/pull/863)) -- `[mempool/clist_mempool]` **Low severity** - Prevent a transaction from - appearing twice in the mempool - ([\#890](https://github.com/cometbft/cometbft/pull/890): @otrack) - -## v0.34.28 - -*April 26, 2023* - -This release fixes several bugs, and has had to introduce one small Go -API-breaking change in the `crypto/merkle` package in order to address what -could be a security issue for some users who directly and explicitly make use of -that code. - -### BREAKING CHANGES - -- `[crypto/merkle]` Do not allow verification of Merkle Proofs against empty trees (`nil` root). `Proof.ComputeRootHash` now panics when it encounters an error, but `Proof.Verify` does not panic - ([\#558](https://github.com/cometbft/cometbft/issues/558)) - -### BUG FIXES - -- `[consensus]` Unexpected error conditions in `ApplyBlock` are non-recoverable, so ignoring the error and carrying on is a bug. We replaced a `return` that disregarded the error by a `panic`. - ([\#496](https://github.com/cometbft/cometbft/pull/496)) -- `[consensus]` Rename `(*PeerState).ToJSON` to `MarshalJSON` to fix a logging data race - ([\#524](https://github.com/cometbft/cometbft/pull/524)) -- `[light]` Fixed an edge case where a light client would panic when attempting - to query a node that (1) has started from a non-zero height and (2) does - not yet have any data. The light client will now, correctly, not panic - _and_ keep the node in its list of providers in the same way it would if - it queried a node starting from height zero that does not yet have data - ([\#575](https://github.com/cometbft/cometbft/issues/575)) - -### IMPROVEMENTS - -- `[crypto/sr25519]` Upgrade to go-schnorrkel@v1.0.0 ([\#475](https://github.com/cometbft/cometbft/issues/475)) -- `[jsonrpc/client]` Improve the error message for client errors stemming from - bad HTTP responses. - ([cometbft/cometbft\#638](https://github.com/cometbft/cometbft/pull/638)) - -## v0.34.27 - -*Feb 27, 2023* - -This is the first official release of CometBFT - a fork of [Tendermint -Core](https://github.com/tendermint/tendermint). This particular release is -intended to be compatible with the Tendermint Core v0.34 release series. - -For details as to how to upgrade to CometBFT from Tendermint Core, please see -our [upgrading guidelines](./UPGRADING.md). - -If you have any questions, comments, concerns or feedback on this release, we -would love to hear from you! Please contact us via [GitHub -Discussions](https://github.com/cometbft/cometbft/discussions), -[Discord](https://discord.gg/cosmosnetwork) (in the `#cometbft` channel) or -[Telegram](https://t.me/CometBFT). - -Special thanks to @wcsiu, @ze97286, @faddat and @JayT106 for their contributions -to this release! 
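
The `[pubsub]` and `[state/kvindex]` entries above extend query parsing to integers larger than int64. The sketch below is a minimal, hedged illustration, assuming the v0.34-line package path `github.com/cometbft/cometbft/libs/pubsub/query` and its `query.New` constructor; the attribute name is made up for the example.

```go
package main

import (
	"fmt"
	"log"

	"github.com/cometbft/cometbft/libs/pubsub/query"
)

func main() {
	// 2^63 does not fit in an int64; per the entries above, queries with
	// such constants should now parse instead of being truncated or rejected.
	q, err := query.New("account.balance > 9223372036854775808")
	if err != nil {
		log.Fatalf("query did not parse: %v", err)
	}
	fmt.Println("parsed query:", q.String())
}
```
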
- -### BREAKING CHANGES - -- Rename binary to `cometbft` and Docker image to `cometbft/cometbft` - ([\#152](https://github.com/cometbft/cometbft/pull/152)) -- The `TMHOME` environment variable was renamed to `CMTHOME`, and all - environment variables starting with `TM_` are instead prefixed with `CMT_` - ([\#211](https://github.com/cometbft/cometbft/issues/211)) -- Use Go 1.19 to build CometBFT, since Go 1.18 has reached end-of-life. - ([\#360](https://github.com/cometbft/cometbft/issues/360)) - -### BUG FIXES - -- `[consensus]` Fixed a busy loop that happened when sending of a block part - failed by sleeping in case of error. - ([\#4](https://github.com/informalsystems/tendermint/pull/4)) -- `[state/kvindexer]` Resolved crashes when event values contained slashes, - introduced after adding event sequences. - (\#[383](https://github.com/cometbft/cometbft/pull/383): @jmalicevic) -- `[consensus]` Short-term fix for the case when `needProofBlock` cannot find - previous block meta by defaulting to the creation of a new proof block. - ([\#386](https://github.com/cometbft/cometbft/pull/386): @adizere) - - Special thanks to the [Vega.xyz](https://vega.xyz/) team, and in particular - to Zohar (@ze97286), for reporting the problem and working with us to get to - a fix. -- `[p2p]` Correctly use non-blocking `TrySendEnvelope` method when attempting to - send messages, as opposed to the blocking `SendEnvelope` method. It is unclear - whether this has a meaningful impact on P2P performance, but this patch does - correct the underlying behaviour to what it should be - ([tendermint/tendermint\#9936](https://github.com/tendermint/tendermint/pull/9936)) - -### DEPENDENCIES - -- Replace [tm-db](https://github.com/tendermint/tm-db) with - [cometbft-db](https://github.com/cometbft/cometbft-db) - ([\#160](https://github.com/cometbft/cometbft/pull/160)) -- Bump tm-load-test to v1.3.0 to remove implicit dependency on Tendermint Core - ([\#165](https://github.com/cometbft/cometbft/pull/165)) -- `[crypto]` Update to use btcec v2 and the latest btcutil - ([tendermint/tendermint\#9787](https://github.com/tendermint/tendermint/pull/9787): - @wcsiu) - -### FEATURES - -- `[rpc]` Add `match_event` query parameter to indicate to the RPC that it - should match events _within_ attributes, not only within a height - ([tendermint/tendermint\#9759](https://github.com/tendermint/tendermint/pull/9759)) - -### IMPROVEMENTS - -- `[e2e]` Add functionality for uncoordinated (minor) upgrades - ([\#56](https://github.com/tendermint/tendermint/pull/56)) -- `[tools/tm-signer-harness]` Remove the folder as it is unused - ([\#136](https://github.com/cometbft/cometbft/issues/136)) -- Append the commit hash to the version of CometBFT being built - ([\#204](https://github.com/cometbft/cometbft/pull/204)) -- `[mempool/v1]` Suppress "rejected bad transaction" in priority mempool logs by - reducing log level from info to debug - ([\#314](https://github.com/cometbft/cometbft/pull/314): @JayT106) -- `[consensus]` Add `consensus_block_gossip_parts_received` and - `consensus_step_duration_seconds` metrics in order to aid in investigating the - impact of database compaction on consensus performance - ([tendermint/tendermint\#9733](https://github.com/tendermint/tendermint/pull/9733)) -- `[state/kvindexer]` Add `match.event` keyword to support condition evaluation - based on the event the attributes belong to - ([tendermint/tendermint\#9759](https://github.com/tendermint/tendermint/pull/9759)) -- `[p2p]` Reduce log spam through reducing log level of 
"Dialing peer" and - "Added peer" messages from info to debug - ([tendermint/tendermint\#9764](https://github.com/tendermint/tendermint/pull/9764): - @faddat) -- `[consensus]` Reduce bandwidth consumption of consensus votes by roughly 50% - through fixing a small logic bug - ([tendermint/tendermint\#9776](https://github.com/tendermint/tendermint/pull/9776)) - ---- - -CometBFT is a fork of [Tendermint -Core](https://github.com/tendermint/tendermint) as of late December 2022. - -## Bug bounty - -Friendly reminder, we have a [bug bounty program](https://hackerone.com/cosmos). - -## Previous changes - -For changes released before the creation of CometBFT, please refer to the -Tendermint Core -[CHANGELOG.md](https://github.com/tendermint/tendermint/blob/a9feb1c023e172b542c972605311af83b777855b/CHANGELOG.md). diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md deleted file mode 100644 index 3f93f1e5e8..0000000000 --- a/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,109 +0,0 @@ -# The CometBFT Code of Conduct - -This code of conduct applies to all projects run by the CometBFT team and -hence to CometBFT. - ----- - -# Conduct - -## Contact: conduct@informal.systems - -* We are committed to providing a friendly, safe and welcoming environment for - all, regardless of level of experience, gender, gender identity and - expression, sexual orientation, disability, personal appearance, body size, - race, ethnicity, age, religion, nationality, or other similar characteristics. - -* On Slack, please avoid using overtly sexual nicknames or other nicknames that - might detract from a friendly, safe and welcoming environment for all. - -* Please be kind and courteous. There’s no need to be mean or rude. - -* Respect that people have differences of opinion and that every design or - implementation choice carries a trade-off and numerous costs. There is seldom - a right answer. - -* Please keep unstructured critique to a minimum. If you have solid ideas you - want to experiment with, make a fork and see how it works. - -* We will exclude you from interaction if you insult, demean or harass anyone. - That is not welcome behavior. We interpret the term “harassment” as including - the definition in the [Citizen Code of Conduct][ccoc]; if you have any lack of - clarity about what might be included in that concept, please read their - definition. In particular, we don’t tolerate behavior that excludes people in - socially marginalized groups. - -* Private harassment is also unacceptable. No matter who you are, if you feel - you have been or are being harassed or made uncomfortable by a community - member, please get in touch with one of the channel admins or the contact address above - immediately. Whether you’re a regular contributor or a newcomer, we care about - making this community a safe place for you and we’ve got your back. - -* Likewise any spamming, trolling, flaming, baiting or other attention-stealing - behavior is not welcome. - ----- - -# Moderation - -These are the policies for upholding our community’s standards of conduct. If -you feel that a thread needs moderation, please contact the above mentioned -person. - -1. Remarks that violate the CometBFT/Cosmos standards of conduct, including - hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. - (Cursing is allowed, but never targeting another user, and never in a hateful - manner.) - -2. Remarks that moderators find inappropriate, whether listed in the code of - conduct or not, are also not allowed. - -3. 
Moderators will first respond to such remarks with a warning. - -4. If the warning is unheeded, the user will be “kicked,” i.e., kicked out of - the communication channel to cool off. - -5. If the user comes back and continues to make trouble, they will be banned, - i.e., indefinitely excluded. - -6. Moderators may choose at their discretion to un-ban the user if it was a - first offense and they offer the offended party a genuine apology. - -7. If a moderator bans someone and you think it was unjustified, please take it - up with that moderator, or with a different moderator, in private. Complaints - about bans in-channel are not allowed. - -8. Moderators are held to a higher standard than other community members. If a - moderator creates an inappropriate situation, they should expect less leeway - than others. - -In the CometBFT/Cosmos community we strive to go the extra step to look out -for each other. Don’t just aim to be technically unimpeachable, try to be your -best self. In particular, avoid flirting with offensive or sensitive issues, -particularly if they’re off-topic; this all too often leads to unnecessary -fights, hurt feelings, and damaged trust; worse, it can drive people away -from the community entirely. - -And if someone takes issue with something you said or did, resist the urge to be -defensive. Just stop doing what it was they complained about and apologize. Even -if you feel you were misinterpreted or unfairly accused, chances are good there -was something you could’ve communicated better — remember that it’s your -responsibility to make your fellow Cosmonauts comfortable. Everyone wants to -get along and we are all here first and foremost because we want to talk -about cool technology. You will find that people will be eager to assume -good intent and forgive as long as you earn their trust. - -The enforcement policies listed above apply to all official CometBFT/Cosmos -venues. For other projects adopting the CometBFT/Cosmos Code of Conduct, -please contact the maintainers of those projects for enforcement. If you wish to -use this code of conduct for your own project, consider explicitly mentioning -your moderation policy or making a copy with your own moderation policy so as to -avoid confusion. - -\*Adapted from the [Node.js Policy on Trolling][node-trolling-policy], the -[Contributor Covenant v1.3.0][ccov] and the [Rust Code of Conduct][rust-coc]. - -[ccoc]: https://github.com/stumpsyn/policies/blob/master/citizen_code_of_conduct.md -[node-trolling-policy]: http://blog.izs.me/post/30036893703/policy-on-trolling -[ccov]: http://contributor-covenant.org/version/1/3/0/ -[rust-coc]: https://www.rust-lang.org/en-US/conduct.html diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index 812e329e67..0000000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,390 +0,0 @@ -# Contributing - -Thank you for your interest in contributing to CometBFT! Before contributing, it -may be helpful to understand the goal of the project. The goal of CometBFT is to -develop a BFT consensus engine robust enough to support permissionless -value-carrying networks. While all contributions are welcome, contributors -should bear this goal in mind in deciding if they should target the main -CometBFT project or a potential fork. When targeting the main CometBFT project, -the following process leads to the best chance of landing changes in `main`. - -All work on the code base should be motivated by a [GitHub -Issue](https://github.com/cometbft/cometbft/issues). 
-[Search](https://github.com/cometbft/cometbft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) -is a good place to start when looking for places to contribute. If you would -like to work on an issue which already exists, please indicate so by leaving a -comment. - -All new contributions should start with a [GitHub -Issue](https://github.com/cometbft/cometbft/issues/new/choose). The issue helps -capture the problem you're trying to solve and allows for early feedback. Once -the issue is created the process can proceed in different directions depending -on how well defined the problem and potential solution are. If the change is -simple and well understood, maintainers will indicate their support with a -heartfelt emoji. - -If the issue would benefit from thorough discussion, maintainers may request -that you create a [Request For -Comment](https://github.com/cometbft/cometbft/tree/main/docs/rfc) in the -CometBFT repo. Discussion at the RFC stage will build collective -understanding of the dimensions of the problems and help structure conversations -around trade-offs. - -When the problem is well understood but the solution leads to large structural -changes to the code base, these changes should be proposed in the form of an -[Architectural Decision Record (ADR)](./docs/architecture/). The ADR will help -build consensus on an overall strategy to ensure the code base maintains -coherence in the larger context. If you are not comfortable with writing an ADR, -you can open a less-formal issue and the maintainers will help you turn it into -an ADR. - -> How to pick a number for the ADR? - -Find the largest existing ADR number and bump it by 1. - -When the problem as well as proposed solution are well understood, -changes should start with a [draft -pull request](https://github.blog/2019-02-14-introducing-draft-pull-requests/) -against `main`. The draft signals that work is underway. When the work -is ready for feedback, hitting "Ready for Review" will signal to the -maintainers to take a look. - -![Contributing flow](./docs/imgs/contributing.png) - -Each stage of the process is aimed at creating feedback cycles which align contributors and maintainers to make sure: - -- Contributors don’t waste their time implementing/proposing features which won’t land in `main`. -- Maintainers have the necessary context in order to support and review contributions. - - -## Forking - -Please note that Go requires code to live under absolute paths, which complicates forking. -While my fork lives at `https://github.com/ebuchman/cometbft`, -the code should never exist at `$GOPATH/src/github.com/ebuchman/cometbft`. -Instead, we use `git remote` to add the fork as a new remote for the original repo, -`$GOPATH/src/github.com/cometbft/cometbft`, and do all the work there. - -For instance, to create a fork and work on a branch of it, I would: - -- Create the fork on GitHub, using the fork button. -- Go to the original repo checked out locally (i.e. `$GOPATH/src/github.com/cometbft/cometbft`) -- `git remote rename origin upstream` -- `git remote add origin git@github.com:ebuchman/basecoin.git` - -Now `origin` refers to my fork and `upstream` refers to the CometBFT version. -So I can `git push -u origin main` to update my fork, and make pull requests to CometBFT from there. -Of course, replace `ebuchman` with your git handle. 
- -To pull in updates from the origin repo, run - -- `git fetch upstream` -- `git rebase upstream/main` (or whatever branch you want) - -## Dependencies - -We use [go modules](https://github.com/golang/go/wiki/Modules) to manage dependencies. - -That said, the `main` branch of every CometBFT repository should just build -with `go get`, which means they should be kept up-to-date with their -dependencies so we can get away with telling people they can just `go get` our -software. - -Since some dependencies are not under our control, a third party may break our -build, in which case we can fall back on `go mod tidy`. Even for dependencies under our control, go helps us to -keep multiple repos in sync as they evolve. Anything with an executable, such -as apps, tools, and the core, should use dep. - -Run `go list -u -m all` to get a list of dependencies that may not be -up-to-date. - -When updating dependencies, please only update the particular dependencies you -need. Instead of running `go get -u=patch`, which will update anything, -specify exactly the dependency you want to update. - -## Protobuf - -We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along -with [`gogoproto`](https://github.com/cosmos/gogoproto) to generate code for use -across CometBFT. - -To generate proto stubs, lint, and check protos for breaking changes, you will -need to install [buf](https://buf.build/) and `gogoproto`. Then, from the root -of the repository, run: - -```bash -# Lint all of the .proto files -make proto-lint - -# Check if any of your local changes (prior to committing to the Git repository) -# are breaking -make proto-check-breaking - -# Generate Go code from the .proto files -make proto-gen -``` - -To automatically format `.proto` files, you will need -[`clang-format`](https://clang.llvm.org/docs/ClangFormat.html) installed. Once -installed, you can run: - -```bash -make proto-format -``` - -### Visual Studio Code - -If you are a VS Code user, you may want to add the following to your `.vscode/settings.json`: - -```json -{ - "protoc": { - "options": [ - "--proto_path=${workspaceRoot}/proto", - ] - } -} -``` - -## Changelog - -To manage and generate our changelog, we currently use [unclog](https://github.com/informalsystems/unclog). - -Every fix, improvement, feature, or breaking change should be made in a -pull-request that includes a file -`.changelog/unreleased/${category}/${issue-or-pr-number}-${description}.md`, -where: -- `category` is one of `improvements`, `breaking-changes`, `bug-fixes`, - `features` and if multiple apply, create multiple files; -- `description` is a short (4 to 6 word), hyphen separated description of the - fix, starting the component changed; and, -- `issue or PR number` is the CometBFT issue number, if one exists, or the PR - number, otherwise. - -For examples, see the [.changelog](.changelog) folder. - -A feature can also be worked on a feature branch, if its size and/or risk -justifies it (see [below](#branching-model-and-release)). - -### What does a good changelog entry look like? - -Changelog entries should answer the question: "what is important about this -change for users to know?" or "what problem does this solve for users?". It -should not simply be a reiteration of the title of the associated PR, unless the -title of the PR _very_ clearly explains the benefit of a change to a user. 
- -Some good examples of changelog entry descriptions: - -```md -- [consensus] \#1111 Small transaction throughput improvement (approximately - 3-5\% from preliminary tests) through refactoring the way we use channels -- [mempool] \#1112 Refactor Go API to be able to easily swap out the current - mempool implementation in CometBFT forks -- [p2p] \#1113 Automatically ban peers when their messages are unsolicited or - are received too frequently -``` - -Some bad examples of changelog entry descriptions: - -```md -- [consensus] \#1111 Refactor channel usage -- [mempool] \#1112 Make API generic -- [p2p] \#1113 Ban for PEX message abuse -``` - -For more on how to write good changelog entries, see: - -- -- -- - -### Changelog entry format - -Changelog entries should be formatted as follows: - -```md -- [module] \#xxx Some description of the change (@contributor) -``` - -Here, `module` is the part of the code that changed (typically a top-level Go -package), `xxx` is the pull-request number, and `contributor` is the author/s of -the change. - -It's also acceptable for `xxx` to refer to the relevant issue number, but -pull-request numbers are preferred. Note this means pull-requests should be -opened first so the changelog can then be updated with the pull-request's -number. There is no need to include the full link, as this will be added -automatically during release. But please include the backslash and pound, eg. -`\#2313`. - -Changelog entries should be ordered alphabetically according to the `module`, -and numerically according to the pull-request number. - -Changes with multiple classifications should be doubly included (eg. a bug fix -that is also a breaking change should be recorded under both). - -Breaking changes are further subdivided according to the APIs/users they impact. -Any change that affects multiple APIs/users should be recorded multiply - for -instance, a change to the `Blockchain Protocol` that removes a field from the -header should also be recorded under `CLI/RPC/Config` since the field will be -removed from the header in RPC responses as well. - -## Branching Model and Release - -The main development branch is `main`. - -Every release is maintained in a release branch named `vX.Y.Z`. - -Pending minor releases have long-lived release candidate ("RC") branches. Minor -release changes should be merged to these long-lived RC branches at the same -time that the changes are merged to `main`. - -If a feature's size is big and/or its risk is high, it can be implemented in a -feature branch. While the feature work is in progress, pull requests are open -and squash merged against the feature branch. Branch `main` is periodically -merged (merge commit) into the feature branch, to reduce branch divergence. When -the feature is complete, the feature branch is merged back (merge commit) into -`main`. The moment of the final merge can be carefully chosen so as to land -different features in different releases. - -Note, all pull requests should be squash merged except for merging to a release -branch (named `vX.Y`). This keeps the commit history clean and makes it easy to -reference the pull request where a change was introduced. - -### Development Procedure - -The latest state of development is on `main`, which must never fail `make test`. -_Never_ force push `main`, unless fixing broken git history (which we rarely do -anyways). - -To begin contributing, create a development branch either on -`github.com/cometbft/cometbft`, or your fork (using `git remote add origin`). 
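
As a concrete illustration of the branching and changelog conventions above, the start of a bug-fix contribution might look like this sketch (the issue number, branch name, module, and description are hypothetical):

```sh
# Create a development branch off the latest main (here on a fork; see "Forking" above).
git fetch upstream
git checkout -b 1234-fix-reactor-shutdown upstream/main

# Record the change following the convention
# .changelog/unreleased/${category}/${issue-or-pr-number}-${description}.md
mkdir -p .changelog/unreleased/bug-fixes
cat > .changelog/unreleased/bug-fixes/1234-fix-reactor-shutdown.md <<'EOF'
- [mempool] \#1234 Fix reactor shutdown ordering (@your-handle)
EOF
```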
- -Make changes, and before submitting a pull request, update the changelog to -record your change. Also, run either `git rebase` or `git merge` on top of the -latest `main`. (Since pull requests are squash-merged, either is fine!) - -Update the `UPGRADING.md` if the change you've made is breaking and the -instructions should be in place for a user on how he/she can upgrade its -software (ABCI application, CometBFT blockchain, light client, wallet). - -Sometimes (often!) pull requests get out-of-date with `main`, as other people -merge different pull requests to `main`. It is our convention that pull request -authors are responsible for updating their branches with `main`. (This also -means that you shouldn't update someone else's branch for them; even if it seems -like you're doing them a favor, you may be interfering with their git flow in -some way!) - -#### Merging Pull Requests - -It is also our convention that authors merge their own pull requests, when -possible. External contributors may not have the necessary permissions to do -this, in which case, a member of the core team will merge the pull request once -it's been approved. - -Before merging a pull request: - -- Ensure pull branch is up-to-date with a recent `main` (GitHub won't let you - merge without this!) -- Run `make test` to ensure that all tests pass -- [Squash](https://stackoverflow.com/questions/5189560/squash-my-last-x-commits-together-using-git) - merge pull request - -#### Pull Requests for Minor Releases - -If your change should be included in a minor release, please also open a PR -against the long-lived minor release candidate branch (e.g., `rc1/v0.33.5`) -_immediately after your change has been merged to main_. - -You can do this by cherry-picking your commit off `main`: - -```sh -$ git checkout rc1/v0.33.5 -$ git checkout -b {new branch name} -$ git cherry-pick {commit SHA from main} -# may need to fix conflicts, and then use git add and git cherry-pick --continue -$ git push origin {new branch name} -``` - -After this, you can open a PR. Please note in the PR body if there were merge -conflicts so that reviewers can be sure to take a thorough look. - -### Git Commit Style - -We follow the [Go style guide on commit -messages](https://tip.golang.org/doc/contribute.html#commit_messages). Write -concise commits that start with the package name and have a description that -finishes the sentence "This change modifies CometBFT to...". For example, - -```sh -cmd/debug: execute p.Signal only when p is not nil - -[potentially longer description in the body] - -Fixes #nnnn -``` - -Each PR should have one commit once it lands on `main`; this can be accomplished -by using the "squash and merge" button on GitHub. Be sure to edit your commit -message, though! - -## Testing - -### Unit tests - -Unit tests are located in `_test.go` files as directed by [the Go testing -package](https://golang.org/pkg/testing/). If you're adding or removing a -function, please check there's a `TestType_Method` test for it. - -Run: `make test` - -### Integration tests - -Integration tests are also located in `_test.go` files. What differentiates -them is a more complicated setup, which usually involves setting up two or more -components. - -Run: `make test_integrations` - -### End-to-end tests - -End-to-end tests are used to verify a fully integrated CometBFT network. - -See [README](./test/e2e/README.md) for details. 
- -Run: - -```sh -cd test/e2e && \ - make && \ - ./build/runner -f networks/ci.toml -``` - -### Fuzz tests (ADVANCED) - -*NOTE: if you're just submitting your first PR, you won't need to touch these -most probably (99.9%)*. - -[Fuzz tests](https://en.wikipedia.org/wiki/Fuzzing) can be found inside the -`./test/fuzz` directory. See [README.md](./test/fuzz/README.md) for details. - -Run: `cd test/fuzz && make fuzz-{PACKAGE-COMPONENT}` - -### RPC Testing - -**If you contribute to the RPC endpoints it's important to document your -changes in the [Openapi file](./rpc/openapi/openapi.yaml)**. - -To test your changes you must install `nodejs` and run: - -```bash -npm i -g dredd -make build-linux build-contract-tests-hooks -make contract-tests -``` - -**WARNING: these are currently broken due to -not supporting complete OpenAPI 3**. - -This command will popup a network and check every endpoint against what has -been documented. diff --git a/README.md b/README.md index 58a17575b2..530abd849a 100644 --- a/README.md +++ b/README.md @@ -85,7 +85,3 @@ The canonical branches in this repo are based on CometBFT releases. For example: Releases are formatted: `v-tm-v` For example: [`v1.4.0-tm-v0.34.20`](https://github.com/celestiaorg/celestia-core/releases/tag/v1.4.0-tm-v0.34.20) is celestia-core version `1.4.0` based on CometBFT `0.34.20`. `CELESTIA_CORE_VERSION` strives to adhere to [Semantic Versioning](http://semver.org/). - -## Careers - -We are hiring Go engineers! Join us in building the future of blockchain scaling and interoperability. [Apply here](https://jobs.lever.co/celestia). diff --git a/SECURITY.md b/SECURITY.md deleted file mode 100644 index 2a5c566641..0000000000 --- a/SECURITY.md +++ /dev/null @@ -1,33 +0,0 @@ -# How to Report a Security Bug - -If you believe you have found a security vulnerability in the Interchain Stack, -you can report it to our primary vulnerability disclosure channel, the [Cosmos -HackerOne Bug Bounty program][h1]. - -If you prefer to report an issue via email, you may send a bug report to - with the issue details, reproduction, impact, and other -information. Please submit only one unique email thread per vulnerability. Any -issues reported via email are ineligible for bounty rewards. - -Artifacts from an email report are saved at the time the email is triaged. -Please note: our team is not able to monitor dynamic content (e.g. a Google Docs -link that is edited after receipt) throughout the lifecycle of a report. If you -would like to share additional information or modify previous information, -please include it in an additional reply as an additional attachment. - -Please **DO NOT** file a public issue in this repository to report a security -vulnerability. - -## Coordinated Vulnerability Disclosure Policy and Safe Harbor - -For the most up-to-date version of the policies that govern vulnerability -disclosure, please consult the [HackerOne program page][h1-policy]. - -The policy hosted on HackerOne is the official Coordinated Vulnerability -Disclosure policy and Safe Harbor for the Interchain Stack, and the teams and -infrastructure it supports, and it supersedes previous security policies that -have been used in the past by individual teams and projects with targets in -scope of the program. 
- -[h1]: https://hackerone.com/cosmos?type=team -[h1-policy]: https://hackerone.com/cosmos?type=team&view_policy=true diff --git a/STYLE_GUIDE.md b/STYLE_GUIDE.md deleted file mode 100644 index 5eeceb6c84..0000000000 --- a/STYLE_GUIDE.md +++ /dev/null @@ -1,162 +0,0 @@ -# Go Coding Style Guide - -In order to keep our code looking good with lots of programmers working on it, it helps to have a "style guide", so all -the code generally looks quite similar. This doesn't mean there is only one "right way" to write code, or even that this -standard is better than your style. But if we agree to a number of stylistic practices, it makes it much easier to read -and modify new code. Please feel free to make suggestions if there's something you would like to add or modify. - -We expect all contributors to be familiar with [Effective Go](https://golang.org/doc/effective_go.html) -(and it's recommended reading for all Go programmers anyways). Additionally, we generally agree with the suggestions - in [Uber's style guide](https://github.com/uber-go/guide/blob/master/style.md) and use that as a starting point. - - -## Code Structure - -Perhaps more key for code readability than good commenting is having the right structure. As a rule of thumb, try to write -in a logical order of importance, taking a little time to think how to order and divide the code such that someone could -scroll down and understand the functionality of it just as well as you do. A loose example of such order would be: - -* Constants, global and package-level variables -* Main Struct -* Options (only if they are seen as critical to the struct else they should be placed in another file) -* Initialization / Start and stop of the service -* Msgs/Events -* Public Functions (In order of most important) -* Private/helper functions -* Auxiliary structs and function (can also be above private functions or in a separate file) - -## General - -* Use `gofmt` (or `goimport`) to format all code upon saving it. (If you use VIM, check out vim-go). -* Use a linter (see below) and generally try to keep the linter happy (where it makes sense). -* Think about documentation, and try to leave godoc comments, when it will help new developers. -* Every package should have a high level doc.go file to describe the purpose of that package, its main functions, and any other relevant information. -* `TODO` should not be used. If important enough should be recorded as an issue. -* `BUG` / `FIXME` should be used sparingly to guide future developers on some of the vulnerabilities of the code. -* `XXX` can be used in work-in-progress (prefixed with "WIP:" on github) branches but they must be removed before approving a PR. -* Applications (e.g. clis/servers) *should* panic on unexpected unrecoverable errors and print a stack trace. - -## Comments - -* Use a space after comment deliminter (ex. `// your comment`). -* Many comments are not sentences. These should begin with a lower case letter and end without a period. -* Conversely, sentences in comments should be sentenced-cased and end with a period. - -## Linters - -These must be applied to all (Go) repos. - -* [shellcheck](https://github.com/koalaman/shellcheck) -* [golangci-lint](https://github.com/golangci/golangci-lint) (covers all important linters) - * See the `.golangci.yml` file in each repo for linter configuration. - -## Various - -* Reserve "Save" and "Load" for long-running persistence operations. When parsing bytes, use "Encode" or "Decode". -* Maintain consistency across the codebase. 
-* Functions that return functions should have the suffix `Fn` -* Names should not [stutter](https://blog.golang.org/package-names). For example, a struct generally shouldn’t have - a field named after itself; e.g., this shouldn't occur: - -``` golang -type middleware struct { - middleware Middleware -} -``` - -* In comments, use "iff" to mean, "if and only if". -* Product names are capitalized, like "CometBFT", "Basecoin", "Protobuf", etc except in command lines: `cometbft --help` -* Acronyms are all capitalized, like "RPC", "gRPC", "API". "MyID", rather than "MyId". -* Prefer errors.New() instead of fmt.Errorf() unless you're actually using the format feature with arguments. - -## Importing Libraries - -Sometimes it's necessary to rename libraries to avoid naming collisions or ambiguity. - -* Use [goimports](https://godoc.org/golang.org/x/tools/cmd/goimports) -* Separate imports into blocks - one for the standard lib, one for external libs and one for application libs. -* Here are some common library labels for consistency: - * dbm "github.com/cometbft/cometbft-db" - * cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" - * cmtcfg "github.com/cometbft/cometbft/config" - * cmttypes "github.com/cometbft/cometbft/types" -* Never use anonymous imports (the `.`), for example, `cmtlibs/common` or anything else. -* When importing a pkg from the `cmt/libs` directory, prefix the pkg alias with cmt. - * cmtbits "github.com/cometbft/cometbft/libs/bits" -* tip: Use the `_` library import to import a library for initialization effects (side effects) - -## Dependencies - -* Dependencies should be pinned by a release tag, or specific commit, to avoid breaking `go get` when external dependencies are updated. -* Refer to the [contributing](CONTRIBUTING.md) document for more details - -## Testing - -* The first rule of testing is: we add tests to our code -* The second rule of testing is: we add tests to our code -* For Golang testing: - * Make use of table driven testing where possible and not-cumbersome - * [Inspiration](https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go) - * Make use of [assert](https://godoc.org/github.com/stretchr/testify/assert) and [require](https://godoc.org/github.com/stretchr/testify/require) -* When using mocks, it is recommended to use Testify [mock]( - ) along with [Mockery](https://github.com/vektra/mockery) for autogeneration - -## Errors - -* Ensure that errors are concise, clear and traceable. -* Use stdlib errors package. -* For wrapping errors, use `fmt.Errorf()` with `%w`. -* Panic is appropriate when an internal invariant of a system is broken, while all other cases (in particular, - incorrect or invalid usage) should return errors. - -## Config - -* Currently the TOML filetype is being used for config files -* A good practice is to store per-user config files under `~/.[yourAppName]/config.toml` - -## CLI - -* When implementing a CLI use [Cobra](https://github.com/spf13/cobra) and [Viper](https://github.com/spf13/viper). -* Helper messages for commands and flags must be all lowercase. -* Instead of using pointer flags (eg. `FlagSet().StringVar`) use Viper to retrieve flag values (eg. `viper.GetString`) - * The flag key used when setting and getting the flag should always be stored in a - variable taking the form `FlagXxx` or `flagXxx`. - * Flag short variable descriptions should always start with a lower case character as to remain consistent with - the description provided in the default `--help` flag. 
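
As a rough illustration of the CLI and error-wrapping guidelines above, a Cobra command wired through Viper might look like the sketch below. The command, flag, and package names are invented for the example and are not taken from the CometBFT codebase:

```go
package commands

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// FlagOutputDir follows the FlagXxx naming convention for flag keys.
const FlagOutputDir = "output-dir"

// ExportCmd is a hypothetical command used only to demonstrate the conventions.
var ExportCmd = &cobra.Command{
	Use:   "export",
	Short: "export application state to a directory", // help text all lowercase
	RunE: func(cmd *cobra.Command, args []string) error {
		// Retrieve the flag value through Viper rather than a pointer flag.
		outputDir := viper.GetString(FlagOutputDir)
		if err := os.MkdirAll(outputDir, 0o755); err != nil {
			// Wrap errors with %w so callers can unwrap them.
			return fmt.Errorf("creating output directory %q: %w", outputDir, err)
		}
		return nil
	},
}

func init() {
	ExportCmd.Flags().String(FlagOutputDir, "export", "directory to write exported state to")
	// Bind the flag so viper.GetString(FlagOutputDir) reflects the CLI value.
	if err := viper.BindPFlag(FlagOutputDir, ExportCmd.Flags().Lookup(FlagOutputDir)); err != nil {
		panic(err)
	}
}
```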
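
Similarly, the table-driven testing pattern recommended in the Testing section above, combined with `require`, might look like this minimal sketch (the `parseMoniker` helper is hypothetical and exists only to give the test something to exercise):

```go
package commands

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/require"
)

// parseMoniker is a hypothetical helper used only to demonstrate the pattern.
// Per the Errors guidance, errors.New is preferred when no formatting is needed.
func parseMoniker(s string) (string, error) {
	if s == "" {
		return "", errors.New("moniker must not be empty")
	}
	return s, nil
}

func TestParseMoniker(t *testing.T) {
	testCases := []struct {
		name    string
		input   string
		want    string
		wantErr bool
	}{
		{name: "plain moniker", input: "node0", want: "node0"},
		{name: "empty moniker", input: "", wantErr: true},
	}

	for _, tc := range testCases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			got, err := parseMoniker(tc.input)
			if tc.wantErr {
				require.Error(t, err)
				return
			}
			require.NoError(t, err)
			require.Equal(t, tc.want, got)
		})
	}
}
```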
- -## Version - -* Every repo should have a version/version.go file that mimics the CometBFT repo -* We read the value of the constant version in our build scripts and hence it has to be a string - -## Non-Go Code - -* All non-Go code (`*.proto`, `Makefile`, `*.sh`), where there is no common - agreement on style, should be formatted according to - [EditorConfig](http://editorconfig.org/) config: - - ```toml - # top-most EditorConfig file - root = true - - # Unix-style newlines with a newline ending every file - [*] - charset = utf-8 - end_of_line = lf - insert_final_newline = true - trim_trailing_whitespace = true - - [Makefile] - indent_style = tab - - [*.sh] - indent_style = tab - - [*.proto] - indent_style = space - indent_size = 2 - ``` - - Make sure the file above (`.editorconfig`) are in the root directory of your - repo and you have a [plugin for your - editor](http://editorconfig.org/#download) installed. diff --git a/UPGRADING.md b/UPGRADING.md deleted file mode 100644 index fe550b1ece..0000000000 --- a/UPGRADING.md +++ /dev/null @@ -1,83 +0,0 @@ -# Upgrading CometBFT - -This guide provides instructions for upgrading to specific versions of CometBFT. - -## v0.34.35 - -It is recommended that CometBFT be built with Go v1.22+ since v1.21 is no longer -supported. - -## v0.34.33 - -It is recommended that CometBFT be built with Go v1.21+ since v1.20 is no longer -supported. - -## v0.34.29 - -It is recommended that CometBFT be built with Go v1.20+ since v1.19 is no longer -supported. - -## v0.34.28 - -For users explicitly making use of the Go APIs provided in the `crypto/merkle` -package, please note that, in order to fix a potential security issue, we had to -make a breaking change here. This change should only affect a small minority of -users. For more details, please see -[\#557](https://github.com/cometbft/cometbft/issues/557). - -## v0.34.27 - -This is the first official release of CometBFT, forked originally from -[Tendermint Core v0.34.24][v03424] and subsequently updated in Informal Systems' -public fork of Tendermint Core for [v0.34.25][v03425] and [v0.34.26][v03426]. - -### Upgrading from Tendermint Core - -If you already make use of Tendermint Core (either the original Tendermint Core -v0.34.24, or Informal Systems' public fork), you can upgrade to CometBFT -v0.34.27 by replacing your dependency in your `go.mod` file: - -```bash -go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.34.27 -``` - -We make use of the original module URL in order to minimize the impact of -switching to CometBFT. This is only possible in our v0.34 release series, and we -will be switching our module URL to `github.com/cometbft/cometbft` in the next -major release. - -### Home directory - -CometBFT, by default, will consider its home directory in `~/.cometbft` from now -on instead of `~/.tendermint`. - -### Environment variables - -The environment variable prefixes have now changed from `TM` to `CMT`. For -example, `TMHOME` or `TM_HOME` become `CMTHOME` or `CMT_HOME`. - -We have implemented a fallback check in case `TMHOME` is still set and `CMTHOME` -is not, but you will start to see a warning message in the logs if the old -`TMHOME` variable is set. This fallback check will be removed entirely in a -subsequent major release of CometBFT. - -### Building CometBFT - -CometBFT must be compiled using Go 1.19 or higher. The use of Go 1.18 is not -supported, since this version has reached end-of-life with the release of [Go 1.20][go120]. 
- -### Troubleshooting - -If you run into any trouble with this upgrade, please [contact us][discussions]. - ---- - -For historical upgrading instructions for Tendermint Core v0.34.24 and earlier, -please see the [Tendermint Core upgrading instructions][tmupgrade]. - -[v03424]: https://github.com/tendermint/tendermint/releases/tag/v0.34.24 -[v03425]: https://github.com/informalsystems/tendermint/releases/tag/v0.34.25 -[v03426]: https://github.com/informalsystems/tendermint/releases/tag/v0.34.26 -[discussions]: https://github.com/cometbft/cometbft/discussions -[tmupgrade]: https://github.com/tendermint/tendermint/blob/35581cf54ec436b8c37fabb43fdaa3f48339a170/UPGRADING.md -[go120]: https://go.dev/blog/go1.20 diff --git a/docs/DOCS_README.md b/docs/DOCS_README.md deleted file mode 100644 index af686162e7..0000000000 --- a/docs/DOCS_README.md +++ /dev/null @@ -1,14 +0,0 @@ -# Docs Build Workflow - -The documentation for CometBFT is hosted at: - -- - -built from the files in these (`/docs` and `/spec`) directories. - -Content modified and merged to these folders will be deployed to the `https://docs.cometbft.com` website using workflow logic from the [cometbft-docs](https://github.com/cometbft/cometbft-docs) repository - -### Building locally - -For information on how to build the documentation and view it locally, please visit the [cometbft-docs](https://github.com/cometbft/cometbft-docs) Github repository. - diff --git a/docs/README.md b/docs/README.md deleted file mode 100644 index cc36e176f2..0000000000 --- a/docs/README.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: CometBFT Documentation -description: CometBFT is a blockchain application platform. -footer: - newsletter: false ---- - -# CometBFT - -Welcome to the CometBFT documentation! - -CometBFT is a blockchain application platform; it provides the equivalent -of a web-server, database, and supporting libraries for blockchain applications -written in any programming language. Like a web-server serving web applications, -CometBFT serves blockchain applications. - -More formally, CometBFT performs Byzantine Fault Tolerant (BFT) -State Machine Replication (SMR) for arbitrary deterministic, finite state machines. -For more background, see [What is CometBFT?](introduction/README.md#what-is-cometbft). - -To get started quickly with an example application, see the [quick start guide](guides/quick-start.md). - -To upgrade from Tendermint Core v0.34.x to CometBFT v0.34.x, please see our -[upgrading instructions](./guides/upgrading-from-tm.md). - -To learn about application development on CometBFT, see the -[Application Blockchain Interface](https://github.com/cometbft/cometbft/tree/v0.34.x/spec/abci). - -For more details on using CometBFT, see the respective documentation for -[CometBFT internals](core/), [benchmarking and monitoring](tools/), and -[network deployments](networks/). - -## Contribute - -To recommend a change to the documentation, please submit a PR. Each major -release's documentation is housed on the corresponding release branch, e.g. for -the v0.34 release series, the documentation is housed on the `v0.34.x` branch. - -When submitting changes that affect all releases, please start by submitting a -PR to the docs on `main` - this will be backported to the relevant release -branches. If a change is exclusively relevant to a specific release, please -target that release branch with your PR. - -Changes to the documentation will be reviewed by the team and, if accepted and -merged, published to for the respective version(s). 
- -The build process for the documentation is housed in the -[CometBFT documentation repository](https://github.com/cometbft/cometbft-docs). diff --git a/docs/app-dev/README.md b/docs/app-dev/README.md deleted file mode 100644 index aff0a570ca..0000000000 --- a/docs/app-dev/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -order: false -parent: - order: 3 ---- - -# Apps - -- [Using ABCI-CLI](./abci-cli.md) -- [Getting Started](./getting-started.md) -- [Indexing transactions](./indexing-transactions.md) -- [Application Architecture Guide](./app-architecture.md) diff --git a/docs/app-dev/abci-cli.md b/docs/app-dev/abci-cli.md deleted file mode 100644 index 255b056e20..0000000000 --- a/docs/app-dev/abci-cli.md +++ /dev/null @@ -1,229 +0,0 @@ ---- -order: 2 ---- - -# Using ABCI-CLI - -To facilitate testing and debugging of ABCI servers and simple apps, we -built a CLI, the `abci-cli`, for sending ABCI messages from the command -line. - -## Install - -Make sure you [have Go installed](https://golang.org/doc/install). - -Next, install the `abci-cli` tool and example applications: - -```sh -git clone https://github.com/cometbft/cometbft.git -cd cometbft -make install_abci -``` - -Now run `abci-cli` to see the list of commands: - -```sh -Usage: - abci-cli [command] - -Available Commands: - batch Run a batch of abci commands against an application - check_tx Validate a tx - commit Commit the application state and return the Merkle root hash - console Start an interactive abci console for multiple commands - counter ABCI demo example - deliver_tx Deliver a new tx to the application - kvstore ABCI demo example - echo Have the application echo a message - help Help about any command - info Get some info about the application - query Query the application state - set_option Set an options on the application - -Flags: - --abci string socket or grpc (default "socket") - --address string address of application socket (default "tcp://127.0.0.1:26658") - -h, --help help for abci-cli - -v, --verbose print the command and results as if it were a console session - -Use "abci-cli [command] --help" for more information about a command. -``` - -## KVStore - First Example - -The `abci-cli` tool lets us send ABCI messages to our application, to -help build and debug them. - -The most important messages are `deliver_tx`, `check_tx`, and `commit`, -but there are others for convenience, configuration, and information -purposes. - -We'll start a kvstore application, which was installed at the same time -as `abci-cli` above. The kvstore just stores transactions in a merkle -tree. - -Its code can be found -[here](https://github.com/cometbft/cometbft/blob/v0.34.x/abci/cmd/abci-cli/abci-cli.go) -and looks like the following: - -```go -func cmdKVStore(cmd *cobra.Command, args []string) error { - logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout)) - - // Create the application - in memory or persisted to disk - var app types.Application - if flagPersist == "" { - app = kvstore.NewKVStoreApplication() - } else { - app = kvstore.NewPersistentKVStoreApplication(flagPersist) - app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore")) - } - - // Start the listener - srv, err := server.NewServer(flagAddrD, flagAbci, app) - if err != nil { - return err - } - srv.SetLogger(logger.With("module", "abci-server")) - if err := srv.Start(); err != nil { - return err - } - - // Stop upon receiving SIGTERM or CTRL-C. - tmos.TrapSignal(logger, func() { - // Cleanup - srv.Stop() - }) - - // Run forever. 
- select {} -} -``` - -Start the application by running: - -```sh -abci-cli kvstore -``` - -And in another terminal, run - -```sh -abci-cli echo hello -abci-cli info -``` - -You'll see something like: - -```sh --> data: hello --> data.hex: 68656C6C6F -``` - -and: - -```sh --> data: {"size":0} --> data.hex: 7B2273697A65223A307D -``` - -An ABCI application must provide two things: - -- a socket server -- a handler for ABCI messages - -When we run the `abci-cli` tool we open a new connection to the -application's socket server, send the given ABCI message, and wait for a -response. - -The server may be generic for a particular language, and we provide a -[reference implementation in -Golang](https://github.com/cometbft/cometbft/tree/v0.34.x/abci/server). See the -[list of other ABCI implementations](https://github.com/tendermint/awesome#ecosystem) for servers in -other languages. - -The handler is specific to the application, and may be arbitrary, so -long as it is deterministic and conforms to the ABCI interface -specification. - -So when we run `abci-cli info`, we open a new connection to the ABCI -server, which calls the `Info()` method on the application, which tells -us the number of transactions in our Merkle tree. - -Now, since every command opens a new connection, we provide the -`abci-cli console` and `abci-cli batch` commands, to allow multiple ABCI -messages to be sent over a single connection. - -Running `abci-cli console` should drop you in an interactive console for -speaking ABCI messages to your application. - -Try running these commands: - -```sh -> echo hello --> code: OK --> data: hello --> data.hex: 0x68656C6C6F - -> info --> code: OK --> data: {"size":0} --> data.hex: 0x7B2273697A65223A307D - -> commit --> code: OK --> data.hex: 0x0000000000000000 - -> deliver_tx "abc" --> code: OK - -> info --> code: OK --> data: {"size":1} --> data.hex: 0x7B2273697A65223A317D - -> commit --> code: OK --> data.hex: 0x0200000000000000 - -> query "abc" --> code: OK --> log: exists --> height: 2 --> value: abc --> value.hex: 616263 - -> deliver_tx "def=xyz" --> code: OK - -> commit --> code: OK --> data.hex: 0x0400000000000000 - -> query "def" --> code: OK --> log: exists --> height: 3 --> value: xyz --> value.hex: 78797A -``` - -Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if -we do `deliver_tx "abc=efg"` it will store `(abc, efg)`. - -You could put the commands in a file and run -`abci-cli --verbose batch < myfile`. - -Note that the `abci-cli` is designed strictly for testing and debugging. In a real -deployment, the role of sending messages is taken by CometBFT, which -connects to the app using three separate connections, each with its own -pattern of messages. - -For examples of running an ABCI app with CometBFT, see the -[getting started guide](./getting-started.md). - -## Bounties - -Want to write an app in your favorite language?! We'd be happy -to help you out. See [funding](https://github.com/interchainio/funding) opportunities from the -[Interchain Foundation](https://interchain.io) for implementations in new languages and more. diff --git a/docs/app-dev/app-architecture.md b/docs/app-dev/app-architecture.md deleted file mode 100644 index be42916402..0000000000 --- a/docs/app-dev/app-architecture.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -order: 3 ---- - -# Application Architecture Guide - -Here we provide a brief guide on the recommended architecture of a -CometBFT blockchain application. - -We distinguish here between two forms of "application". 
The first is the -end-user application, like a desktop-based wallet app that a user downloads, -which is where the user actually interacts with the system. The other is the -ABCI application, which is the logic that actually runs on the blockchain. -Transactions sent by an end-user application are ultimately processed by the ABCI -application after being committed by CometBFT. - -The end-user application communicates with a REST API exposed by the application. -The application runs CometBFT nodes and verifies CometBFT light-client proofs -through the CometBFT RPC. The CometBFT process communicates with -a local ABCI application, where the user query or transaction is actually -processed. - -The ABCI application must be a deterministic result of the consensus -engine of CometBFT - any external influence on the application state that didn't -come through CometBFT could cause a consensus failure. Thus _nothing_ -should communicate with the ABCI application except CometBFT via ABCI. - -If the ABCI application is written in Go, it can be compiled into the -CometBFT binary. Otherwise, it should use a unix socket to communicate -with CometBFT. If it's necessary to use TCP, extra care must be taken -to encrypt and authenticate the connection. - -All reads from the ABCI application happen through the CometBFT `/abci_query` -endpoint. All writes to the ABCI application happen through the CometBFT -`/broadcast_tx_*` endpoints. - -The Light-Client Daemon is what provides light clients (end users) with -nearly all the security of a full node. It formats and broadcasts -transactions, and verifies proofs of queries and transaction results. -Note that it need not be a daemon - the Light-Client logic could instead -be implemented in the same process as the end-user application. - -Note for those ABCI applications with weaker security requirements, the -functionality of the Light-Client Daemon can be moved into the ABCI -application process itself. That said, exposing the ABCI application process -to anything besides CometBFT over ABCI requires extreme caution, as -all transactions, and possibly all queries, should still pass through -CometBFT. - -See the following for more extensive documentation: - -- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1617) (legacy/deprecated) -- [CometBFT RPC Docs](https://docs.cometbft.com/v0.34/rpc/) -- [CometBFT in Production](../core/running-in-production.md) -- [ABCI spec](https://github.com/cometbft/cometbft/tree/v0.34.x/spec/abci) diff --git a/docs/app-dev/getting-started.md b/docs/app-dev/getting-started.md deleted file mode 100644 index 5473b36633..0000000000 --- a/docs/app-dev/getting-started.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -order: 1 ---- - -# Getting Started - -## First CometBFT App - -As a general purpose blockchain engine, CometBFT is agnostic to the -application you want to run. So, to run a complete blockchain that does -something useful, you must start two programs: one is CometBFT, -the other is your application, which can be written in any programming -language. Recall from [the intro to -ABCI](../introduction/what-is-cometbft.md#abci-overview) that CometBFT -handles all the p2p and consensus stuff, and just forwards transactions to the -application when they need to be validated, or when they're ready to be -executed and committed. - -In this guide, we show you some examples of how to run an application -using CometBFT. - -### Install - -The first apps we will work with are written in Go. 
To install them, you -need to [install Go](https://golang.org/doc/install), put -`$GOPATH/bin` in your `$PATH` and enable go modules. If you use `bash`, -follow these instructions: - -```bash -echo export GOPATH=\"\$HOME/go\" >> ~/.bash_profile -echo export PATH=\"\$PATH:\$GOPATH/bin\" >> ~/.bash_profile -``` - -Then run - -```bash -go get github.com/cometbft/cometbft -cd $GOPATH/src/github.com/cometbft/cometbft -make install_abci -``` - -Now you should have the `abci-cli` installed; run `abci-cli` to see the list of commands: - -``` -Usage: - abci-cli [command] - -Available Commands: - batch run a batch of abci commands against an application - check_tx validate a transaction - commit commit the application state and return the Merkle root hash - completion Generate the autocompletion script for the specified shell - console start an interactive ABCI console for multiple commands - deliver_tx deliver a new transaction to the application - echo have the application echo a message - help Help about any command - info get some info about the application - kvstore ABCI demo example - query query the application state - test run integration tests - version print ABCI console version - -Flags: - --abci string either socket or grpc (default "socket") - --address string address of application socket (default "tcp://0.0.0.0:26658") - -h, --help help for abci-cli - --log_level string set the logger level (default "debug") - -v, --verbose print the command and results as if it were a console session - -Use "abci-cli [command] --help" for more information about a command. -``` - -You'll notice the `kvstore` command, an example application written in Go. - -Now, let's run an app! - -## KVStore - A First Example - -The kvstore app is a [Merkle -tree](https://en.wikipedia.org/wiki/Merkle_tree) that just stores all -transactions. If the transaction contains an `=`, e.g. `key=value`, then -the `value` is stored under the `key` in the Merkle tree. Otherwise, the -full transaction bytes are stored as the key and the value. - -Let's start a kvstore application. - -```sh -abci-cli kvstore -``` - -In another terminal, we can start CometBFT. You should already have the -CometBFT binary installed. If not, follow the steps from -[here](../introduction/install.md). If you have never run CometBFT -before, use: - -```sh -cometbft init -cometbft node -``` - -If you have used CometBFT, you may want to reset the data for a new -blockchain by running `cometbft unsafe-reset-all`. Then you can run -`cometbft node` to start CometBFT, and connect to the app. For more -details, see [the guide on using CometBFT](../core/using-cometbft.md). - -You should see CometBFT making blocks! We can get the status of our -CometBFT node as follows: - -```sh -curl -s localhost:26657/status -``` - -The `-s` just silences `curl`. For nicer output, pipe the result into a -tool like [jq](https://stedolan.github.io/jq/) or `json_pp`. - -Now let's send some transactions to the kvstore. - -```sh -curl -s 'localhost:26657/broadcast_tx_commit?tx="abcd"' -``` - -Note the single quote (`'`) around the url, which ensures that the -double quotes (`"`) are not escaped by bash. This command sent a -transaction with bytes `abcd`, so `abcd` will be stored as both the key -and the value in the Merkle tree. 
The response should look something -like: - -```json -{ - "jsonrpc": "2.0", - "id": "", - "result": { - "check_tx": {}, - "deliver_tx": { - "tags": [ - { - "key": "YXBwLmNyZWF0b3I=", - "value": "amFl" - }, - { - "key": "YXBwLmtleQ==", - "value": "YWJjZA==" - } - ] - }, - "hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39", - "height": 14 - } -} -``` - -We can confirm that our transaction worked and the value got stored by -querying the app: - -```sh -curl -s 'localhost:26657/abci_query?data="abcd"' -``` - -The result should look like: - -```json -{ - "jsonrpc": "2.0", - "id": "", - "result": { - "response": { - "log": "exists", - "index": "-1", - "key": "YWJjZA==", - "value": "YWJjZA==" - } - } -} -``` - -Note the `value` in the result (`YWJjZA==`); this is the base64-encoding -of the ASCII of `abcd`. You can verify this in a python 2 shell by -running `"YWJjZA==".decode('base64')` or in python 3 shell by running -`import codecs; codecs.decode(b"YWJjZA==", 'base64').decode('ascii')`. -Stay tuned for a future release that [makes this output more -human-readable](https://github.com/tendermint/tendermint/issues/1794). - -Now let's try setting a different key and value: - -```sh -curl -s 'localhost:26657/broadcast_tx_commit?tx="name=satoshi"' -``` - -Now if we query for `name`, we should get `satoshi`, or `c2F0b3NoaQ==` -in base64: - -```sh -curl -s 'localhost:26657/abci_query?data="name"' -``` - -Try some other transactions and queries to make sure everything is -working! diff --git a/docs/app-dev/indexing-transactions.md b/docs/app-dev/indexing-transactions.md deleted file mode 100644 index 4d64e8ae08..0000000000 --- a/docs/app-dev/indexing-transactions.md +++ /dev/null @@ -1,280 +0,0 @@ ---- -order: 6 ---- - -# Indexing Transactions - -CometBFT allows you to index transactions and blocks and later query or -subscribe to their results. Transactions are indexed by `TxResult.Events` and -blocks are indexed by `Response(Begin|End)Block.Events`. However, transactions -are also indexed by a primary key which includes the transaction hash and maps -to and stores the corresponding `TxResult`. Blocks are indexed by a primary key -which includes the block height and maps to and stores the block height, i.e. -the block itself is never stored. - -Each event contains a type and a list of attributes, which are key-value pairs -denoting something about what happened during the method's execution. For more -details on `Events`, see the -[ABCI](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/abci/abci.md#events) -documentation. - -An `Event` has a composite key associated with it. A `compositeKey` is -constructed by its type and key separated by a dot. - -For example: - -```json -"jack": [ - "account.number": 100 -] -``` - -would be equal to the composite key of `jack.account.number`. - -By default, CometBFT will index all transactions by their respective hashes -and height and blocks by their height. - -CometBFT allows for different events within the same height to have -equal attributes. - -## Configuration - -Operators can configure indexing via the `[tx_index]` section. The `indexer` -field takes a series of supported indexers. If `null` is included, indexing will -be turned off regardless of other values provided. - -```toml -[tx-index] - -# The backend database to back the indexer. -# If indexer is "null", no indexer service will be used. -# -# The application will set which txs to index. 
In some cases a node operator will be able -# to decide which txs to index based on configuration set in the application. -# -# Options: -# 1) "null" -# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend). -# - When "kv" is chosen "tx.height" and "tx.hash" will always be indexed. -# 3) "psql" - the indexer services backed by PostgreSQL. -# indexer = "kv" -``` - -### Supported Indexers - -#### KV - -The `kv` indexer type is an embedded key-value store supported by the main -underlying CometBFT database. Using the `kv` indexer type allows you to query -for block and transaction events directly against CometBFT's RPC. However, the -query syntax is limited and so this indexer type might be deprecated or removed -entirely in the future. - -**Implementation and data layout** - -The kv indexer stores each attribute of an event individually, by creating a composite key -of the *event type*, *attribute key*, *attribute value*, *height* and *event sequence*. - -For example the following events: - -``` -Type: "transfer", - Attributes: []abci.EventAttribute{ - {Key: []byte("sender"), Value: []byte("Bob"), Index: true}, - {Key: []byte("recipient"), Value: []byte("Alice"), Index: true}, - {Key: []byte("balance"), Value: []byte("100"), Index: true}, - {Key: []byte("note"), Value: []byte("nothing"), Index: true}, - }, - -``` - -``` -Type: "transfer", - Attributes: []abci.EventAttribute{ - {Key: []byte("sender"), Value: []byte("Tom"), Index: true}, - {Key: []byte("recipient"), Value: []byte("Alice"), Index: true}, - {Key: []byte("balance"), Value: []byte("200"), Index: true}, - {Key: []byte("note"), Value: []byte("nothing"), Index: true}, - }, -``` - -will be represented as follows in the store: - -``` -Key value -transferSenderBobEndBlock1 1 -transferRecipientAliceEndBlock11 1 -transferBalance100EndBlock11 1 -transferNodeNothingEndblock11 1 ----- event2 ------ -transferSenderTomEndBlock12 1 -transferRecipientAliceEndBlock12 1 -transferBalance200EndBlock12 1 -transferNodeNothingEndblock12 1 - -``` -The key is thus formed of the event type, the attribute key and value, the event the attribute belongs to (`EndBlock` or `BeginBlock`), -the height and the event number. The event number is a local variable kept by the indexer and incremented when a new event is processed. - -It is an `int64` variable and has no other semantics besides being used to associate attributes belonging to the same events within a height. -This variable is not atomically incremented as event indexing is deterministic. **Should this ever change**, the event id generation -will be broken. - -#### PostgreSQL - -The `psql` indexer type allows an operator to enable block and transaction event -indexing by proxying it to an external PostgreSQL instance allowing for the events -to be stored in relational models. Since the events are stored in a RDBMS, operators -can leverage SQL to perform a series of rich and complex queries that are not -supported by the `kv` indexer type. Since operators can leverage SQL directly, -searching is not enabled for the `psql` indexer type via CometBFT's RPC -- any -such query will fail. - -Note, the SQL schema is stored in `state/indexer/sink/psql/schema.sql` and operators -must explicitly create the relations prior to starting CometBFT and enabling -the `psql` indexer type. - -Example: - -```shell -$ psql ... 
-f state/indexer/sink/psql/schema.sql -``` - -## Default Indexes - -The CometBFT tx and block event indexer indexes a few select reserved events -by default. - -### Transactions - -The following indexes are indexed by default: - -- `tx.height` -- `tx.hash` - -### Blocks - -The following indexes are indexed by default: - -- `block.height` - -## Adding Events - -Applications are free to define which events to index. CometBFT does not -expose functionality to define which events to index and which to ignore. In -your application's `DeliverTx` method, add the `Events` field with pairs of -UTF-8 encoded strings (e.g. "transfer.sender": "Bob", "transfer.recipient": -"Alice", "transfer.balance": "100"). - -Example: - -```go -func (app *KVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.Result { - //... - events := []abci.Event{ - { - Type: "transfer", - Attributes: []abci.EventAttribute{ - {Key: []byte("sender"), Value: []byte("Bob"), Index: true}, - {Key: []byte("recipient"), Value: []byte("Alice"), Index: true}, - {Key: []byte("balance"), Value: []byte("100"), Index: true}, - {Key: []byte("note"), Value: []byte("nothing"), Index: true}, - }, - }, - } - return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events} -} -``` - -If the indexer is not `null`, the transaction will be indexed. Each event is -indexed using a composite key in the form of `{eventType}.{eventAttribute}={eventValue}`, -e.g. `transfer.sender=bob`. - -## Querying Transactions Events - -You can query for a paginated set of transaction by their events by calling the -`/tx_search` RPC endpoint: - -```bash -curl "localhost:26657/tx_search?query=\"message.sender='cosmos1...'\"&prove=true" -``` -If the conditions are related to transaction events and the user wants to make sure the -conditions are true within the same events, the `match_events` keyword should be used, -as described [below](#querying_block_events) - -Check out [API docs](https://docs.cometbft.com/v0.34/rpc/#/Info/tx_search) -for more information on query syntax and other options. - -## Subscribing to Transactions - -Clients can subscribe to transactions with the given tags via WebSocket by providing -a query to `/subscribe` RPC endpoint. - -```json -{ - "jsonrpc": "2.0", - "method": "subscribe", - "id": "0", - "params": { - "query": "message.sender='cosmos1...'" - } -} -``` - -Check out [API docs](https://docs.cometbft.com/v0.34/rpc/#subscribe) for more information -on query syntax and other options. - -## Querying Block Events - -You can query for a paginated set of blocks by their events by calling the -`/block_search` RPC endpoint: - -```bash -curl "localhost:26657/block_search?query=\"block.height > 10 AND val_set.num_changed > 0\"" -``` - -## `match_events` keyword - -The query results in the height number(s) (or transaction hashes when querying transactions) which contain events whose attributes match the query conditions. -However, there are two options to query the indexers. To demonstrate the two modes, we reuse the two events -where Bob and Tom send money to Alice and query the block indexer. We issue the following query: - -```bash -curl "localhost:26657/block_search?query=\"sender=Bob AND balance = 200\"" -``` - -The result will return height 1 even though the attributes matching the conditions in the query -occurred in different events. 
- -If we wish to retrieve only heights where the attributes occurred within the same event, -the query syntax is as follows: - -```bash -curl "localhost:26657/block_search?query=\"sender=Bob AND balance = 200\"&match_events=true" -``` -Currently the default behavior is if `match_events` is set to false. - -Check out [API docs](https://docs.cometbft.com/v0.34/rpc/#/Info/block_search) -for more information on query syntax and other options. - -**Backwards compatibility** - -Storing the event sequence was introduced in CometBFT 0.34.25. As there are no previous releases of CometBFT, -all nodes running CometBFT will include the event sequence. However, mixed networks running CometBFT v0.34.25 and greater -and Tendermint Core versions before v0.34.25 are possible. On nodes running Tendermint Core, the `match_events` keyword -is ignored and the data is retrieved as if `match_events=false`. - -Additionally, if a node that was running Tendermint Core -when the data was first indexed, and switched to CometBFT, is queried, it will retrieve this previously indexed -data as if `match_events=false` (attributes can match the query conditions across different events on the same height). - - -# Event attribute value types - -Users can use anything as an event value. However, if the event attribute value is a number, the following restrictions apply: - -- Negative numbers will not be properly retrieved when querying the indexer -- When querying the events using `tx_search` and `block_search`, the value given as part of the condition cannot be a float. -- Any event value retrieved from the database will be represented as a `BigInt` (from `math/big`) -- Floating point values are not read from the database even with the introduction of `BigInt`. This was intentionally done -to keep the same beheaviour as was historically present and not introduce breaking changes. This will be fixed in the 0.38 series. diff --git a/docs/core/README.md b/docs/core/README.md deleted file mode 100644 index cc7d4b6fdb..0000000000 --- a/docs/core/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -order: 1 -parent: - title: Core - order: 4 ---- - -# Overview - -This section dives into the internals of the CometBFT's implementation. - -- [Using CometBFT](./using-cometbft.md) -- [Configuration](./configuration.md) -- [Running in Production](./running-in-production.md) -- [Metrics](./metrics.md) -- [Validators](./validators.md) -- [Subscribing to events](./subscription.md) -- [Block Structure](./block-structure.md) -- [RPC](./rpc.md) -- [Fast Sync](./fast-sync.md) -- [State Sync](./state-sync.md) -- [Mempool](./mempool.md) -- [Light Client](./light-client.md) diff --git a/docs/core/block-structure.md b/docs/core/block-structure.md deleted file mode 100644 index dc73e50275..0000000000 --- a/docs/core/block-structure.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -order: 8 ---- - -# Block Structure - -The CometBFT consensus engine records all agreements by a -supermajority of nodes into a blockchain, which is replicated among all -nodes. This blockchain is accessible via various RPC endpoints, mainly -`/block?height=` to get the full block, as well as -`/blockchain?minHeight=_&maxHeight=_` to get a list of headers. But what -exactly is stored in these blocks? - -The [specification](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/core/data_structures.md) contains a detailed description of each component - that's the best place to get started. 
- -To dig deeper, check out the [types package documentation](https://godoc.org/github.com/cometbft/cometbft/types). diff --git a/docs/core/configuration.md b/docs/core/configuration.md deleted file mode 100644 index a9ace28b93..0000000000 --- a/docs/core/configuration.md +++ /dev/null @@ -1,558 +0,0 @@ ---- -order: 3 ---- - -# Configuration - -CometBFT can be configured via a TOML file in -`$CMTHOME/config/config.toml`. Some of these parameters can be overridden by -command-line flags. For most users, the options in the `##### main base configuration options #####` are intended to be modified while config options -further below are intended for advance power users. - -## Options - -The default configuration file create by `cometbft init` has all -the parameters set with their default values. It will look something -like the file below, however, double check by inspecting the -`config.toml` created with your version of `cometbft` installed: - -```toml - -# This is a TOML config file. -# For more information, see https://github.com/toml-lang/toml - -# NOTE: Any path below can be absolute (e.g. "/var/myawesomeapp/data") or -# relative to the home directory (e.g. "data"). The home directory is -# "$HOME/.cometbft" by default, but could be changed via $CMTHOME env variable -# or --home cmd flag. - -####################################################################### -### Main Base Config Options ### -####################################################################### - -# TCP or UNIX socket address of the ABCI application, -# or the name of an ABCI application compiled in with the CometBFT binary -proxy_app = "tcp://127.0.0.1:26658" - -# A custom human readable name for this node -moniker = "anonymous" - -# If this node is many blocks behind the tip of the chain, FastSync -# allows them to catchup quickly by downloading blocks in parallel -# and verifying their commits -fast_sync = true - -# Database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb -# * goleveldb (github.com/syndtr/goleveldb - most popular implementation) -# - pure go -# - stable -# * cleveldb (uses levigo wrapper) -# - fast -# - requires gcc -# - use cleveldb build tag (go build -tags cleveldb) -# * boltdb (uses etcd's fork of bolt - github.com/etcd-io/bbolt) -# - EXPERIMENTAL -# - may be faster is some use-cases (random reads - indexer) -# - use boltdb build tag (go build -tags boltdb) -# * rocksdb (uses github.com/tecbot/gorocksdb) -# - EXPERIMENTAL -# - requires gcc -# - use rocksdb build tag (go build -tags rocksdb) -# * badgerdb (uses github.com/dgraph-io/badger) -# - EXPERIMENTAL -# - use badgerdb build tag (go build -tags badgerdb) -db_backend = "goleveldb" - -# Database directory -db_dir = "data" - -# Output level for logging, including package level options -log_level = "info" - -# Output format: 'plain' (colored text) or 'json' -log_format = "plain" - -##### additional base config options ##### - -# Path to the JSON file containing the initial validator set and other meta data -genesis_file = "config/genesis.json" - -# Path to the JSON file containing the private key to use as a validator in the consensus protocol -priv_validator_key_file = "config/priv_validator_key.json" - -# Path to the JSON file containing the last sign state of a validator -priv_validator_state_file = "data/priv_validator_state.json" - -# TCP or UNIX socket address for CometBFT to listen on for -# connections from an external PrivValidator process -priv_validator_laddr = "" - -# Path to the JSON file containing the private key 
to use for node authentication in the p2p protocol -node_key_file = "config/node_key.json" - -# Mechanism to connect to the ABCI application: socket | grpc -abci = "socket" - -# If true, query the ABCI app on connecting to a new peer -# so the app can decide if we should keep the connection or not -filter_peers = false - - -####################################################################### -### Advanced Configuration Options ### -####################################################################### - -####################################################### -### RPC Server Configuration Options ### -####################################################### -[rpc] - -# TCP or UNIX socket address for the RPC server to listen on -laddr = "tcp://127.0.0.1:26657" - -# A list of origins a cross-domain request can be executed from -# Default value '[]' disables cors support -# Use '["*"]' to allow any origin -cors_allowed_origins = [] - -# A list of methods the client is allowed to use with cross-domain requests -cors_allowed_methods = ["HEAD", "GET", "POST", ] - -# A list of non simple headers the client is allowed to use with cross-domain requests -cors_allowed_headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time", ] - -# TCP or UNIX socket address for the gRPC server to listen on -# NOTE: This server only supports /broadcast_tx_commit -grpc_laddr = "" - -# Maximum number of simultaneous connections. -# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections -# If you want to accept a larger number than the default, make sure -# you increase your OS limits. -# 0 - unlimited. -# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files} -# 1024 - 40 - 10 - 50 = 924 = ~900 -grpc_max_open_connections = 900 - -# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool -unsafe = false - -# Maximum number of simultaneous connections (including WebSocket). -# Does not include gRPC connections. See grpc_max_open_connections -# If you want to accept a larger number than the default, make sure -# you increase your OS limits. -# 0 - unlimited. -# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files} -# 1024 - 40 - 10 - 50 = 924 = ~900 -max_open_connections = 900 - -# Maximum number of unique clientIDs that can /subscribe -# If you're using /broadcast_tx_commit, set to the estimated maximum number -# of broadcast_tx_commit calls per block. -max_subscription_clients = 100 - -# Maximum number of unique queries a given client can /subscribe to -# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to -# the estimated # maximum number of broadcast_tx_commit calls per block. -max_subscriptions_per_client = 5 - -# Experimental parameter to specify the maximum number of events a node will -# buffer, per subscription, before returning an error and closing the -# subscription. Must be set to at least 100, but higher values will accommodate -# higher event throughput rates (and will use more memory). -experimental_subscription_buffer_size = 200 - -# Experimental parameter to specify the maximum number of RPC responses that -# can be buffered per WebSocket client. If clients cannot read from the -# WebSocket endpoint fast enough, they will be disconnected, so increasing this -# parameter may reduce the chances of them being disconnected (but will cause -# the node to use more memory). 
-# -# Must be at least the same as "experimental_subscription_buffer_size", -# otherwise connections could be dropped unnecessarily. This value should -# ideally be somewhat higher than "experimental_subscription_buffer_size" to -# accommodate non-subscription-related RPC responses. -experimental_websocket_write_buffer_size = 200 - -# If a WebSocket client cannot read fast enough, at present we may -# silently drop events instead of generating an error or disconnecting the -# client. -# -# Enabling this experimental parameter will cause the WebSocket connection to -# be closed instead if it cannot read fast enough, allowing for greater -# predictability in subscription behaviour. -experimental_close_on_slow_client = false - -# How long to wait for a tx to be committed during /broadcast_tx_commit. -# WARNING: Using a value larger than 10s will result in increasing the -# global HTTP write timeout, which applies to all connections and endpoints. -# See https://github.com/tendermint/tendermint/issues/3435 -timeout_broadcast_tx_commit = "10s" - -# Maximum size of request body, in bytes -max_body_bytes = 1000000 - -# Maximum size of request header, in bytes -max_header_bytes = 1048576 - -# The path to a file containing certificate that is used to create the HTTPS server. -# Might be either absolute path or path related to CometBFT's config directory. -# If the certificate is signed by a certificate authority, -# the certFile should be the concatenation of the server's certificate, any intermediates, -# and the CA's certificate. -# NOTE: both tls_cert_file and tls_key_file must be present for CometBFT to create HTTPS server. -# Otherwise, HTTP server is run. -tls_cert_file = "" - -# The path to a file containing matching private key that is used to create the HTTPS server. -# Might be either absolute path or path related to CometBFT's config directory. -# NOTE: both tls-cert-file and tls-key-file must be present for CometBFT to create HTTPS server. -# Otherwise, HTTP server is run. -tls_key_file = "" - -# pprof listen address (https://golang.org/pkg/net/http/pprof) -pprof_laddr = "" - -####################################################### -### P2P Configuration Options ### -####################################################### -[p2p] - -# Address to listen for incoming connections -laddr = "tcp://0.0.0.0:26656" - -# Address to advertise to peers for them to dial -# If empty, will use the same port as the laddr, -# and will introspect on the listener or use UPnP -# to figure out the address. 
ip and port are required -# example: 159.89.10.97:26656 -external_address = "" - -# Comma separated list of seed nodes to connect to -seeds = "" - -# Comma separated list of nodes to keep persistent connections to -persistent_peers = "" - -# UPNP port forwarding -upnp = false - -# Path to address book -addr_book_file = "config/addrbook.json" - -# Set true for strict address routability rules -# Set false for private or local networks -addr_book_strict = true - -# Maximum number of inbound peers -max_num_inbound_peers = 40 - -# Maximum number of outbound peers to connect to, excluding persistent peers -max_num_outbound_peers = 10 - -# List of node IDs, to which a connection will be (re)established ignoring any existing limits -unconditional_peer_ids = "" - -# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used) -persistent_peers_max_dial_period = "0s" - -# Time to wait before flushing messages out on the connection -flush_throttle_timeout = "100ms" - -# Maximum size of a message packet payload, in bytes -max_packet_msg_payload_size = 1024 - -# Rate at which packets can be sent, in bytes/second -send_rate = 5120000 - -# Rate at which packets can be received, in bytes/second -recv_rate = 5120000 - -# Set true to enable the peer-exchange reactor -pex = true - -# Seed mode, in which node constantly crawls the network and looks for -# peers. If another node asks it for addresses, it responds and disconnects. -# -# Does not work if the peer-exchange reactor is disabled. -seed_mode = false - -# Comma separated list of peer IDs to keep private (will not be gossiped to other peers) -private_peer_ids = "" - -# Toggle to disable guard against peers connecting from the same ip. -allow_duplicate_ip = false - -# Peer connection configuration. -handshake_timeout = "20s" -dial_timeout = "3s" - -####################################################### -### Mempool Configuration Option ### -####################################################### -[mempool] - -# Mempool version to use: -# 1) "v0" - (default) FIFO mempool. -# 2) "v1" - prioritized mempool. -# 3) "v2" - CAT -version = "v2" - -# Recheck (default: true) defines whether CometBFT should recheck the -# validity for all remaining transaction in the mempool after a block. -# Since a block affects the application state, some transactions in the -# mempool may become invalid. If this does not apply to your application, -# you can disable rechecking. -recheck = true -broadcast = true -wal_dir = "" - -# Maximum number of transactions in the mempool -size = 5000 - -# Limit the total size of all txs in the mempool. -# This only accounts for raw transactions (e.g. given 1MB transactions and -# max_txs_bytes=5MB, mempool will only accept 5 transactions). -max_txs_bytes = 1073741824 - -# Size of the cache (used to filter transactions we saw earlier) in transactions -cache_size = 10000 - -# Do not remove invalid transactions from the cache (default: false) -# Set to true if it's not possible for any invalid transaction to become valid -# again in the future. -keep-invalid-txs-in-cache = false - -# Maximum size of a single transaction. -# NOTE: the max size of a tx transmitted over the network is {max_tx_bytes}. -max_tx_bytes = 1048576 - -# Maximum size of a batch of transactions to send to a peer -# Including space needed by encoding (one varint per transaction). 
-# XXX: Unused due to https://github.com/tendermint/tendermint/issues/5796 -max_batch_bytes = 0 - -# ttl-duration, if non-zero, defines the maximum amount of time a transaction -# can exist for in the mempool. -# -# Note, if ttl-num-blocks is also defined, a transaction will be removed if it -# has existed in the mempool at least ttl-num-blocks number of blocks or if it's -# insertion time into the mempool is beyond ttl-duration. -ttl-duration = "0s" - -# ttl-num-blocks, if non-zero, defines the maximum number of blocks a transaction -# can exist for in the mempool. -# -# Note, if ttl-duration is also defined, a transaction will be removed if it -# has existed in the mempool at least ttl-num-blocks number of blocks or if -# it's insertion time into the mempool is beyond ttl-duration. -ttl-num-blocks = 0 - -####################################################### -### State Sync Configuration Options ### -####################################################### -[statesync] -# State sync rapidly bootstraps a new node by discovering, fetching, and restoring a state machine -# snapshot from peers instead of fetching and replaying historical blocks. Requires some peers in -# the network to take and serve state machine snapshots. State sync is not attempted if the node -# has any local state (LastBlockHeight > 0). The node will have a truncated block history, -# starting from the height of the snapshot. -enable = false - -# RPC servers (comma-separated) for light client verification of the synced state machine and -# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding -# header hash obtained from a trusted source, and a period during which validators can be trusted. -# -# For Cosmos SDK-based chains, trust_period should usually be about 2/3 of the unbonding time (~2 -# weeks) during which they can be financially punished (slashed) for misbehavior. -rpc_servers = "" -trust_height = 0 -trust_hash = "" -trust_period = "168h0m0s" - -# Time to spend discovering snapshots before initiating a restore. -discovery_time = "15s" - -# Temporary directory for state sync snapshot chunks, defaults to the OS tempdir (typically /tmp). -# Will create a new, randomly named directory within, and remove it when done. -temp_dir = "" - -# The timeout duration before re-requesting a chunk, possibly from a different -# peer (default: 1 minute). -chunk_request_timeout = "10s" - -# The number of concurrent chunk fetchers to run (default: 1). -chunk_fetchers = "4" - -####################################################### -### Fast Sync Configuration Connections ### -####################################################### -[fastsync] - -# Fast Sync version to use: -# 1) "v0" (default) - the legacy fast sync implementation -# 2) "v1" - refactor of v0 version for better testability -# 2) "v2" - complete redesign of v0, optimized for testability & readability -version = "v0" - -####################################################### -### Consensus Configuration Options ### -####################################################### -[consensus] - -wal_file = "data/cs.wal/wal" - -# How long we wait for a proposal block before prevoting nil -timeout_propose = "3s" -# How much timeout_propose increases with each round -timeout_propose_delta = "500ms" -# How long we wait after receiving +2/3 prevotes for “anything” (ie. 
not a single block or nil) -timeout_prevote = "1s" -# How much the timeout_prevote increases with each round -timeout_prevote_delta = "500ms" -# How long we wait after receiving +2/3 precommits for “anything” (ie. not a single block or nil) -timeout_precommit = "1s" -# How much the timeout_precommit increases with each round -timeout_precommit_delta = "500ms" -# How long we wait after committing a block, before starting on the new -# height (this gives us a chance to receive some more precommits, even -# though we already have +2/3). -timeout_commit = "1s" - -# How many blocks to look back to check existence of the node's consensus votes before joining consensus -# When non-zero, the node will panic upon restart -# if the same consensus key was used to sign {double_sign_check_height} last blocks. -# So, validators should stop the state machine, wait for some blocks, and then restart the state machine to avoid panic. -double_sign_check_height = 0 - -# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0) -skip_timeout_commit = false - -# EmptyBlocks mode and possible interval between empty blocks -create_empty_blocks = true -create_empty_blocks_interval = "0s" - -# Reactor sleep duration parameters -peer_gossip_sleep_duration = "100ms" -peer_query_maj23_sleep_duration = "2s" - -####################################################### -### Storage Configuration Options ### -####################################################### -[storage] - -# Set to true to discard ABCI responses from the state store, which can save a -# considerable amount of disk space. Set to false to ensure ABCI responses are -# persisted. ABCI responses are required for /block_results RPC queries, and to -# reindex events in the command-line tool. -discard_abci_responses = false - -####################################################### -### Transaction Indexer Configuration Options ### -####################################################### -[tx_index] - -# What indexer to use for transactions -# -# The application will set which txs to index. In some cases a node operator will be able -# to decide which txs to index based on configuration set in the application. -# -# Options: -# 1) "null" -# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend). -# - When "kv" is chosen "tx.height" and "tx.hash" will always be indexed. -# 3) "psql" - the indexer services backed by PostgreSQL. -# When "kv" or "psql" is chosen "tx.height" and "tx.hash" will always be indexed. -indexer = "kv" - -# The PostgreSQL connection configuration, the connection format: -# postgresql://:@:/? -psql-conn = "" - -####################################################### -### Instrumentation Configuration Options ### -####################################################### -[instrumentation] - -# When true, Prometheus metrics are served under /metrics on -# PrometheusListenAddr. -# Check out the documentation for the list of available metrics. -prometheus = false - -# Address to listen for Prometheus collector(s) connections -prometheus_listen_addr = ":26660" - -# Maximum number of simultaneous connections. -# If you want to accept a larger number than the default, make sure -# you increase your OS limits. -# 0 - unlimited. 
-max_open_connections = 3 - -# Instrumentation namespace -namespace = "cometbft" - ``` - -## Empty blocks vs no empty blocks -### create_empty_blocks = true - -If `create_empty_blocks` is set to `true` in your config, blocks will be created ~ every second (with default consensus parameters). You can regulate the delay between blocks by changing the `timeout_commit`. E.g. `timeout_commit = "10s"` should result in ~ 10 second blocks. - -### create_empty_blocks = false - -In this setting, blocks are created only when transactions are received. - -Note that after block H, CometBFT creates something we call a "proof block" -at height H+1, but only if the application hash changed. The reason for this is to support -proofs. If you have a transaction in block H that changes the state to X, the -new application hash will only be included in block H+1. If after your -transaction is committed, you want to get a light-client proof for the new state -(X), you need the new block to be committed in order to do that because the new -block has the new application hash for the state X. That's why we make a new -(empty) block if the application hash changes. Otherwise, you won't be able to -make a proof for the new state. - -Plus, if you set `create_empty_blocks_interval` to something other than the -default (`0`), CometBFT will be creating empty blocks even in the absence of -transactions every `create_empty_blocks_interval`. For instance, with -`create_empty_blocks = false` and `create_empty_blocks_interval = "30s"`, -CometBFT will only create blocks if there are transactions, or after waiting -30 seconds without receiving any transactions. - -## Consensus timeouts explained -There's a variety of information about timeouts in [Running in -production](./running-in-production.md#configuration-parameters). -You can also find a more detailed explanation in the paper describing -the Tendermint consensus algorithm, adopted by CometBFT: [The latest -gossip on BFT consensus](https://arxiv.org/abs/1807.04938). - -```toml -[consensus] -... - -timeout_propose = "3s" -timeout_propose_delta = "500ms" -timeout_prevote = "1s" -timeout_prevote_delta = "500ms" -timeout_precommit = "1s" -timeout_precommit_delta = "500ms" -timeout_commit = "1s" -``` - -Note that in a successful round, the only timeout that we always wait out in full, no -matter what, is `timeout_commit`. - -Here's a brief summary of the timeouts: - -- `timeout_propose` = how long a validator should wait for a proposal block before prevoting nil -- `timeout_propose_delta` = how much `timeout_propose` increases with each round -- `timeout_prevote` = how long a validator should wait after receiving +2/3 prevotes for - anything (ie. not a single block or nil) -- `timeout_prevote_delta` = how much the `timeout_prevote` increases with each round -- `timeout_precommit` = how long a validator should wait after receiving +2/3 precommits for - anything (ie.
not a single block or nil) -- `timeout_precommit_delta` = how much the `timeout_precommit` increases with each round -- `timeout_commit` = how long a validator should wait after committing a block, before starting - on the new height (this gives us a chance to receive some more precommits, - even though we already have +2/3) diff --git a/docs/core/fast-sync.md b/docs/core/fast-sync.md deleted file mode 100644 index b3a13a34ff..0000000000 --- a/docs/core/fast-sync.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -order: 10 ---- - -# Fast Sync - -In a proof of work blockchain, syncing with the chain is the same -process as staying up-to-date with the consensus: download blocks, and -look for the one with the most total work. In proof-of-stake, the -consensus process is more complex, as it involves rounds of -communication between the nodes to determine what block should be -committed next. Using this process to sync up with the blockchain from -scratch can take a very long time. It's much faster to just download -blocks and check the merkle tree of validators than to run the real-time -consensus gossip protocol. - -## Using Fast Sync - -To support faster syncing, CometBFT offers a `fast-sync` mode, which -is enabled by default, and can be toggled in the `config.toml` or via -`--fast_sync=false`. - -In this mode, the CometBFT daemon will sync hundreds of times faster -than if it used the real-time consensus process. Once caught up, the -daemon will switch out of fast sync and into the normal consensus mode. -After running for some time, the node is considered `caught up` if it -has at least one peer and its height is at least as high as the max -reported peer height. -See [the IsCaughtUp method](https://github.com/cometbft/cometbft/blob/v0.34.x/blockchain/v0/pool.go#L168). - -Note: There are three versions of fast sync. We recommend using v0 as v1 and v2 are still in beta. - If you would like to use a different version you can do so by changing the version in the `config.toml`: - -```toml -####################################################### -### Fast Sync Configuration Connections ### -####################################################### -[fastsync] - -# Fast Sync version to use: -# 1) "v0" (default) - the legacy fast sync implementation -# 2) "v1" - refactor of v0 version for better testability -# 2) "v2" - complete redesign of v0, optimized for testability & readability -version = "v0" -``` - -If we're lagging sufficiently, we should go back to fast syncing, but -this is an [open issue](https://github.com/tendermint/tendermint/issues/129). diff --git a/docs/core/how-to-read-logs.md b/docs/core/how-to-read-logs.md deleted file mode 100644 index e94a7570f0..0000000000 --- a/docs/core/how-to-read-logs.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -order: 7 ---- - -# How to read logs - -## Walkabout example - -We first create three connections (mempool, consensus and query) to the -application (running `kvstore` locally in this case). - -```sh -I[10-04|13:54:27.364] Starting multiAppConn module=proxy impl=multiAppConn -I[10-04|13:54:27.366] Starting localClient module=abci-client connection=query impl=localClient -I[10-04|13:54:27.366] Starting localClient module=abci-client connection=mempool impl=localClient -I[10-04|13:54:27.367] Starting localClient module=abci-client connection=consensus impl=localClient -``` - -Then CometBFT and the application perform a handshake. 
- -```sh -I[10-04|13:54:27.367] ABCI Handshake module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD -I[10-04|13:54:27.368] ABCI Replay Blocks module=consensus appHeight=90 storeHeight=90 stateHeight=90 -I[10-04|13:54:27.368] Completed ABCI Handshake - CometBFT and App are synced module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD -``` - -After that, we start a few more things like the event switch, reactors, -and perform UPNP discover in order to detect the IP address. - -```sh -I[10-04|13:54:27.374] Starting EventSwitch module=types impl=EventSwitch -I[10-04|13:54:27.375] This node is a validator module=consensus -I[10-04|13:54:27.379] Starting Node module=main impl=Node -I[10-04|13:54:27.381] Local listener module=p2p ip=:: port=26656 -I[10-04|13:54:27.382] Getting UPNP external address module=p2p -I[10-04|13:54:30.386] Could not perform UPNP discover module=p2p err="write udp4 0.0.0.0:38238->239.255.255.250:1900: i/o timeout" -I[10-04|13:54:30.386] Starting DefaultListener module=p2p impl=Listener(@10.0.2.15:26656) -I[10-04|13:54:30.387] Starting P2P Switch module=p2p impl="P2P Switch" -I[10-04|13:54:30.387] Starting MempoolReactor module=mempool impl=MempoolReactor -I[10-04|13:54:30.387] Starting BlockchainReactor module=blockchain impl=BlockchainReactor -I[10-04|13:54:30.387] Starting ConsensusReactor module=consensus impl=ConsensusReactor -I[10-04|13:54:30.387] ConsensusReactor module=consensus fastSync=false -I[10-04|13:54:30.387] Starting ConsensusState module=consensus impl=ConsensusState -I[10-04|13:54:30.387] Starting WAL module=consensus wal=/home/vagrant/.cometbft/data/cs.wal/wal impl=WAL -I[10-04|13:54:30.388] Starting TimeoutTicker module=consensus impl=TimeoutTicker -``` - -Notice the second row where CometBFT reports that "This node is a -validator". It also could be just an observer (regular node). - -Next we replay all the messages from the WAL. - -```sh -I[10-04|13:54:30.390] Catchup by replaying consensus messages module=consensus height=91 -I[10-04|13:54:30.390] Replay: New Step module=consensus height=91 round=0 step=RoundStepNewHeight -I[10-04|13:54:30.390] Replay: Done module=consensus -``` - -"Started node" message signals that everything is ready for work. - -```sh -I[10-04|13:54:30.391] Starting RPC HTTP server on tcp socket 0.0.0.0:26657 module=rpc-server -I[10-04|13:54:30.392] Started node module=main nodeInfo="NodeInfo{id: DF22D7C92C91082324A1312F092AA1DA197FA598DBBFB6526E, moniker: anonymous, network: test-chain-3MNw2N [remote , listen 10.0.2.15:26656], version: 0.11.0-10f361fc ([wire_version=0.6.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:26657])}" -``` - -Next follows a standard block creation cycle, where we enter a new -round, propose a block, receive more than 2/3 of prevotes, then -precommits and finally have a chance to commit a block. For details, -please refer to [Byzantine Consensus Algorithm](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/consensus/consensus.md). - -```sh -I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus -I[10-04|13:54:30.393] enterPropose(91/0). 
Current: 91/0/RoundStepNewRound module=consensus -I[10-04|13:54:30.393] enterPropose: Our turn to propose module=consensus proposer=125B0E3C5512F5C2B0E1109E31885C4511570C42 privValidator="PrivValidator{125B0E3C5512F5C2B0E1109E31885C4511570C42 LH:90, LR:0, LS:3}" -I[10-04|13:54:30.394] Signed proposal module=consensus height=91 round=0 proposal="Proposal{91/0 1:21B79872514F (-1,:0:000000000000) {/10EDEDD7C84E.../}}" -I[10-04|13:54:30.397] Received complete proposal block module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E -I[10-04|13:54:30.397] enterPrevote(91/0). Current: 91/0/RoundStepPropose module=consensus -I[10-04|13:54:30.397] enterPrevote: ProposalBlock is valid module=consensus height=91 round=0 -I[10-04|13:54:30.398] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" err=null -I[10-04|13:54:30.401] Added to prevote module=consensus vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" prevotes="VoteSet{H:91 R:0 T:1 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}" -I[10-04|13:54:30.401] enterPrecommit(91/0). Current: 91/0/RoundStepPrevote module=consensus -I[10-04|13:54:30.401] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=F671D562C7B9242900A286E1882EE64E5556FE9E -I[10-04|13:54:30.402] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" err=null -I[10-04|13:54:30.404] Added to precommit module=consensus vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" precommits="VoteSet{H:91 R:0 T:2 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}" -I[10-04|13:54:30.404] enterCommit(91/0). Current: 91/0/RoundStepPrecommit module=consensus -I[10-04|13:54:30.405] Finalizing commit of block with 0 txs module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E root=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD -I[10-04|13:54:30.405] Block{ - Header{ - ChainID: test-chain-3MNw2N - Height: 91 - Time: 2017-10-04 13:54:30.393 +0000 UTC - NumTxs: 0 - LastBlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544 - LastCommit: 56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D - Data: - Validators: CE25FBFF2E10C0D51AA1A07C064A96931BC8B297 - App: E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD - }#F671D562C7B9242900A286E1882EE64E5556FE9E - Data{ - - }# - Commit{ - BlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544 - Precommits: Vote{0:125B0E3C5512 90/00/2(Precommit) F15AB8BEF9A6 {/FE98E2B956F0.../}} - }#56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D -}#F671D562C7B9242900A286E1882EE64E5556FE9E module=consensus -I[10-04|13:54:30.408] Executed block module=state height=91 validTxs=0 invalidTxs=0 -I[10-04|13:54:30.410] Committed state module=state height=91 txs=0 hash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD -I[10-04|13:54:30.410] Recheck txs module=mempool numtxs=0 height=91 -``` - -## List of modules - -Here is the list of modules you may encounter in CometBFT's log and a -little overview what they do. - -- `abci-client` As mentioned in [Application Development Guide](../app-dev/abci-cli.md), CometBFT acts as an ABCI - client with respect to the application and maintains 3 connections: - mempool, consensus and query. The code used by CometBFT can - be found [here](https://github.com/cometbft/cometbft/blob/v0.34.x/abci/client). 
-- `blockchain` Provides storage, pool (a group of peers), and reactor - for both storing and exchanging blocks between peers. -- `consensus` The heart of CometBFT, which is the - implementation of the consensus algorithm. Includes two - "submodules": `wal` (write-ahead logging) for ensuring data - integrity and `replay` to replay blocks and messages on recovery - from a crash. -- `events` Simple event notification system. The list of events can be - found - [here](https://github.com/cometbft/cometbft/blob/v0.34.x/types/events.go). - You can subscribe to them by calling the `subscribe` RPC method. Refer - to [RPC docs](./rpc.md) for additional information. -- `mempool` Mempool module handles all incoming transactions, whether - they come from peers or the application. -- `p2p` Provides an abstraction around peer-to-peer communication. For - more details, please check out the - [README](https://github.com/cometbft/cometbft/blob/v0.34.x/p2p/README.md). -- `rpc` [CometBFT's RPC](./rpc.md). -- `rpc-server` RPC server. For implementation details, please read the - [doc.go](https://github.com/cometbft/cometbft/blob/v0.34.x/rpc/jsonrpc/doc.go). -- `state` Represents the latest state and execution submodule, which - executes blocks against the application. -- `types` A collection of the publicly exposed types and methods to - work with them. diff --git a/docs/core/light-client.md b/docs/core/light-client.md deleted file mode 100644 index 4a08a5129d..0000000000 --- a/docs/core/light-client.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -order: 13 ---- - -# Light Client - -Light clients are an important part of the complete blockchain system for most -applications. CometBFT provides unique speed and security properties for -light client applications. - -See our [light -package](https://pkg.go.dev/github.com/cometbft/cometbft/light?tab=doc). - -## Overview - -The objective of the light client protocol is to get a commit for a recent -block hash where the commit includes a majority of signatures from the last -known validator set. From there, all the application state is verifiable with -[merkle proofs](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/core/encoding.md#iavl-tree). - -## Properties - -- You get the full collateralized security benefits of CometBFT; no - need to wait for confirmations. -- You get the full speed benefits of CometBFT; transactions - commit instantly. -- You can get the most recent version of the application state - non-interactively (without committing anything to the blockchain). For - example, this means that you can get the most recent value of a name from the - name-registry without worrying about fork censorship attacks, without posting - a commit and waiting for confirmations. It's fast, secure, and free! - -## Where to obtain trusted height & hash - -[Trust Options](https://pkg.go.dev/github.com/cometbft/cometbft/light?tab=doc#TrustOptions) - -One way to obtain a semi-trusted hash & height is to query multiple full nodes -and compare their hashes: - -```bash -$ curl -s https://233.123.0.140:26657/commit | jq "{height: .result.signed_header.header.height, hash: .result.signed_header.commit.block_id.hash}" -{ - "height": "273", - "hash": "188F4F36CBCD2C91B57509BBF231C777E79B52EE3E0D90D06B1A25EB16E6E23D" -} -``` - -## Running a light client as an HTTP proxy server - -CometBFT comes with a built-in `cometbft light` command, which can be used -to run a light client proxy server, verifying CometBFT RPC.
All calls that -can be traced back to a block header by a proof will be verified before -being passed back to the caller. Other than that, it will present the same -interface as a full CometBFT node. - -You can start the light client proxy server by running `cometbft light <chainID>`, -with a variety of flags to specify the primary node, the witness nodes (which cross-check -the information provided by the primary), the hash and height of the trusted header, -and more. - -For example: - -```bash -$ cometbft light supernova -p tcp://233.123.0.140:26657 \ - -w tcp://179.63.29.15:26657,tcp://144.165.223.135:26657 \ - --height=10 --hash=37E9A6DD3FA25E83B22C18835401E8E56088D0D7ABC6FD99FCDC920DD76C1C57 -``` - -For additional options, run `cometbft light --help`. diff --git a/docs/core/mempool.md b/docs/core/mempool.md deleted file mode 100644 index 8dd9687819..0000000000 --- a/docs/core/mempool.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -order: 12 ---- - -# Mempool - -## Transaction ordering - -Currently, there's no ordering of transactions other than the order in which they've -arrived (via RPC or from other nodes). - -So the only way to specify the order is to send them to a single node. - -valA: - -- `tx1` -- `tx2` -- `tx3` - -If the transactions are split up across different nodes, there's no way to -ensure they are processed in the expected order. - -valA: - -- `tx1` -- `tx2` - -valB: - -- `tx3` - -If valB is the proposer, the order might be: - -- `tx3` -- `tx1` -- `tx2` - -If valA is the proposer, the order might be: - -- `tx1` -- `tx2` -- `tx3` - -That said, if the transactions contain some internal value, like an -order/nonce/sequence number, the application can reject transactions that are -out of order. So if a node receives `tx3`, then `tx1`, it can reject `tx3` and then -accept `tx1`. The sender can then retry sending `tx3`, which should probably be -rejected until the node has seen `tx2`. diff --git a/docs/core/metrics.md b/docs/core/metrics.md deleted file mode 100644 index 749c2a7b6e..0000000000 --- a/docs/core/metrics.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -order: 5 ---- - -# Metrics - -CometBFT can report and serve Prometheus metrics, which in turn can -be consumed by Prometheus collector(s). - -This functionality is disabled by default. - -To enable the Prometheus metrics, set `instrumentation.prometheus=true` in your -config file. Metrics will be served under `/metrics` on port 26660 by default. -The listen address can be changed in the config file (see -`instrumentation.prometheus_listen_addr`).
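-For a quick sanity check that metrics are actually being exported, you can query the -endpoint directly once `prometheus = true` is set and the node has been restarted (this -example assumes the default listen address `:26660`): - -```bash -# Fetch the first few Prometheus metrics exposed by a local node -curl -s http://127.0.0.1:26660/metrics | head -```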
- -## List of available metrics - -The following metrics are available: - -| **Name** | **Type** | **Tags** | **Description** | -|--------------------------------------------|-----------|------------------|------------------------------------------------------------------------| -| consensus\_height | Gauge | | Height of the chain | -| consensus\_validators | Gauge | | Number of validators | -| consensus\_validators\_power | Gauge | | Total voting power of all validators | -| consensus\_validator\_power | Gauge | | Voting power of the node if in the validator set | -| consensus\_validator\_last\_signed\_height | Gauge | | Last height the node signed a block, if the node is a validator | -| consensus\_validator\_missed\_blocks | Gauge | | Total amount of blocks missed for the node, if the node is a validator | -| consensus\_missing\_validators | Gauge | | Number of validators who did not sign | -| consensus\_missing\_validators\_power | Gauge | | Total voting power of the missing validators | -| consensus\_byzantine\_validators | Gauge | | Number of validators who tried to double sign | -| consensus\_byzantine\_validators\_power | Gauge | | Total voting power of the byzantine validators | -| consensus\_block\_interval\_seconds | Histogram | | Time between this and last block (Block.Header.Time) in seconds | -| consensus\_rounds | Gauge | | Number of rounds | -| consensus\_num\_txs | Gauge | | Number of transactions | -| consensus\_total\_txs | Gauge | | Total number of transactions committed | -| consensus\_block\_parts | Counter | peer\_id | Number of blockparts transmitted by peer | -| consensus\_latest\_block\_height | Gauge | | /status sync\_info number | -| consensus\_fast\_syncing | Gauge | | Either 0 (not fast syncing) or 1 (syncing) | -| consensus\_state\_syncing | Gauge | | Either 0 (not state syncing) or 1 (syncing) | -| consensus\_block\_size\_bytes | Gauge | | Block size in bytes | -| consensus\_step\_duration | Histogram | step | Histogram of durations for each step in the consensus protocol | -| consensus\_block\_gossip\_parts\_received | Counter | matches\_current | Number of block parts received by the node | -| p2p\_message\_send\_bytes\_total | Counter | message\_type | Number of bytes sent to all peers per message type | -| p2p\_message\_receive\_bytes\_total | Counter | message\_type | Number of bytes received from all peers per message type | -| p2p\_peers | Gauge | | Number of peers node's connected to | -| p2p\_peer\_receive\_bytes\_total | Counter | peer\_id, chID | Number of bytes per channel received from a given peer | -| p2p\_peer\_send\_bytes\_total | Counter | peer\_id, chID | Number of bytes per channel sent to a given peer | -| p2p\_peer\_pending\_send\_bytes | Gauge | peer\_id | Number of pending bytes to be sent to a given peer | -| p2p\_num\_txs | Gauge | peer\_id | Number of transactions submitted by each peer\_id | -| p2p\_pending\_send\_bytes | Gauge | peer\_id | Amount of data pending to be sent to peer | -| mempool\_size | Gauge | | Number of uncommitted transactions | -| mempool\_tx\_size\_bytes | Histogram | | Transaction sizes in bytes | -| mempool\_failed\_txs | Counter | | Number of failed transactions | -| mempool\_recheck\_times | Counter | | Number of transactions rechecked in the mempool | -| state\_block\_processing\_time | Histogram | | Time between BeginBlock and EndBlock in ms | - - -## Useful queries - -Percentage of missing + byzantine validators: - -```md -((consensus\_byzantine\_validators\_power + 
consensus\_missing\_validators\_power) / consensus\_validators\_power) * 100 -``` diff --git a/docs/core/rpc.md b/docs/core/rpc.md deleted file mode 100644 index e118d5a3a2..0000000000 --- a/docs/core/rpc.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -order: 9 ---- - -# RPC - -The RPC documentation is hosted here: - -- [OpenAPI reference](../rpc) diff --git a/docs/core/running-in-production.md b/docs/core/running-in-production.md deleted file mode 100644 index 88ef6686c7..0000000000 --- a/docs/core/running-in-production.md +++ /dev/null @@ -1,412 +0,0 @@ ---- -order: 4 ---- - -# Running in production - -## Database - -By default, CometBFT uses the `syndtr/goleveldb` package for its in-process -key-value database. If you want maximal performance, it may be best to install -the real C-implementation of LevelDB and compile CometBFT to use that using -`make build COMETBFT_BUILD_OPTIONS=cleveldb`. See the [install -instructions](../introduction/install.md) for details. - -CometBFT keeps multiple distinct databases in the `$CMTHOME/data`: - -- `blockstore.db`: Keeps the entire blockchain - stores blocks, - block commits, and block meta data, each indexed by height. Used to sync new - peers. -- `evidence.db`: Stores all verified evidence of misbehaviour. -- `state.db`: Stores the current blockchain state (ie. height, validators, - consensus params). Only grows if consensus params or validators change. Also - used to temporarily store intermediate results during block processing. -- `tx_index.db`: Indexes txs (and their results) by tx hash and by DeliverTx result events. - -By default, CometBFT will only index txs by their hash and height, not by their DeliverTx -result events. See [indexing transactions](../app-dev/indexing-transactions.md) for -details. - -Applications can expose block pruning strategies to the node operator. -Please read the documentation of your application to find out more details. - -Applications can use [state sync](./state-sync.md) to help nodes bootstrap quickly. - -## Logging - -Default logging level (`log_level = "main:info,state:info,statesync:info,*:error"`) should suffice for -normal operation mode. Read [this -post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756) -for details on how to configure `log_level` config variable. Some of the -modules can be found [here](./how-to-read-logs.md#list-of-modules). If -you're trying to debug CometBFT or asked to provide logs with debug -logging level, you can do so by running CometBFT with -`--log_level="*:debug"`. - -## Write Ahead Logs (WAL) - -CometBFT uses write ahead logs for the consensus (`cs.wal`) and the mempool -(`mempool.wal`). Both WALs have a max size of 1GB and are automatically rotated. - -### Consensus WAL - -The `consensus.wal` is used to ensure we can recover from a crash at any point -in the consensus state machine. -It writes all consensus messages (timeouts, proposals, block part, or vote) -to a single file, flushing to disk before processing messages from its own -validator. Since CometBFT validators are expected to never sign a conflicting vote, the -WAL ensures we can always recover deterministically to the latest state of the consensus without -using the network or re-signing any consensus messages. - -If your `consensus.wal` is corrupted, see [below](#wal-corruption). - -### Mempool WAL - -The `mempool.wal` logs all incoming txs before running CheckTx, but is -otherwise not used in any programmatic way. It's just a kind of manual -safe guard. 
Note the mempool provides no durability guarantees - a tx sent to one or many nodes -may never make it into the blockchain if those nodes crash before being able to -propose it. Clients must monitor their txs by subscribing over websockets, -polling for them, or using `/broadcast_tx_commit`. In the worst case, txs can be -resent from the mempool WAL manually. - -For the above reasons, the `mempool.wal` is disabled by default. To enable, set -`mempool.wal_dir` to where you want the WAL to be located (e.g. -`data/mempool.wal`). - -## DoS Exposure and Mitigation - -Validators are supposed to setup [Sentry Node Architecture](./validators.md) -to prevent Denial-of-Service attacks. - -### P2P - -The core of the CometBFT peer-to-peer system is `MConnection`. Each -connection has `MaxPacketMsgPayloadSize`, which is the maximum packet -size and bounded send & receive queues. One can impose restrictions on -send & receive rate per connection (`SendRate`, `RecvRate`). - -The number of open P2P connections can become quite large, and hit the operating system's open -file limit (since TCP connections are considered files on UNIX-based systems). Nodes should be -given a sizable open file limit, e.g. 8192, via `ulimit -n 8192` or other deployment-specific -mechanisms. - -### RPC - -#### Attack Exposure and Mitigation - -**It is generally not recommended for RPC endpoints to be exposed publicly, and -especially so if the node in question is a validator**, as the CometBFT RPC does -not currently provide advanced security features. Public exposure of RPC -endpoints without appropriate protection can make the associated node vulnerable -to a variety of attacks. - -It is entirely up to operators to ensure, if nodes' RPC endpoints have to be -exposed publicly, that appropriate measures have been taken to mitigate against -attacks. Some examples of mitigation measures include, but are not limited to: - -- Never publicly exposing the RPC endpoints of validators (i.e. if the RPC - endpoints absolutely have to be exposed, ensure you do so only on full nodes - and with appropriate protection) -- Correct usage of rate-limiting, authentication and caching (e.g. as provided - by reverse proxies like [nginx](https://nginx.org/) and/or DDoS protection - services like [Cloudflare](https://www.cloudflare.com)) -- Only exposing the specific endpoints absolutely necessary for the relevant use - cases (configurable via nginx/Cloudflare/etc.) - -If no expertise is available to the operator to assist with securing nodes' RPC -endpoints, it is strongly recommended to never expose those endpoints publicly. - -**Under no condition should any of the [unsafe RPC endpoints](../rpc/#/Unsafe) -ever be exposed publicly.** - -#### Endpoints Returning Multiple Entries - -Endpoints returning multiple entries are limited by default to return 30 -elements (100 max). See the [RPC Documentation](../rpc/) for more information. - -## Debugging CometBFT - -If you ever have to debug CometBFT, the first thing you should probably do is -check out the logs. See [How to read logs](./how-to-read-logs.md), where we -explain what certain log statements mean. - -If, after skimming through the logs, things are not clear still, the next thing -to try is querying the `/status` RPC endpoint. It provides the necessary info: -whenever the node is syncing or not, what height it is on, etc. 
- -```bash -curl http(s)://{ip}:{rpcPort}/status -``` - -`/dump_consensus_state` will give you a detailed overview of the consensus -state (proposer, latest validators, peers states). From it, you should be able -to figure out why, for example, the network had halted. - -```bash -curl http(s)://{ip}:{rpcPort}/dump_consensus_state -``` - -There is a reduced version of this endpoint - `/consensus_state`, which returns -just the votes seen at the current height. - -If, after consulting with the logs and above endpoints, you still have no idea -what's happening, consider using `cometbft debug kill` sub-command. This -command will scrap all the available info and kill the process. See -[Debugging](../tools/debugging.md) for the exact format. - -You can inspect the resulting archive yourself or create an issue on -[Github](https://github.com/cometbft/cometbft). Before opening an issue -however, be sure to check if there's [no existing -issue](https://github.com/cometbft/cometbft/issues) already. - -## Monitoring CometBFT - -Each CometBFT instance has a standard `/health` RPC endpoint, which responds -with 200 (OK) if everything is fine and 500 (or no response) - if something is -wrong. - -Other useful endpoints include mentioned earlier `/status`, `/net_info` and -`/validators`. - -CometBFT also can report and serve Prometheus metrics. See -[Metrics](./metrics.md). - -`cometbft debug dump` sub-command can be used to periodically dump useful -information into an archive. See [Debugging](../tools/debugging.md) for more -information. - -## What happens when my app dies - -You are supposed to run CometBFT under a [process -supervisor](https://en.wikipedia.org/wiki/Process_supervision) (like -systemd or runit). It will ensure CometBFT is always running (despite -possible errors). - -Getting back to the original question, if your application dies, -CometBFT will panic. After a process supervisor restarts your -application, CometBFT should be able to reconnect successfully. The -order of restart does not matter for it. - -## Signal handling - -We catch SIGINT and SIGTERM and try to clean up nicely. For other -signals we use the default behavior in Go: -[Default behavior of signals in Go programs](https://golang.org/pkg/os/signal/#hdr-Default_behavior_of_signals_in_Go_programs). - -## Corruption - -**NOTE:** Make sure you have a backup of the CometBFT data directory. - -### Possible causes - -Remember that most corruption is caused by hardware issues: - -- RAID controllers with faulty / worn out battery backup, and an unexpected power loss -- Hard disk drives with write-back cache enabled, and an unexpected power loss -- Cheap SSDs with insufficient power-loss protection, and an unexpected power-loss -- Defective RAM -- Defective or overheating CPU(s) - -Other causes can be: - -- Database systems configured with fsync=off and an OS crash or power loss -- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit. -- CometBFT bugs -- Operating system bugs -- Admin error (e.g., directly modifying CometBFT data-directory contents) - -(Source: ) - -### WAL Corruption - -If consensus WAL is corrupted at the latest height and you are trying to start -CometBFT, replay will fail with panic. - -Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take: - -1. Delete the WAL file and restart CometBFT. It will attempt to sync with other peers. -2. 
Try to repair the WAL file manually: - -1) Create a backup of the corrupted WAL file: - - ```sh - cp "$CMTHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup - ``` - -2) Use `./scripts/wal2json` to create a human-readable version: - - ```sh - ./scripts/wal2json/wal2json "$CMTHOME/data/cs.wal/wal" > /tmp/corrupted_wal - ``` - -3) Search for a "CORRUPTED MESSAGE" line. -4) By looking at the previous message and the message after the corrupted one - and looking at the logs, try to rebuild the message. If the subsequent - messages are marked as corrupted too (this may happen if the length header - got corrupted or some writes did not make it to the WAL ~ truncation), - then remove all the lines starting from the corrupted one and restart - CometBFT. - - ```sh - $EDITOR /tmp/corrupted_wal - ``` - -5) After editing, convert this file back into binary form by running: - - ```sh - ./scripts/json2wal/json2wal /tmp/corrupted_wal $CMTHOME/data/cs.wal/wal - ``` - -## Hardware - -### Processor and Memory - -While actual specs vary depending on the load and validator count, the minimal -requirements are: - -- 1GB RAM -- 25GB of disk space -- 1.4 GHz CPU - -SSD disks are preferable for applications with high transaction throughput. - -Recommended: - -- 2GB RAM -- 100GB SSD -- x64 2.0 GHz 2v CPU - -While for now, CometBFT stores all the history and it may require significant -disk space over time, we are planning to implement state syncing (see [this -issue](https://github.com/tendermint/tendermint/issues/828)). So, storing all -the past blocks will not be necessary. - -### Validator signing on 32 bit architectures (or ARM) - -Both our `ed25519` and `secp256k1` implementations require constant time -`uint64` multiplication. Non-constant time crypto can (and has) leaked -private keys on both `ed25519` and `secp256k1`. This doesn't exist in hardware -on 32 bit x86 platforms ([source](https://bearssl.org/ctmul.html)), and it -depends on the compiler to enforce that it is constant time. It's unclear at -this point whether the Go compiler does this correctly for all -implementations. - -**We do not support nor recommend running a validator on 32 bit architectures OR -the "VIA Nano 2000 Series", and the architectures in the ARM section rated -"S-".** - -### Operating Systems - -CometBFT can be compiled for a wide range of operating systems thanks to the Go -language (the list of \$OS/\$ARCH pairs can be found -[here](https://golang.org/doc/install/source#environment)). - -While we do not favor any operating system, more secure and stable Linux server -distributions (like CentOS) should be preferred over desktop operating systems -(like Mac OS). - -### Miscellaneous - -NOTE: if you are going to use CometBFT in a public domain, make sure -you read [hardware recommendations](https://cosmos.network/validators) for a validator in the -Cosmos network. - -## Configuration parameters - -- `p2p.flush_throttle_timeout` -- `p2p.max_packet_msg_payload_size` -- `p2p.send_rate` -- `p2p.recv_rate` - -If you are going to use CometBFT in a private domain and you have a -private high-speed network among your peers, it makes sense to lower the -flush throttle timeout and increase the other parameters.
- -```toml -[p2p] - -send_rate=20000000 # 2MB/s -recv_rate=20000000 # 2MB/s -flush_throttle_timeout=10 -max_packet_msg_payload_size=10240 # 10KB -``` - -- `mempool.recheck` - -After every block, CometBFT rechecks every transaction left in the -mempool to see if transactions committed in that block affected the -application state, so some of the transactions left may become invalid. -If that does not apply to your application, you can disable it by -setting `mempool.recheck=false`. - -- `mempool.broadcast` - -Setting this to false will stop the mempool from relaying transactions -to other peers until they are included in a block. It means only the -peer you send the tx to will see it until it is included in a block. - -- `consensus.skip_timeout_commit` - -We want `skip_timeout_commit=false` when there is economics on the line -because proposers should wait to hear for more votes. But if you don't -care about that and want the fastest consensus, you can skip it. It will -be kept false by default for public deployments (e.g. [Cosmos -Hub](https://cosmos.network/intro/hub)) while for enterprise -applications, setting it to true is not a problem. - -- `consensus.peer_gossip_sleep_duration` - -You can try to reduce the time your node sleeps before checking if -theres something to send its peers. - -- `consensus.timeout_commit` - -You can also try lowering `timeout_commit` (time we sleep before -proposing the next block). - -- `p2p.addr_book_strict` - -By default, CometBFT checks whenever a peer's address is routable before -saving it to the address book. The address is considered as routable if the IP -is [valid and within allowed ranges](https://github.com/cometbft/cometbft/blob/v0.34.x/p2p/netaddress.go#L258). - -This may not be the case for private or local networks, where your IP range is usually -strictly limited and private. If that case, you need to set `addr_book_strict` -to `false` (turn it off). - -- `rpc.max_open_connections` - -By default, the number of simultaneous connections is limited because most OS -give you limited number of file descriptors. - -If you want to accept greater number of connections, you will need to increase -these limits. - -[Sysctls to tune the system to be able to open more connections](https://github.com/satori-com/tcpkali/blob/master/doc/tcpkali.man.md#sysctls-to-tune-the-system-to-be-able-to-open-more-connections) - -The process file limits must also be increased, e.g. via `ulimit -n 8192`. - -...for N connections, such as 50k: - -```md -kern.maxfiles=10000+2*N # BSD -kern.maxfilesperproc=100+2*N # BSD -kern.ipc.maxsockets=10000+2*N # BSD -fs.file-max=10000+2*N # Linux -net.ipv4.tcp_max_orphans=N # Linux - -# For load-generating clients. -net.ipv4.ip_local_port_range="10000 65535" # Linux. -net.inet.ip.portrange.first=10000 # BSD/Mac. -net.inet.ip.portrange.last=65535 # (Enough for N < 55535) -net.ipv4.tcp_tw_reuse=1 # Linux -net.inet.tcp.maxtcptw=2*N # BSD - -# If using netfilter on Linux: -net.netfilter.nf_conntrack_max=N -echo $((N/8)) > /sys/module/nf_conntrack/parameters/hashsize -``` - -The similar option exists for limiting the number of gRPC connections - -`rpc.grpc_max_open_connections`. diff --git a/docs/core/state-sync.md b/docs/core/state-sync.md deleted file mode 100644 index a6e314fe57..0000000000 --- a/docs/core/state-sync.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -order: 11 ---- - -# State Sync - -With fast sync a node is downloading all of the data of an application from genesis and verifying it. 
-With state sync your node will download data related to the head or near the head of the chain and verify the data. -This leads to drastically shorter times for joining a network. - -## Using State Sync - -State sync will continuously work in the background to supply nodes with chunked data when bootstrapping. - -> NOTE: Before trying to use state sync, see if the application you are operating a node for supports it. - -Under the state sync section in `config.toml` you will find multiple settings that need to be configured in order for your node to use state sync. - -Let's break down the settings: - -- `enable`: Set to `true` to tell the node to use state sync to bootstrap. -- `rpc_servers`: RPC servers are needed because state sync utilizes the light client for verification. - - At least 2 servers are required; more is always helpful. -- `temp_dir`: Temporary directory used to store the chunks on the machine's local storage. If nothing is set, a directory will be created in `/tmp`. - -The next pieces of information you will need to acquire through publicly exposed RPCs or a block explorer that you trust: - -- `trust_height`: Trusted height defines the height at which your node should trust the chain. -- `trust_hash`: Trusted hash is the hash in the `BlockID` corresponding to the trusted height. -- `trust_period`: Trust period is the period in which headers can be verified. - > :warning: This value should be significantly smaller than the unbonding period. - -If you are relying on publicly exposed RPCs to get the needed information, you can use `curl`. - -Example: - -```bash -curl -s https://233.123.0.140:26657/commit | jq "{height: .result.signed_header.header.height, hash: .result.signed_header.commit.block_id.hash}" -``` - -The response will be: - -```json -{ - "height": "273", - "hash": "188F4F36CBCD2C91B57509BBF231C777E79B52EE3E0D90D06B1A25EB16E6E23D" -} -``` diff --git a/docs/core/subscription.md b/docs/core/subscription.md deleted file mode 100644 index 796a415ff1..0000000000 --- a/docs/core/subscription.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -order: 7 ---- - -# Subscribing to events via Websocket - -CometBFT emits different events, which you can subscribe to via -[Websocket](https://en.wikipedia.org/wiki/WebSocket). This can be useful -for third-party applications (for analysis) or for inspecting state. - -[List of events](https://godoc.org/github.com/cometbft/cometbft/types#pkg-constants) - -To connect to a node via websocket from the CLI, you can use a tool such as -[wscat](https://github.com/websockets/wscat) and run: - -```sh -wscat -c ws://127.0.0.1:26657/websocket -``` - -NOTE: If your node's RPC endpoint is TLS-enabled, use the scheme `wss` instead of `ws`. - -You can subscribe to any of the events above by calling the `subscribe` RPC -method via Websocket along with a valid query. - -```json -{ - "jsonrpc": "2.0", - "method": "subscribe", - "id": 0, - "params": { - "query": "tm.event='NewBlock'" - } -} -``` - -Check out [API docs](https://docs.cometbft.com/v0.34/rpc/) for -more information on query syntax and other options. - -You can also use tags, provided you included them in the DeliverTx -response, to query transaction results. See [Indexing -transactions](./indexing-transactions.md) for details. - - -## Query parameter and event type restrictions - -While CometBFT imposes no restrictions on the application with regard to the type of -the event output, there are several restrictions when it comes to querying -events whose attribute values are numeric.
- -- Queries cannot include negative numbers -- If floating points are compared to integers, they are converted to an integer -- Floating point to floating point comparison leads to a loss of precision for very big floating point numbers -(e.g., `10000000000000000000.0` is treated the same as `10000000000000000000.6`) -- When floating points do get converted to integers, they are always rounded down. -This has been done to preserve the behaviour present before introducing the support for BigInts in the query parameters. - -## ValidatorSetUpdates - -When validator set changes, ValidatorSetUpdates event is published. The -event carries a list of pubkey/power pairs. The list is the same -CometBFT receives from ABCI application (see [EndBlock -section](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/abci/abci++_methods.md#endblock) in -the ABCI spec). - -Response: - -```json -{ - "jsonrpc": "2.0", - "id": 0, - "result": { - "query": "tm.event='ValidatorSetUpdates'", - "data": { - "type": "tendermint/event/ValidatorSetUpdates", - "value": { - "validator_updates": [ - { - "address": "09EAD022FD25DE3A02E64B0FE9610B1417183EE4", - "pub_key": { - "type": "tendermint/PubKeyEd25519", - "value": "ww0z4WaZ0Xg+YI10w43wTWbBmM3dpVza4mmSQYsd0ck=" - }, - "voting_power": "10", - "proposer_priority": "0" - } - ] - } - } - } -} -``` diff --git a/docs/core/using-cometbft.md b/docs/core/using-cometbft.md deleted file mode 100644 index b50173b5ff..0000000000 --- a/docs/core/using-cometbft.md +++ /dev/null @@ -1,574 +0,0 @@ ---- -order: 2 ---- - -# Using CometBFT - -This is a guide to using the `cometbft` program from the command line. -It assumes only that you have the `cometbft` binary installed and have -some rudimentary idea of what CometBFT and ABCI are. - -You can see the help menu with `cometbft --help`, and the version -number with `cometbft version`. - -## Directory Root - -The default directory for blockchain data is `~/.cometbft`. Override -this by setting the `CMTHOME` environment variable. - -## Initialize - -Initialize the root directory by running: - -```sh -cometbft init -``` - -This will create a new private key (`priv_validator_key.json`), and a -genesis file (`genesis.json`) containing the associated public key, in -`$CMTHOME/config`. This is all that's necessary to run a local testnet -with one validator. - -For more elaborate initialization, see the testnet command: - -```sh -cometbft testnet --help -``` - -### Genesis - -The `genesis.json` file in `$CMTHOME/config/` defines the initial -CometBFT state upon genesis of the blockchain ([see -definition](https://github.com/cometbft/cometbft/blob/v0.34.x/types/genesis.go)). - -#### Fields - -- `genesis_time`: Official time of blockchain start. -- `chain_id`: ID of the blockchain. **This must be unique for - every blockchain.** If your testnet blockchains do not have unique - chain IDs, you will have a bad time. The ChainID must be less than 50 symbols. -- `initial_height`: Height at which CometBFT should begin at. If a blockchain is conducting a network upgrade, - starting from the stopped height brings uniqueness to previous heights. -- `consensus_params` ([see spec](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/core/data_structures.md#consensusparams)) - - `block` - - `max_bytes`: Max block size, in bytes. - - `max_gas`: Max gas per block. - - `time_iota_ms`: Minimum time increment between consecutive blocks (in - milliseconds). If the block header timestamp is ahead of the system clock, - decrease this value. 
- - `evidence` - - `max_age_num_blocks`: Max age of evidence, in blocks. The basic formula - for calculating this is: MaxAgeDuration / {average block time}. - - `max_age_duration`: Max age of evidence, in time. It should correspond - with an app's "unbonding period" or other similar mechanism for handling - [Nothing-At-Stake - attacks](https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ#what-is-the-nothing-at-stake-problem-and-how-can-it-be-fixed). - - `max_bytes`: This sets the maximum size in bytes of evidence that can be committed - in a single block and should fall comfortably under the max block bytes. - - `validator` - - `pub_key_types`: Public key types validators can use. - - `version` - - `app_version`: ABCI application version. -- `validators`: List of initial validators. Note that this may be overridden entirely by the - application, and may be left empty to make explicit that the - application will initialize the validator set upon `InitChain`. - - `pub_key`: The first element specifies the key type, - using the declared `PubKeyName` for the adopted - [key type](https://github.com/cometbft/cometbft/blob/v0.34.x/crypto/ed25519/ed25519.go#L36). - The second element contains the pubkey bytes. - - `power`: The validator's voting power. - - `name`: Name of the validator (optional). -- `app_hash`: The expected application hash (as returned by the - `ResponseInfo` ABCI message) upon genesis. If the app's hash does - not match, CometBFT will panic. -- `app_state`: The application state (e.g. initial distribution - of tokens). - -> :warning: **ChainID must be unique to every blockchain. Reusing an old chain ID can cause issues** - -#### Sample genesis.json - -```json -{ - "genesis_time": "2023-01-21T11:17:42.341227868Z", - "chain_id": "test-chain-ROp9KF", - "initial_height": "0", - "consensus_params": { - "block": { - "max_bytes": "22020096", - "max_gas": "-1" - }, - "evidence": { - "max_age_num_blocks": "100000", - "max_age_duration": "172800000000000", - "max_bytes": 51200 - }, - "validator": { - "pub_key_types": [ - "ed25519" - ] - } - }, - "validators": [ - { - "address": "B547AB87E79F75A4A3198C57A8C2FDAF8628CB47", - "pub_key": { - "type": "tendermint/PubKeyEd25519", - "value": "P/V6GHuZrb8rs/k1oBorxc6vyXMlnzhJmv7LmjELDys=" - }, - "power": "10", - "name": "" - } - ], - "app_hash": "" -} -``` - -## Run - -To run a CometBFT node, use: - -```bash -cometbft node -``` - -By default, CometBFT will try to connect to an ABCI application on -`127.0.0.1:26658`. If you have the `kvstore` ABCI app installed, run it in -another window. If you don't, kill CometBFT and run an in-process version of -the `kvstore` app: - -```bash -cometbft node --proxy_app=kvstore -``` - -After a few seconds, you should see blocks start streaming in. Note that blocks -are produced regularly, even if there are no transactions. See _No Empty -Blocks_, below, to modify this setting. - -CometBFT supports in-process versions of the `counter`, `kvstore`, and `noop` -apps that ship as examples with `abci-cli`. It's easy to compile your app -in-process with CometBFT if it's written in Go. If your app is not written in -Go, run it in another process, and use the `--proxy_app` flag to specify the -address of the socket it is listening on, for instance: -
```bash -cometbft node --proxy_app=/var/run/abci.sock -``` - -You can find out what flags are supported by running `cometbft node --help`.
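Besides watching the log output, you can check that the node is producing blocks programmatically. The sketch below is illustrative only; it assumes the Go RPC client that ships with CometBFT (`rpc/client/http`, v0.34 API) and a node listening on the default RPC address.

```go
package main

import (
	"context"
	"fmt"
	"log"

	rpchttp "github.com/cometbft/cometbft/rpc/client/http"
)

func main() {
	// Assumes the default RPC listen address; adjust if you changed rpc.laddr.
	client, err := rpchttp.New("tcp://127.0.0.1:26657", "/websocket")
	if err != nil {
		log.Fatalf("creating RPC client: %v", err)
	}

	status, err := client.Status(context.Background())
	if err != nil {
		log.Fatalf("querying status: %v", err)
	}

	// The latest block height keeps increasing while blocks are streaming in.
	fmt.Println("latest height:", status.SyncInfo.LatestBlockHeight)
}
```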
- -## Transactions - -To send a transaction, use `curl` to make requests to the CometBFT RPC -server, for example: - -```sh -curl http://localhost:26657/broadcast_tx_commit?tx=\"abcd\" -``` - -We can see the chain's status at the `/status` end-point: - -```sh -curl http://localhost:26657/status | json_pp -``` - -and the `latest_app_hash` in particular: - -```sh -curl http://localhost:26657/status | json_pp | grep latest_app_hash -``` - -Visit `http://localhost:26657` in your browser to see the list of other -endpoints. Some take no arguments (like `/status`), while others specify -the argument name and use `_` as a placeholder. - - -> TIP: Find the RPC Documentation [here](https://docs.cometbft.com/v0.34/rpc/) - -### Formatting - -The following nuances when sending/formatting transactions should be -taken into account: - -With `GET`: - -To send a UTF8 string byte array, quote the value of the tx parameter: - -```sh -curl 'http://localhost:26657/broadcast_tx_commit?tx="hello"' -``` - -which sends a 5 byte transaction: "h e l l o" \[68 65 6c 6c 6f\]. - -Note the URL must be wrapped with single quotes, else bash will ignore -the double quotes. To avoid the single quotes, escape the double quotes: - -```sh -curl http://localhost:26657/broadcast_tx_commit?tx=\"hello\" -``` - -Using a special character: - -```sh -curl 'http://localhost:26657/broadcast_tx_commit?tx="€5"' -``` - -sends a 4 byte transaction: "€5" (UTF8) \[e2 82 ac 35\]. - -To send as raw hex, omit quotes AND prefix the hex string with `0x`: - -```sh -curl http://localhost:26657/broadcast_tx_commit?tx=0x01020304 -``` - -which sends a 4 byte transaction: \[01 02 03 04\]. - -With `POST` (using `json`), the raw hex must be `base64` encoded: - -```sh -curl --data-binary '{"jsonrpc":"2.0","id":"anything","method":"broadcast_tx_commit","params": {"tx": "AQIDBA=="}}' -H 'content-type:text/plain;' http://localhost:26657 -``` - -which sends the same 4 byte transaction: \[01 02 03 04\]. - -Note that raw hex cannot be used in `POST` transactions. - -## Reset - -> :warning: **UNSAFE** Only do this in development and only if you can -afford to lose all blockchain data! - - -To reset a blockchain, stop the node and run: - -```sh -cometbft unsafe_reset_all -``` - -This command will remove the data directory and reset private validator and -address book files. - -## Configuration - -CometBFT uses a `config.toml` for configuration. For details, see [the -config specification](./configuration.md). - -Notable options include the socket address of the application -(`proxy_app`), the listening address of the CometBFT peer -(`p2p.laddr`), and the listening address of the RPC server -(`rpc.laddr`). - -Some fields from the config file can be overwritten with flags. - -## No Empty Blocks - -While the default behavior of `cometbft` is still to create blocks -approximately once per second, it is possible to disable empty blocks or -set a block creation interval. In the former case, blocks will be -created when there are new transactions or when the AppHash changes. - -To configure CometBFT to not produce empty blocks unless there are -transactions or the app hash changes, run CometBFT with this -additional flag: - -```sh -cometbft node --consensus.create_empty_blocks=false -``` - -or set the configuration via the `config.toml` file: - -```toml -[consensus] -create_empty_blocks = false -``` - -Remember: because the default is to _create empty blocks_, avoiding -empty blocks requires the config option to be set to `false`. 
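If you are embedding CometBFT in-process, as in the Go guides later in these docs, the same option can also be set programmatically. The snippet below is a minimal sketch; the field names are assumed from the v0.34 `config` package.

```go
package main

import (
	"fmt"

	cfg "github.com/cometbft/cometbft/config"
)

func main() {
	// Start from the defaults and disable empty blocks, mirroring
	// `create_empty_blocks = false` in config.toml (field name assumed
	// from the v0.34 config package).
	config := cfg.DefaultConfig()
	config.Consensus.CreateEmptyBlocks = false

	fmt.Println("create_empty_blocks:", config.Consensus.CreateEmptyBlocks)
}
```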
- -The block interval setting allows for a delay (in time.Duration format [ParseDuration](https://golang.org/pkg/time/#ParseDuration)) between the -creation of each new empty block. It can be set with this additional flag: - -```sh ---consensus.create_empty_blocks_interval="5s" -``` - -or set the configuration via the `config.toml` file: - -```toml -[consensus] -create_empty_blocks_interval = "5s" -``` - -With this setting, empty blocks will be produced every 5s if no block -has been produced otherwise, regardless of the value of -`create_empty_blocks`. - -## Broadcast API - -Earlier, we used the `broadcast_tx_commit` endpoint to send a -transaction. When a transaction is sent to a CometBFT node, it will -run via `CheckTx` against the application. If it passes `CheckTx`, it -will be included in the mempool, broadcasted to other peers, and -eventually included in a block. - -Since there are multiple phases to processing a transaction, we offer -multiple endpoints to broadcast a transaction: - -```md -/broadcast_tx_async -/broadcast_tx_sync -/broadcast_tx_commit -``` - -These correspond to no-processing, processing through the mempool, and -processing through a block, respectively. That is, `broadcast_tx_async`, -will return right away without waiting to hear if the transaction is -even valid, while `broadcast_tx_sync` will return with the result of -running the transaction through `CheckTx`. Using `broadcast_tx_commit` -will wait until the transaction is committed in a block or until some -timeout is reached, but will return right away if the transaction does -not pass `CheckTx`. The return value for `broadcast_tx_commit` includes -two fields, `check_tx` and `deliver_tx`, pertaining to the result of -running the transaction through those ABCI messages. - -The benefit of using `broadcast_tx_commit` is that the request returns -after the transaction is committed (i.e. included in a block), but that -can take on the order of a second. For a quick result, use -`broadcast_tx_sync`, but the transaction will not be committed until -later, and by that point its effect on the state may change. - -Note the mempool does not provide strong guarantees - just because a tx passed -CheckTx (ie. was accepted into the mempool), doesn't mean it will be committed, -as nodes with the tx in their mempool may crash before they get to propose. -For more information, see the [mempool -write-ahead-log](./running-in-production.md#mempool-wal) - -## CometBFT Networks - -When `cometbft init` is run, both a `genesis.json` and -`priv_validator_key.json` are created in `~/.cometbft/config`. The -`genesis.json` might look like: - -```json -{ - "validators" : [ - { - "pub_key" : { - "value" : "h3hk+QE8c6QLTySp8TcfzclJw/BG79ziGB/pIA+DfPE=", - "type" : "tendermint/PubKeyEd25519" - }, - "power" : 10, - "name" : "" - } - ], - "app_hash" : "", - "chain_id" : "test-chain-rDlYSN", - "genesis_time" : "0001-01-01T00:00:00Z" -} -``` - -And the `priv_validator_key.json`: - -```json -{ - "last_step" : 0, - "last_round" : "0", - "address" : "B788DEDE4F50AD8BC9462DE76741CCAFF87D51E2", - "pub_key" : { - "value" : "h3hk+QE8c6QLTySp8TcfzclJw/BG79ziGB/pIA+DfPE=", - "type" : "tendermint/PubKeyEd25519" - }, - "last_height" : "0", - "priv_key" : { - "value" : "JPivl82x+LfVkp8i3ztoTjY6c6GJ4pBxQexErOCyhwqHeGT5ATxzpAtPJKnxNx/NyUnD8Ebv3OIYH+kgD4N88Q==", - "type" : "tendermint/PrivKeyEd25519" - } -} -``` - -The `priv_validator_key.json` actually contains a private key, and should -thus be kept absolutely secret; for now we work with the plain text. 
-Note the `last_` fields, which are used to prevent us from signing -conflicting messages. - -Note also that the `pub_key` (the public key) in the -`priv_validator_key.json` is also present in the `genesis.json`. - -The genesis file contains the list of public keys which may participate -in the consensus, and their corresponding voting power. Greater than 2/3 -of the voting power must be active (i.e. the corresponding private keys -must be producing signatures) for the consensus to make progress. In our -case, the genesis file contains the public key of our -`priv_validator_key.json`, so a CometBFT node started with the default -root directory will be able to make progress. Voting power uses an int64 -but must be positive, thus the range is: 0 through 9223372036854775807. -Because of how the current proposer selection algorithm works, we do not -recommend having voting powers greater than 10\^12 (ie. 1 trillion). - -If we want to add more nodes to the network, we have two choices: we can -add a new validator node, who will also participate in the consensus by -proposing blocks and voting on them, or we can add a new non-validator -node, who will not participate directly, but will verify and keep up -with the consensus protocol. - -### Peers - -#### Seed - -A seed node is a node who relays the addresses of other peers which they know -of. These nodes constantly crawl the network to try to get more peers. The -addresses which the seed node relays get saved into a local address book. Once -these are in the address book, you will connect to those addresses directly. -Basically the seed nodes job is just to relay everyones addresses. You won't -connect to seed nodes once you have received enough addresses, so typically you -only need them on the first start. The seed node will immediately disconnect -from you after sending you some addresses. - -#### Persistent Peer - -Persistent peers are people you want to be constantly connected with. If you -disconnect you will try to connect directly back to them as opposed to using -another address from the address book. On restarts you will always try to -connect to these peers regardless of the size of your address book. - -All peers relay peers they know of by default. This is called the peer exchange -protocol (PEX). With PEX, peers will be gossiping about known peers and forming -a network, storing peer addresses in the addrbook. Because of this, you don't -have to use a seed node if you have a live persistent peer. - -#### Connecting to Peers - -To connect to peers on start-up, specify them in the -`$CMTHOME/config/config.toml` or on the command line. Use `seeds` to -specify seed nodes, and -`persistent_peers` to specify peers that your node will maintain -persistent connections with. - -For example, - -```sh -cometbft node --p2p.seeds "f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@1.2.3.4:26656,0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@5.6.7.8:26656" -``` - -Alternatively, you can use the `/dial_seeds` endpoint of the RPC to -specify seeds for a running node to connect to: - -```sh -curl 'localhost:26657/dial_seeds?seeds=\["f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@1.2.3.4:26656","0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@5.6.7.8:26656"\]' -``` - -Note, with PEX enabled, you -should not need seeds after the first start. 
- -If you want CometBFT to connect to specific set of addresses and -maintain a persistent connection with each, you can use the -`--p2p.persistent_peers` flag or the corresponding setting in the -`config.toml` or the `/dial_peers` RPC endpoint to do it without -stopping CometBFT instance. - -```sh -cometbft node --p2p.persistent_peers "429fcf25974313b95673f58d77eacdd434402665@10.11.12.13:26656,96663a3dd0d7b9d17d4c8211b191af259621c693@10.11.12.14:26656" - -curl 'localhost:26657/dial_peers?persistent=true&peers=\["429fcf25974313b95673f58d77eacdd434402665@10.11.12.13:26656","96663a3dd0d7b9d17d4c8211b191af259621c693@10.11.12.14:26656"\]' -``` - -### Adding a Non-Validator - -Adding a non-validator is simple. Just copy the original `genesis.json` -to `~/.cometbft/config` on the new machine and start the node, -specifying seeds or persistent peers as necessary. If no seeds or -persistent peers are specified, the node won't make any blocks, because -it's not a validator, and it won't hear about any blocks, because it's -not connected to the other peer. - -### Adding a Validator - -The easiest way to add new validators is to do it in the `genesis.json`, -before starting the network. For instance, we could make a new -`priv_validator_key.json`, and copy it's `pub_key` into the above genesis. - -We can generate a new `priv_validator_key.json` with the command: - -```sh -cometbft gen_validator -``` - -Now we can update our genesis file. For instance, if the new -`priv_validator_key.json` looks like: - -```json -{ - "address" : "5AF49D2A2D4F5AD4C7C8C4CC2FB020131E9C4902", - "pub_key" : { - "value" : "l9X9+fjkeBzDfPGbUM7AMIRE6uJN78zN5+lk5OYotek=", - "type" : "tendermint/PubKeyEd25519" - }, - "priv_key" : { - "value" : "EDJY9W6zlAw+su6ITgTKg2nTZcHAH1NMTW5iwlgmNDuX1f35+OR4HMN88ZtQzsAwhETq4k3vzM3n6WTk5ii16Q==", - "type" : "tendermint/PrivKeyEd25519" - }, - "last_step" : 0, - "last_round" : "0", - "last_height" : "0" -} -``` - -then the new `genesis.json` will be: - -```json -{ - "validators" : [ - { - "pub_key" : { - "value" : "h3hk+QE8c6QLTySp8TcfzclJw/BG79ziGB/pIA+DfPE=", - "type" : "tendermint/PubKeyEd25519" - }, - "power" : 10, - "name" : "" - }, - { - "pub_key" : { - "value" : "l9X9+fjkeBzDfPGbUM7AMIRE6uJN78zN5+lk5OYotek=", - "type" : "tendermint/PubKeyEd25519" - }, - "power" : 10, - "name" : "" - } - ], - "app_hash" : "", - "chain_id" : "test-chain-rDlYSN", - "genesis_time" : "0001-01-01T00:00:00Z" -} -``` - -Update the `genesis.json` in `~/.cometbft/config`. Copy the genesis -file and the new `priv_validator_key.json` to the `~/.cometbft/config` on -a new machine. - -Now run `cometbft node` on both machines, and use either -`--p2p.persistent_peers` or the `/dial_peers` to get them to peer up. -They should start making blocks, and will only continue to do so as long -as both of them are online. - -To make a CometBFT network that can tolerate one of the validators -failing, you need at least four validator nodes (e.g., 2/3). - -Updating validators in a live network is supported but must be -explicitly programmed by the application developer. See the [application -developers guide](../app-dev/abci-cli.md) for more details. - -### Local Network - -To run a network locally, say on a single machine, you must change the `_laddr` -fields in the `config.toml` (or using the flags) so that the listening -addresses of the various sockets don't conflict. 
Additionally, you must set -`addr_book_strict=false` in the `config.toml`, otherwise CometBFT's p2p -library will deny making connections to peers with the same IP address. - -### Upgrading - -See the -[UPGRADING.md](https://github.com/cometbft/cometbft/blob/v0.34.x/UPGRADING.md) -guide. You may need to reset your chain between major breaking releases. -Although, we expect CometBFT to have fewer breaking releases in the future -(especially after 1.0 release). diff --git a/docs/core/validators.md b/docs/core/validators.md deleted file mode 100644 index b28cb7ff0f..0000000000 --- a/docs/core/validators.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -order: 6 ---- - -# Validators - -Validators are responsible for committing new blocks in the blockchain. -These validators participate in the consensus protocol by broadcasting -_votes_ which contain cryptographic signatures signed by each -validator's private key. - -Some Proof-of-Stake consensus algorithms aim to create a "completely" -decentralized system where all stakeholders (even those who are not -always available online) participate in the committing of blocks. -CometBFT has a different approach to block creation. Validators are -expected to be online, and the set of validators is permissioned/curated -by some external process. Proof-of-stake is not required, but can be -implemented on top of CometBFT consensus. That is, validators may be -required to post collateral on-chain, off-chain, or may not be required -to post any collateral at all. - -Validators have a cryptographic key-pair and an associated amount of -"voting power". Voting power need not be the same. - -## Becoming a Validator - -There are two ways to become validator. - -1. They can be pre-established in the [genesis state](./using-cometbft.md#genesis) -2. The ABCI app responds to the EndBlock message with changes to the - existing validator set. - -## Setting up a Validator - -When setting up a validator there are countless ways to configure your setup. This guide is aimed at showing one of them, the sentry node design. This design is mainly for DDoS prevention. - -### Network Layout - -![ALT Network Layout](../imgs/sentry_layout.png) - -The diagram is based on AWS, other cloud providers will have similar solutions to design a solution. Running nodes is not limited to cloud providers, you can run nodes on bare metal systems as well. The architecture will be the same no matter which setup you decide to go with. - -The proposed network diagram is similar to the classical backend/frontend separation of services in a corporate environment. The “backend” in this case is the private network of the validator in the data center. The data center network might involve multiple subnets, firewalls and redundancy devices, which is not detailed on this diagram. The important point is that the data center allows direct connectivity to the chosen cloud environment. Amazon AWS has “Direct Connect”, while Google Cloud has “Partner Interconnect”. This is a dedicated connection to the cloud provider (usually directly to your virtual private cloud instance in one of the regions). - -All sentry nodes (the “frontend”) connect to the validator using this private connection. The validator does not have a public IP address to provide its services. - -Amazon has multiple availability zones within a region. One can install sentry nodes in other regions too. In this case the second, third and further regions need to have a private connection to the validator node. 
This can be achieved by VPC Peering (“VPC Network Peering” in Google Cloud). In this case, the second, third and further region sentry nodes will be directed to the first region and through the direct connect to the data center, arriving at the validator. - -A more persistent solution (not detailed on the diagram) is to have multiple direct connections to different regions from the data center. This way VPC Peering is not mandatory, although still beneficial for the sentry nodes. This overcomes the risk of depending on one region. It is more costly. - -### Local Configuration - -![ALT Local Configuration](../imgs/sentry_local_config.png) - -The validator will only talk to the sentry nodes that are provided. The sentry nodes communicate with the validator via a secret connection and with the rest of the network through a normal connection. The sentry nodes do have the option of communicating with each other as well. - -When initializing nodes there are six parameters in the `config.toml` that may need to be altered. - -- `pex:` boolean. This turns the peer exchange reactor on or off for a node. When `pex=false`, only the `persistent_peers` list is available for connection. -- `persistent_peers:` a comma-separated list of `nodeID@ip:port` values that define a list of peers that are expected to be online at all times. This is necessary at first startup because, with `pex=false`, the node would otherwise not be able to join the network. -- `unconditional_peer_ids:` comma-separated list of node IDs. These nodes will be connected to regardless of the inbound and outbound peer limits. This is useful for when sentry nodes have full address books. -- `private_peer_ids:` comma-separated list of node IDs. These nodes will not be gossiped to the network. This is an important field as you do not want your validator IP gossiped to the network. -- `addr_book_strict:` boolean. By default nodes with a routable address will be considered for connection. If this setting is turned off (false), non-routable IP addresses, like addresses in a private network, can be added to the address book. -- `double_sign_check_height` int64 height. How many blocks to look back to check existence of the node's consensus votes before joining consensus. When non-zero, the node will panic upon restart if the same consensus key was used to sign the last `double_sign_check_height` blocks. So, validators should stop the state machine, wait for some blocks, and then restart the state machine to avoid a panic. - -#### Validator Node Configuration - -| Config Option | Setting | -| ------------------------ | -------------------------- | -| pex | false | -| persistent_peers | list of sentry nodes | -| private_peer_ids | none | -| unconditional_peer_ids | optionally sentry node IDs | -| addr_book_strict | false | -| double_sign_check_height | 10 | - -The validator node should have `pex=false` so it does not gossip to the entire network. The persistent peers will be your sentry nodes. Private peers can be left empty as the validator is not trying to hide who it is communicating with. Setting unconditional peers is optional for a validator because it will not have a full address book.
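If you generate the validator's configuration from Go instead of editing `config.toml` by hand, the settings in the table above map onto CometBFT's config struct roughly as shown below. This is a sketch only: the field names are assumed from the v0.34 `config` package, and the peer string is a placeholder.

```go
package main

import (
	cfg "github.com/cometbft/cometbft/config"
)

// validatorConfig sketches the validator-node settings from the table above.
// Field names are assumed from the v0.34 config package.
func validatorConfig(sentryPeers string) *cfg.Config {
	c := cfg.DefaultConfig()

	c.P2P.PexReactor = false            // pex = false
	c.P2P.PersistentPeers = sentryPeers // list of sentry nodes (nodeID@ip:port)
	c.P2P.PrivatePeerIDs = ""           // none
	c.P2P.UnconditionalPeerIDs = ""     // optionally, sentry node IDs
	c.P2P.AddrBookStrict = false        // addr_book_strict = false
	c.Consensus.DoubleSignCheckHeight = 10

	return c
}

func main() {
	// Placeholder sentry addresses, for illustration only.
	_ = validatorConfig("nodeid1@10.0.0.1:26656,nodeid2@10.0.0.2:26656")
}
```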
- -#### Sentry Node Configuration - -| Config Option | Setting | -| ---------------------- | --------------------------------------------- | -| pex | true | -| persistent_peers | validator node, optionally other sentry nodes | -| private_peer_ids | validator node ID | -| unconditional_peer_ids | validator node ID, optionally sentry node IDs | -| addr_book_strict | false | - -The sentry nodes should be able to talk to the entire network hence why `pex=true`. The persistent peers of a sentry node will be the validator, and optionally other sentry nodes. The sentry nodes should make sure that they do not gossip the validator's ip, to do this you must put the validators nodeID as a private peer. The unconditional peer IDs will be the validator ID and optionally other sentry nodes. - -> Note: Do not forget to secure your node's firewalls when setting them up. - -More Information can be found at these links: - -- -- - -### Validator keys - -Protecting a validator's consensus key is the most important factor to take in when designing your setup. The key that a validator is given upon creation of the node is called a consensus key, it has to be online at all times in order to vote on blocks. It is **not recommended** to merely hold your private key in the default json file (`priv_validator_key.json`). Fortunately, the [Interchain Foundation](https://interchain.io/) has worked with a team to build a key management server for validators. You can find documentation on how to use it [here](https://github.com/iqlusioninc/tmkms), it is used extensively in production. You are not limited to using this tool, there are also [HSMs](https://safenet.gemalto.com/data-encryption/hardware-security-modules-hsms/), there is not a recommended HSM. - -Currently CometBFT uses [Ed25519](https://ed25519.cr.yp.to/) keys which are widely supported across the security sector and HSMs. - -## Committing a Block - -> **+2/3 is short for "more than 2/3"** - -A block is committed when +2/3 of the validator set sign -[precommit votes](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/core/data_structures.md#vote) -for that block at the same `round`. -The +2/3 set of precommit votes is called a -[commit](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/core/data_structures.md#commit). -While any +2/3 set of precommits for the same block at the same height&round can serve as -validation, the canonical commit is included in the next block (see -[LastCommit](https://github.com/cometbft/cometbft/blob/v0.34.x/spec/core/data_structures.md#block)). 
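To make the +2/3 threshold concrete, here is a small illustrative sketch of the check in Go. It is not CometBFT's internal implementation, only the arithmetic described above.

```go
package main

import "fmt"

// hasQuorum reports whether signedPower is strictly greater than 2/3 of
// totalPower, i.e. the "+2/3" condition. Integer arithmetic is fine here
// because voting powers are recommended to stay far below the int64 range.
func hasQuorum(signedPower, totalPower int64) bool {
	return 3*signedPower > 2*totalPower
}

func main() {
	// With four validators of power 10 each, three of them (30 of 40) are
	// enough to commit, while two of them (20 of 40) are not.
	fmt.Println(hasQuorum(30, 40)) // true
	fmt.Println(hasQuorum(20, 40)) // false
}
```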
diff --git a/docs/guides/README.md b/docs/guides/README.md deleted file mode 100644 index 7c8b0a9e5b..0000000000 --- a/docs/guides/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -order: false -parent: - order: 2 ---- - -# Guides - -- [Installing CometBFT](./install.md) -- [Quick-start using CometBFT](./quick-start.md) -- [Upgrading from Tendermint to CometBFT](./upgrading-from-tm.md) -- [Creating a built-in application in Go](./go-built-in.md) -- [Creating an external application in Go](./go.md) -- [Creating an external application in Java](./java.md) -- [Creating an external application in Kotlin](./kotlin.md) diff --git a/docs/guides/go-built-in.md b/docs/guides/go-built-in.md deleted file mode 100644 index 81daf4e070..0000000000 --- a/docs/guides/go-built-in.md +++ /dev/null @@ -1,760 +0,0 @@ ---- -order: 3 ---- - -# Creating a built-in application in Go - -## Guide Assumptions - -This guide is designed for beginners who want to get started with a CometBFT -application from scratch. It does not assume that you have any prior -experience with CometBFT. - -CometBFT is a service that provides a Byzantine Fault Tolerant consensus engine -for state-machine replication. The replicated state-machine, or "application", can be written -in any language that can send and receive protocol buffer messages in a client-server model. -Applications written in Go can also use CometBFT as a library and run the service in the same -process as the application. - -By following along this tutorial you will create a CometBFT application called kvstore, -a (very) simple distributed BFT key-value store. -The application will be written in Go and -some understanding of the Go programming language is expected. -If you have never written Go, you may want to go through [Learn X in Y minutes -Where X=Go](https://learnxinyminutes.com/docs/go/) first, to familiarize -yourself with the syntax. - -Note: Please use the latest released version of this guide and of CometBFT. -We strongly advise against using unreleased commits for your development. - -### Built-in app vs external app - -On the one hand, to get maximum performance you can run your application in -the same process as the CometBFT, as long as your application is written in Go. -[Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is written -this way. -This is the approach followed in this tutorial. - -On the other hand, having a separate application might give you better security -guarantees as two processes would be communicating via established binary protocol. -CometBFT will not have access to application's state. -If that is the way you wish to proceed, use the [Creating an application in Go](./go.md) guide instead of this one. - - -## 1.1 Installing Go - -Verify that you have the latest version of Go installed (refer to the [official guide for installing Go](https://golang.org/doc/install)): - -```bash -$ go version -go version go1.22.9 darwin/amd64 -``` - -## 1.2 Creating a new Go project - -We'll start by creating a new Go project. - -```bash -mkdir kvstore -``` - -Inside the example directory, create a `main.go` file with the following content: - -```go -package main - -import ( - "fmt" -) - -func main() { - fmt.Println("Hello, CometBFT") -} -``` - -When run, this should print "Hello, CometBFT" to the standard output. - -```bash -cd kvstore -$ go run main.go -Hello, CometBFT -``` - -We are going to use [Go modules](https://github.com/golang/go/wiki/Modules) for -dependency management, so let's start by including a dependency on this version of -CometBFT. 
- -```bash -go mod init kvstore -go get github.com/tendermint/tendermint -go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.34.28 -``` - -After running the above commands you will see two generated files, `go.mod` and `go.sum`. -The go.mod file should look similar to: - -```go -module github.com/me/example - -go 1.22 - -require ( - github.com/cometbft/cometbft v0.34.27 -) -``` - -As you write the kvstore application, you can rebuild the binary by -pulling any new dependencies and recompiling it. - -```sh -go get -go build -``` - -## 1.3 Writing a CometBFT application - -CometBFT communicates with the application through the Application -BlockChain Interface (ABCI). The messages exchanged through the interface are -defined in the ABCI [protobuf -file](https://github.com/cometbft/cometbft/blob/v0.34.x/proto/tendermint/abci/types.proto). - -We begin by creating the basic scaffolding for an ABCI application by -creating a new type, `KVStoreApplication`, which implements the -methods defined by the `abcitypes.Application` interface. - -Create a file called `app.go` with the following contents: - -```go -package main - -import ( - abcitypes "github.com/cometbft/cometbft/abci/types" -) - -type KVStoreApplication struct{} - -var _ abcitypes.Application = (*KVStoreApplication)(nil) - -func NewKVStoreApplication() *KVStoreApplication { - return &KVStoreApplication{} -} - -func (app *KVStoreApplication) Info(info abcitypes.RequestInfo) abcitypes.ResponseInfo { - return abcitypes.ResponseInfo{} -} - -func (app *KVStoreApplication) Query(query abcitypes.RequestQuery) abcitypes.ResponseQuery { - return abcitypes.ResponseQuery{} -} - -func (app *KVStoreApplication) CheckTx(tx abcitypes.RequestCheckTx) abcitypes.ResponseCheckTx { - return abcitypes.ResponseCheckTx{} -} - -func (app *KVStoreApplication) InitChain(chain abcitypes.RequestInitChain) abcitypes.ResponseInitChain { - return abcitypes.ResponseInitChain{} -} - - -func (app *KVStoreApplication) BeginBlock(block abcitypes.RequestBeginBlock) abcitypes.ResponseBeginBlock { - return abcitypes.ResponseBeginBlock{} -} - -func (app *KVStoreApplication) DeliverTx(tx abcitypes.RequestDeliverTx) abcitypes.ResponseDeliverTx { - return abcitypes.ResponseDeliverTx{} -} - -func (app *KVStoreApplication) EndBlock(block abcitypes.RequestEndBlock) abcitypes.ResponseEndBlock { - return abcitypes.ResponseEndBlock{} -} - -func (app *KVStoreApplication) Commit() abcitypes.ResponseCommit { - return abcitypes.ResponseCommit{} -} - -func (app *KVStoreApplication) ListSnapshots(snapshots abcitypes.RequestListSnapshots) abcitypes.ResponseListSnapshots { - return abcitypes.ResponseListSnapshots{} -} - -func (app *KVStoreApplication) OfferSnapshot(snapshot abcitypes.RequestOfferSnapshot) abcitypes.ResponseOfferSnapshot { - return abcitypes.ResponseOfferSnapshot{} -} - -func (app *KVStoreApplication) LoadSnapshotChunk(chunk abcitypes.RequestLoadSnapshotChunk) abcitypes.ResponseLoadSnapshotChunk { - return abcitypes.ResponseLoadSnapshotChunk{} -} - -func (app *KVStoreApplication) ApplySnapshotChunk(chunk abcitypes.RequestApplySnapshotChunk) abcitypes.ResponseApplySnapshotChunk { - return abcitypes.ResponseApplySnapshotChunk{} -} -``` - -The types used here are defined in the CometBFT library and were added as a dependency -to the project when you ran `go get`. If your IDE is not recognizing the types, go ahead and run the command again. 
- -```bash -go get github.com/cometbft/cometbft@v0.34.27 -``` - -Now go back to the `main.go` and modify the `main` function so it matches the following, -where an instance of the `KVStoreApplication` type is created. - -```go -func main() { - fmt.Println("Hello, CometBFT") - - _ = NewKVStoreApplication() -} -``` - -You can recompile and run the application now by running `go get` and `go build`, but it does -not do anything. -So let's revisit the code adding the logic needed to implement our minimal key/value store -and to start it along with the CometBFT Service. - - -### 1.3.1 Add a persistent data store - -Our application will need to write its state out to persistent storage so that it -can stop and start without losing all of its data. - -For this tutorial, we will use [BadgerDB](https://github.com/dgraph-io/badger), a -fast embedded key-value store. - -First, add Badger as a dependency of your go module using the `go get` command: - -`go get github.com/dgraph-io/badger/v3` - -Next, let's update the application and its constructor to receive a handle to the database, as follows: - -```go -type KVStoreApplication struct { - db *badger.DB - onGoingBlock *badger.Txn -} - -var _ abcitypes.Application = (*KVStoreApplication)(nil) - -func NewKVStoreApplication(db *badger.DB) *KVStoreApplication { - return &KVStoreApplication{db: db} -} -``` - -The `onGoingBlock` keeps track of the Badger transaction that will update the application's state when a block -is completed. Don't worry about it for now, we'll get to that later. - -Next, update the `import` stanza at the top to include the Badger library: - -```go -import( - "github.com/dgraph-io/badger/v3" - abcitypes "github.com/cometbft/cometbft/abci/types" -) -``` - -Finally, update the `main.go` file to invoke the updated constructor: - -```go - _ = NewKVStoreApplication(nil) -``` - -### 1.3.2 CheckTx - -When CometBFT receives a new transaction from a client, or from another full node, -CometBFT asks the application if the transaction is acceptable, using the `CheckTx` method. -Invalid transactions will not be shared with other nodes and will not become part of any blocks and, therefore, will not be executed by the application. - -In our application, a transaction is a string with the form `key=value`, indicating a key and value to write to the store. - -The most basic validation check we can perform is to check if the transaction conforms to the `key=value` pattern. -For that, let's add the following helper method to app.go: - -```go -func (app *KVStoreApplication) isValid(tx []byte) uint32 { - // check format - parts := bytes.Split(tx, []byte("=")) - if len(parts) != 2 { - return 1 - } - - return 0 -} -``` - -Now you can rewrite the `CheckTx` method to use the helper function: - -```go -func (app *KVStoreApplication) CheckTx(req abcitypes.RequestCheckTx) abcitypes.ResponseCheckTx { - code := app.isValid(req.Tx) - return abcitypes.ResponseCheckTx{Code: code} -} -``` - -While this `CheckTx` is simple and only validates that the transaction is well-formed, -it is very common for `CheckTx` to make more complex use of the state of an application. -For example, you may refuse to overwrite an existing value, or you can associate -versions to the key/value pairs and allow the caller to specify a version to -perform a conditional update. - -Depending on the checks and on the conditions violated, the function may return -different values, but any response with a non-zero code will be considered invalid -by CometBFT. 
Our `CheckTx` logic returns 0 to CometBFT when a transaction passes -its validation checks. The specific value of the code is meaningless to CometBFT. -Non-zero codes are logged by CometBFT so applications can provide more specific -information on why the transaction was rejected. - -Note that `CheckTx` does not execute the transaction, it only verifies that the transaction could be executed. We do not know yet if the rest of the network has agreed to accept this transaction into a block. - - -Finally, make sure to add the bytes package to the `import` stanza at the top of `app.go`: - -```go -import( - "bytes" - - "github.com/dgraph-io/badger/v3" - abcitypes "github.com/cometbft/cometbft/abci/types" -) -``` - - -### 1.3.3 BeginBlock -> DeliverTx -> EndBlock -> Commit - -When the CometBFT consensus engine has decided on the block, the block is transferred to the -application over three ABCI method calls: `BeginBlock`, `DeliverTx`, and `EndBlock`. - -- `BeginBlock` is called once to indicate to the application that it is about to -receive a block. -- `DeliverTx` is called repeatedly, once for each application transaction that was included in the block. -- `EndBlock` is called once to indicate to the application that no more transactions -will be delivered to the application within this block. - -Note that, to implement these calls in our application we're going to make use of Badger's -transaction mechanism. We will always refer to these as Badger transactions, not to -confuse them with the transactions included in the blocks delivered by CometBFT, -the _application transactions_. - -First, let's create a new Badger transaction during `BeginBlock`. All application transactions in the -current block will be executed within this Badger transaction. -Then, return informing CometBFT that the application is ready to receive application transactions: - -```go -func (app *KVStoreApplication) BeginBlock(req abcitypes.RequestBeginBlock) abcitypes.ResponseBeginBlock { - app.onGoingBlock = app.db.NewTransaction(true) - return abcitypes.ResponseBeginBlock{} -} -``` - -Next, let's modify `DeliverTx` to add the `key` and `value` to the database transaction every time our application -receives a new application transaction through `RequestDeliverTx`. - -```go -func (app *KVStoreApplication) DeliverTx(req abcitypes.RequestDeliverTx) abcitypes.ResponseDeliverTx { - if code := app.isValid(req.Tx); code != 0 { - return abcitypes.ResponseDeliverTx{Code: code} - } - - parts := bytes.SplitN(req.Tx, []byte("="), 2) - key, value := parts[0], parts[1] - - if err := app.onGoingBlock.Set(key, value); err != nil { - log.Panicf("Error writing to database, unable to execute tx: %v", err) - } - - return abcitypes.ResponseDeliverTx{Code: 0} -} -``` - -Note that we check the validity of the transaction _again_ during `DeliverTx`. -Transactions are not guaranteed to be valid when they are delivered to an -application, even if they were valid when they were proposed. -This can happen if the application state is used to determine transaction -validity. Application state may have changed between the initial execution of `CheckTx` -and the transaction delivery in `DeliverTx` in a way that rendered the transaction -no longer valid. - -`EndBlock` is called to inform the application that the full block has been delivered -and give the application a chance to perform any other computation needed, before the -effects of the transactions become permanent. 
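For the kvstore there is nothing extra to compute at this point, so `EndBlock` can stay essentially as the stub from the scaffolding. The minimal sketch below also notes where an application that manages its own validator set would return its updates.

```go
func (app *KVStoreApplication) EndBlock(req abcitypes.RequestEndBlock) abcitypes.ResponseEndBlock {
	// Nothing to compute for the kvstore. An application that managed its
	// own validator set would populate ValidatorUpdates in this response.
	return abcitypes.ResponseEndBlock{}
}
```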
- -Note that `EndBlock` **cannot** yet commit the Badger transaction we were building -in during `DeliverTx`. -Since other methods, such as `Query`, rely on a consistent view of the application's -state, the application should only update its state by committing the Badger transactions -when the full block has been delivered and the `Commit` method is invoked. - -The `Commit` method tells the application to make permanent the effects of -the application transactions. -Let's update the method to terminate the pending Badger transaction and -persist the resulting state: - -```go -func (app *KVStoreApplication) Commit() abcitypes.ResponseCommit { - if err := app.onGoingBlock.Commit(); err != nil { - log.Panicf("Error writing to database, unable to commit block: %v", err) - } - return abcitypes.ResponseCommit{Data: []byte{}} -} -``` - -Finally, make sure to add the log library to the `import` stanza as well: - -```go -import ( - "bytes" - "log" - - "github.com/dgraph-io/badger/v3" - abcitypes "github.com/cometbft/cometbft/abci/types" -) -``` - -You may have noticed that the application we are writing will crash if it receives -an unexpected error from the Badger database during the `DeliverTx` or `Commit` methods. -This is not an accident. If the application received an error from the database, there -is no deterministic way for it to make progress so the only safe option is to terminate. - -### 1.3.4 Query - -When a client tries to read some information from the `kvstore`, the request will be -handled in the `Query` method. To do this, let's rewrite the `Query` method in `app.go`: - -```go -func (app *KVStoreApplication) Query(req abcitypes.RequestQuery) abcitypes.ResponseQuery { - resp := abcitypes.ResponseQuery{Key: req.Data} - - dbErr := app.db.View(func(txn *badger.Txn) error { - item, err := txn.Get(req.Data) - if err != nil { - if err != badger.ErrKeyNotFound { - return err - } - resp.Log = "key does not exist" - return nil - } - - return item.Value(func(val []byte) error { - resp.Log = "exists" - resp.Value = val - return nil - }) - }) - if dbErr != nil { - log.Panicf("Error reading database, unable to execute query: %v", dbErr) - } - return resp -} -``` - -Since it reads only committed data from the store, transactions that are part of a block -that is being processed are not reflected in the query result. - - -## 1.4 Starting an application and a CometBFT instance in the same process - -Now that we have the basic functionality of our application in place, let's put it all together inside of our main.go file. - -Change the contents of your `main.go` file to the following. 
- -```go -package main - -import ( - "flag" - "fmt" - "github.com/cometbft/cometbft/p2p" - "github.com/cometbft/cometbft/privval" - "github.com/cometbft/cometbft/proxy" - "log" - "os" - "os/signal" - "path/filepath" - "syscall" - - "github.com/dgraph-io/badger/v3" - "github.com/spf13/viper" - cfg "github.com/cometbft/cometbft/config" - cmtflags "github.com/cometbft/cometbft/libs/cli/flags" - cmtlog "github.com/cometbft/cometbft/libs/log" - nm "github.com/cometbft/cometbft/node" -) - -var homeDir string - -func init() { - flag.StringVar(&homeDir, "cmt-home", "", "Path to the CometBFT config directory (if empty, uses $HOME/.cometbft)") -} - -func main() { - flag.Parse() - if homeDir == "" { - homeDir = os.ExpandEnv("$HOME/.cometbft") - } - config := cfg.DefaultConfig() - - config.SetRoot(homeDir) - - viper.SetConfigFile(fmt.Sprintf("%s/%s", homeDir, "config/config.toml")) - if err := viper.ReadInConfig(); err != nil { - log.Fatalf("Reading config: %v", err) - } - if err := viper.Unmarshal(config); err != nil { - log.Fatalf("Decoding config: %v", err) - } - if err := config.ValidateBasic(); err != nil { - log.Fatalf("Invalid configuration data: %v", err) - } - - dbPath := filepath.Join(homeDir, "badger") - db, err := badger.Open(badger.DefaultOptions(dbPath)) - if err != nil { - log.Fatalf("Opening database: %v", err) - } - defer func() { - if err := db.Close(); err != nil { - log.Printf("Closing database: %v", err) - } - }() - - app := NewKVStoreApplication(db) - - pv := privval.LoadFilePV( - config.PrivValidatorKeyFile(), - config.PrivValidatorStateFile(), - ) - - nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile()) - if err != nil { - log.Fatalf("failed to load node's key: %v", err) - } - - logger := cmtlog.NewTMLogger(cmtlog.NewSyncWriter(os.Stdout)) - logger, err = cmtflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel) - if err != nil { - log.Fatalf("failed to parse log level: %v", err) - } - - node, err := nm.NewNode( - config, - pv, - nodeKey, - proxy.NewLocalClientCreator(app), - nm.DefaultGenesisDocProviderFunc(config), - nm.DefaultDBProvider, - nm.DefaultMetricsProvider(config.Instrumentation), - logger) - - if err != nil { - log.Fatalf("Creating node: %v", err) - } - - node.Start() - defer func() { - node.Stop() - node.Wait() - }() - - c := make(chan os.Signal, 1) - signal.Notify(c, os.Interrupt, syscall.SIGTERM) - <-c -} -``` - -This is a huge blob of code, so let's break it down into pieces. - -First, we use [viper](https://github.com/spf13/viper) to load the CometBFT configuration files, which we will generate later: - - -```go - config := cfg.DefaultValidatorConfig() - - config.SetRoot(homeDir) - - viper.SetConfigFile(fmt.Sprintf("%s/%s", homeDir, "config/config.toml")) - if err := viper.ReadInConfig(); err != nil { - log.Fatalf("Reading config: %v", err) - } - if err := viper.Unmarshal(config); err != nil { - log.Fatalf("Decoding config: %v", err) - } - if err := config.ValidateBasic(); err != nil { - log.Fatalf("Invalid configuration data: %v", err) - } -``` - -Next, we initialize the Badger database and create an app instance. - -```go - dbPath := filepath.Join(homeDir, "badger") - db, err := badger.Open(badger.DefaultOptions(dbPath)) - if err != nil { - log.Fatalf("Opening database: %v", err) - } - defer func() { - if err := db.Close(); err != nil { - log.Fatalf("Closing database: %v", err) - } - }() - - app := NewKVStoreApplication(db) -``` - -We use `FilePV`, which is a private validator (i.e. thing which signs consensus -messages). 
Normally, you would use `SignerRemote` to connect to an external -[HSM](https://kb.certus.one/hsm.html). - -```go - pv := privval.LoadFilePV( - config.PrivValidatorKeyFile(), - config.PrivValidatorStateFile(), - ) -``` - -`nodeKey` is needed to identify the node in a p2p network. - -```go - nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile()) - if err != nil { - return nil, fmt.Errorf("failed to load node's key: %w", err) - } -``` - -Now we have everything set up to run the CometBFT node. We construct -a node by passing it the configuration, the logger, a handle to our application and -the genesis information: - -```go - node, err := nm.NewNode( - config, - pv, - nodeKey, - proxy.NewLocalClientCreator(app), - nm.DefaultGenesisDocProviderFunc(config), - nm.DefaultDBProvider, - nm.DefaultMetricsProvider(config.Instrumentation), - logger) - - if err != nil { - log.Fatalf("Creating node: %v", err) - } -``` - -Finally, we start the node, i.e., the CometBFT service inside our application: - -```go - node.Start() - defer func() { - node.Stop() - node.Wait() - }() -``` - -The additional logic at the end of the file allows the program to catch SIGTERM. This means that the node can shut down gracefully when an operator tries to kill the program: - -```go - c := make(chan os.Signal, 1) - signal.Notify(c, os.Interrupt, syscall.SIGTERM) - <-c -``` - -## 1.5 Initializing and Running - -Our application is almost ready to run, but first we'll need to populate the CometBFT configuration files. -The following command will create a `cometbft-home` directory in your project and add a basic set of configuration files in `cometbft-home/config/`. -For more information on what these files contain see [the configuration documentation](https://github.com/cometbft/cometbft/blob/v0.34.x/docs/core/configuration.md). - -From the root of your project, run: - -```bash -go run github.com/cometbft/cometbft/cmd/cometbft@v0.34.27 init --home /tmp/cometbft-home -``` - -You should see an output similar to the following: - -```bash -I[2022-11-09|09:06:34.444] Generated private validator module=main keyFile=/tmp/cometbft-home/config/priv_validator_key.json stateFile=/tmp/cometbft-home/data/priv_validator_state.json -I[2022-11-09|09:06:34.444] Generated node key module=main path=/tmp/cometbft-home/config/node_key.json -I[2022-11-09|09:06:34.444] Generated genesis file module=main path=/tmp/cometbft-home/config/genesis.json -``` - -Now rebuild the app: - -```bash -go build -mod=mod # use -mod=mod to automatically refresh the dependencies -``` - -Everything is now in place to run your application. Run: - -```bash -./kvstore -cmt-home /tmp/cometbft-home -``` - -The application will start and you should see a continuous output starting with: - -```bash -badger 2022/11/09 09:08:50 INFO: All 0 tables opened in 0s -badger 2022/11/09 09:08:50 INFO: Discard stats nextEmptySlot: 0 -badger 2022/11/09 09:08:50 INFO: Set nextTxnTs to 0 -I[2022-11-09|09:08:50.085] service start module=proxy msg="Starting multiAppConn service" impl=multiAppConn -I[2022-11-09|09:08:50.085] service start module=abci-client connection=query msg="Starting localClient service" impl=localClient -I[2022-11-09|09:08:50.085] service start module=abci-client connection=snapshot msg="Starting localClient service" impl=localClient -... 
-``` - -More importantly, the application using CometBFT is producing blocks 🎉🎉 and you can see this reflected in the log output in lines like this: - -```bash -I[2022-11-09|09:08:52.147] received proposal module=consensus proposal="Proposal{2/0 (F518444C0E348270436A73FD0F0B9DFEA758286BEB29482F1E3BEA75330E825C:1:C73D3D1273F2, -1) AD19AE292A45 @ 2022-11-09T12:08:52.143393Z}" -I[2022-11-09|09:08:52.152] received complete proposal block module=consensus height=2 hash=F518444C0E348270436A73FD0F0B9DFEA758286BEB29482F1E3BEA75330E825C -I[2022-11-09|09:08:52.160] finalizing commit of block module=consensus height=2 hash=F518444C0E348270436A73FD0F0B9DFEA758286BEB29482F1E3BEA75330E825C root= num_txs=0 -I[2022-11-09|09:08:52.167] executed block module=state height=2 num_valid_txs=0 num_invalid_txs=0 -I[2022-11-09|09:08:52.171] committed state module=state height=2 num_txs=0 app_hash= -``` - -The blocks, as you can see from the `num_valid_txs=0` part, are empty, but let's remedy that next. - -## 1.6 Using the application - -Let's try submitting a transaction to our new application. -Open another terminal window and run the following curl command: - - -```bash -curl -s 'localhost:26657/broadcast_tx_commit?tx="cometbft=rocks"' -``` - -If everything went well, you should see a response indicating which height the -transaction was included in the blockchain. - -Finally, let's make sure that transaction really was persisted by the application. -Run the following command: - -```bash -curl -s 'localhost:26657/abci_query?data="cometbft"' -``` - -Let's examine the response object that this request returns. -The request returns a `json` object with a `key` and `value` field set. - -```json -... - "key": "dGVuZGVybWludA==", - "value": "cm9ja3M=", -... -``` - -Those values don't look like the `key` and `value` we sent to CometBFT. -What's going on here? - -The response contains a `base64` encoded representation of the data we submitted. -To get the original value out of this data, we can use the `base64` command line utility: - -```bash -echo "cm9ja3M=" | base64 -d -``` - -## Outro - -I hope everything went smoothly and your first, but hopefully not the last, -CometBFT application is up and running. If not, please [open an issue on -Github](https://github.com/cometbft/cometbft/issues/new/choose). diff --git a/docs/guides/go.md b/docs/guides/go.md deleted file mode 100644 index bd92b8504d..0000000000 --- a/docs/guides/go.md +++ /dev/null @@ -1,683 +0,0 @@ ---- -order: 4 ---- - -# Creating an application in Go - -## Guide Assumptions - -This guide is designed for beginners who want to get started with a CometBFT -application from scratch. It does not assume that you have any prior -experience with CometBFT. - -CometBFT is a service that provides a Byzantine Fault Tolerant consensus engine -for state-machine replication. The replicated state-machine, or "application", can be written -in any language that can send and receive protocol buffer messages in a client-server model. -Applications written in Go can also use CometBFT as a library and run the service in the same -process as the application. - -By following along this tutorial you will create a CometBFT application called kvstore, -a (very) simple distributed BFT key-value store. -The application will be written in Go and -some understanding of the Go programming language is expected. 
-If you have never written Go, you may want to go through [Learn X in Y minutes -Where X=Go](https://learnxinyminutes.com/docs/go/) first, to familiarize -yourself with the syntax. - -Note: Please use the latest released version of this guide and of CometBFT. -We strongly advise against using unreleased commits for your development. - -### Built-in app vs external app - -On the one hand, to get maximum performance you can run your application in -the same process as the CometBFT, as long as your application is written in Go. -[Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is written -this way. -If that is the way you wish to proceed, use the [Creating a built-in application in Go](./go-built-in.md) guide instead of this one. - -On the other hand, having a separate application might give you better security -guarantees as two processes would be communicating via established binary protocol. -CometBFT will not have access to application's state. -This is the approach followed in this tutorial. - -## 1.1 Installing Go - -Verify that you have the latest version of Go installed (refer to the [official guide for installing Go](https://golang.org/doc/install)): - -```bash -$ go version -go version go1.22.7 darwin/amd64 -``` - -## 1.2 Creating a new Go project - -We'll start by creating a new Go project. - -```bash -mkdir kvstore -``` - -Inside the example directory, create a `main.go` file with the following content: - -```go -package main - -import ( - "fmt" -) - -func main() { - fmt.Println("Hello, CometBFT") -} -``` - -When run, this should print "Hello, CometBFT" to the standard output. - -```bash -cd kvstore -$ go run main.go -Hello, CometBFT -``` - -We are going to use [Go modules](https://github.com/golang/go/wiki/Modules) for -dependency management, so let's start by including a dependency on this version of -CometBFT. - -```bash -go mod init kvstore -go get github.com/cometbft/cometbft@v0.34.27 -``` - -After running the above commands you will see two generated files, `go.mod` and `go.sum`. -The go.mod file should look similar to: - -```go -module github.com/me/example - -go 1.22 - -require ( - github.com/cometbft/cometbft v0.34.27 -) -``` - -As you write the kvstore application, you can rebuild the binary by -pulling any new dependencies and recompiling it. - -```sh -go get -go build -``` - - -## 1.3 Writing a CometBFT application - -CometBFT communicates with the application through the Application -BlockChain Interface (ABCI). The messages exchanged through the interface are -defined in the ABCI [protobuf -file](https://github.com/cometbft/cometbft/blob/v0.34.x/proto/tendermint/abci/types.proto). - -We begin by creating the basic scaffolding for an ABCI application by -creating a new type, `KVStoreApplication`, which implements the -methods defined by the `abcitypes.Application` interface. 
-
-Create a file called `app.go` with the following contents:
-
-```go
-package main
-
-import (
-    abcitypes "github.com/cometbft/cometbft/abci/types"
-)
-
-type KVStoreApplication struct{}
-
-var _ abcitypes.Application = (*KVStoreApplication)(nil)
-
-func NewKVStoreApplication() *KVStoreApplication {
-    return &KVStoreApplication{}
-}
-
-func (app *KVStoreApplication) Info(info abcitypes.RequestInfo) abcitypes.ResponseInfo {
-    return abcitypes.ResponseInfo{}
-}
-
-func (app *KVStoreApplication) Query(query abcitypes.RequestQuery) abcitypes.ResponseQuery {
-    return abcitypes.ResponseQuery{}
-}
-
-func (app *KVStoreApplication) CheckTx(tx abcitypes.RequestCheckTx) abcitypes.ResponseCheckTx {
-    return abcitypes.ResponseCheckTx{}
-}
-
-func (app *KVStoreApplication) InitChain(chain abcitypes.RequestInitChain) abcitypes.ResponseInitChain {
-    return abcitypes.ResponseInitChain{}
-}
-
-func (app *KVStoreApplication) BeginBlock(block abcitypes.RequestBeginBlock) abcitypes.ResponseBeginBlock {
-    return abcitypes.ResponseBeginBlock{}
-}
-
-func (app *KVStoreApplication) DeliverTx(tx abcitypes.RequestDeliverTx) abcitypes.ResponseDeliverTx {
-    return abcitypes.ResponseDeliverTx{}
-}
-
-func (app *KVStoreApplication) EndBlock(block abcitypes.RequestEndBlock) abcitypes.ResponseEndBlock {
-    return abcitypes.ResponseEndBlock{}
-}
-
-func (app *KVStoreApplication) Commit() abcitypes.ResponseCommit {
-    return abcitypes.ResponseCommit{}
-}
-
-func (app *KVStoreApplication) ListSnapshots(snapshots abcitypes.RequestListSnapshots) abcitypes.ResponseListSnapshots {
-    return abcitypes.ResponseListSnapshots{}
-}
-
-func (app *KVStoreApplication) OfferSnapshot(snapshot abcitypes.RequestOfferSnapshot) abcitypes.ResponseOfferSnapshot {
-    return abcitypes.ResponseOfferSnapshot{}
-}
-
-func (app *KVStoreApplication) LoadSnapshotChunk(chunk abcitypes.RequestLoadSnapshotChunk) abcitypes.ResponseLoadSnapshotChunk {
-    return abcitypes.ResponseLoadSnapshotChunk{}
-}
-
-func (app *KVStoreApplication) ApplySnapshotChunk(chunk abcitypes.RequestApplySnapshotChunk) abcitypes.ResponseApplySnapshotChunk {
-    return abcitypes.ResponseApplySnapshotChunk{}
-}
-```
-
-The types used here are defined in the CometBFT library and were added as a dependency
-to the project when you ran `go get`. If your IDE is not recognizing the types, go ahead and run the command again:
-
-```bash
-go get github.com/cometbft/cometbft@v0.34.27
-```
-
-Now go back to `main.go` and modify the `main` function so it matches the following,
-where an instance of the `KVStoreApplication` type is created:
-
-```go
-func main() {
-    fmt.Println("Hello, CometBFT")
-
-    _ = NewKVStoreApplication()
-}
-```
-
-You can recompile and run the application now by running `go get` and `go build`, but it does
-not do anything yet.
-So let's revisit the code, adding the logic needed to implement our minimal key/value store
-and to start it along with the CometBFT service.
-
-### 1.3.1 Add a persistent data store
-
-Our application will need to write its state out to persistent storage so that it
-can stop and start without losing all of its data.
-
-For this tutorial, we will use [BadgerDB](https://github.com/dgraph-io/badger),
-a fast embedded key-value store.
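-
-If Badger is new to you, here is a brief, standalone sketch (try it in a scratch
-module; it is not meant to be added to the kvstore project, and the database
-path is arbitrary) of the transactional read/write API that the following
-sections rely on:
-
-```go
-package main
-
-import (
-    "fmt"
-    "log"
-
-    "github.com/dgraph-io/badger/v3"
-)
-
-func main() {
-    // Open a throwaway database; Badger stores its data in this directory.
-    db, err := badger.Open(badger.DefaultOptions("/tmp/badger-demo"))
-    if err != nil {
-        log.Fatalf("Opening database: %v", err)
-    }
-    defer db.Close()
-
-    // Write a key/value pair inside a read-write Badger transaction.
-    if err := db.Update(func(txn *badger.Txn) error {
-        return txn.Set([]byte("cometbft"), []byte("rocks"))
-    }); err != nil {
-        log.Fatalf("Writing to database: %v", err)
-    }
-
-    // Read the value back inside a read-only Badger transaction.
-    if err := db.View(func(txn *badger.Txn) error {
-        item, err := txn.Get([]byte("cometbft"))
-        if err != nil {
-            return err
-        }
-        return item.Value(func(val []byte) error {
-            fmt.Printf("cometbft=%s\n", val)
-            return nil
-        })
-    }); err != nil {
-        log.Fatalf("Reading from database: %v", err)
-    }
-}
-```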
-
-First, add Badger as a dependency of your Go module using the `go get` command:
-
-`go get github.com/dgraph-io/badger/v3`
-
-Next, let's update the application and its constructor to receive a handle to the database, as follows:
-
-```go
-type KVStoreApplication struct {
-    db           *badger.DB
-    onGoingBlock *badger.Txn
-}
-
-var _ abcitypes.Application = (*KVStoreApplication)(nil)
-
-func NewKVStoreApplication(db *badger.DB) *KVStoreApplication {
-    return &KVStoreApplication{db: db}
-}
-```
-
-The `onGoingBlock` keeps track of the Badger transaction that will update the application's state when a block
-is completed. Don't worry about it for now; we'll get to that later.
-
-Next, update the `import` stanza at the top to include the Badger library:
-
-```go
-import (
-    "github.com/dgraph-io/badger/v3"
-    abcitypes "github.com/cometbft/cometbft/abci/types"
-)
-```
-
-Finally, update the `main.go` file to invoke the updated constructor:
-
-```go
-    _ = NewKVStoreApplication(nil)
-```
-
-### 1.3.2 CheckTx
-
-When CometBFT receives a new transaction from a client, or from another full node,
-CometBFT asks the application if the transaction is acceptable, using the `CheckTx` method.
-Invalid transactions will not be shared with other nodes, will not become part of any block, and,
-therefore, will not be executed by the application.
-
-In our application, a transaction is a string with the form `key=value`, indicating a key and value to write to the store.
-
-The most basic validation check we can perform is to check whether the transaction conforms to the `key=value` pattern.
-For that, let's add the following helper method to `app.go`:
-
-```go
-func (app *KVStoreApplication) isValid(tx []byte) uint32 {
-    // check format
-    parts := bytes.Split(tx, []byte("="))
-    if len(parts) != 2 {
-        return 1
-    }
-
-    return 0
-}
-```
-
-Now you can rewrite the `CheckTx` method to use the helper function:
-
-```go
-func (app *KVStoreApplication) CheckTx(req abcitypes.RequestCheckTx) abcitypes.ResponseCheckTx {
-    code := app.isValid(req.Tx)
-    return abcitypes.ResponseCheckTx{Code: code}
-}
-```
-
-While this `CheckTx` is simple and only validates that the transaction is well-formed,
-it is very common for `CheckTx` to make more complex use of the state of an application.
-For example, you may refuse to overwrite an existing value (a sketch of this appears below),
-or you may associate versions with the key/value pairs and allow the caller to specify a version to
-perform a conditional update.
-
-Depending on the checks performed and the conditions violated, the function may return
-different codes, but any response with a non-zero code will be considered invalid
-by CometBFT. Our `CheckTx` logic returns 0 to CometBFT when a transaction passes
-its validation checks. The specific value of the code is meaningless to CometBFT.
-Non-zero codes are logged by CometBFT so applications can provide more specific
-information on why the transaction was rejected.
-
-Note that `CheckTx` does not execute the transaction; it only verifies that the transaction could be executed. We do not know yet whether the rest of the network has agreed to accept this transaction into a block.
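-
-To make the idea of a state-dependent check concrete, here is a hypothetical
-variant (the method name and the code value `2` are invented for illustration;
-it is not part of the application we are building) that refuses to overwrite
-an existing key by consulting the Badger store:
-
-```go
-// checkNoOverwrite is a hypothetical, stricter check: it rejects transactions
-// whose key is already present in the store.
-func (app *KVStoreApplication) checkNoOverwrite(tx []byte) uint32 {
-    if code := app.isValid(tx); code != 0 {
-        return code
-    }
-
-    key := bytes.SplitN(tx, []byte("="), 2)[0]
-    err := app.db.View(func(txn *badger.Txn) error {
-        _, err := txn.Get(key)
-        return err
-    })
-
-    if err == badger.ErrKeyNotFound {
-        return 0 // the key is unused, so the transaction is acceptable
-    }
-    if err != nil {
-        return 1 // unexpected database error, reject the transaction
-    }
-    return 2 // the key already exists, refuse to overwrite it
-}
-```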
-
-
-Finally, make sure to add the `bytes` package to the `import` stanza at the top of `app.go`:
-
-```go
-import (
-    "bytes"
-
-    "github.com/dgraph-io/badger/v3"
-    abcitypes "github.com/cometbft/cometbft/abci/types"
-)
-```
-
-### 1.3.3 BeginBlock -> DeliverTx -> EndBlock -> Commit
-
-When the CometBFT consensus engine has decided on the block, the block is transferred to the
-application over three ABCI method calls: `BeginBlock`, `DeliverTx`, and `EndBlock`.
-
-- `BeginBlock` is called once to indicate to the application that it is about to
-receive a block.
-- `DeliverTx` is called repeatedly, once for each application transaction that was included in the block.
-- `EndBlock` is called once to indicate to the application that no more transactions
-will be delivered to the application within this block.
-
-Note that, to implement these calls in our application, we're going to make use of Badger's
-transaction mechanism. We will always refer to these as Badger transactions, so as not to
-confuse them with the transactions included in the blocks delivered by CometBFT,
-the _application transactions_.
-
-First, let's create a new Badger transaction during `BeginBlock`. All application transactions in the
-current block will be executed within this Badger transaction.
-Then, return to inform CometBFT that the application is ready to receive application transactions:
-
-```go
-func (app *KVStoreApplication) BeginBlock(req abcitypes.RequestBeginBlock) abcitypes.ResponseBeginBlock {
-    app.onGoingBlock = app.db.NewTransaction(true)
-    return abcitypes.ResponseBeginBlock{}
-}
-```
-
-Next, let's modify `DeliverTx` to add the `key` and `value` to the database transaction every time our application
-receives a new application transaction through `RequestDeliverTx`:
-
-```go
-func (app *KVStoreApplication) DeliverTx(req abcitypes.RequestDeliverTx) abcitypes.ResponseDeliverTx {
-    if code := app.isValid(req.Tx); code != 0 {
-        return abcitypes.ResponseDeliverTx{Code: code}
-    }
-
-    parts := bytes.SplitN(req.Tx, []byte("="), 2)
-    key, value := parts[0], parts[1]
-
-    if err := app.onGoingBlock.Set(key, value); err != nil {
-        log.Panicf("Error writing to database, unable to execute tx: %v", err)
-    }
-
-    return abcitypes.ResponseDeliverTx{Code: 0}
-}
-```
-
-Note that we check the validity of the transaction _again_ during `DeliverTx`.
-Transactions are not guaranteed to be valid when they are delivered to an
-application, even if they were valid when they were proposed.
-This can happen if the application state is used to determine transaction
-validity. Application state may have changed between the initial execution of `CheckTx`
-and the transaction delivery in `DeliverTx` in a way that rendered the transaction
-no longer valid.
-
-`EndBlock` is called to inform the application that the full block has been delivered
-and gives the application a chance to perform any other computation needed before the
-effects of the transactions become permanent.
-
-Note that `EndBlock` **cannot** yet commit the Badger transaction we were building
-up during `DeliverTx`.
-Since other methods, such as `Query`, rely on a consistent view of the application's
-state, the application should only update its state by committing the Badger transaction
-when the full block has been delivered and the `Commit` method is invoked.
-
-The `Commit` method tells the application to make permanent the effects of
-the application transactions.
-Let's update the method to terminate the pending Badger transaction and -persist the resulting state: - -```go -func (app *KVStoreApplication) Commit() abcitypes.ResponseCommit { - if err := app.onGoingBlock.Commit(); err != nil { - log.Panicf("Error writing to database, unable to commit block: %v", err) - } - return abcitypes.ResponseCommit{Data: []byte{}} -} -``` - -Finally, make sure to add the log library to the `import` stanza as well: - -```go -import ( - "bytes" - "log" - - "github.com/dgraph-io/badger/v3" - abcitypes "github.com/cometbft/cometbft/abci/types" -) -``` - -You may have noticed that the application we are writing will crash if it receives -an unexpected error from the Badger database during the `DeliverTx` or `Commit` methods. -This is not an accident. If the application received an error from the database, there -is no deterministic way for it to make progress so the only safe option is to terminate. - -### 1.3.4 Query - -When a client tries to read some information from the `kvstore`, the request will be -handled in the `Query` method. To do this, let's rewrite the `Query` method in `app.go`: - -```go -func (app *KVStoreApplication) Query(req abcitypes.RequestQuery) abcitypes.ResponseQuery { - resp := abcitypes.ResponseQuery{Key: req.Data} - - dbErr := app.db.View(func(txn *badger.Txn) error { - item, err := txn.Get(req.Data) - if err != nil { - if err != badger.ErrKeyNotFound { - return err - } - resp.Log = "key does not exist" - return nil - } - - return item.Value(func(val []byte) error { - resp.Log = "exists" - resp.Value = val - return nil - }) - }) - if dbErr != nil { - log.Panicf("Error reading database, unable to execute query: %v", dbErr) - } - return resp -} -``` - -Since it reads only committed data from the store, transactions that are part of a block -that is being processed are not reflected in the query result. - - - - -## 1.4 Starting an application and a CometBFT instance - -Now that we have the basic functionality of our application in place, let's put it all together inside of our `main.go` file. - -Change the contents of your `main.go` file to the following. 
- -```go -package main - -import ( - "flag" - "fmt" - abciserver "github.com/cometbft/cometbft/abci/server" - "log" - "os" - "os/signal" - "path/filepath" - "syscall" - - "github.com/dgraph-io/badger/v3" - cmtlog "github.com/cometbft/cometbft/libs/log" -) - -var homeDir string -var socketAddr string - -func init() { - flag.StringVar(&homeDir, "kv-home", "", "Path to the kvstore directory (if empty, uses $HOME/.kvstore)") - flag.StringVar(&socketAddr, "socket-addr", "unix://example.sock", "Unix domain socket address (if empty, uses \"unix://example.sock\"") -} - -func main() { - flag.Parse() - if homeDir == "" { - homeDir = os.ExpandEnv("$HOME/.kvstore") - } - - dbPath := filepath.Join(homeDir, "badger") - db, err := badger.Open(badger.DefaultOptions(dbPath)) - if err != nil { - log.Fatalf("Opening database: %v", err) - } - defer func() { - if err := db.Close(); err != nil { - log.Fatalf("Closing database: %v", err) - } - }() - - app := NewKVStoreApplication(db) - - logger := cmtlog.NewTMLogger(cmtlog.NewSyncWriter(os.Stdout)) - - server := abciserver.NewSocketServer(socketAddr, app) - server.SetLogger(logger) - - if err := server.Start(); err != nil { - fmt.Fprintf(os.Stderr, "error starting socket server: %v", err) - os.Exit(1) - } - defer server.Stop() - - c := make(chan os.Signal, 1) - signal.Notify(c, os.Interrupt, syscall.SIGTERM) - <-c -} -``` - -This is a huge blob of code, so let's break it down into pieces. - -First, we initialize the Badger database and create an app instance: - -```go - dbPath := filepath.Join(homeDir, "badger") - db, err := badger.Open(badger.DefaultOptions(dbPath)) - if err != nil { - log.Fatalf("Opening database: %v", err) - } - defer func() { - if err := db.Close(); err != nil { - log.Fatalf("Closing database: %v", err) - } - }() - - app := NewKVStoreApplication(db) -``` - -For **Windows** users, restarting this app will make badger throw an error as it requires value log to be truncated. For more information on this, visit [here](https://github.com/dgraph-io/badger/issues/744). -This can be avoided by setting the truncate option to true, like this: - -```go - db, err := badger.Open(badger.DefaultOptions("/tmp/badger").WithTruncate(true)) -``` - -Then we start the ABCI server and add some signal handling to gracefully stop -it upon receiving SIGTERM or Ctrl-C. CometBFT will act as a client, -which connects to our server and send us transactions and other messages. - -```go - server := abciserver.NewSocketServer(socketAddr, app) - server.SetLogger(logger) - - if err := server.Start(); err != nil { - fmt.Fprintf(os.Stderr, "error starting socket server: %v", err) - os.Exit(1) - } - defer server.Stop() - - c := make(chan os.Signal, 1) - signal.Notify(c, os.Interrupt, syscall.SIGTERM) - <-c -``` - -## 1.5 Initializing and Running - -Our application is almost ready to run, but first we'll need to populate the CometBFT configuration files. -The following command will create a `cometbft-home` directory in your project and add a basic set of configuration files in `cometbft-home/config/`. -For more information on what these files contain see [the configuration documentation](https://github.com/cometbft/cometbft/blob/v0.34.x/docs/core/configuration.md). 
- -From the root of your project, run: - -```bash -go run github.com/cometbft/cometbft/cmd/cometbft@v0.34.27 init --home /tmp/cometbft-home -``` - -You should see an output similar to the following: - -```bash -I[2022-11-09|09:06:34.444] Generated private validator module=main keyFile=/tmp/cometbft-home/config/priv_validator_key.json stateFile=/tmp/cometbft-home/data/priv_validator_state.json -I[2022-11-09|09:06:34.444] Generated node key module=main path=/tmp/cometbft-home/config/node_key.json -I[2022-11-09|09:06:34.444] Generated genesis file module=main path=/tmp/cometbft-home/config/genesis.json -``` - -Now rebuild the app: - -```bash -go build -mod=mod # use -mod=mod to automatically refresh the dependencies -``` - -Everything is now in place to run your application. Run: - -```bash -./kvstore -kv-home /tmp/badger-home -``` - -The application will start and you should see an output similar to the following: - -```bash -badger 2022/11/09 17:01:28 INFO: All 0 tables opened in 0s -badger 2022/11/09 17:01:28 INFO: Discard stats nextEmptySlot: 0 -badger 2022/11/09 17:01:28 INFO: Set nextTxnTs to 0 -I[2022-11-09|17:01:28.726] service start msg="Starting ABCIServer service" impl=ABCIServer -I[2022-11-09|17:01:28.726] Waiting for new connection... -``` - -Then we need to start CometBFT service and point it to our application. -Open a new terminal window and cd to the same folder where the app is running. -Then execute the following command: - -```bash -go run github.com/cometbft/cometbft/cmd/cometbft@v0.34.27 node --home /tmp/cometbft-home --proxy_app=unix://example.sock -``` - -This should start the full node and connect to our ABCI application, which will be -reflected in the application output. - -```sh -I[2022-11-09|17:07:08.124] service start msg="Starting ABCIServer service" impl=ABCIServer -I[2022-11-09|17:07:08.124] Waiting for new connection... -I[2022-11-09|17:08:12.702] Accepted a new connection -I[2022-11-09|17:08:12.703] Waiting for new connection... -I[2022-11-09|17:08:12.703] Accepted a new connection -I[2022-11-09|17:08:12.703] Waiting for new connection... -``` - -Also, the application using CometBFT Core is producing blocks 🎉🎉 and you can see this reflected in the log output of the service in lines like this: - -```bash -I[2022-11-09|09:08:52.147] received proposal module=consensus proposal="Proposal{2/0 (F518444C0E348270436A73FD0F0B9DFEA758286BEB29482F1E3BEA75330E825C:1:C73D3D1273F2, -1) AD19AE292A45 @ 2022-11-09T12:08:52.143393Z}" -I[2022-11-09|09:08:52.152] received complete proposal block module=consensus height=2 hash=F518444C0E348270436A73FD0F0B9DFEA758286BEB29482F1E3BEA75330E825C -I[2022-11-09|09:08:52.160] finalizing commit of block module=consensus height=2 hash=F518444C0E348270436A73FD0F0B9DFEA758286BEB29482F1E3BEA75330E825C root= num_txs=0 -I[2022-11-09|09:08:52.167] executed block module=state height=2 num_valid_txs=0 num_invalid_txs=0 -I[2022-11-09|09:08:52.171] committed state module=state height=2 num_txs=0 app_hash= -``` - -The blocks, as you can see from the `num_valid_txs=0` part, are empty, but let's remedy that next. - -## 1.6 Using the application - -Let's try submitting a transaction to our new application. -Open another terminal window and run the following curl command: - - -```bash -curl -s 'localhost:26657/broadcast_tx_commit?tx="cometbft=rocks"' -``` - -If everything went well, you should see a response indicating which height the -transaction was included in the blockchain. 
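-
-If you prefer Go over `curl`, the same interaction can be scripted with
-CometBFT's RPC client. The following is a small, optional sketch (assuming the
-node's RPC endpoint is listening on the default `localhost:26657`; run it from
-a separate directory or module so it does not clash with the application's
-`main` package):
-
-```go
-package main
-
-import (
-    "context"
-    "fmt"
-    "log"
-
-    cmtbytes "github.com/cometbft/cometbft/libs/bytes"
-    rpchttp "github.com/cometbft/cometbft/rpc/client/http"
-    "github.com/cometbft/cometbft/types"
-)
-
-func main() {
-    client, err := rpchttp.New("http://localhost:26657", "/websocket")
-    if err != nil {
-        log.Fatalf("Creating RPC client: %v", err)
-    }
-    ctx := context.Background()
-
-    // Submit the transaction and wait until it has been included in a block.
-    res, err := client.BroadcastTxCommit(ctx, types.Tx("cometbft=rocks"))
-    if err != nil {
-        log.Fatalf("Broadcasting transaction: %v", err)
-    }
-    fmt.Println("committed at height", res.Height)
-
-    // Ask the application for the value stored under the key "cometbft".
-    q, err := client.ABCIQuery(ctx, "", cmtbytes.HexBytes("cometbft"))
-    if err != nil {
-        log.Fatalf("Querying application: %v", err)
-    }
-    fmt.Printf("%s=%s\n", q.Response.Key, q.Response.Value)
-}
-```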
- -Finally, let's make sure that transaction really was persisted by the application. -Run the following command: - -```bash -curl -s 'localhost:26657/abci_query?data="cometbft"' -``` - -Let's examine the response object that this request returns. -The request returns a `json` object with a `key` and `value` field set. - -```json -... - "key": "dGVuZGVybWludA==", - "value": "cm9ja3M=", -... -``` - -Those values don't look like the `key` and `value` we sent to CometBFT. -What's going on here? - -The response contains a `base64` encoded representation of the data we submitted. -To get the original value out of this data, we can use the `base64` command line utility: - -```bash -echo "cm9ja3M=" | base64 -d -``` - -## Outro - -I hope everything went smoothly and your first, but hopefully not the last, -CometBFT application is up and running. If not, please [open an issue on -Github](https://github.com/cometbft/cometbft/issues/new/choose). diff --git a/docs/guides/install.md b/docs/guides/install.md deleted file mode 100644 index 4b2d3166e0..0000000000 --- a/docs/guides/install.md +++ /dev/null @@ -1,120 +0,0 @@ ---- -order: 1 ---- - -# Install CometBFT - -## From Binary - -To download pre-built binaries, see the [releases page](https://github.com/cometbft/cometbft/releases). - -## From Source - -You'll need `go` [installed](https://golang.org/doc/install) and the required -environment variables set, which can be done with the following commands: - -```sh -echo export GOPATH=\"\$HOME/go\" >> ~/.bash_profile -echo export PATH=\"\$PATH:\$GOPATH/bin\" >> ~/.bash_profile -``` - -### Get Source Code - -```sh -git clone https://github.com/cometbft/cometbft.git -cd cometbft -``` - -### Compile - -```sh -make install -``` - -to put the binary in `$GOPATH/bin` or use: - -```sh -make build -``` - -to put the binary in `./build`. - -_DISCLAIMER_ The binary of CometBFT is build/installed without the DWARF -symbol table. If you would like to build/install CometBFT with the DWARF -symbol and debug information, remove `-s -w` from `BUILD_FLAGS` in the make -file. - -The latest CometBFT is now installed. You can verify the installation by -running: - -```sh -cometbft version -``` - -## Run - -To start a one-node blockchain with a simple in-process application: - -```sh -cometbft init -cometbft node --proxy_app=kvstore -``` - -## Reinstall - -If you already have CometBFT installed, and you make updates, simply - -```sh -make install -``` - -To upgrade, run - -```sh -git pull origin main -make install -``` - -## Compile with CLevelDB support - -Install [LevelDB](https://github.com/google/leveldb) (minimum version is 1.7). - -Install LevelDB with snappy (optionally). Below are commands for Ubuntu: - -```sh -sudo apt-get update -sudo apt install build-essential - -sudo apt-get install libsnappy-dev - -wget https://github.com/google/leveldb/archive/v1.20.tar.gz && \ - tar -zxvf v1.20.tar.gz && \ - cd leveldb-1.20/ && \ - make && \ - sudo cp -r out-static/lib* out-shared/lib* /usr/local/lib/ && \ - cd include/ && \ - sudo cp -r leveldb /usr/local/include/ && \ - sudo ldconfig && \ - rm -f v1.20.tar.gz -``` - -Set a database backend to `cleveldb`: - -```toml -# config/config.toml -db_backend = "cleveldb" -``` - -To install CometBFT, run: - -```sh -CGO_LDFLAGS="-lsnappy" make install COMETBFT_BUILD_OPTIONS=cleveldb -``` - -or run: - -```sh -CGO_LDFLAGS="-lsnappy" make build COMETBFT_BUILD_OPTIONS=cleveldb -``` - -which puts the binary in `./build`. 
diff --git a/docs/guides/quick-start.md b/docs/guides/quick-start.md deleted file mode 100644 index 21dcdc91d6..0000000000 --- a/docs/guides/quick-start.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -order: 2 ---- - -# Quick Start - -## Overview - -This is a quick start guide. If you have a vague idea about how CometBFT -works and want to get started right away, continue. - -## Install - -See the [install guide](./install.md). - -## Initialization - -Running: - -```sh -cometbft init -``` - -will create the required files for a single, local node. - -These files are found in `$HOME/.cometbft`: - -```sh -$ ls $HOME/.cometbft - -config data - -$ ls $HOME/.cometbft/config/ - -config.toml genesis.json node_key.json priv_validator.json -``` - -For a single, local node, no further configuration is required. -Configuring a cluster is covered further below. - -## Local Node - -Start CometBFT with a simple in-process application: - -```sh -cometbft node --proxy_app=kvstore -``` - -> Note: `kvstore` is a non persistent app, if you would like to run an application with persistence run `--proxy_app=persistent_kvstore` - -and blocks will start to stream in: - -```sh -I[01-06|01:45:15.592] Executed block module=state height=1 validTxs=0 invalidTxs=0 -I[01-06|01:45:15.624] Committed state module=state height=1 txs=0 appHash= -``` - -Check the status with: - -```sh -curl -s localhost:26657/status -``` - -### Sending Transactions - -With the KVstore app running, we can send transactions: - -```sh -curl -s 'localhost:26657/broadcast_tx_commit?tx="abcd"' -``` - -and check that it worked with: - -```sh -curl -s 'localhost:26657/abci_query?data="abcd"' -``` - -We can send transactions with a key and value too: - -```sh -curl -s 'localhost:26657/broadcast_tx_commit?tx="name=satoshi"' -``` - -and query the key: - -```sh -curl -s 'localhost:26657/abci_query?data="name"' -``` - -where the value is returned in hex. - -## Cluster of Nodes - -First create four Ubuntu cloud machines. The following was tested on Digital -Ocean Ubuntu 16.04 x64 (3GB/1CPU, 20GB SSD). We'll refer to their respective IP -addresses below as IP1, IP2, IP3, IP4. - -Then, `ssh` into each machine and install CometBFT following the [guide](./install.md). - -Next, use the `cometbft testnet` command to create four directories of config files (found in `./mytestnet`) and copy each directory to the relevant machine in the cloud, so that each machine has `$HOME/mytestnet/node[0-3]` directory. - -Before you can start the network, you'll need peers identifiers (IPs are not enough and can change). We'll refer to them as ID1, ID2, ID3, ID4. 
- -```sh -cometbft show_node_id --home ./mytestnet/node0 -cometbft show_node_id --home ./mytestnet/node1 -cometbft show_node_id --home ./mytestnet/node2 -cometbft show_node_id --home ./mytestnet/node3 -``` - -Finally, from each machine, run: - -```sh -cometbft node --home ./mytestnet/node0 --proxy_app=kvstore --p2p.persistent_peers="ID1@IP1:26656,ID2@IP2:26656,ID3@IP3:26656,ID4@IP4:26656" -cometbft node --home ./mytestnet/node1 --proxy_app=kvstore --p2p.persistent_peers="ID1@IP1:26656,ID2@IP2:26656,ID3@IP3:26656,ID4@IP4:26656" -cometbft node --home ./mytestnet/node2 --proxy_app=kvstore --p2p.persistent_peers="ID1@IP1:26656,ID2@IP2:26656,ID3@IP3:26656,ID4@IP4:26656" -cometbft node --home ./mytestnet/node3 --proxy_app=kvstore --p2p.persistent_peers="ID1@IP1:26656,ID2@IP2:26656,ID3@IP3:26656,ID4@IP4:26656" -``` - -Note that after the third node is started, blocks will start to stream in -because >2/3 of validators (defined in the `genesis.json`) have come online. -Persistent peers can also be specified in the `config.toml`. See [here](../core/configuration.md) for more information about configuration options. - -Transactions can then be sent as covered in the single, local node example above. diff --git a/docs/guides/upgrading-from-tm.md b/docs/guides/upgrading-from-tm.md deleted file mode 100644 index 2098485ddc..0000000000 --- a/docs/guides/upgrading-from-tm.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -order: 3 ---- - -# Upgrading from Tendermint Core - -CometBFT was originally forked from [Tendermint Core v0.34.24][v03424] and -subsequently updated in Informal Systems' public fork of Tendermint Core for -[v0.34.25][v03425] and [v0.34.26][v03426]. - -If you already make use of Tendermint Core (either the original Tendermint Core -v0.34.24, or Informal Systems' public fork), you can upgrade to CometBFT -v0.34.27 by replacing your dependency in your `go.mod` file: - -```bash -go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.34.27 -``` - -We make use of the original module URL in order to minimize the impact of -switching to CometBFT. This is only possible in our v0.34 release series, and we -will be switching our module URL to `github.com/cometbft/cometbft` in the next -major release. - -## Home directory - -CometBFT, by default, will consider its home directory in `~/.cometbft` from now -on instead of `~/.tendermint`. - -## Environment variables - -The environment variable prefixes have now changed from `TM` to `CMT`. For -example, `TMHOME` or `TM_HOME` become `CMTHOME` or `CMT_HOME`. - -We have implemented a fallback check in case `TMHOME` is still set and `CMTHOME` -is not, but you will start to see a warning message in the logs if the old -`TMHOME` variable is set. This fallback check will be removed entirely in a -subsequent major release of CometBFT. - -## Building CometBFT - -If you are building CometBFT from scratch, please note that it must be compiled -using Go 1.22 or higher. 
- -[v03424]: https://github.com/tendermint/tendermint/releases/tag/v0.34.24 -[v03425]: https://github.com/informalsystems/tendermint/releases/tag/v0.34.25 -[v03426]: https://github.com/informalsystems/tendermint/releases/tag/v0.34.26 diff --git a/docs/imgs/abci.png b/docs/imgs/abci.png deleted file mode 100644 index 73111cafd4..0000000000 Binary files a/docs/imgs/abci.png and /dev/null differ diff --git a/docs/imgs/consensus_logic.png b/docs/imgs/consensus_logic.png deleted file mode 100644 index 22b70b2657..0000000000 Binary files a/docs/imgs/consensus_logic.png and /dev/null differ diff --git a/docs/imgs/light_client_bisection_alg.png b/docs/imgs/light_client_bisection_alg.png deleted file mode 100644 index a960ee69f8..0000000000 Binary files a/docs/imgs/light_client_bisection_alg.png and /dev/null differ diff --git a/docs/imgs/sentry_layout.png b/docs/imgs/sentry_layout.png deleted file mode 100644 index 7d7dff44d6..0000000000 Binary files a/docs/imgs/sentry_layout.png and /dev/null differ diff --git a/docs/imgs/sentry_local_config.png b/docs/imgs/sentry_local_config.png deleted file mode 100644 index 4fdb2fe580..0000000000 Binary files a/docs/imgs/sentry_local_config.png and /dev/null differ diff --git a/docs/introduction/README.md b/docs/introduction/README.md deleted file mode 100644 index acafe992f5..0000000000 --- a/docs/introduction/README.md +++ /dev/null @@ -1,331 +0,0 @@ ---- -order: false -parent: - title: Introduction - order: 1 ---- - -# What is CometBFT - -CometBFT is software for securely and consistently replicating an -application on many machines. By securely, we mean that CometBFT works -as long as less than 1/3 of machines fail in arbitrary ways. By consistently, -we mean that every non-faulty machine sees the same transaction log and -computes the same state. Secure and consistent replication is a -fundamental problem in distributed systems; it plays a critical role in -the fault tolerance of a broad range of applications, from currencies, -to elections, to infrastructure orchestration, and beyond. - -The ability to tolerate machines failing in arbitrary ways, including -becoming malicious, is known as Byzantine fault tolerance (BFT). The -theory of BFT is decades old, but software implementations have only -became popular recently, due largely to the success of "blockchain -technology" like Bitcoin and Ethereum. Blockchain technology is just a -reformalization of BFT in a more modern setting, with emphasis on -peer-to-peer networking and cryptographic authentication. The name -derives from the way transactions are batched in blocks, where each -block contains a cryptographic hash of the previous one, forming a -chain. - -CometBFT consists of two chief technical components: a blockchain -consensus engine and a generic application interface. -The consensus engine, -which is based on [Tendermint consensus algorithm][tendermint-paper], -ensures that the same transactions are -recorded on every machine in the same order. The application interface, -called the Application BlockChain Interface (ABCI), delivers the transactions -to applications for processing. Unlike other -blockchain and consensus solutions, which come pre-packaged with built -in state machines (like a fancy key-value store, or a quirky scripting -language), developers can use CometBFT for BFT state machine -replication of applications written in whatever programming language and -development environment is right for them. 
- -CometBFT is designed to be easy-to-use, simple-to-understand, highly -performant, and useful for a wide variety of distributed applications. - -## CometBFT vs. X - -CometBFT is broadly similar to two classes of software. The first -class consists of distributed key-value stores, like Zookeeper, etcd, -and consul, which use non-BFT consensus. The second class is known as -"blockchain technology", and consists of both cryptocurrencies like -Bitcoin and Ethereum, and alternative distributed ledger designs like -Hyperledger's Burrow. - -### Zookeeper, etcd, consul - -Zookeeper, etcd, and consul are all implementations of key-value stores -atop a classical, non-BFT consensus algorithm. Zookeeper uses an -algorithm called Zookeeper Atomic Broadcast, while etcd and consul use -the Raft log replication algorithm. A -typical cluster contains 3-5 machines, and can tolerate crash failures -in less than 1/2 of the machines (e.g., 1 out of 3 or 2 out of 5), -but even a single Byzantine fault can jeopardize the whole system. - -Each offering provides a slightly different implementation of a -featureful key-value store, but all are generally focused around -providing basic services to distributed systems, such as dynamic -configuration, service discovery, locking, leader-election, and so on. - -CometBFT is in essence similar software, but with two key differences: - -- It is Byzantine Fault Tolerant, meaning it can only tolerate less than 1/3 - of machines failing, but those failures can include arbitrary behavior - - including hacking and malicious attacks. -- It does not specify a - particular application, like a fancy key-value store. Instead, it - focuses on arbitrary state machine replication, so developers can build - the application logic that's right for them, from key-value store to - cryptocurrency to e-voting platform and beyond. - -### Bitcoin, Ethereum, etc - -[Tendermint consensus algorithm][tendermint-paper], adopted by CometBFT, -emerged in the tradition of cryptocurrencies like Bitcoin, -Ethereum, etc. with the goal of providing a more efficient and secure -consensus algorithm than Bitcoin's Proof of Work. In the early days, -Tendermint consensus-based blockchains had a simple currency built in, and to participate in -consensus, users had to "bond" units of the currency into a security -deposit which could be revoked if they misbehaved -this is what made -Tendermint consensus a Proof-of-Stake algorithm. - -Since then, CometBFT has evolved to be a general purpose blockchain -consensus engine that can host arbitrary application states. That means -it can be used as a plug-and-play replacement for the consensus engines -of other blockchain software. So one can take the current Ethereum code -base, whether in Rust, or Go, or Haskell, and run it as an ABCI -application using CometBFT. Indeed, [we did that with -Ethereum](https://github.com/cosmos/ethermint). And we plan to do -the same for Bitcoin, ZCash, and various other deterministic -applications as well. - -Another example of a cryptocurrency application built on CometBFT is -[the Cosmos network](http://cosmos.network). - -### Other Blockchain Projects - -[Fabric](https://github.com/hyperledger/fabric) takes a similar approach -to CometBFT, but is more opinionated about how the state is managed, -and requires that all application behavior runs in potentially many -docker containers, modules it calls "chaincode". It uses an -implementation of [PBFT](http://pmg.csail.mit.edu/papers/osdi99.pdf). 
-from a team at IBM that is [augmented to handle potentially -non-deterministic -chaincode](https://drops.dagstuhl.de/opus/volltexte/2017/7093/pdf/LIPIcs-OPODIS-2016-24.pdf). -It is possible to implement this docker-based behavior as an ABCI app in -CometBFT, though extending CometBFT to handle non-determinism -remains for future work. - -[Burrow](https://github.com/hyperledger/burrow) is an implementation of -the Ethereum Virtual Machine and Ethereum transaction mechanics, with -additional features for a name-registry, permissions, and native -contracts, and an alternative blockchain API. It uses CometBFT as its -consensus engine, and provides a particular application state. - -## ABCI Overview - -The [Application BlockChain Interface -(ABCI)](https://github.com/cometbft/cometbft/tree/main/abci) -allows for Byzantine Fault Tolerant replication of applications -written in any programming language. - -### Motivation - -Thus far, all blockchains "stacks" (such as -[Bitcoin](https://github.com/bitcoin/bitcoin)) have had a monolithic -design. That is, each blockchain stack is a single program that handles -all the concerns of a decentralized ledger; this includes P2P -connectivity, the "mempool" broadcasting of transactions, consensus on -the most recent block, account balances, Turing-complete contracts, -user-level permissions, etc. - -Using a monolithic architecture is typically bad practice in computer -science. It makes it difficult to reuse components of the code, and -attempts to do so result in complex maintenance procedures for forks of -the codebase. This is especially true when the codebase is not modular -in design and suffers from "spaghetti code". - -Another problem with monolithic design is that it limits you to the -language of the blockchain stack (or vice versa). In the case of -Ethereum which supports a Turing-complete bytecode virtual-machine, it -limits you to languages that compile down to that bytecode; while the -[list](https://github.com/pirapira/awesome-ethereum-virtual-machine#programming-languages-that-compile-into-evm) -is growing, it is still very limited. - -In contrast, our approach is to decouple the consensus engine and P2P -layers from the details of the state of the particular -blockchain application. We do this by abstracting away the details of -the application to an interface, which is implemented as a socket -protocol. - -### Intro to ABCI - -[CometBFT](https://github.com/cometbft/cometbft), the -"consensus engine", communicates with the application via a socket -protocol that satisfies the ABCI, the CometBFT Socket Protocol. - -To draw an analogy, let's talk about a well-known cryptocurrency, -Bitcoin. Bitcoin is a cryptocurrency blockchain where each node -maintains a fully audited Unspent Transaction Output (UTXO) database. If -one wanted to create a Bitcoin-like system on top of ABCI, CometBFT -would be responsible for - -- Sharing blocks and transactions between nodes -- Establishing a canonical/immutable order of transactions - (the blockchain) - -The application will be responsible for - -- Maintaining the UTXO database -- Validating cryptographic signatures of transactions -- Preventing transactions from spending non-existent transactions -- Allowing clients to query the UTXO database. - -CometBFT is able to decompose the blockchain design by offering a very -simple API (i.e. the ABCI) between the application process and consensus -process. - -The ABCI consists of 3 primary message types that get delivered from the -core to the application. 
The application replies with corresponding -response messages. - -The messages are specified here: [ABCI Message -Types](https://github.com/cometbft/cometbft/blob/main/proto/tendermint/abci/types.proto). - -The **DeliverTx** message is the work horse of the application. Each -transaction in the blockchain is delivered with this message. The -application needs to validate each transaction received with the -**DeliverTx** message against the current state, application protocol, -and the cryptographic credentials of the transaction. A validated -transaction then needs to update the application state — by binding a -value into a key values store, or by updating the UTXO database, for -instance. - -The **CheckTx** message is similar to **DeliverTx**, but it's only for -validating transactions. CometBFT's mempool first checks the -validity of a transaction with **CheckTx**, and only relays valid -transactions to its peers. For instance, an application may check an -incrementing sequence number in the transaction and return an error upon -**CheckTx** if the sequence number is old. Alternatively, they might use -a capabilities based system that requires capabilities to be renewed -with every transaction. - -The **Commit** message is used to compute a cryptographic commitment to -the current application state, to be placed into the next block header. -This has some handy properties. Inconsistencies in updating that state -will now appear as blockchain forks which catches a whole class of -programming errors. This also simplifies the development of secure -lightweight clients, as Merkle-hash proofs can be verified by checking -against the block hash, and that the block hash is signed by a quorum. - -There can be multiple ABCI socket connections to an application. -CometBFT creates three ABCI connections to the application; one -for the validation of transactions when broadcasting in the mempool, one -for the consensus engine to run block proposals, and one more for -querying the application state. - -It's probably evident that applications designers need to very carefully -design their message handlers to create a blockchain that does anything -useful but this architecture provides a place to start. The diagram -below illustrates the flow of messages via ABCI. - -![abci](../imgs/abci.png) - -## A Note on Determinism - -The logic for blockchain transaction processing must be deterministic. -If the application logic weren't deterministic, consensus would not be -reached among the CometBFT replica nodes. - -Solidity on Ethereum is a great language of choice for blockchain -applications because, among other reasons, it is a completely -deterministic programming language. However, it's also possible to -create deterministic applications using existing popular languages like -Java, C++, Python, or Go, by avoiding -sources of non-determinism such as: - -- random number generators (without deterministic seeding) -- race conditions on threads (or avoiding threads altogether) -- the system clock -- uninitialized memory (in unsafe programming languages like C - or C++) -- [floating point - arithmetic](http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/) -- language features that are random (e.g. map iteration in Go) - -While programmers can avoid non-determinism by being careful, it is also -possible to create a special linter or static analyzer for each language -to check for determinism. In the future we may work with partners to -create such tools. 
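-
-To make one of these pitfalls concrete, here is a small, self-contained Go
-illustration (not CometBFT-specific) of the usual workaround for randomized
-map iteration: iterate over a sorted list of keys instead of over the map
-itself, so every replica processes entries in the same order.
-
-```go
-package main
-
-import (
-    "fmt"
-    "sort"
-)
-
-func main() {
-    balances := map[string]int{"carol": 7, "alice": 3, "bob": 5}
-
-    // Ranging over the map directly yields keys in a randomized order,
-    // which would be non-deterministic if the order affected the state.
-    keys := make([]string, 0, len(balances))
-    for k := range balances {
-        keys = append(keys, k)
-    }
-    sort.Strings(keys)
-
-    // Iterating over the sorted keys is deterministic on every replica.
-    for _, k := range keys {
-        fmt.Println(k, balances[k])
-    }
-}
-```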
- -## Consensus Overview - -CometBFT adopts [Tendermint consensus][tendermint-paper], -an easy-to-understand, mostly asynchronous, BFT consensus algorithm. -The algorithm follows a simple state machine that looks like this: - -![consensus-logic](../imgs/consensus_logic.png) - -Participants in the algorithm are called **validators**; they take turns -proposing blocks of transactions and voting on them. Blocks are -committed in a chain, with one block at each **height**. A block may -fail to be committed, in which case the algorithm moves to the next -**round**, and a new validator gets to propose a block for that height. -Two stages of voting are required to successfully commit a block; we -call them **pre-vote** and **pre-commit**. - -There is a picture of a couple doing the polka because validators are -doing something like a polka dance. When more than two-thirds of the -validators pre-vote for the same block, we call that a **polka**. Every -pre-commit must be justified by a polka in the same round. -A block is committed when -more than 2/3 of validators pre-commit for the same block in the same -round. - -Validators may fail to commit a block for a number of reasons; the -current proposer may be offline, or the network may be slow. Tendermint consensus -allows them to establish that a validator should be skipped. Validators -wait a small amount of time to receive a complete proposal block from -the proposer before voting to move to the next round. This reliance on a -timeout is what makes Tendermint consensus a weakly synchronous algorithm, rather -than an asynchronous one. However, the rest of the algorithm is -asynchronous, and validators only make progress after hearing from more -than two-thirds of the validator set. A simplifying element of -Tendermint consensus is that it uses the same mechanism to commit a block as it -does to skip to the next round. - -Assuming less than one-third of the validators are Byzantine, Tendermint consensus algorithm -guarantees that safety will never be violated - that is, validators will -never commit conflicting blocks at the same height. To do this it -introduces a few **locking** rules which modulate which paths can be -followed in the flow diagram. Once a validator precommits a block, it is -locked on that block. Then, - -1. it must prevote for the block it is locked on -2. it can only unlock, and precommit for a new block, if there is a - polka for that block in a later round - -## Stake - -In many systems, not all validators will have the same "weight" in the -consensus protocol. Thus, we are not so much interested in one-third or -two-thirds of the validators, but in those proportions of the total -voting power, which may not be uniformly distributed across individual -validators. - -Since CometBFT can replicate arbitrary applications, it is possible to -define a currency, and denominate the voting power in that currency. -When voting power is denominated in a native currency, the system is -often referred to as Proof-of-Stake. Validators can be forced, by logic -in the application, to "bond" their currency holdings in a security -deposit that can be destroyed if they're found to misbehave in the -consensus protocol. This adds an economic element to the security of the -protocol, allowing one to quantify the cost of violating the assumption -that less than one-third of voting power is Byzantine. 
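-
-As a purely illustrative sketch (the validator names and voting powers below
-are invented), the commit rule is evaluated over voting power rather than over
-the number of validators:
-
-```go
-package main
-
-import "fmt"
-
-func main() {
-    powers := map[string]int64{"v1": 10, "v2": 20, "v3": 30, "v4": 40}
-
-    var total int64
-    for _, p := range powers {
-        total += p
-    }
-
-    // Suppose only v2, v3 and v4 pre-committed the block in this round.
-    precommitted := []string{"v2", "v3", "v4"}
-    var sum int64
-    for _, v := range precommitted {
-        sum += powers[v]
-    }
-
-    // More than 2/3 of the total voting power is required; comparing
-    // 3*sum > 2*total avoids fractions: 3*90 > 2*100, so the block commits.
-    fmt.Println("committed:", 3*sum > 2*total)
-}
-```
-
-Here three out of four validators also happen to hold more than 2/3 of the
-voting power, but with powers of, say, 70/10/10/10, the single large validator
-could alone enable or block commits regardless of how many of the others vote.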
-
-The [Cosmos Network](https://cosmos.network) is designed to use this
-Proof-of-Stake mechanism across an array of cryptocurrencies implemented
-as ABCI applications.
-
-[tendermint-paper]: https://arxiv.org/abs/1807.04938
diff --git a/docs/introduction/upgrading-from-tm.md b/docs/introduction/upgrading-from-tm.md
deleted file mode 100644
index 8789f40a1c..0000000000
--- a/docs/introduction/upgrading-from-tm.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-order: 4
----
-
-# Upgrading from Tendermint Core
-
-CometBFT was originally forked from [Tendermint Core v0.34.24][v03424] and
-subsequently updated in Informal Systems' public fork of Tendermint Core for
-[v0.34.25][v03425] and [v0.34.26][v03426].
-
-If you already make use of Tendermint Core (either the original Tendermint Core
-v0.34.24, or Informal Systems' public fork), you can upgrade to CometBFT
-v0.34.27 by replacing the dependency in your `go.mod` file:
-
-```bash
-go mod edit -replace github.com/tendermint/tendermint=github.com/cometbft/cometbft@v0.34.27
-```
-
-We make use of the original module URL in order to minimize the impact of
-switching to CometBFT. This is only possible in our v0.34 release series, and we
-will be switching our module URL to `github.com/cometbft/cometbft` in the next
-major release.
-
-## Home directory
-
-CometBFT will, by default, use `~/.cometbft` as its home directory from now
-on, instead of `~/.tendermint`.
-
-## Environment variables
-
-The environment variable prefixes have now changed from `TM` to `CMT`. For
-example, `TMHOME` or `TM_HOME` become `CMTHOME` or `CMT_HOME`.
-
-We have implemented a fallback check in case `TMHOME` is still set and `CMTHOME`
-is not, but you will start to see a warning message in the logs if the old
-`TMHOME` variable is set. This fallback check will be removed entirely in a
-subsequent major release of CometBFT.
-
-## Building CometBFT
-
-If you are building CometBFT from scratch, please note that it must be compiled
-using Go 1.19 or higher. The use of Go 1.18 is not supported, since this version
-has reached end-of-life with the release of Go 1.20.
-
-[v03424]: https://github.com/tendermint/tendermint/releases/tag/v0.34.24
-[v03425]: https://github.com/informalsystems/tendermint/releases/tag/v0.34.25
-[v03426]: https://github.com/informalsystems/tendermint/releases/tag/v0.34.26
diff --git a/docs/introduction/what-is-cometbft.md b/docs/introduction/what-is-cometbft.md
deleted file mode 100644
index a694b9ad5e..0000000000
--- a/docs/introduction/what-is-cometbft.md
+++ /dev/null
@@ -1,332 +0,0 @@
----
-order: 5
----
-
-# What is CometBFT
-
-CometBFT is software for securely and consistently replicating an
-application on many machines. By securely, we mean that CometBFT works
-as long as less than 1/3 of machines fail in arbitrary ways. By consistently,
-we mean that every non-faulty machine sees the same transaction log and
-computes the same state. Secure and consistent replication is a
-fundamental problem in distributed systems; it plays a critical role in
-the fault tolerance of a broad range of applications, from currencies,
-to elections, to infrastructure orchestration, and beyond.
-
-The ability to tolerate machines failing in arbitrary ways, including
-becoming malicious, is known as Byzantine fault tolerance (BFT). The
-theory of BFT is decades old, but software implementations have only
-become popular recently, due largely to the success of "blockchain
-technology" like Bitcoin and Ethereum.
Blockchain technology is just a -reformalization of BFT in a more modern setting, with emphasis on -peer-to-peer networking and cryptographic authentication. The name -derives from the way transactions are batched in blocks, where each -block contains a cryptographic hash of the previous one, forming a -chain. In practice, the blockchain data structure actually optimizes BFT -design. - -CometBFT consists of two chief technical components: a blockchain -consensus engine and a generic application interface. -The consensus engine, -which is based on [Tendermint consensus algorithm][tendermint-paper], -ensures that the same transactions are -recorded on every machine in the same order. The application interface, -called the Application BlockChain Interface (ABCI), enables the -transactions to be processed in any programming language. Unlike other -blockchain and consensus solutions, which come pre-packaged with built -in state machines (like a fancy key-value store, or a quirky scripting -language), developers can use CometBFT for BFT state machine -replication of applications written in whatever programming language and -development environment is right for them. - -CometBFT is designed to be easy-to-use, simple-to-understand, highly -performant, and useful for a wide variety of distributed applications. - -## CometBFT vs. X - -CometBFT is broadly similar to two classes of software. The first -class consists of distributed key-value stores, like Zookeeper, etcd, -and consul, which use non-BFT consensus. The second class is known as -"blockchain technology", and consists of both cryptocurrencies like -Bitcoin and Ethereum, and alternative distributed ledger designs like -Hyperledger's Burrow. - -### Zookeeper, etcd, consul - -Zookeeper, etcd, and consul are all implementations of a key-value store -atop a classical, non-BFT consensus algorithm. Zookeeper uses a version -of Paxos called Zookeeper Atomic Broadcast, while etcd and consul use -the Raft consensus algorithm, which is much younger and simpler. A -typical cluster contains 3-5 machines, and can tolerate crash failures -in up to 1/2 of the machines, but even a single Byzantine fault can -destroy the system. - -Each offering provides a slightly different implementation of a -featureful key-value store, but all are generally focused around -providing basic services to distributed systems, such as dynamic -configuration, service discovery, locking, leader-election, and so on. - -CometBFT is in essence similar software, but with two key differences: - -- It is Byzantine Fault Tolerant, meaning it can only tolerate less than 1/3 - of machines failing, but those failures can include arbitrary behavior - - including hacking and malicious attacks. -- It does not specify a - particular application, like a fancy key-value store. Instead, it - focuses on arbitrary state machine replication, so developers can build - the application logic that's right for them, from key-value store to - cryptocurrency to e-voting platform and beyond. - -### Bitcoin, Ethereum, etc - -[Tendermint consensus algorithm][tendermint-paper], adopted by CometBFT, -emerged in the tradition of cryptocurrencies like Bitcoin, -Ethereum, etc. with the goal of providing a more efficient and secure -consensus algorithm than Bitcoin's Proof of Work. 
In the early days, -Tendermint consensus-based blockchains had a simple currency built in, and to participate in -consensus, users had to "bond" units of the currency into a security -deposit which could be revoked if they misbehaved -this is what made -Tendermint consensus a Proof-of-Stake algorithm. - -Since then, CometBFT has evolved to be a general purpose blockchain -consensus engine that can host arbitrary application states. That means -it can be used as a plug-and-play replacement for the consensus engines -of other blockchain software. So one can take the current Ethereum code -base, whether in Rust, or Go, or Haskell, and run it as an ABCI -application using CometBFT. Indeed, [we did that with -Ethereum](https://github.com/cosmos/ethermint). And we plan to do -the same for Bitcoin, ZCash, and various other deterministic -applications as well. - -Another example of a cryptocurrency application built on CometBFT is -[the Cosmos network](http://cosmos.network). - -### Other Blockchain Projects - -[Fabric](https://github.com/hyperledger/fabric) takes a similar approach -to CometBFT, but is more opinionated about how the state is managed, -and requires that all application behavior runs in potentially many -docker containers, modules it calls "chaincode". It uses an -implementation of [PBFT](http://pmg.csail.mit.edu/papers/osdi99.pdf). -from a team at IBM that is [augmented to handle potentially -non-deterministic -chaincode](https://drops.dagstuhl.de/opus/volltexte/2017/7093/pdf/LIPIcs-OPODIS-2016-24.pdf). -It is possible to implement this docker-based behavior as an ABCI app in -CometBFT, though extending CometBFT to handle non-determinism -remains for future work. - -[Burrow](https://github.com/hyperledger/burrow) is an implementation of -the Ethereum Virtual Machine and Ethereum transaction mechanics, with -additional features for a name-registry, permissions, and native -contracts, and an alternative blockchain API. It uses CometBFT as its -consensus engine, and provides a particular application state. - -## ABCI Overview - -The [Application BlockChain Interface -(ABCI)](https://github.com/cometbft/cometbft/tree/v0.34.x/abci) -allows for Byzantine Fault Tolerant replication of applications -written in any programming language. - -### Motivation - -Thus far, all blockchains "stacks" (such as -[Bitcoin](https://github.com/bitcoin/bitcoin)) have had a monolithic -design. That is, each blockchain stack is a single program that handles -all the concerns of a decentralized ledger; this includes P2P -connectivity, the "mempool" broadcasting of transactions, consensus on -the most recent block, account balances, Turing-complete contracts, -user-level permissions, etc. - -Using a monolithic architecture is typically bad practice in computer -science. It makes it difficult to reuse components of the code, and -attempts to do so result in complex maintenance procedures for forks of -the codebase. This is especially true when the codebase is not modular -in design and suffers from "spaghetti code". - -Another problem with monolithic design is that it limits you to the -language of the blockchain stack (or vice versa). In the case of -Ethereum which supports a Turing-complete bytecode virtual-machine, it -limits you to languages that compile down to that bytecode; today, those -are Serpent and Solidity. - -In contrast, our approach is to decouple the consensus engine and P2P -layers from the details of the application state of the particular -blockchain application. 
We do this by abstracting away the details of -the application to an interface, which is implemented as a socket -protocol. - -Thus we have an interface, the Application BlockChain Interface (ABCI), -and its primary implementation, the Tendermint Socket Protocol (TSP, or -Teaspoon). - -### Intro to ABCI - -[CometBFT](https://github.com/cometbft/cometbft), the -"consensus engine", communicates with the application via a socket -protocol that satisfies the ABCI, the CometBFT Socket Protocol. - -To draw an analogy, let's talk about a well-known cryptocurrency, -Bitcoin. Bitcoin is a cryptocurrency blockchain where each node -maintains a fully audited Unspent Transaction Output (UTXO) database. If -one wanted to create a Bitcoin-like system on top of ABCI, CometBFT -would be responsible for - -- Sharing blocks and transactions between nodes -- Establishing a canonical/immutable order of transactions - (the blockchain) - -The application will be responsible for - -- Maintaining the UTXO database -- Validating cryptographic signatures of transactions -- Preventing transactions from spending non-existent transactions -- Allowing clients to query the UTXO database. - -CometBFT is able to decompose the blockchain design by offering a very -simple API (i.e. the ABCI) between the application process and consensus -process. - -The ABCI consists of 3 primary message types that get delivered from the -core to the application. The application replies with corresponding -response messages. - -The messages are specified here: [ABCI Message -Types](https://github.com/cometbft/cometbft/blob/v0.34.x/proto/tendermint/abci/types.proto). - -The **DeliverTx** message is the work horse of the application. Each -transaction in the blockchain is delivered with this message. The -application needs to validate each transaction received with the -**DeliverTx** message against the current state, application protocol, -and the cryptographic credentials of the transaction. A validated -transaction then needs to update the application state — by binding a -value into a key values store, or by updating the UTXO database, for -instance. - -The **CheckTx** message is similar to **DeliverTx**, but it's only for -validating transactions. CometBFT's mempool first checks the -validity of a transaction with **CheckTx**, and only relays valid -transactions to its peers. For instance, an application may check an -incrementing sequence number in the transaction and return an error upon -**CheckTx** if the sequence number is old. Alternatively, they might use -a capabilities based system that requires capabilities to be renewed -with every transaction. - -The **Commit** message is used to compute a cryptographic commitment to -the current application state, to be placed into the next block header. -This has some handy properties. Inconsistencies in updating that state -will now appear as blockchain forks which catches a whole class of -programming errors. This also simplifies the development of secure -lightweight clients, as Merkle-hash proofs can be verified by checking -against the block hash, and that the block hash is signed by a quorum. - -There can be multiple ABCI socket connections to an application. -CometBFT creates three ABCI connections to the application; one -for the validation of transactions when broadcasting in the mempool, one -for the consensus engine to run block proposals, and one more for -querying the application state. 
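-
-To make the role of the **Commit** message more concrete, here is a minimal,
-hypothetical sketch (the `DemoApp` type and its in-memory state are invented
-for illustration) of an application returning a deterministic hash of its
-state, so that any divergence between replicas surfaces as a different app
-hash in the next block header:
-
-```go
-package main
-
-import (
-    "crypto/sha256"
-    "fmt"
-    "sort"
-
-    abcitypes "github.com/cometbft/cometbft/abci/types"
-)
-
-// DemoApp keeps its whole state in memory purely for illustration.
-type DemoApp struct {
-    state map[string]string
-}
-
-// Commit hashes the state in a fixed key order and returns the digest as the
-// application hash.
-func (app *DemoApp) Commit() abcitypes.ResponseCommit {
-    keys := make([]string, 0, len(app.state))
-    for k := range app.state {
-        keys = append(keys, k)
-    }
-    sort.Strings(keys)
-
-    h := sha256.New()
-    for _, k := range keys {
-        h.Write([]byte(k))
-        h.Write([]byte(app.state[k]))
-    }
-    return abcitypes.ResponseCommit{Data: h.Sum(nil)}
-}
-
-func main() {
-    app := &DemoApp{state: map[string]string{"cometbft": "rocks"}}
-    fmt.Printf("app hash: %X\n", app.Commit().Data)
-}
-```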
- -It's probably evident that applications designers need to very carefully -design their message handlers to create a blockchain that does anything -useful but this architecture provides a place to start. The diagram -below illustrates the flow of messages via ABCI. - -![abci](../imgs/abci.png) - -## A Note on Determinism - -The logic for blockchain transaction processing must be deterministic. -If the application logic weren't deterministic, consensus would not be -reached among the CometBFT replica nodes. - -Solidity on Ethereum is a great language of choice for blockchain -applications because, among other reasons, it is a completely -deterministic programming language. However, it's also possible to -create deterministic applications using existing popular languages like -Java, C++, Python, or Go. Game programmers and blockchain developers are -already familiar with creating deterministic programs by avoiding -sources of non-determinism such as: - -- random number generators (without deterministic seeding) -- race conditions on threads (or avoiding threads altogether) -- the system clock -- uninitialized memory (in unsafe programming languages like C - or C++) -- [floating point - arithmetic](http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/) -- language features that are random (e.g. map iteration in Go) - -While programmers can avoid non-determinism by being careful, it is also -possible to create a special linter or static analyzer for each language -to check for determinism. In the future we may work with partners to -create such tools. - -## Consensus Overview - -CometBFT adopts [Tendermint consensus][tendermint-paper], -an easy-to-understand, mostly asynchronous, BFT consensus algorithm. -The algorithm follows a simple state machine that looks like this: - -![consensus-logic](../imgs/consensus_logic.png) - -Participants in the algorithm are called **validators**; they take turns -proposing blocks of transactions and voting on them. Blocks are -committed in a chain, with one block at each **height**. A block may -fail to be committed, in which case the algorithm moves to the next -**round**, and a new validator gets to propose a block for that height. -Two stages of voting are required to successfully commit a block; we -call them **pre-vote** and **pre-commit**. A block is committed when -more than 2/3 of validators pre-commit for the same block in the same -round. - -There is a picture of a couple doing the polka because validators are -doing something like a polka dance. When more than two-thirds of the -validators pre-vote for the same block, we call that a **polka**. Every -pre-commit must be justified by a polka in the same round. - -Validators may fail to commit a block for a number of reasons; the -current proposer may be offline, or the network may be slow. Tendermint consensus -allows them to establish that a validator should be skipped. Validators -wait a small amount of time to receive a complete proposal block from -the proposer before voting to move to the next round. This reliance on a -timeout is what makes Tendermint consensus a weakly synchronous algorithm, rather -than an asynchronous one. However, the rest of the algorithm is -asynchronous, and validators only make progress after hearing from more -than two-thirds of the validator set. A simplifying element of -Tendermint consensus is that it uses the same mechanism to commit a block as it -does to skip to the next round. 
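Because votes are weighted by voting power (see the Stake section below) rather than counted per validator, the "more than 2/3" checks above are power-weighted. The sketch below is illustrative only, not CometBFT's implementation; it shows the integer comparison that expresses "strictly more than two-thirds" without resorting to floating-point arithmetic.

```go
package quorum

// HasTwoThirdsMajority reports whether the pre-commits gathered for a block
// represent strictly more than 2/3 of the total voting power.
// Illustrative helper only; not taken from the CometBFT code base.
func HasTwoThirdsMajority(precommits map[string]int64, totalPower int64) bool {
	var committed int64
	for _, power := range precommits { // iteration order is irrelevant for a sum
		committed += power
	}
	// committed/total > 2/3  <=>  3*committed > 2*total, using integers only,
	// which also avoids the floating-point non-determinism discussed above.
	return 3*committed > 2*totalPower
}
```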
- -Assuming less than one-third of the validators are Byzantine, Tendermint consensus algorithm -guarantees that safety will never be violated - that is, validators will -never commit conflicting blocks at the same height. To do this it -introduces a few **locking** rules which modulate which paths can be -followed in the flow diagram. Once a validator precommits a block, it is -locked on that block. Then, - -1. it must prevote for the block it is locked on -2. it can only unlock, and precommit for a new block, if there is a - polka for that block in a later round - -## Stake - -In many systems, not all validators will have the same "weight" in the -consensus protocol. Thus, we are not so much interested in one-third or -two-thirds of the validators, but in those proportions of the total -voting power, which may not be uniformly distributed across individual -validators. - -Since CometBFT can replicate arbitrary applications, it is possible to -define a currency, and denominate the voting power in that currency. -When voting power is denominated in a native currency, the system is -often referred to as Proof-of-Stake. Validators can be forced, by logic -in the application, to "bond" their currency holdings in a security -deposit that can be destroyed if they're found to misbehave in the -consensus protocol. This adds an economic element to the security of the -protocol, allowing one to quantify the cost of violating the assumption -that less than one-third of voting power is Byzantine. - -The [Cosmos Network](https://cosmos.network) is designed to use this -Proof-of-Stake mechanism across an array of cryptocurrencies implemented -as ABCI applications. - -[tendermint-paper]: https://arxiv.org/abs/1807.04938 diff --git a/docs/networks/README.md b/docs/networks/README.md deleted file mode 100644 index ceea235985..0000000000 --- a/docs/networks/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -order: 1 -parent: - title: Networks - order: 5 ---- - -# Overview - -Use [Docker Compose](./docker-compose.md) to spin up CometBFT testnets on your -local machine. - -See the `cometbft testnet --help` command for more help initializing testnets. diff --git a/docs/networks/docker-compose.md b/docs/networks/docker-compose.md deleted file mode 100644 index a8aba45e4c..0000000000 --- a/docs/networks/docker-compose.md +++ /dev/null @@ -1,179 +0,0 @@ ---- -order: 2 ---- - -# Docker Compose - -With Docker Compose, you can spin up local testnets with a single command. - -## Requirements - -1. [Install CometBFT](../introduction/install.md) -2. [Install docker](https://docs.docker.com/engine/installation/) -3. [Install docker-compose](https://docs.docker.com/compose/install/) - -## Build - -Build the `cometbft` binary and, optionally, the `cometbft/localnode` -docker image. - -Note the binary will be mounted into the container so it can be updated without -rebuilding the image. - -```sh -# Build the linux binary in ./build -make build-linux - -# (optionally) Build cometbft/localnode image -make build-docker-localnode -``` - -## Run a testnet - -To start a 4 node testnet run: - -```sh -make localnet-start -``` - -The nodes bind their RPC servers to ports 26657, 26660, 26662, and 26664 on the -host. - -This file creates a 4-node network using the localnode image. - -The nodes of the network expose their P2P and RPC endpoints to the host machine -on ports 26656-26657, 26659-26660, 26661-26662, and 26663-26664 respectively. 
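As a quick sanity check that the local testnet is making progress, you can query each node's RPC `/status` endpoint on the ports listed above (for example, `curl localhost:26657/status`). The Go snippet below is a small illustrative helper, not part of the repository, that prints the latest block height reported by each node.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// statusResponse mirrors only the fields of the /status RPC response we need.
type statusResponse struct {
	Result struct {
		SyncInfo struct {
			LatestBlockHeight string `json:"latest_block_height"`
		} `json:"sync_info"`
	} `json:"result"`
}

func main() {
	// RPC ports exposed on the host by the 4-node docker-compose testnet.
	for _, port := range []int{26657, 26660, 26662, 26664} {
		resp, err := http.Get(fmt.Sprintf("http://localhost:%d/status", port))
		if err != nil {
			fmt.Printf("node on port %d: unreachable (%v)\n", port, err)
			continue
		}
		var status statusResponse
		if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
			fmt.Printf("node on port %d: unexpected response (%v)\n", port, err)
		} else {
			fmt.Printf("node on port %d: height %s\n", port, status.Result.SyncInfo.LatestBlockHeight)
		}
		resp.Body.Close()
	}
}
```

If the reported heights keep increasing across runs, the testnet is producing blocks.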
- -To update the binary, just rebuild it and restart the nodes: - -```sh -make build-linux -make localnet-start -``` - -## Configuration - -The `make localnet-start` creates files for a 4-node testnet in `./build` by -calling the `cometbft testnet` command. - -The `./build` directory is mounted to the `/cometbft` mount point to attach -the binary and config files to the container. - -To change the number of validators / non-validators change the `localnet-start` Makefile target [here](../../Makefile): - -```makefile -localnet-start: localnet-stop - @if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/cometbft:Z cometbft/localnode testnet --v 5 --n 3 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi - docker compose up -d -``` - -The command now will generate config files for 5 validators and 3 -non-validators. Along with generating new config files the docker-compose file needs to be edited. -Adding 4 more nodes is required in order to fully utilize the config files that were generated. - -```yml - node3: # bump by 1 for every node - container_name: node3 # bump by 1 for every node - image: "cometbft/localnode" - environment: - - ID=3 - - LOG=${LOG:-cometbft.log} - ports: - - "26663-26664:26656-26657" # Bump 26663-26664 by one for every node - volumes: - - ./build:/cometbft:Z - networks: - localnet: - ipv4_address: 192.167.10.5 # bump the final digit by 1 for every node -``` - -Before running it, don't forget to cleanup the old files: - -```sh -# Clear the build folder -rm -rf ./build/node* -``` - -## Configuring ABCI containers - -To use your own ABCI applications with 4-node setup edit the [docker-compose.yaml](https://github.com/cometbft/cometbft/blob/v0.34.x/docker-compose.yml) file and add images to your ABCI application. - -```yml - abci0: - container_name: abci0 - image: "abci-image" - build: - context: . - dockerfile: abci.Dockerfile - command: - networks: - localnet: - ipv4_address: 192.167.10.6 - - abci1: - container_name: abci1 - image: "abci-image" - build: - context: . - dockerfile: abci.Dockerfile - command: - networks: - localnet: - ipv4_address: 192.167.10.7 - - abci2: - container_name: abci2 - image: "abci-image" - build: - context: . - dockerfile: abci.Dockerfile - command: - networks: - localnet: - ipv4_address: 192.167.10.8 - - abci3: - container_name: abci3 - image: "abci-image" - build: - context: . - dockerfile: abci.Dockerfile - command: - networks: - localnet: - ipv4_address: 192.167.10.9 - -``` - -Override the [command](https://github.com/cometbft/cometbft/blob/v0.34.x/networks/local/localnode/Dockerfile#L11) in each node to connect to it's ABCI. - -```yml - node0: - container_name: node0 - image: "cometbft/localnode" - ports: - - "26656-26657:26656-26657" - environment: - - ID=0 - - LOG=$${LOG:-cometbft.log} - volumes: - - ./build:/cometbft:Z - command: node --proxy_app=tcp://abci0:26658 - networks: - localnet: - ipv4_address: 192.167.10.2 -``` - -Similarly do for node1, node2 and node3 then [run testnet](#run-a-testnet). - -## Logging - -Log is saved under the attached volume, in the `cometbft.log` file. If the -`LOG` environment variable is set to `stdout` at start, the log is not saved, -but printed on the screen. - -## Special binaries - -If you have multiple binaries with different names, you can specify which one -to run with the `BINARY` environment variable. The path of the binary is relative -to the attached volume. 
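If your ABCI applications are written in Go, the entry point of each `abci*` container in the "Configuring ABCI containers" section above can be a small program that serves the application on the socket address the node's `--proxy_app` flag points at (`tcp://abci0:26658` for `node0`). The sketch below is a hedged example: it reuses the `kvstore` sample application and the socket server shipped in the repository's `abci` packages, and assumes the `v0.34.x` module path; substitute your own application and adjust the imports as needed.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"

	"github.com/cometbft/cometbft/abci/example/kvstore"
	abciserver "github.com/cometbft/cometbft/abci/server"
)

func main() {
	// node0 is started with --proxy_app=tcp://abci0:26658,
	// so the ABCI server must listen on port 26658 inside the abci0 container.
	app := kvstore.NewApplication()
	srv := abciserver.NewSocketServer("tcp://0.0.0.0:26658", app)
	if err := srv.Start(); err != nil {
		fmt.Fprintf(os.Stderr, "failed to start ABCI server: %v\n", err)
		os.Exit(1)
	}
	defer srv.Stop()

	// Serve until the container receives a stop signal.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	<-sigCh
}
```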
diff --git a/docs/qa/CometBFT-QA-34.md b/docs/qa/CometBFT-QA-34.md deleted file mode 100644 index d633426407..0000000000 --- a/docs/qa/CometBFT-QA-34.md +++ /dev/null @@ -1,370 +0,0 @@ ---- -order: 1 -parent: - title: CometBFT QA Results v0.34.x - description: This is a report on the results obtained when running v0.34.x on testnets - order: 3 ---- - -# CometBFT QA Results v0.34.x - -## v0.34.x - From Tendermint Core to CometBFT - -This section reports on the QA process we followed before releasing the first `v0.34.x` version -from our CometBFT repository. - -The changes with respect to the last version of `v0.34.x` -(namely `v0.34.26`, released from the Informal Systems' Tendermint Core fork) -are minimal, and focus on rebranding our fork of Tendermint Core to CometBFT at places -where there is no substantial risk of breaking compatibility -with earlier Tendermint Core versions of `v0.34.x`. - -Indeed, CometBFT versions of `v0.34.x` (`v0.34.27` and subsequent) should fulfill -the following compatibility-related requirements. - -* Operators can easily upgrade a `v0.34.x` version of Tendermint Core to CometBFT. -* Upgrades from Tendermint Core to CometBFT can be uncoordinated for versions of the `v0.34.x` branch. -* Nodes running CometBFT must be interoperable with those running Tendermint Core in the same chain, - as long as all are running a `v0.34.x` version. - -These QA tests focus on the third bullet, whereas the first two bullets are tested using our _e2e tests_. - -It would be prohibitively time consuming to test mixed networks of all combinations of existing `v0.34.x` -versions, combined with the CometBFT release candidate under test. -Therefore our testing focuses on the last Tendermint Core version (`v0.34.26`) and the CometBFT release -candidate under test. - -We run the _200 node test_, but not the _rotating node test_. The effort of running the latter -is not justified given the amount and nature of the changes we are testing with respect to the -full QA cycle run previously on `v0.34.x`. -Since the changes to the system's logic are minimal, we are interested in these performance requirements: - -* The CometBFT release candidate under test performs similarly to Tendermint Core (i.e., the baseline) - * when used at scale (i.e., in a large network of CometBFT nodes) - * when used at scale in a mixed network (i.e., some nodes are running CometBFT - and others are running an older Tendermint Core version) - -Therefore we carry out a complete run of the _200-node test_ on the following networks: - -* A homogeneous 200-node testnet, where all nodes are running the CometBFT release candidate under test. -* A mixed network where 1/2 (99 out of 200) of the nodes are running the CometBFT release candidate under test, - and the rest (101 out of 200) are running Tendermint Core `v0.34.26`. -* A mixed network where 1/3 (66 out of 200) of the nodes are running the CometBFT release candidate under test, - and the rest (134 out of 200) are running Tendermint Core `v0.34.26`. -* A mixed network where 2/3 (133 out of 200) of the nodes are running the CometBFT release candidate under test, - and the rest (67 out of 200) are running Tendermint Core `v0.34.26`. - -## Configuration and Results -In the following sections we provide the results of the _200 node test_. -Each section reports the baseline results (for reference), the homogeneous network scenario (all CometBFT nodes), -and the mixed networks with 1/2, 1/3 and 2/3 of Tendermint Core nodes. 
- -### Saturation Point - -As the CometBFT release candidate under test has minimal changes -with respect to Tendermint Core `v0.34.26`, other than the rebranding changes, -we can confidently reuse the results from the `v0.34.x` baseline test regarding -the [saturation point](TMCore-QA-34.md#finding-the-saturation-point). - -Therefore, we will simply use a load of (`r=200,c=2`) -(see the explanation [here](TMCore-QA-34.md#finding-the-saturation-point)) on all experiments. - -We also include the baseline results for quick reference and comparison. - -### Experiments - -On each of the networks, the test consists of 4 experiments, with the goal of -ensuring the data obtained is consistent across experiments. - -On each of the networks, we pick only one representative run to present and discuss the -results. - - -## Examining latencies -For each network, the figures below plot the four experiments carried out with that network. -We can see that the latencies follow comparable patterns across all experiments. - -The unique identifier (UUID) of each execution is presented on top of each graph. -We use these UUIDs to refer to the representative runs. - -### CometBFT Homogeneous network - -![latencies](img34/homogeneous/all_experiments.png) - -### 1/2 Tendermint Core - 1/2 CometBFT - -![latencies](img34/cmt1tm1/all_experiments.png) - -### 1/3 Tendermint Core - 2/3 CometBFT - -![latencies](img34/cmt2tm1/all_experiments.png) - -### 2/3 Tendermint Core - 1/3 CometBFT - -![latencies_all_tm2_3_cmt1_3](img34/v034_200node_tm2cmt1/all_experiments.png) - - -## Prometheus Metrics - -This section reports on the key Prometheus metrics extracted from the following experiments. - -* Baseline results: `v0.34.x`, obtained in October 2022 and reported [here](TMCore-QA-34.md). -* CometBFT homogeneous network: experiment with UUID starting with `be8c`. -* Mixed network, 1/2 Tendermint Core `v0.34.26` and 1/2 running CometBFT: experiment with UUID starting with `04ee`. -* Mixed network, 1/3 Tendermint Core `v0.34.26` and 2/3 running CometBFT: experiment with UUID starting with `fc5e`. -* Mixed network, 2/3 Tendermint Core `v0.34.26` and 1/3 running CometBFT: experiment with UUID starting with `4759`. - -We make explicit comparisons between the baseline and the homogeneous setups, but refrain from -commenting on the mixed network experiments unless they show exceptional results. - -### Mempool Size - -For each reported experiment we show two graphs. -The first shows the evolution over time of the cumulative number of transactions -inside all full nodes' mempools at a given time. - -The second one shows the evolution of the average over all full nodes. - -#### Baseline - -![mempool-cumulative](img34/baseline/mempool_size.png) - -![mempool-avg](img34/baseline/avg_mempool_size.png) - -#### CometBFT Homogeneous network - -The results for the homogeneous network and the baseline are similar in terms of outstanding transactions. 
- -![mempool-cumulative-homogeneous](img34/homogeneous/mempool_size.png) - -![mempool-avg-homogeneous](img34/homogeneous/avg_mempool_size.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![mempool size](img34/cmt1tm1/mempool_size.png) - -![average mempool size](img34/cmt1tm1/avg_mempool_size.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![mempool size](img34/cmt2tm1/mempool_size.png) - -![average mempool size](img34/cmt2tm1/avg_mempool_size.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![mempool_tm2_3_cmt_1_3](img34/v034_200node_tm2cmt1/mempool_size.png) - -![mempool-avg_tm2_3_cmt_1_3](img34/v034_200node_tm2cmt1/avg_mempool_size.png) - -### Consensus Rounds per Height - -The following graphs show the rounds needed to complete each height and agree on a block. - -A value of `0` shows that only one round was required (with id `0`), and a value of `1` shows that two rounds were required. - -#### Baseline -We can see that round 1 is reached with a certain frequency. - -![rounds](img34/baseline/rounds.png) - -#### CometBFT Homogeneous network - -Most heights finished in round 0, some nodes needed to advance to round 1 at various moments, -and a few nodes even needed to advance to round 2 at one point. -This coincides with the time at which we observed the biggest peak in mempool size -on the corresponding plot, shown above. - -![rounds-homogeneous](img34/homogeneous/rounds.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![rounds](img34/cmt1tm1/rounds.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![rounds](img34/cmt2tm1/rounds.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![rounds-tm2_3_cmt1_3](img34/v034_200node_tm2cmt1/rounds.png) - -### Peers - -The following plots show how many peers a node had throughout the experiment. - -The thick red dashed line represents the moving average over a sliding window of 20 seconds. - -#### Baseline - -The following graph shows that the number of peers was stable throughout the experiment. -Seed nodes typically have a higher number of peers. -The fact that non-seed nodes reach more than 50 peers is due to -[#9548](https://github.com/tendermint/tendermint/issues/9548). - -![peers](img34/baseline/peers.png) - -#### CometBFT Homogeneous network - -The results for the homogeneous network are very similar to the baseline. -The only difference is that the seed nodes seem to lose peers in the middle of the experiment. -However, this cannot be attributed to the differences in the code, which are mainly rebranding. - -![peers-homogeneous](img34/homogeneous/peers.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![peers](img34/cmt1tm1/peers.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![peers](img34/cmt2tm1/peers.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -As in the homogeneous case, there is some variation in the number of peers for some nodes. -These, however, do not affect the average. - -![peers-tm2_3_cmt1_3](img34/v034_200node_tm2cmt1/peers.png) - -### Blocks Produced per Minute, Transactions Processed per Minute - -The following plots show the rate of block production and the rate of transactions delivered throughout the experiments. - -In both graphs, rates are calculated over a sliding window of 20 seconds. -The thick red dashed line shows the rates' moving averages. - -#### Baseline - -The average number of blocks/minute oscillates between 10 and 40. - -![heights](img34/baseline/block_rate_regular.png) - -The number of transactions/minute tops out at around 30k. 
- -![total-txs](img34/baseline/total_txs_rate_regular.png) - - -#### CometBFT Homogeneous network - -The plot showing the block production rate shows that the rate oscillates around 20 blocks/minute, -mostly within the same range as the baseline. - -![heights-homogeneous-rate](img34/homogeneous/block_rate_regular.png) - -The plot showing the transaction rate shows the rate stays around 20000 transactions per minute, -also topping around 30k. - -![txs-homogeneous-rate](img34/homogeneous/total_txs_rate_regular.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![height rate](img34/cmt1tm1/block_rate_regular.png) - -![transaction rate](img34/cmt1tm1/total_txs_rate_regular.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![height rate](img34/cmt2tm1/block_rate_regular.png) - -![transaction rate](img34/cmt2tm1/total_txs_rate_regular.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![height rate](img34/v034_200node_tm2cmt1/block_rate_regular.png) - -![transaction rate](img34/v034_200node_tm2cmt1/total_txs_rate_regular.png) - -### Memory Resident Set Size - -The following graphs show the Resident Set Size (RSS) of all monitored processes and the average value. - -#### Baseline - -![rss](img34/baseline/memory.png) - -![rss-avg](img34/baseline/avg_memory.png) - -#### CometBFT Homogeneous network - -This is the plot for the homogeneous network, which is slightly more stable than the baseline over -the time of the experiment. - -![rss-homogeneous](img34/homogeneous/memory.png) - -And this is the average plot. It oscillates around 560 MiB, which is noticeably lower than the baseline. - -![rss-avg-homogeneous](img34/homogeneous/avg_memory.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![rss](img34/cmt1tm1/memory.png) - -![rss average](img34/cmt1tm1/avg_memory.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![rss](img34/cmt2tm1/memory.png) - -![rss average](img34/cmt2tm1/avg_memory.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![rss](img34/v034_200node_tm2cmt1/memory.png) - -![rss average](img34/v034_200node_tm2cmt1/avg_memory.png) - -### CPU utilization - -The following graphs show the `load1` of nodes, as typically shown in the first line of the Unix `top` -command, and their average value. - -#### Baseline - -![load1](img34/baseline/cpu.png) - -![load1-avg](img34/baseline/avg_cpu.png) - -#### CometBFT Homogeneous network - -The load in the homogenous network is, similarly to the baseline case, below 5 and, therefore, normal. - -![load1-homogeneous](img34/homogeneous/cpu.png) - -As expected, the average plot also looks similar. - -![load1-homogeneous-avg](img34/homogeneous/avg_cpu.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![load1](img34/cmt1tm1/cpu.png) - -![average load1](img34/cmt1tm1/avg_cpu.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![load1](img34/cmt2tm1/cpu.png) - -![average load1](img34/cmt2tm1/avg_cpu.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![load1](img34/v034_200node_tm2cmt1/cpu.png) - -![average load1](img34/v034_200node_tm2cmt1/avg_cpu.png) - -## Test Results - -The comparison of the baseline results and the homogeneous case show that both scenarios had similar numbers and are therefore equivalent. - -The mixed nodes cases show that networks operate normally with a mix of compatible Tendermint Core and CometBFT versions. 
-Although not the main goal, a comparison of the metric numbers with the homogeneous case and the baseline scenarios shows similar results, and we can therefore conclude that mixing compatible Tendermint Core and CometBFT versions introduces no performance degradation. - -A conclusion of these tests is shown in the following table, along with the commit versions used in the experiments. - -| Scenario | Date | Version | Result | -|--|--|--|--| -|CometBFT Homogeneous network | 2023-02-08 | 3b783434f26b0e87994e6a77c5411927aad9ce3f | Pass | -|1/2 Tendermint Core
1/2 CometBFT | 2023-02-14 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass| -|1/3 Tendermint Core
2/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass| -|2/3 Tendermint Core
1/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass | diff --git a/docs/qa/README.md b/docs/qa/README.md deleted file mode 100644 index e4068920d1..0000000000 --- a/docs/qa/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -order: 1 -parent: - title: CometBFT Quality Assurance - description: This is a report on the process followed and results obtained when running v0.34.x on testnets - order: 2 ---- - -# CometBFT Quality Assurance - -This directory keeps track of the process followed by the CometBFT team -for Quality Assurance before cutting a release. -This directory is to live in multiple branches. On each release branch, -the contents of this directory reflect the status of the process -at the time the Quality Assurance process was applied for that release. - -File [method](./method.md) keeps track of the process followed to obtain the results -used to decide if a release is passing the Quality Assurance process. -The results obtained in each release are stored in their own directory. -The following releases have undergone the Quality Assurance process, and the corresponding reports include detailed information on tests and comparison with the baseline. - -* [TM v0.34.x](TMCore-QA-34.md) - Tested prior to releasing Tendermint Core v0.34.22. -* [v0.34.x](CometBFT-QA-34.md) - Tested prior to releasing v0.34.27, using TM v0.34.x results as baseline. diff --git a/docs/qa/TMCore-QA-34.md b/docs/qa/TMCore-QA-34.md deleted file mode 100644 index e5764611c0..0000000000 --- a/docs/qa/TMCore-QA-34.md +++ /dev/null @@ -1,277 +0,0 @@ ---- -order: 1 -parent: - title: Tendermint Core QA Results v0.34.x - description: This is a report on the results obtained when running v0.34.x on testnets - order: 2 ---- - -# Tendermint Core QA Results v0.34.x - -## 200 Node Testnet - -### Finding the Saturation Point - -The first goal when examining the results of the tests is identifying the saturation point. -The saturation point is a setup with a transaction load big enough to prevent the testnet -from being stable: the load runner tries to produce slightly more transactions than can -be processed by the testnet. - -The following table summarizes the results for v0.34.x, for the different experiments -(extracted from file [`v034_report_tabbed.txt`](img34/v034_report_tabbed.txt)). - -The X axis of this table is `c`, the number of connections created by the load runner process to the target node. -The Y axis of this table is `r`, the rate or number of transactions issued per second. - -| | c=1 | c=2 | c=4 | -| :--- | ----: | ----: | ----: | -| r=25 | 2225 | 4450 | 8900 | -| r=50 | 4450 | 8900 | 17800 | -| r=100 | 8900 | 17800 | 35600 | -| r=200 | 17800 | 35600 | 38660 | - -The table shows the number of 1024-byte-long transactions that were produced by the load runner, -and processed by Tendermint Core, during the 90 seconds of the experiment's duration. -Each cell in the table refers to an experiment with a particular number of websocket connections (`c`) -to a chosen validator, and the number of transactions per second that the load runner -tries to produce (`r`). Note that the overall load that the tool attempts to generate is $c \cdot r$. - -We can see that the saturation point is beyond the diagonal that spans cells - -* `r=200,c=2` -* `r=100,c=4` - -given that the total number of transactions should be close to the product rate X the number of connections x experiment time. 
- -All experiments below the saturation diagonal (`r=200,c=4`) have in common that the total -number of transactions processed is noticeably less than the product $c \cdot r \cdot 89$ (89 seconds, since the last batch never gets sent), -which is the expected number of transactions when the system is able to deal well with the -load. -With (`r=200,c=4`), we obtained 38660 whereas the theoretical number of transactions should -have been $200 \cdot 4 \cdot 89 = 71200$. - -At this point, we chose an experiment at the limit of the saturation diagonal, -in order to further study the performance of this release. -**The chosen experiment is (`r=200,c=2`)**. - -This is a plot of the CPU load (average over 1 minute, as output by `top`) of the load runner for (`r=200,c=2`), -where we can see that the load stays close to 0 most of the time. - -![load-load-runner](img34/v034_r200c2_load-runner.png) - -### Examining latencies - -The method described [here](method.md) allows us to plot the latencies of transactions -for all experiments. - -![all-latencies](img34/v034_200node_latencies.png) - -As we can see, even the experiments beyond the saturation diagonal managed to keep -transaction latency stable (i.e. not constantly increasing). -Our interpretation for this is that contention within Tendermint Core was propagated, -via the websockets, to the load runner, -hence the load runner could not produce the target load, but a fraction of it. - -Further examination of the Prometheus data (see below), showed that the mempool contained many transactions -at steady state, but did not grow much without quickly returning to this steady state. This demonstrates -that Tendermint Core network was able to process transactions at least as quickly as they -were submitted to the mempool. Finally, the test script made sure that, at the end of an experiment, the -mempool was empty so that all transactions submitted to the chain were processed. - -Finally, the number of points present in the plot appears to be much less than expected given the -number of transactions in each experiment, particularly close to or above the saturation diagonal. -This is a visual effect of the plot; what appear to be points in the plot are actually potentially huge -clusters of points. To corroborate this, we have zoomed in the plot above by setting (carefully chosen) -tiny axis intervals. The cluster shown below looks like a single point in the plot above. - -![all-latencies-zoomed](img34/v034_200node_latencies_zoomed.png) - -The plot of latencies can we used as a baseline to compare with other releases. - -The following plot summarizes average latencies versus overall throughput -across different numbers of WebSocket connections to the node into which -transactions are being loaded. - -![latency-vs-throughput](img34/v034_latency_throughput.png) - -### Prometheus Metrics on the Chosen Experiment - -As mentioned [above](#finding-the-saturation-point), the chosen experiment is `r=200,c=2`. -This section further examines key metrics for this experiment extracted from Prometheus data. - -#### Mempool Size - -The mempool size, a count of the number of transactions in the mempool, was shown to be stable and homogeneous -at all full nodes. It did not exhibit any unconstrained growth. -The plot below shows the evolution over time of the cumulative number of transactions inside all full nodes' mempools -at a given time. 
-The two spikes that can be observed correspond to a period where consensus instances proceeded beyond the initial round -at some nodes. - -![mempool-cumulative](img34/v034_r200c2_mempool_size.png) - -The plot below shows evolution of the average over all full nodes, which oscillates between 1500 and 2000 -outstanding transactions. - -![mempool-avg](img34/v034_r200c2_mempool_size_avg.png) - -The peaks observed coincide with the moments when some nodes proceeded beyond the initial round of consensus (see below). - -#### Peers - -The number of peers was stable at all nodes. -It was higher for the seed nodes (around 140) than for the rest (between 21 and 74). -The fact that non-seed nodes reach more than 50 peers is due to #9548. - -![peers](img34/v034_r200c2_peers.png) - -#### Consensus Rounds per Height - -Most nodes used only round 0 for most heights, but some nodes needed to advance to round 1 for some heights. - -![rounds](img34/v034_r200c2_rounds.png) - -#### Blocks Produced per Minute, Transactions Processed per Minute - -The blocks produced per minute are the slope of this plot. - -![heights](img34/v034_r200c2_heights.png) - -Over a period of 2 minutes, the height goes from 530 to 569. -This results in an average of 19.5 blocks produced per minute. - -The transactions processed per minute are the slope of this plot. - -![total-txs](img34/v034_r200c2_total-txs.png) - -Over a period of 2 minutes, the total goes from 64525 to 100125 transactions, -resulting in 17800 transactions per minute. However, we can see in the plot that -all transactions in the load are processed long before the two minutes. -If we adjust the time window when transactions are processed (approx. 105 seconds), -we obtain 20343 transactions per minute. - -#### Memory Resident Set Size - -Resident Set Size of all monitored processes is plotted below. - -![rss](img34/v034_r200c2_rss.png) - -The average over all processes oscillates around 1.2 GiB and does not demonstrate unconstrained growth. - -![rss-avg](img34/v034_r200c2_rss_avg.png) - -#### CPU utilization - -The best metric from Prometheus to gauge CPU utilization in a Unix machine is `load1`, -as it usually appears in the -[output of `top`](https://www.digitalocean.com/community/tutorials/load-average-in-linux). - -![load1](img34/v034_r200c2_load1.png) - -It is contained in most cases below 5, which is generally considered acceptable load. - -### Test Result - -**Result: N/A** (v0.34.x is the baseline) - -Date: 2022-10-14 - -Version: 3ec6e424d6ae4c96867c2dcf8310572156068bb6 - -## Rotating Node Testnet - -For this testnet, we will use a load that can safely be considered below the saturation -point for the size of this testnet (between 13 and 38 full nodes): `c=4,r=800`. - -N.B.: The version of CometBFT used for these tests is affected by #9539. -However, the reduced load that reaches the mempools is orthogonal to functionality -we are focusing on here. - -### Latencies - -The plot of all latencies can be seen in the following plot. - -![rotating-all-latencies](img34/v034_rotating_latencies.png) - -We can observe there are some very high latencies, towards the end of the test. -Upon suspicion that they are duplicate transactions, we examined the latencies -raw file and discovered there are more than 100K duplicate transactions. - -The following plot shows the latencies file where all duplicate transactions have -been removed, i.e., only the first occurrence of a duplicate transaction is kept. 
- -![rotating-all-latencies-uniq](img34/v034_rotating_latencies_uniq.png) - -This problem, existing in `v0.34.x`, will need to be addressed, perhaps in the same way -we addressed it when running the 200 node test with high loads: increasing the `cache_size` -configuration parameter. - -### Prometheus Metrics - -The set of metrics shown here is smaller than for the 200 node experiment. -We are only interested in those for which the catch-up process (blocksync) may have an impact. - -#### Blocks and Transactions per minute - -Just as shown for the 200 node test, the blocks produced per minute are the gradient of this plot. - -![rotating-heights](img34/v034_rotating_heights.png) - -Over a period of 5229 seconds, the height goes from 2 to 3638. -This results in an average of 41 blocks produced per minute. - -The following plot shows only the heights reported by ephemeral nodes -(which are also included in the plot above). Note that the _height_ metric -is only shown _once the node has switched to consensus_, hence the gaps -when nodes are killed, wiped out, started from scratch, and catching up. - -![rotating-heights-ephe](img34/v034_rotating_heights_ephe.png) - -The transactions processed per minute are the gradient of this plot. - -![rotating-total-txs](img34/v034_rotating_total-txs.png) - -The small lines we see periodically close to `y=0` are the transactions that -ephemeral nodes start processing when they are caught up. - -Over a period of 5229 seconds, the total goes from 0 to 387697 transactions, -resulting in 4449 transactions per minute. We can see some abrupt changes in -the plot's gradient. This will need to be investigated. - -#### Peers - -The plot below shows the evolution in peers throughout the experiment. -The periodic changes observed are due to the ephemeral nodes being stopped, -wiped out, and recreated. - -![rotating-peers](img34/v034_rotating_peers.png) - -The validators' plots are concentrated at the higher part of the graph, whereas the ephemeral nodes -are mostly at the lower part. - -#### Memory Resident Set Size - -The average Resident Set Size (RSS) over all processes seems stable, and slightly growing toward the end. -This might be related to the increase in transaction load observed above. - -![rotating-rss-avg](img34/v034_rotating_rss_avg.png) - -The memory taken by the validators and the ephemeral nodes (when they are up) is comparable. - -#### CPU utilization - -The plot shows metric `load1` for all nodes. - -![rotating-load1](img34/v034_rotating_load1.png) - -It is contained under 5 most of the time, which is considered normal load. - -The purple line, which follows a different pattern, is the validator receiving all -transactions, via RPC, from the load runner process. 
- -### Test Result - -**Result: N/A** - -Date: 2022-10-10 - -Version: a28c987f5a604ff66b515dd415270063e6fb069d diff --git a/docs/qa/img34/baseline/avg_cpu.png b/docs/qa/img34/baseline/avg_cpu.png deleted file mode 100644 index 622456df64..0000000000 Binary files a/docs/qa/img34/baseline/avg_cpu.png and /dev/null differ diff --git a/docs/qa/img34/baseline/avg_memory.png b/docs/qa/img34/baseline/avg_memory.png deleted file mode 100644 index 55f213f5e1..0000000000 Binary files a/docs/qa/img34/baseline/avg_memory.png and /dev/null differ diff --git a/docs/qa/img34/baseline/avg_mempool_size.png b/docs/qa/img34/baseline/avg_mempool_size.png deleted file mode 100644 index ec74072950..0000000000 Binary files a/docs/qa/img34/baseline/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/baseline/block_rate_regular.png b/docs/qa/img34/baseline/block_rate_regular.png deleted file mode 100644 index bdc7aa28d7..0000000000 Binary files a/docs/qa/img34/baseline/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/baseline/cpu.png b/docs/qa/img34/baseline/cpu.png deleted file mode 100644 index ac4fc2695f..0000000000 Binary files a/docs/qa/img34/baseline/cpu.png and /dev/null differ diff --git a/docs/qa/img34/baseline/memory.png b/docs/qa/img34/baseline/memory.png deleted file mode 100644 index 17336bd1b9..0000000000 Binary files a/docs/qa/img34/baseline/memory.png and /dev/null differ diff --git a/docs/qa/img34/baseline/mempool_size.png b/docs/qa/img34/baseline/mempool_size.png deleted file mode 100644 index fafba68c1a..0000000000 Binary files a/docs/qa/img34/baseline/mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/baseline/peers.png b/docs/qa/img34/baseline/peers.png deleted file mode 100644 index 05a288a356..0000000000 Binary files a/docs/qa/img34/baseline/peers.png and /dev/null differ diff --git a/docs/qa/img34/baseline/rounds.png b/docs/qa/img34/baseline/rounds.png deleted file mode 100644 index 79f3348a25..0000000000 Binary files a/docs/qa/img34/baseline/rounds.png and /dev/null differ diff --git a/docs/qa/img34/baseline/total_txs_rate_regular.png b/docs/qa/img34/baseline/total_txs_rate_regular.png deleted file mode 100644 index d80bef12c0..0000000000 Binary files a/docs/qa/img34/baseline/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/all_experiments.png b/docs/qa/img34/cmt1tm1/all_experiments.png deleted file mode 100644 index 4dc857edca..0000000000 Binary files a/docs/qa/img34/cmt1tm1/all_experiments.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/avg_cpu.png b/docs/qa/img34/cmt1tm1/avg_cpu.png deleted file mode 100644 index cabd273a55..0000000000 Binary files a/docs/qa/img34/cmt1tm1/avg_cpu.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/avg_memory.png b/docs/qa/img34/cmt1tm1/avg_memory.png deleted file mode 100644 index c8e5761772..0000000000 Binary files a/docs/qa/img34/cmt1tm1/avg_memory.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/avg_mempool_size.png b/docs/qa/img34/cmt1tm1/avg_mempool_size.png deleted file mode 100644 index b41199dc00..0000000000 Binary files a/docs/qa/img34/cmt1tm1/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/block_rate_regular.png b/docs/qa/img34/cmt1tm1/block_rate_regular.png deleted file mode 100644 index 9b3a0b8276..0000000000 Binary files a/docs/qa/img34/cmt1tm1/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/cpu.png b/docs/qa/img34/cmt1tm1/cpu.png deleted file mode 100644 index 
cd5acdeb29..0000000000 Binary files a/docs/qa/img34/cmt1tm1/cpu.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/memory.png b/docs/qa/img34/cmt1tm1/memory.png deleted file mode 100644 index 6f56b3ccf1..0000000000 Binary files a/docs/qa/img34/cmt1tm1/memory.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/mempool_size.png b/docs/qa/img34/cmt1tm1/mempool_size.png deleted file mode 100644 index 862a0bdd49..0000000000 Binary files a/docs/qa/img34/cmt1tm1/mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/peers.png b/docs/qa/img34/cmt1tm1/peers.png deleted file mode 100644 index 737cf3dffb..0000000000 Binary files a/docs/qa/img34/cmt1tm1/peers.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/rounds.png b/docs/qa/img34/cmt1tm1/rounds.png deleted file mode 100644 index 17884813af..0000000000 Binary files a/docs/qa/img34/cmt1tm1/rounds.png and /dev/null differ diff --git a/docs/qa/img34/cmt1tm1/total_txs_rate_regular.png b/docs/qa/img34/cmt1tm1/total_txs_rate_regular.png deleted file mode 100644 index 8b0cc0d426..0000000000 Binary files a/docs/qa/img34/cmt1tm1/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/all_experiments.png b/docs/qa/img34/cmt2tm1/all_experiments.png deleted file mode 100644 index 4e6f73d355..0000000000 Binary files a/docs/qa/img34/cmt2tm1/all_experiments.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/avg_cpu.png b/docs/qa/img34/cmt2tm1/avg_cpu.png deleted file mode 100644 index 92fea31bd1..0000000000 Binary files a/docs/qa/img34/cmt2tm1/avg_cpu.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/avg_memory.png b/docs/qa/img34/cmt2tm1/avg_memory.png deleted file mode 100644 index f362798d8f..0000000000 Binary files a/docs/qa/img34/cmt2tm1/avg_memory.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/avg_mempool_size.png b/docs/qa/img34/cmt2tm1/avg_mempool_size.png deleted file mode 100644 index b73e577b75..0000000000 Binary files a/docs/qa/img34/cmt2tm1/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/block_rate_regular.png b/docs/qa/img34/cmt2tm1/block_rate_regular.png deleted file mode 100644 index 5fc7a5560b..0000000000 Binary files a/docs/qa/img34/cmt2tm1/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/cpu.png b/docs/qa/img34/cmt2tm1/cpu.png deleted file mode 100644 index 15df58abbe..0000000000 Binary files a/docs/qa/img34/cmt2tm1/cpu.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/memory.png b/docs/qa/img34/cmt2tm1/memory.png deleted file mode 100644 index b0feab1074..0000000000 Binary files a/docs/qa/img34/cmt2tm1/memory.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/mempool_size.png b/docs/qa/img34/cmt2tm1/mempool_size.png deleted file mode 100644 index b3a1514f92..0000000000 Binary files a/docs/qa/img34/cmt2tm1/mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/peers.png b/docs/qa/img34/cmt2tm1/peers.png deleted file mode 100644 index 558d4c129e..0000000000 Binary files a/docs/qa/img34/cmt2tm1/peers.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/rounds.png b/docs/qa/img34/cmt2tm1/rounds.png deleted file mode 100644 index 3c22a5cf30..0000000000 Binary files a/docs/qa/img34/cmt2tm1/rounds.png and /dev/null differ diff --git a/docs/qa/img34/cmt2tm1/total_txs_rate_regular.png b/docs/qa/img34/cmt2tm1/total_txs_rate_regular.png deleted file mode 100644 index ae98df2176..0000000000 Binary files 
a/docs/qa/img34/cmt2tm1/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/all_experiments.png b/docs/qa/img34/homogeneous/all_experiments.png deleted file mode 100644 index d8768f6a5d..0000000000 Binary files a/docs/qa/img34/homogeneous/all_experiments.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/avg_cpu.png b/docs/qa/img34/homogeneous/avg_cpu.png deleted file mode 100644 index 7df188951f..0000000000 Binary files a/docs/qa/img34/homogeneous/avg_cpu.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/avg_memory.png b/docs/qa/img34/homogeneous/avg_memory.png deleted file mode 100644 index e800cbce22..0000000000 Binary files a/docs/qa/img34/homogeneous/avg_memory.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/avg_mempool_size.png b/docs/qa/img34/homogeneous/avg_mempool_size.png deleted file mode 100644 index beb323e646..0000000000 Binary files a/docs/qa/img34/homogeneous/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/block_rate_regular.png b/docs/qa/img34/homogeneous/block_rate_regular.png deleted file mode 100644 index 2a71ab70df..0000000000 Binary files a/docs/qa/img34/homogeneous/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/cpu.png b/docs/qa/img34/homogeneous/cpu.png deleted file mode 100644 index 8e8c9227af..0000000000 Binary files a/docs/qa/img34/homogeneous/cpu.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/memory.png b/docs/qa/img34/homogeneous/memory.png deleted file mode 100644 index 190c622a34..0000000000 Binary files a/docs/qa/img34/homogeneous/memory.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/mempool_size.png b/docs/qa/img34/homogeneous/mempool_size.png deleted file mode 100644 index ec1c79a242..0000000000 Binary files a/docs/qa/img34/homogeneous/mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/peers.png b/docs/qa/img34/homogeneous/peers.png deleted file mode 100644 index 3c8b0a2e0d..0000000000 Binary files a/docs/qa/img34/homogeneous/peers.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/rounds.png b/docs/qa/img34/homogeneous/rounds.png deleted file mode 100644 index 660f31d939..0000000000 Binary files a/docs/qa/img34/homogeneous/rounds.png and /dev/null differ diff --git a/docs/qa/img34/homogeneous/total_txs_rate_regular.png b/docs/qa/img34/homogeneous/total_txs_rate_regular.png deleted file mode 100644 index a9025b6665..0000000000 Binary files a/docs/qa/img34/homogeneous/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_latencies.png b/docs/qa/img34/v034_200node_latencies.png deleted file mode 100644 index afd1060caf..0000000000 Binary files a/docs/qa/img34/v034_200node_latencies.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_latencies_zoomed.png b/docs/qa/img34/v034_200node_latencies_zoomed.png deleted file mode 100644 index 1ff9364422..0000000000 Binary files a/docs/qa/img34/v034_200node_latencies_zoomed.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/all_experiments.png b/docs/qa/img34/v034_200node_tm2cmt1/all_experiments.png deleted file mode 100644 index e91a87effd..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/all_experiments.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/avg_cpu.png b/docs/qa/img34/v034_200node_tm2cmt1/avg_cpu.png deleted file mode 100644 index a1b0ef79e4..0000000000 Binary files 
a/docs/qa/img34/v034_200node_tm2cmt1/avg_cpu.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/avg_memory.png b/docs/qa/img34/v034_200node_tm2cmt1/avg_memory.png deleted file mode 100644 index f9d9b99334..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/avg_memory.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/avg_mempool_size.png b/docs/qa/img34/v034_200node_tm2cmt1/avg_mempool_size.png deleted file mode 100644 index c2b896060a..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/block_rate_regular.png b/docs/qa/img34/v034_200node_tm2cmt1/block_rate_regular.png deleted file mode 100644 index 5a5417bdf3..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/c2r200_merged.png b/docs/qa/img34/v034_200node_tm2cmt1/c2r200_merged.png deleted file mode 100644 index 45de9ce72d..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/c2r200_merged.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/cpu.png b/docs/qa/img34/v034_200node_tm2cmt1/cpu.png deleted file mode 100644 index eabfa96617..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/cpu.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/memory.png b/docs/qa/img34/v034_200node_tm2cmt1/memory.png deleted file mode 100644 index 70014c1f96..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/memory.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/mempool_size.png b/docs/qa/img34/v034_200node_tm2cmt1/mempool_size.png deleted file mode 100644 index 5f4c44b2a6..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/peers.png b/docs/qa/img34/v034_200node_tm2cmt1/peers.png deleted file mode 100644 index c35c84675c..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/peers.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/rounds.png b/docs/qa/img34/v034_200node_tm2cmt1/rounds.png deleted file mode 100644 index 7d1034bcbc..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/rounds.png and /dev/null differ diff --git a/docs/qa/img34/v034_200node_tm2cmt1/total_txs_rate_regular.png b/docs/qa/img34/v034_200node_tm2cmt1/total_txs_rate_regular.png deleted file mode 100644 index 2e8a40af6a..0000000000 Binary files a/docs/qa/img34/v034_200node_tm2cmt1/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/img34/v034_latency_throughput.png b/docs/qa/img34/v034_latency_throughput.png deleted file mode 100644 index 3674fe47b4..0000000000 Binary files a/docs/qa/img34/v034_latency_throughput.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_heights.png b/docs/qa/img34/v034_r200c2_heights.png deleted file mode 100644 index 11f3bba432..0000000000 Binary files a/docs/qa/img34/v034_r200c2_heights.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_load-runner.png b/docs/qa/img34/v034_r200c2_load-runner.png deleted file mode 100644 index 70211b0d21..0000000000 Binary files a/docs/qa/img34/v034_r200c2_load-runner.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_load1.png b/docs/qa/img34/v034_r200c2_load1.png deleted file mode 100644 index 11012844dc..0000000000 Binary files a/docs/qa/img34/v034_r200c2_load1.png and /dev/null differ diff 
--git a/docs/qa/img34/v034_r200c2_mempool_size.png b/docs/qa/img34/v034_r200c2_mempool_size.png deleted file mode 100644 index c5d690200a..0000000000 Binary files a/docs/qa/img34/v034_r200c2_mempool_size.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_mempool_size_avg.png b/docs/qa/img34/v034_r200c2_mempool_size_avg.png deleted file mode 100644 index bda399fe5d..0000000000 Binary files a/docs/qa/img34/v034_r200c2_mempool_size_avg.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_peers.png b/docs/qa/img34/v034_r200c2_peers.png deleted file mode 100644 index a0aea7ada3..0000000000 Binary files a/docs/qa/img34/v034_r200c2_peers.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_rounds.png b/docs/qa/img34/v034_r200c2_rounds.png deleted file mode 100644 index 215be100de..0000000000 Binary files a/docs/qa/img34/v034_r200c2_rounds.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_rss.png b/docs/qa/img34/v034_r200c2_rss.png deleted file mode 100644 index 6d14dced0b..0000000000 Binary files a/docs/qa/img34/v034_r200c2_rss.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_rss_avg.png b/docs/qa/img34/v034_r200c2_rss_avg.png deleted file mode 100644 index 8dec67da29..0000000000 Binary files a/docs/qa/img34/v034_r200c2_rss_avg.png and /dev/null differ diff --git a/docs/qa/img34/v034_r200c2_total-txs.png b/docs/qa/img34/v034_r200c2_total-txs.png deleted file mode 100644 index 177d5f1c31..0000000000 Binary files a/docs/qa/img34/v034_r200c2_total-txs.png and /dev/null differ diff --git a/docs/qa/img34/v034_report_tabbed.txt b/docs/qa/img34/v034_report_tabbed.txt deleted file mode 100644 index 2514954743..0000000000 --- a/docs/qa/img34/v034_report_tabbed.txt +++ /dev/null @@ -1,52 +0,0 @@ -Experiment ID: 3d5cf4ef-1a1a-4b46-aa2d-da5643d2e81e │Experiment ID: 80e472ec-13a1-4772-a827-3b0c907fb51d │Experiment ID: 07aca6cf-c5a4-4696-988f-e3270fc6333b - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 25 │ Rate: 25 │ Rate: 25 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 2225 │ Total Valid Tx: 4450 │ Total Valid Tx: 8900 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 599.404362ms │ Minimum Latency: 448.145181ms │ Minimum Latency: 412.485729ms - Maximum Latency: 3.539686885s │ Maximum Latency: 3.237392049s │ Maximum Latency: 12.026665368s - Average Latency: 1.441485349s │ Average Latency: 1.441267946s │ Average Latency: 2.150192457s - Standard Deviation: 541.049869ms │ Standard Deviation: 525.040007ms │ Standard Deviation: 2.233852478s - │ │ -Experiment ID: 953dc544-dd40-40e8-8712-20c34c3ce45e │Experiment ID: d31fc258-16e7-45cd-9dc8-13ab87bc0b0a │Experiment ID: 15d90a7e-b941-42f4-b411-2f15f857739e - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 50 │ Rate: 50 │ Rate: 50 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 4450 │ Total Valid Tx: 8900 │ Total Valid Tx: 17800 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 482.046942ms │ Minimum Latency: 435.458913ms │ Minimum Latency: 510.746448ms - Maximum Latency: 3.761483455s │ Maximum Latency: 7.175583584s │ Maximum Latency: 6.551497882s - Average Latency: 1.450408183s │ Average Latency: 1.681673116s │ Average Latency: 1.738083875s - Standard Deviation: 587.560056ms │ Standard Deviation: 1.147902047s │ Standard Deviation: 943.46522ms - │ │ -Experiment ID: 9a0b9980-9ce6-4db5-a80a-65ca70294b87 │Experiment ID: 
df8fa4f4-80af-4ded-8a28-356d15018b43 │Experiment ID: d0e41c2c-89c0-4f38-8e34-ca07adae593a - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 100 │ Rate: 100 │ Rate: 100 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 8900 │ Total Valid Tx: 17800 │ Total Valid Tx: 35600 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 477.417219ms │ Minimum Latency: 564.29247ms │ Minimum Latency: 840.71089ms - Maximum Latency: 6.63744785s │ Maximum Latency: 6.988553219s │ Maximum Latency: 9.555312398s - Average Latency: 1.561216103s │ Average Latency: 1.76419063s │ Average Latency: 3.200941683s - Standard Deviation: 1.011333552s │ Standard Deviation: 1.068459423s │ Standard Deviation: 1.732346601s - │ │ -Experiment ID: 493df3ee-4a36-4bce-80f8-6d65da66beda │Experiment ID: 13060525-f04f-46f6-8ade-286684b2fe50 │Experiment ID: 1777cbd2-8c96-42e4-9ec7-9b21f2225e4d - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 200 │ Rate: 200 │ Rate: 200 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 17800 │ Total Valid Tx: 35600 │ Total Valid Tx: 38660 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 493.705261ms │ Minimum Latency: 955.090573ms │ Minimum Latency: 1.9485821s - Maximum Latency: 7.440921872s │ Maximum Latency: 10.086673491s │ Maximum Latency: 17.73103976s - Average Latency: 1.875510582s │ Average Latency: 3.438130099s │ Average Latency: 8.143862237s - Standard Deviation: 1.304336995s │ Standard Deviation: 1.966391574s │ Standard Deviation: 3.943140002s - diff --git a/docs/qa/img34/v034_rotating_heights.png b/docs/qa/img34/v034_rotating_heights.png deleted file mode 100644 index 47913c282f..0000000000 Binary files a/docs/qa/img34/v034_rotating_heights.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_heights_ephe.png b/docs/qa/img34/v034_rotating_heights_ephe.png deleted file mode 100644 index 981b93d6c4..0000000000 Binary files a/docs/qa/img34/v034_rotating_heights_ephe.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_latencies.png b/docs/qa/img34/v034_rotating_latencies.png deleted file mode 100644 index f0a54ed5b6..0000000000 Binary files a/docs/qa/img34/v034_rotating_latencies.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_latencies_uniq.png b/docs/qa/img34/v034_rotating_latencies_uniq.png deleted file mode 100644 index e5d694a16e..0000000000 Binary files a/docs/qa/img34/v034_rotating_latencies_uniq.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_load1.png b/docs/qa/img34/v034_rotating_load1.png deleted file mode 100644 index e9c385b85e..0000000000 Binary files a/docs/qa/img34/v034_rotating_load1.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_peers.png b/docs/qa/img34/v034_rotating_peers.png deleted file mode 100644 index ab5c8732d3..0000000000 Binary files a/docs/qa/img34/v034_rotating_peers.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_rss_avg.png b/docs/qa/img34/v034_rotating_rss_avg.png deleted file mode 100644 index 9a4167320c..0000000000 Binary files a/docs/qa/img34/v034_rotating_rss_avg.png and /dev/null differ diff --git a/docs/qa/img34/v034_rotating_total-txs.png b/docs/qa/img34/v034_rotating_total-txs.png deleted file mode 100644 index 1ce5f47e9b..0000000000 Binary files a/docs/qa/img34/v034_rotating_total-txs.png and /dev/null differ diff --git a/docs/qa/method.md b/docs/qa/method.md deleted file mode 100644 index 
6de0cbcf80..0000000000 --- a/docs/qa/method.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -order: 1 -parent: - title: Method - order: 1 ---- - -# Method - -This document provides a detailed description of the QA process. -It is intended to be used by engineers reproducing the experimental setup for future tests of CometBFT. - -The (first iteration of the) QA process as described [in the RELEASES.md document][releases] -was applied to version v0.34.x in order to have a set of results acting as a benchmarking baseline. -This baseline is then compared with results obtained in later versions. - -Out of the testnet-based test cases described in [the releases document][releases] we focused on two of them: -_200 Node Test_, and _Rotating Nodes Test_. - -[releases]: https://github.com/cometbft/cometbft/blob/main/RELEASES.md#large-scale-testnets - -## Software Dependencies - -### Infrastructure Requirements to Run the Tests - -* An account at Digital Ocean (DO), with a high droplet limit (>202) -* The machine to orchestrate the tests should have the following installed: - * A clone of the [testnet repository][testnet-repo] - * This repository contains all the scripts mentioned in the remainder of this section - * [Digital Ocean CLI][doctl] - * [Terraform CLI][Terraform] - * [Ansible CLI][Ansible] - -[testnet-repo]: https://github.com/cometbft/qa-infra -[Ansible]: https://docs.ansible.com/ansible/latest/index.html -[Terraform]: https://www.terraform.io/docs -[doctl]: https://docs.digitalocean.com/reference/doctl/how-to/install/ - -### Requirements for Result Extraction - -* Matlab or Octave -* [Prometheus][prometheus] server installed -* blockstore DB of one of the full nodes in the testnet -* Prometheus DB - -[prometheus]: https://prometheus.io/ - -## 200 Node Testnet - -### Running the test - -This section explains how the tests were carried out for reproducibility purposes. - -1. [If you haven't done it before] - Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform and `doctl`. -2. Copy file `testnets/testnet200.toml` onto `testnet.toml` (do NOT commit this change) -3. Set the variable `VERSION_TAG` in the `Makefile` to the git hash that is to be tested. - * If you are running the base test, which implies a homogeneous network (all nodes are running the same version), - then make sure makefile variable `VERSION2_WEIGHT` is set to 0 - * If you are running a mixed network, set the variable `VERSION_TAG2` to the other version you want deployed - in the network. Then, adjust the weight variables `VERSION_WEIGHT` and `VERSION2_WEIGHT` to configure the - desired proportion of nodes running each of the two configured versions. -4. Follow steps 5-10 of the `README.md` to configure and start the 200 node testnet - * WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests (see step 9) -5. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `COMETBFT_CONSENSUS_HEIGHT` metric. - All nodes should be increasing their heights. -6. You now need to start the load runner that will produce transaction load - * If you don't know the saturation load of the version you are testing, you need to discover it. - * `ssh` into the `testnet-load-runner`, then copy script `script/200-node-loadscript.sh` and run it from the load runner node. - * Before running it, you need to edit the script to provide the IP address of a full node. - This node will receive all transactions from the load runner node.
- * This script will take about 40 mins to run. - * It is running 90-seconds-long experiments in a loop with different loads. - * If you already know the saturation load, you can simply run the test (several times) for 90 seconds with a load somewhat - below saturation: - * set makefile variables `ROTATE_CONNECTIONS` and `ROTATE_TX_RATE` to values that will produce the desired transaction load. - * set `ROTATE_TOTAL_TIME` to 90 (seconds). - * run `make runload` and wait for it to complete. You may want to run this several times so the data from different runs can be compared. -7. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine - * Alternatively, you may want to run `make retrieve-prometheus-data` and `make retrieve-blockstore` separately. - The end result will be the same. - * `make retrieve-blockstore` accepts the following values in makefile variable `RETRIEVE_TARGET_HOST` - * `any`: (which is the default) picks up a full node and retrieves the blockstore from that node only. - * `all`: retrieves the blockstore from all full nodes; this is extremely slow, and consumes plenty of bandwidth, - so use it with care. - * the name of a particular full node (e.g., `validator01`): retrieves the blockstore from that node only. -8. Verify that the data was collected without errors - * at least one blockstore DB for a CometBFT validator - * the Prometheus database from the Prometheus node - * for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s) -9. **Run `make terraform-destroy`** - * Don't forget to type `yes`! Otherwise you're in trouble. - -### Result Extraction - -The method for extracting the results described here is highly manual (and exploratory) at this stage. -The CometBFT team should improve it at every iteration to increase the amount of automation. - -#### Steps - -1. Unzip the blockstore into a directory -2. Extract the latency report and the raw latencies for all the experiments. Run these commands from the directory containing the blockstore - * ```bash - mkdir results - go run github.com/cometbft/cometbft/test/loadtime/cmd/report@f1aaa436d --database-type goleveldb --data-dir ./ > results/report.txt - go run github.com/cometbft/cometbft/test/loadtime/cmd/report@f1aaa436d --database-type goleveldb --data-dir ./ --csv results/raw.csv - ``` -3. File `report.txt` contains an unordered list of experiments with varying concurrent connections and transaction rate - * If you are looking for the saturation point - * Create files `report01.txt`, `report02.txt`, `report04.txt` and, for each experiment in file `report.txt`, - copy its related lines to the filename that matches the number of connections, for example - ```bash - for cnum in 1 2 4; do echo "$cnum"; grep "Connections: $cnum" results/report.txt -B 2 -A 10 > results/report0$cnum.txt; done - ``` - - * Sort the experiments in `report01.txt` in ascending tx rate order. Likewise for `report02.txt` and `report04.txt`. - * Otherwise just keep `report.txt`, and skip step 4. -4. Generate file `report_tabbed.txt` by showing the contents of `report01.txt`, `report02.txt`, `report04.txt` side by side - * This effectively creates a table where rows are a particular tx rate and columns are a particular number of websocket connections. -5. Extract the raw latencies from file `raw.csv` using the following bash loop. This creates a `.csv` file and a `.dat` file per experiment.
- The format of the `.dat` files is amenable to loading them as matrices in Octave. - * Adapt the values of the for loop variables according to the experiments that you ran (check `report.txt`). - * Adapt `report*.txt` to the files you produced in step 3. - - ```bash - uuids=($(cat report01.txt report02.txt report04.txt | grep '^Experiment ID: ' | awk '{ print $3 }')) - c=0 - rm -f *.dat - for i in 01 02 04; do - for j in 0025 0050 0100 0200; do - echo $i $j $c "${uuids[$c]}" - filename=c${i}_r${j} - grep ${uuids[$c]} raw.csv > ${filename}.csv - cat ${filename}.csv | tr , ' ' | awk '{ print $2, $3 }' >> ${filename}.dat - c=$(expr $c + 1) - done - done - ``` - -6. Enter Octave -7. Load all `.dat` files generated in step 5 into matrices using this Octave code snippet - - ```octave - conns = { "01"; "02"; "04" }; - rates = { "0025"; "0050"; "0100"; "0200" }; - for i = 1:length(conns) - for j = 1:length(rates) - filename = strcat("c", conns{i}, "_r", rates{j}, ".dat"); - load("-ascii", filename); - endfor - endfor - ``` - -8. Set variable release to the current release undergoing QA - - ```octave - release = "v0.34.x"; - ``` - -9. Generate a plot with all (or some) experiments, where the X axis is the experiment time, - and the y axis is the latency of transactions. - The following snippet plots all experiments. - - ```octave - legends = {}; - hold off; - for i = 1:length(conns) - for j = 1:length(rates) - data_name = strcat("c", conns{i}, "_r", rates{j}); - l = strcat("c=", conns{i}, " r=", rates{j}); - m = eval(data_name); plot((m(:,1) - min(m(:,1))) / 1e+9, m(:,2) / 1e+9, "."); - hold on; - legends(1, end+1) = l; - endfor - endfor - legend(legends, "location", "northeastoutside"); - xlabel("experiment time (s)"); - ylabel("latency (s)"); - t = sprintf("200-node testnet - %s", release); - title(t); - ``` - -10. Consider adjusting the axis, in case you want to compare your results to the baseline, for instance - - ```octave - axis([0, 100, 0, 30], "tic"); - ``` - -11. Use Octave's GUI menu to save the plot (e.g. as `.png`) - -12. Repeat steps 9 and 10 to obtain as many plots as deemed necessary. - -13. To generate a latency vs throughput plot, using the raw CSV file generated - in step 2, follow the instructions for the [`latency_throughput.py`] script. - This plot is useful to visualize the saturation point. - -[`latency_throughput.py`]: ../../scripts/qa/reporting/README.md#Latency-vs-Throughput-Plotting - -14. Alternatively, follow the instructions for the [`latency_plotter.py`] script. - This script generates a series of plots per experiment and configuration that may - help with visualizing Latency vs Throughput variation. - -[`latency_plotter.py`]: ../../scripts/qa/reporting/README.md#Latency-vs-Throughput-Plotting-version-2 - -#### Extracting Prometheus Metrics - -1. Stop the prometheus server if it is running as a service (e.g. a `systemd` unit). -2. Unzip the prometheus database retrieved from the testnet, and move it to replace the - local prometheus database. -3. Start the prometheus server and make sure no error logs appear at start up. -4. Identify the time window you want to plot in your graphs. -5. Execute the [`prometheus_plotter.py`] script for the time window. - -[`prometheus_plotter.py`]: ../../scripts/qa/reporting/README.md#prometheus-metrics - -## Rotating Node Testnet - -### Running the test - -This section explains how the tests were carried out for reproducibility purposes. - -1. 
[If you haven't done it before] - Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform and `doctl`. -2. Copy file `testnet_rotating.toml` onto `testnet.toml` (do NOT commit this change) -3. Set variable `VERSION_TAG` to the git hash that is to be tested. -4. Run `make terraform-apply EPHEMERAL_SIZE=25` - * WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests -5. Follow steps 6-10 of the `README.md` to configure and start the "stable" part of the rotating node testnet -6. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric. - All nodes should be increasing their heights. -7. On a different shell, - * run `make runload ROTATE_CONNECTIONS=X ROTATE_TX_RATE=Y` - * `X` and `Y` should reflect a load below the saturation point (see, e.g., - [this paragraph](CometBFT-QA-34.md#finding-the-saturation-point) for further info) -8. Run `make rotate` to start the script that creates the ephemeral nodes, and kills them when they are caught up. - * WARNING: If you run this command from your laptop, the laptop needs to be up and connected for the full length - of the experiment. -9. Keep `make rotate` running until the height of the chain reaches 3000 -10. When the rotate script has made two iterations (i.e., all ephemeral nodes have caught up twice) - after height 3000 was reached, stop `make rotate` -11. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine -12. Verify that the data was collected without errors - * at least one blockstore DB for a CometBFT validator - * the Prometheus database from the Prometheus node - * for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s) -13. **Run `make terraform-destroy`** - -Steps 8 to 10 are highly manual at the moment and will be improved in future iterations. - -### Result Extraction - -In order to obtain a latency plot, follow the instructions above for the 200 node experiment, but: - -* The `report.txt` file contains only one experiment -* Therefore, no need for any `for` loops - -As for prometheus, the same method as for the 200 node experiment can be applied. diff --git a/docs/qa/v034/README.md b/docs/qa/v034/README.md deleted file mode 100644 index f3ac53e1d5..0000000000 --- a/docs/qa/v034/README.md +++ /dev/null @@ -1,368 +0,0 @@ ---- -order: 1 -parent: - title: CometBFT Quality Assurance Results for v0.34.x - description: This is a report on the results obtained when running v0.34.x on testnets - order: 2 ---- - -# v0.34.x - From Tendermint Core to CometBFT - -This section reports on the QA process we followed before releasing the first `v0.34.x` version -from our CometBFT repository. - -The changes with respect to the last version of `v0.34.x` -(namely `v0.34.26`, released from the Informal Systems' Tendermint Core fork) -are minimal, and focus on rebranding our fork of Tendermint Core to CometBFT at places -where there is no substantial risk of breaking compatibility -with earlier Tendermint Core versions of `v0.34.x`. - -Indeed, CometBFT versions of `v0.34.x` (`v0.34.27` and subsequent) should fulfill -the following compatibility-related requirements. - -* Operators can easily upgrade a `v0.34.x` version of Tendermint Core to CometBFT. -* Upgrades from Tendermint Core to CometBFT can be uncoordinated for versions of the `v0.34.x` branch.
-* Nodes running CometBFT must be interoperable with those running Tendermint Core in the same chain, - as long as all are running a `v0.34.x` version. - -These QA tests focus on the third bullet, whereas the first two bullets are tested using our _e2e tests_. - -It would be prohibitively time-consuming to test mixed networks of all combinations of existing `v0.34.x` -versions, combined with the CometBFT release candidate under test. -Therefore our testing focuses on the last Tendermint Core version (`v0.34.26`) and the CometBFT release -candidate under test. - -We run the _200 node test_, but not the _rotating node test_. The effort of running the latter -is not justified given the amount and nature of the changes we are testing with respect to the -full QA cycle run previously on `v0.34.x`. -Since the changes to the system's logic are minimal, we are interested in these performance requirements: - -* The CometBFT release candidate under test performs similarly to Tendermint Core (i.e., the baseline) - * when used at scale (i.e., in a large network of CometBFT nodes) - * when used at scale in a mixed network (i.e., some nodes are running CometBFT - and others are running an older Tendermint Core version) - -Therefore we carry out a complete run of the _200-node test_ on the following networks: - -* A homogeneous 200-node testnet, where all nodes are running the CometBFT release candidate under test. -* A mixed network where 1/2 (99 out of 200) of the nodes are running the CometBFT release candidate under test, - and the rest (101 out of 200) are running Tendermint Core `v0.34.26`. -* A mixed network where 1/3 (66 out of 200) of the nodes are running the CometBFT release candidate under test, - and the rest (134 out of 200) are running Tendermint Core `v0.34.26`. -* A mixed network where 2/3 (133 out of 200) of the nodes are running the CometBFT release candidate under test, - and the rest (67 out of 200) are running Tendermint Core `v0.34.26`. - -## Configuration and Results - -In the following sections we provide the results of the _200 node test_. -Each section reports the baseline results (for reference), the homogeneous network scenario (all CometBFT nodes), -and the mixed networks with 1/2, 1/3 and 2/3 of Tendermint Core nodes. - -### Saturation Point - -As the CometBFT release candidate under test has minimal changes -with respect to Tendermint Core `v0.34.26`, other than the rebranding changes, -we can confidently reuse the results from the `v0.34.x` baseline test regarding -the [saturation point](./TMCore.md#finding-the-saturation-point). - -Therefore, we will simply use a load of (`r=200,c=2`) -(see the explanation [here](./TMCore.md#finding-the-saturation-point)) on all experiments. - -We also include the baseline results for quick reference and comparison. - -### Experiments - -On each of the four networks, the test consists of 4 experiments, with the goal of -ensuring the data obtained is consistent across experiments. - -On each of the networks, we pick only one representative run to present and discuss the -results. - -## Examining latencies - -For each network the figures plot the four experiments carried out with the network. -We can see that the latencies follow comparable patterns across all experiments. - -Unique identifiers (UUIDs) for each execution are presented on top of each graph. -We refer to these UUIDs to indicate the representative runs.
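For reference, the load for each of the 90-second experiments reported below can be generated as described in the QA method document. The following is only a sketch: it assumes the `make runload` target and the `ROTATE_*` makefile variables of the qa-infra testnet repository, with values that simply encode the chosen working point (`c=2`, `r=200`, 90 seconds).

```bash
# Sketch of one 90-second load run at the chosen working point (c=2, r=200).
# `make runload` and the ROTATE_* variables come from the qa-infra testnet
# repository described in the method document; the command is repeated once
# per experiment (four times per network in the runs reported below).
make runload ROTATE_CONNECTIONS=2 ROTATE_TX_RATE=200 ROTATE_TOTAL_TIME=90
```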
- -### CometBFT Homogeneous network - -![latencies](./img/homogeneous/all_experiments.png) - -### 1/2 Tendermint Core - 1/2 CometBFT - -![latencies](./img/cmt1tm1/all_experiments.png) - -### 1/3 Tendermint Core - 2/3 CometBFT - -![latencies](./img/cmt2tm1/all_experiments.png) - -### 2/3 Tendermint Core - 1/3 CometBFT - -![latencies_all_tm2_3_cmt1_3](./img/v034_200node_tm2cmt1/all_experiments.png) - -## Prometheus Metrics - -This section reports on the key Prometheus metrics extracted from the following experiments. - -* Baseline results: `v0.34.x`, obtained in October 2022 and reported [here](./TMCore.md). - -* CometBFT homogeneous network: experiment with UUID starting with `be8c`. -* Mixed network, 1/2 Tendermint Core `v0.34.26` and 1/2 running CometBFT: experiment with UUID starting with `04ee`. -* Mixed network, 1/3 Tendermint Core `v0.34.26` and 2/3 running CometBFT: experiment with UUID starting with `fc5e`. -* Mixed network, 2/3 Tendermint Core `v0.34.26` and 1/3 running CometBFT: experiment with UUID starting with `4759`. - -We make explicit comparisons between the baseline and the homogeneous setups, but refrain from -commenting on the mixed network experiments unless they show some exceptional results. - -### Mempool Size - -For each reported experiment we show two graphs. -The first shows the evolution over time of the cumulative number of transactions -inside all full nodes' mempools. - -The second one shows the evolution of the average over all full nodes. - -#### Baseline - -![mempool-cumulative](./img/baseline/mempool_size.png) - -![mempool-avg](./img/baseline/avg_mempool_size.png) - -#### CometBFT Homogeneous network - -The results for the homogeneous network and the baseline are similar in terms of outstanding transactions. - -![mempool-cumulative-homogeneous](./img/homogeneous/mempool_size.png) - -![mempool-avg-homogeneous](./img/homogeneous/avg_mempool_size.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![mempool size](./img/cmt1tm1/mempool_size.png) - -![average mempool size](./img/cmt1tm1/avg_mempool_size.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![mempool size](./img/cmt2tm1/mempool_size.png) - -![average mempool size](./img/cmt2tm1/avg_mempool_size.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![mempool_tm2_3_cmt_1_3](./img/v034_200node_tm2cmt1/mempool_size.png) - -![mempool-avg_tm2_3_cmt_1_3](./img/v034_200node_tm2cmt1/avg_mempool_size.png) - -### Consensus Rounds per Height - -The following graphs show the rounds needed to complete each height and agree on a block. - -A value of `0` shows that only one round was required (with id `0`), and a value of `1` shows that two rounds were required. - -#### Baseline - -We can see that round 1 is reached with a certain frequency. - -![rounds](./img/baseline/rounds.png) - -#### CometBFT Homogeneous network - -Most heights finished in round 0, but some nodes needed to advance to round 1 at various moments, -and a few nodes even needed to advance to round 2 at one point. -This coincides with the time at which we observed the biggest peak in mempool size -on the corresponding plot, shown above.
- -![rounds-homogeneous](./img/homogeneous/rounds.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![rounds](./img/cmt1tm1/rounds.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![rounds](./img/cmt2tm1/rounds.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![rounds-tm2_3_cmt1_3](./img/v034_200node_tm2cmt1/rounds.png) - -### Peers - -The following plots show how many peers a node had throughout the experiment. - -The thick red dashed line represents the moving average over a sliding window of 20 seconds. - -#### Baseline - -The following graph shows that the number of peers was stable throughout the experiment. -Seed nodes typically have a higher number of peers. -The fact that non-seed nodes reach more than 50 peers is due to -[#9548](https://github.com/tendermint/tendermint/issues/9548). - -![peers](./img/baseline/peers.png) - -#### CometBFT Homogeneous network - -The results for the homogeneous network are very similar to the baseline. -The only difference is that the seed nodes seem to lose peers in the middle of the experiment. -However, this cannot be attributed to the differences in the code, which are mainly rebranding. - -![peers-homogeneous](./img/homogeneous/peers.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![peers](./img/cmt1tm1/peers.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![peers](./img/cmt2tm1/peers.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -As in the homogeneous case, there is some variation in the number of peers for some nodes. -These, however, do not affect the average. - -![peers-tm2_3_cmt1_3](./img/v034_200node_tm2cmt1/peers.png) - -### Blocks Produced per Minute, Transactions Processed per Minute - -The following plots show the rate of block production and the rate of transactions delivered, throughout the experiments. - -In both graphs, rates are calculated over a sliding window of 20 seconds. -The thick red dashed lines show the rates' moving averages. - -#### Baseline - -The average number of blocks/minute oscillates between 10 and 40. - -![heights](./img/baseline/block_rate_regular.png) - -The number of transactions/minute tops around 30k. - -![total-txs](./img/baseline/total_txs_rate_regular.png) - -#### CometBFT Homogeneous network - -The plot showing the block production rate shows that the rate oscillates around 20 blocks/minute, -mostly within the same range as the baseline. - -![heights-homogeneous-rate](./img/homogeneous/block_rate_regular.png) - -The plot showing the transaction rate shows that the rate stays around 20000 transactions per minute, -also topping around 30k. - -![txs-homogeneous-rate](./img/homogeneous/total_txs_rate_regular.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![height rate](./img/cmt1tm1/block_rate_regular.png) - -![transaction rate](./img/cmt1tm1/total_txs_rate_regular.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![height rate](./img/cmt2tm1/block_rate_regular.png) - -![transaction rate](./img/cmt2tm1/total_txs_rate_regular.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![height rate](./img/v034_200node_tm2cmt1/block_rate_regular.png) - -![transaction rate](./img/v034_200node_tm2cmt1/total_txs_rate_regular.png) - -### Memory Resident Set Size - -The following graphs show the Resident Set Size (RSS) of all monitored processes and the average value.
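The RSS series shown below were produced from the Prometheus data collected during the experiments, following the extraction procedure in the QA method document. Purely as an illustration of how such a series can be pulled from a locally restored Prometheus server, here is a query sketch; the metric name (the standard Go process collector metric), the server address and the time window are assumptions, not part of the report.

```bash
# Illustrative only: query the average resident set size over an experiment
# window from a locally restored Prometheus server. Adjust the metric name,
# server address and time window to the data you actually collected.
curl -sG 'http://localhost:9090/api/v1/query_range' \
    --data-urlencode 'query=avg(process_resident_memory_bytes)' \
    --data-urlencode 'start=2023-02-08T10:00:00Z' \
    --data-urlencode 'end=2023-02-08T12:00:00Z' \
    --data-urlencode 'step=20s'
```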
- -#### Baseline - -![rss](./img/baseline/memory.png) - -![rss-avg](./img/baseline/avg_memory.png) - -#### CometBFT Homogeneous network - -This is the plot for the homogeneous network, which is slightly more stable than the baseline over -the time of the experiment. - -![rss-homogeneous](./img/homogeneous/memory.png) - -And this is the average plot. It oscillates around 560 MiB, which is noticeably lower than the baseline. - -![rss-avg-homogeneous](./img/homogeneous/avg_memory.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![rss](./img/cmt1tm1/memory.png) - -![rss average](./img/cmt1tm1/avg_memory.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![rss](./img/cmt2tm1/memory.png) - -![rss average](./img/cmt2tm1/avg_memory.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![rss](./img/v034_200node_tm2cmt1/memory.png) - -![rss average](./img/v034_200node_tm2cmt1/avg_memory.png) - -### CPU utilization - -The following graphs show the `load1` of nodes, as typically shown in the first line of the Unix `top` -command, and their average value. - -#### Baseline - -![load1](./img/baseline/cpu.png) - -![load1-avg](./img/baseline/avg_cpu.png) - -#### CometBFT Homogeneous network - -The load in the homogeneous network is, similarly to the baseline case, below 5 and, therefore, normal. - -![load1-homogeneous](./img/homogeneous/cpu.png) - -As expected, the average plot also looks similar. - -![load1-homogeneous-avg](./img/homogeneous/avg_cpu.png) - -#### 1/2 Tendermint Core - 1/2 CometBFT - -![load1](./img/cmt1tm1/cpu.png) - -![average load1](./img/cmt1tm1/avg_cpu.png) - -#### 1/3 Tendermint Core - 2/3 CometBFT - -![load1](./img/cmt2tm1/cpu.png) - -![average load1](./img/cmt2tm1/avg_cpu.png) - -#### 2/3 Tendermint Core - 1/3 CometBFT - -![load1](./img/v034_200node_tm2cmt1/cpu.png) - -![average load1](./img/v034_200node_tm2cmt1/avg_cpu.png) - -## Test Results - -The comparison of the baseline results and the homogeneous case shows that both scenarios had similar numbers and are therefore equivalent. - -The mixed-node cases show that networks operate normally with a mix of compatible Tendermint Core and CometBFT versions. -Although not the main goal, a comparison of metric numbers between the homogeneous case and the baseline scenarios shows similar results, and therefore we can conclude that mixing compatible Tendermint Core and CometBFT versions introduces no performance degradation. - -A conclusion of these tests is shown in the following table, along with the commit versions used in the experiments. - -| Scenario | Date | Version | Result | -|--|--|--|--| -|CometBFT Homogeneous network | 2023-02-08 | 3b783434f26b0e87994e6a77c5411927aad9ce3f | Pass | -|1/2 Tendermint Core
1/2 CometBFT | 2023-02-14 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass| -|1/3 Tendermint Core
2/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass| -|2/3 Tendermint Core
1/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass | diff --git a/docs/qa/v034/TMCore.md b/docs/qa/v034/TMCore.md deleted file mode 100644 index 5fc1225d97..0000000000 --- a/docs/qa/v034/TMCore.md +++ /dev/null @@ -1,277 +0,0 @@ ---- -order: 1 -parent: - title: Tendermint Core Quality Assurance Results for v0.34.x - description: This is a report on the results obtained when running v0.34.x on testnets - order: 2 ---- - -# Tendermint Core v0.34.x - -## 200 Node Testnet - -### Finding the Saturation Point - -The first goal when examining the results of the tests is identifying the saturation point. -The saturation point is a setup with a transaction load big enough to prevent the testnet -from being stable: the load runner tries to produce slightly more transactions than can -be processed by the testnet. - -The following table summarizes the results for v0.34.x, for the different experiments -(extracted from file [`v034_report_tabbed.txt`](./img/v034_report_tabbed.txt)). - -The X axis of this table is `c`, the number of connections created by the load runner process to the target node. -The Y axis of this table is `r`, the rate or number of transactions issued per second. - -| | c=1 | c=2 | c=4 | -| :--- | ----: | ----: | ----: | -| r=25 | 2225 | 4450 | 8900 | -| r=50 | 4450 | 8900 | 17800 | -| r=100 | 8900 | 17800 | 35600 | -| r=200 | 17800 | 35600 | 38660 | - -The table shows the number of 1024-byte-long transactions that were produced by the load runner, -and processed by Tendermint Core, during the 90 seconds of the experiment's duration. -Each cell in the table refers to an experiment with a particular number of websocket connections (`c`) -to a chosen validator, and the number of transactions per second that the load runner -tries to produce (`r`). Note that the overall load that the tool attempts to generate is $c \cdot r$. - -We can see that the saturation point is beyond the diagonal that spans cells - -* `r=200,c=2` -* `r=100,c=4` - -given that the total number of transactions should be close to the product rate X the number of connections x experiment time. - -All experiments below the saturation diagonal (`r=200,c=4`) have in common that the total -number of transactions processed is noticeably less than the product $c \cdot r \cdot 89$ (89 seconds, since the last batch never gets sent), -which is the expected number of transactions when the system is able to deal well with the -load. -With (`r=200,c=4`), we obtained 38660 whereas the theoretical number of transactions should -have been $200 \cdot 4 \cdot 89 = 71200$. - -At this point, we chose an experiment at the limit of the saturation diagonal, -in order to further study the performance of this release. -**The chosen experiment is (`r=200,c=2`)**. - -This is a plot of the CPU load (average over 1 minute, as output by `top`) of the load runner for (`r=200,c=2`), -where we can see that the load stays close to 0 most of the time. - -![load-load-runner](./img/v034_r200c2_load-runner.png) - -### Examining latencies - -The method described [here](../method.md) allows us to plot the latencies of transactions -for all experiments. - -![all-latencies](./img/v034_200node_latencies.png) - -As we can see, even the experiments beyond the saturation diagonal managed to keep -transaction latency stable (i.e. not constantly increasing). 
-Our interpretation for this is that contention within Tendermint Core was propagated, -via the websockets, to the load runner, -hence the load runner could not produce the target load, but a fraction of it. - -Further examination of the Prometheus data (see below) showed that the mempool contained many transactions -at steady state, but did not grow much, quickly returning to this steady state whenever it did. This demonstrates -that the Tendermint Core network was able to process transactions at least as quickly as they -were submitted to the mempool. Finally, the test script made sure that, at the end of an experiment, the -mempool was empty so that all transactions submitted to the chain were processed. - -Note also that the number of points present in the plot appears to be much less than expected given the -number of transactions in each experiment, particularly close to or above the saturation diagonal. -This is a visual effect of the plot; what appear to be points in the plot are actually potentially huge -clusters of points. To corroborate this, we have zoomed in on the plot above by setting (carefully chosen) -tiny axis intervals. The cluster shown below looks like a single point in the plot above. - -![all-latencies-zoomed](./img/v034_200node_latencies_zoomed.png) - -The plot of latencies can be used as a baseline to compare with other releases. - -The following plot summarizes average latencies versus overall throughput -across different numbers of WebSocket connections to the node into which -transactions are being loaded. - -![latency-vs-throughput](./img/v034_latency_throughput.png) - -### Prometheus Metrics on the Chosen Experiment - -As mentioned [above](#finding-the-saturation-point), the chosen experiment is `r=200,c=2`. -This section further examines key metrics for this experiment extracted from Prometheus data. - -#### Mempool Size - -The mempool size, a count of the number of transactions in the mempool, was shown to be stable and homogeneous -at all full nodes. It did not exhibit any unconstrained growth. -The plot below shows the evolution over time of the cumulative number of transactions inside all full nodes' mempools -at a given time. -The two spikes that can be observed correspond to a period where consensus instances proceeded beyond the initial round -at some nodes. - -![mempool-cumulative](./img/v034_r200c2_mempool_size.png) - -The plot below shows the evolution of the average over all full nodes, which oscillates between 1500 and 2000 -outstanding transactions. - -![mempool-avg](./img/v034_r200c2_mempool_size_avg.png) - -The peaks observed coincide with the moments when some nodes proceeded beyond the initial round of consensus (see below). - -#### Peers - -The number of peers was stable at all nodes. -It was higher for the seed nodes (around 140) than for the rest (between 21 and 74). -The fact that non-seed nodes reach more than 50 peers is due to -[#9548](https://github.com/tendermint/tendermint/issues/9548). - -![peers](./img/v034_r200c2_peers.png) - -#### Consensus Rounds per Height - -Most nodes used only round 0 for most heights, but some nodes needed to advance to round 1 for some heights. - -![rounds](./img/v034_r200c2_rounds.png) - -#### Blocks Produced per Minute, Transactions Processed per Minute - -The blocks produced per minute are the slope of this plot. - -![heights](./img/v034_r200c2_heights.png) - -Over a period of 2 minutes, the height goes from 530 to 569. -This results in an average of 19.5 blocks produced per minute. - -The transactions processed per minute are the slope of this plot.
- -![total-txs](./img/v034_r200c2_total-txs.png) - -Over a period of 2 minutes, the total goes from 64525 to 100125 transactions, -resulting in 17800 transactions per minute. However, we can see in the plot that -all transactions in the load are processed long before the two minutes. -If we adjust the time window when transactions are processed (approx. 105 seconds), -we obtain 20343 transactions per minute. - -#### Memory Resident Set Size - -Resident Set Size of all monitored processes is plotted below. - -![rss](./img/v034_r200c2_rss.png) - -The average over all processes oscillates around 1.2 GiB and does not demonstrate unconstrained growth. - -![rss-avg](./img/v034_r200c2_rss_avg.png) - -#### CPU utilization - -The best metric from Prometheus to gauge CPU utilization in a Unix machine is `load1`, -as it usually appears in the -[output of `top`](https://www.digitalocean.com/community/tutorials/load-average-in-linux). - -![load1](./img/v034_r200c2_load1.png) - -It is contained in most cases below 5, which is generally considered acceptable load. - -### Test Result - -**Result: N/A** (v0.34.x is the baseline) - -Date: 2022-10-14 - -Version: 3ec6e424d6ae4c96867c2dcf8310572156068bb6 - -## Rotating Node Testnet - -For this testnet, we will use a load that can safely be considered below the saturation -point for the size of this testnet (between 13 and 38 full nodes): `c=4,r=800`. - -N.B.: The version of CometBFT used for these tests is affected by #9539. -However, the reduced load that reaches the mempools is orthogonal to functionality -we are focusing on here. - -### Latencies - -The plot of all latencies can be seen in the following plot. - -![rotating-all-latencies](./img/v034_rotating_latencies.png) - -We can observe there are some very high latencies, towards the end of the test. -Upon suspicion that they are duplicate transactions, we examined the latencies -raw file and discovered there are more than 100K duplicate transactions. - -The following plot shows the latencies file where all duplicate transactions have -been removed, i.e., only the first occurrence of a duplicate transaction is kept. - -![rotating-all-latencies-uniq](./img/v034_rotating_latencies_uniq.png) - -This problem, existing in `v0.34.x`, will need to be addressed, perhaps in the same way -we addressed it when running the 200 node test with high loads: increasing the `cache_size` -configuration parameter. - -### Prometheus Metrics - -The set of metrics shown here are less than for the 200 node experiment. -We are only interested in those for which the catch-up process (blocksync) may have an impact. - -#### Blocks and Transactions per minute - -Just as shown for the 200 node test, the blocks produced per minute are the gradient of this plot. - -![rotating-heights](./img/v034_rotating_heights.png) - -Over a period of 5229 seconds, the height goes from 2 to 3638. -This results in an average of 41 blocks produced per minute. - -The following plot shows only the heights reported by ephemeral nodes -(which are also included in the plot above). Note that the _height_ metric -is only showed _once the node has switched to consensus_, hence the gaps -when nodes are killed, wiped out, started from scratch, and catching up. - -![rotating-heights-ephe](./img/v034_rotating_heights_ephe.png) - -The transactions processed per minute are the gradient of this plot. 
- -![rotating-total-txs](./img/v034_rotating_total-txs.png) - -The small lines we see periodically close to `y=0` are the transactions that -ephemeral nodes start processing when they are caught up. - -Over a period of 5229 seconds, the total goes from 0 to 387697 transactions, -resulting in 4449 transactions per minute. We can see some abrupt changes in -the plot's gradient. This will need to be investigated. - -#### Peers - -The plot below shows the evolution in peers throughout the experiment. -The periodic changes observed are due to the ephemeral nodes being stopped, -wiped out, and recreated. - -![rotating-peers](./img/v034_rotating_peers.png) - -The validators' plots are concentrated at the higher part of the graph, whereas the ephemeral nodes -are mostly at the lower part. - -#### Memory Resident Set Size - -The average Resident Set Size (RSS) over all processes seems stable, and slightly growing toward the end. -This might be related to the increase in transaction load observed above. - -![rotating-rss-avg](./img/v034_rotating_rss_avg.png) - -The memory taken by the validators and the ephemeral nodes (when they are up) is comparable. - -#### CPU utilization - -The plot shows metric `load1` for all nodes. - -![rotating-load1](./img/v034_rotating_load1.png) - -It is contained under 5 most of the time, which is considered normal load. - -The purple line, which follows a different pattern, is the validator receiving all -transactions, via RPC, from the load runner process. - -### Test Result - -**Result: N/A** - -Date: 2022-10-10 - -Version: a28c987f5a604ff66b515dd415270063e6fb069d diff --git a/docs/qa/v034/img/baseline/avg_cpu.png b/docs/qa/v034/img/baseline/avg_cpu.png deleted file mode 100644 index 622456df64..0000000000 Binary files a/docs/qa/v034/img/baseline/avg_cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/avg_memory.png b/docs/qa/v034/img/baseline/avg_memory.png deleted file mode 100644 index 55f213f5e1..0000000000 Binary files a/docs/qa/v034/img/baseline/avg_memory.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/avg_mempool_size.png b/docs/qa/v034/img/baseline/avg_mempool_size.png deleted file mode 100644 index ec74072950..0000000000 Binary files a/docs/qa/v034/img/baseline/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/block_rate_regular.png b/docs/qa/v034/img/baseline/block_rate_regular.png deleted file mode 100644 index bdc7aa28d7..0000000000 Binary files a/docs/qa/v034/img/baseline/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/cpu.png b/docs/qa/v034/img/baseline/cpu.png deleted file mode 100644 index ac4fc2695f..0000000000 Binary files a/docs/qa/v034/img/baseline/cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/memory.png b/docs/qa/v034/img/baseline/memory.png deleted file mode 100644 index 17336bd1b9..0000000000 Binary files a/docs/qa/v034/img/baseline/memory.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/mempool_size.png b/docs/qa/v034/img/baseline/mempool_size.png deleted file mode 100644 index fafba68c1a..0000000000 Binary files a/docs/qa/v034/img/baseline/mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/peers.png b/docs/qa/v034/img/baseline/peers.png deleted file mode 100644 index 05a288a356..0000000000 Binary files a/docs/qa/v034/img/baseline/peers.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/rounds.png b/docs/qa/v034/img/baseline/rounds.png deleted file mode 100644 index 
79f3348a25..0000000000 Binary files a/docs/qa/v034/img/baseline/rounds.png and /dev/null differ diff --git a/docs/qa/v034/img/baseline/total_txs_rate_regular.png b/docs/qa/v034/img/baseline/total_txs_rate_regular.png deleted file mode 100644 index d80bef12c0..0000000000 Binary files a/docs/qa/v034/img/baseline/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/all_experiments.png b/docs/qa/v034/img/cmt1tm1/all_experiments.png deleted file mode 100644 index 4dc857edca..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/all_experiments.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/avg_cpu.png b/docs/qa/v034/img/cmt1tm1/avg_cpu.png deleted file mode 100644 index cabd273a55..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/avg_cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/avg_memory.png b/docs/qa/v034/img/cmt1tm1/avg_memory.png deleted file mode 100644 index c8e5761772..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/avg_memory.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/avg_mempool_size.png b/docs/qa/v034/img/cmt1tm1/avg_mempool_size.png deleted file mode 100644 index b41199dc00..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/block_rate_regular.png b/docs/qa/v034/img/cmt1tm1/block_rate_regular.png deleted file mode 100644 index 9b3a0b8276..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/cpu.png b/docs/qa/v034/img/cmt1tm1/cpu.png deleted file mode 100644 index cd5acdeb29..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/memory.png b/docs/qa/v034/img/cmt1tm1/memory.png deleted file mode 100644 index 6f56b3ccf1..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/memory.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/mempool_size.png b/docs/qa/v034/img/cmt1tm1/mempool_size.png deleted file mode 100644 index 862a0bdd49..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/peers.png b/docs/qa/v034/img/cmt1tm1/peers.png deleted file mode 100644 index 737cf3dffb..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/peers.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/rounds.png b/docs/qa/v034/img/cmt1tm1/rounds.png deleted file mode 100644 index 17884813af..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/rounds.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt1tm1/total_txs_rate_regular.png b/docs/qa/v034/img/cmt1tm1/total_txs_rate_regular.png deleted file mode 100644 index 8b0cc0d426..0000000000 Binary files a/docs/qa/v034/img/cmt1tm1/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/all_experiments.png b/docs/qa/v034/img/cmt2tm1/all_experiments.png deleted file mode 100644 index 4e6f73d355..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/all_experiments.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/avg_cpu.png b/docs/qa/v034/img/cmt2tm1/avg_cpu.png deleted file mode 100644 index 92fea31bd1..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/avg_cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/avg_memory.png b/docs/qa/v034/img/cmt2tm1/avg_memory.png deleted file mode 100644 index f362798d8f..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/avg_memory.png and /dev/null differ diff 
--git a/docs/qa/v034/img/cmt2tm1/avg_mempool_size.png b/docs/qa/v034/img/cmt2tm1/avg_mempool_size.png deleted file mode 100644 index b73e577b75..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/block_rate_regular.png b/docs/qa/v034/img/cmt2tm1/block_rate_regular.png deleted file mode 100644 index 5fc7a5560b..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/cpu.png b/docs/qa/v034/img/cmt2tm1/cpu.png deleted file mode 100644 index 15df58abbe..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/memory.png b/docs/qa/v034/img/cmt2tm1/memory.png deleted file mode 100644 index b0feab1074..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/memory.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/mempool_size.png b/docs/qa/v034/img/cmt2tm1/mempool_size.png deleted file mode 100644 index b3a1514f92..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/peers.png b/docs/qa/v034/img/cmt2tm1/peers.png deleted file mode 100644 index 558d4c129e..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/peers.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/rounds.png b/docs/qa/v034/img/cmt2tm1/rounds.png deleted file mode 100644 index 3c22a5cf30..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/rounds.png and /dev/null differ diff --git a/docs/qa/v034/img/cmt2tm1/total_txs_rate_regular.png b/docs/qa/v034/img/cmt2tm1/total_txs_rate_regular.png deleted file mode 100644 index ae98df2176..0000000000 Binary files a/docs/qa/v034/img/cmt2tm1/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/all_experiments.png b/docs/qa/v034/img/homogeneous/all_experiments.png deleted file mode 100644 index d8768f6a5d..0000000000 Binary files a/docs/qa/v034/img/homogeneous/all_experiments.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/avg_cpu.png b/docs/qa/v034/img/homogeneous/avg_cpu.png deleted file mode 100644 index 7df188951f..0000000000 Binary files a/docs/qa/v034/img/homogeneous/avg_cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/avg_memory.png b/docs/qa/v034/img/homogeneous/avg_memory.png deleted file mode 100644 index e800cbce22..0000000000 Binary files a/docs/qa/v034/img/homogeneous/avg_memory.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/avg_mempool_size.png b/docs/qa/v034/img/homogeneous/avg_mempool_size.png deleted file mode 100644 index beb323e646..0000000000 Binary files a/docs/qa/v034/img/homogeneous/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/block_rate_regular.png b/docs/qa/v034/img/homogeneous/block_rate_regular.png deleted file mode 100644 index 2a71ab70df..0000000000 Binary files a/docs/qa/v034/img/homogeneous/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/cpu.png b/docs/qa/v034/img/homogeneous/cpu.png deleted file mode 100644 index 8e8c9227af..0000000000 Binary files a/docs/qa/v034/img/homogeneous/cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/memory.png b/docs/qa/v034/img/homogeneous/memory.png deleted file mode 100644 index 190c622a34..0000000000 Binary files a/docs/qa/v034/img/homogeneous/memory.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/mempool_size.png 
b/docs/qa/v034/img/homogeneous/mempool_size.png deleted file mode 100644 index ec1c79a242..0000000000 Binary files a/docs/qa/v034/img/homogeneous/mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/peers.png b/docs/qa/v034/img/homogeneous/peers.png deleted file mode 100644 index 3c8b0a2e0d..0000000000 Binary files a/docs/qa/v034/img/homogeneous/peers.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/rounds.png b/docs/qa/v034/img/homogeneous/rounds.png deleted file mode 100644 index 660f31d939..0000000000 Binary files a/docs/qa/v034/img/homogeneous/rounds.png and /dev/null differ diff --git a/docs/qa/v034/img/homogeneous/total_txs_rate_regular.png b/docs/qa/v034/img/homogeneous/total_txs_rate_regular.png deleted file mode 100644 index a9025b6665..0000000000 Binary files a/docs/qa/v034/img/homogeneous/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_latencies.png b/docs/qa/v034/img/v034_200node_latencies.png deleted file mode 100644 index afd1060caf..0000000000 Binary files a/docs/qa/v034/img/v034_200node_latencies.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_latencies_zoomed.png b/docs/qa/v034/img/v034_200node_latencies_zoomed.png deleted file mode 100644 index 1ff9364422..0000000000 Binary files a/docs/qa/v034/img/v034_200node_latencies_zoomed.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/all_experiments.png b/docs/qa/v034/img/v034_200node_tm2cmt1/all_experiments.png deleted file mode 100644 index e91a87effd..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/all_experiments.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/avg_cpu.png b/docs/qa/v034/img/v034_200node_tm2cmt1/avg_cpu.png deleted file mode 100644 index a1b0ef79e4..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/avg_cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/avg_memory.png b/docs/qa/v034/img/v034_200node_tm2cmt1/avg_memory.png deleted file mode 100644 index f9d9b99334..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/avg_memory.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/avg_mempool_size.png b/docs/qa/v034/img/v034_200node_tm2cmt1/avg_mempool_size.png deleted file mode 100644 index c2b896060a..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/avg_mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/block_rate_regular.png b/docs/qa/v034/img/v034_200node_tm2cmt1/block_rate_regular.png deleted file mode 100644 index 5a5417bdf3..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/block_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/c2r200_merged.png b/docs/qa/v034/img/v034_200node_tm2cmt1/c2r200_merged.png deleted file mode 100644 index 45de9ce72d..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/c2r200_merged.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/cpu.png b/docs/qa/v034/img/v034_200node_tm2cmt1/cpu.png deleted file mode 100644 index eabfa96617..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/cpu.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/memory.png b/docs/qa/v034/img/v034_200node_tm2cmt1/memory.png deleted file mode 100644 index 70014c1f96..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/memory.png and /dev/null differ diff --git 
a/docs/qa/v034/img/v034_200node_tm2cmt1/mempool_size.png b/docs/qa/v034/img/v034_200node_tm2cmt1/mempool_size.png deleted file mode 100644 index 5f4c44b2a6..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/peers.png b/docs/qa/v034/img/v034_200node_tm2cmt1/peers.png deleted file mode 100644 index c35c84675c..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/peers.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/rounds.png b/docs/qa/v034/img/v034_200node_tm2cmt1/rounds.png deleted file mode 100644 index 7d1034bcbc..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/rounds.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_200node_tm2cmt1/total_txs_rate_regular.png b/docs/qa/v034/img/v034_200node_tm2cmt1/total_txs_rate_regular.png deleted file mode 100644 index 2e8a40af6a..0000000000 Binary files a/docs/qa/v034/img/v034_200node_tm2cmt1/total_txs_rate_regular.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_latency_throughput.png b/docs/qa/v034/img/v034_latency_throughput.png deleted file mode 100644 index 3674fe47b4..0000000000 Binary files a/docs/qa/v034/img/v034_latency_throughput.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_heights.png b/docs/qa/v034/img/v034_r200c2_heights.png deleted file mode 100644 index 11f3bba432..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_heights.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_load-runner.png b/docs/qa/v034/img/v034_r200c2_load-runner.png deleted file mode 100644 index 70211b0d21..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_load-runner.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_load1.png b/docs/qa/v034/img/v034_r200c2_load1.png deleted file mode 100644 index 11012844dc..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_load1.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_mempool_size.png b/docs/qa/v034/img/v034_r200c2_mempool_size.png deleted file mode 100644 index c5d690200a..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_mempool_size.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_mempool_size_avg.png b/docs/qa/v034/img/v034_r200c2_mempool_size_avg.png deleted file mode 100644 index bda399fe5d..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_mempool_size_avg.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_peers.png b/docs/qa/v034/img/v034_r200c2_peers.png deleted file mode 100644 index a0aea7ada3..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_peers.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_rounds.png b/docs/qa/v034/img/v034_r200c2_rounds.png deleted file mode 100644 index 215be100de..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_rounds.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_rss.png b/docs/qa/v034/img/v034_r200c2_rss.png deleted file mode 100644 index 6d14dced0b..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_rss.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_rss_avg.png b/docs/qa/v034/img/v034_r200c2_rss_avg.png deleted file mode 100644 index 8dec67da29..0000000000 Binary files a/docs/qa/v034/img/v034_r200c2_rss_avg.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_r200c2_total-txs.png b/docs/qa/v034/img/v034_r200c2_total-txs.png deleted file mode 100644 index 177d5f1c31..0000000000 Binary files 
a/docs/qa/v034/img/v034_r200c2_total-txs.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_report_tabbed.txt b/docs/qa/v034/img/v034_report_tabbed.txt deleted file mode 100644 index 2514954743..0000000000 --- a/docs/qa/v034/img/v034_report_tabbed.txt +++ /dev/null @@ -1,52 +0,0 @@ -Experiment ID: 3d5cf4ef-1a1a-4b46-aa2d-da5643d2e81e │Experiment ID: 80e472ec-13a1-4772-a827-3b0c907fb51d │Experiment ID: 07aca6cf-c5a4-4696-988f-e3270fc6333b - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 25 │ Rate: 25 │ Rate: 25 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 2225 │ Total Valid Tx: 4450 │ Total Valid Tx: 8900 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 599.404362ms │ Minimum Latency: 448.145181ms │ Minimum Latency: 412.485729ms - Maximum Latency: 3.539686885s │ Maximum Latency: 3.237392049s │ Maximum Latency: 12.026665368s - Average Latency: 1.441485349s │ Average Latency: 1.441267946s │ Average Latency: 2.150192457s - Standard Deviation: 541.049869ms │ Standard Deviation: 525.040007ms │ Standard Deviation: 2.233852478s - │ │ -Experiment ID: 953dc544-dd40-40e8-8712-20c34c3ce45e │Experiment ID: d31fc258-16e7-45cd-9dc8-13ab87bc0b0a │Experiment ID: 15d90a7e-b941-42f4-b411-2f15f857739e - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 50 │ Rate: 50 │ Rate: 50 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 4450 │ Total Valid Tx: 8900 │ Total Valid Tx: 17800 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 482.046942ms │ Minimum Latency: 435.458913ms │ Minimum Latency: 510.746448ms - Maximum Latency: 3.761483455s │ Maximum Latency: 7.175583584s │ Maximum Latency: 6.551497882s - Average Latency: 1.450408183s │ Average Latency: 1.681673116s │ Average Latency: 1.738083875s - Standard Deviation: 587.560056ms │ Standard Deviation: 1.147902047s │ Standard Deviation: 943.46522ms - │ │ -Experiment ID: 9a0b9980-9ce6-4db5-a80a-65ca70294b87 │Experiment ID: df8fa4f4-80af-4ded-8a28-356d15018b43 │Experiment ID: d0e41c2c-89c0-4f38-8e34-ca07adae593a - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 100 │ Rate: 100 │ Rate: 100 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 8900 │ Total Valid Tx: 17800 │ Total Valid Tx: 35600 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 477.417219ms │ Minimum Latency: 564.29247ms │ Minimum Latency: 840.71089ms - Maximum Latency: 6.63744785s │ Maximum Latency: 6.988553219s │ Maximum Latency: 9.555312398s - Average Latency: 1.561216103s │ Average Latency: 1.76419063s │ Average Latency: 3.200941683s - Standard Deviation: 1.011333552s │ Standard Deviation: 1.068459423s │ Standard Deviation: 1.732346601s - │ │ -Experiment ID: 493df3ee-4a36-4bce-80f8-6d65da66beda │Experiment ID: 13060525-f04f-46f6-8ade-286684b2fe50 │Experiment ID: 1777cbd2-8c96-42e4-9ec7-9b21f2225e4d - │ │ - Connections: 1 │ Connections: 2 │ Connections: 4 - Rate: 200 │ Rate: 200 │ Rate: 200 - Size: 1024 │ Size: 1024 │ Size: 1024 - │ │ - Total Valid Tx: 17800 │ Total Valid Tx: 35600 │ Total Valid Tx: 38660 - Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0 - Minimum Latency: 493.705261ms │ Minimum Latency: 955.090573ms │ Minimum Latency: 1.9485821s - Maximum Latency: 7.440921872s │ Maximum Latency: 10.086673491s │ Maximum Latency: 17.73103976s - Average Latency: 1.875510582s │ Average Latency: 
3.438130099s │ Average Latency: 8.143862237s - Standard Deviation: 1.304336995s │ Standard Deviation: 1.966391574s │ Standard Deviation: 3.943140002s - diff --git a/docs/qa/v034/img/v034_rotating_heights.png b/docs/qa/v034/img/v034_rotating_heights.png deleted file mode 100644 index 47913c282f..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_heights.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_heights_ephe.png b/docs/qa/v034/img/v034_rotating_heights_ephe.png deleted file mode 100644 index 981b93d6c4..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_heights_ephe.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_latencies.png b/docs/qa/v034/img/v034_rotating_latencies.png deleted file mode 100644 index f0a54ed5b6..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_latencies.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_latencies_uniq.png b/docs/qa/v034/img/v034_rotating_latencies_uniq.png deleted file mode 100644 index e5d694a16e..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_latencies_uniq.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_load1.png b/docs/qa/v034/img/v034_rotating_load1.png deleted file mode 100644 index e9c385b85e..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_load1.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_peers.png b/docs/qa/v034/img/v034_rotating_peers.png deleted file mode 100644 index ab5c8732d3..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_peers.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_rss_avg.png b/docs/qa/v034/img/v034_rotating_rss_avg.png deleted file mode 100644 index 9a4167320c..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_rss_avg.png and /dev/null differ diff --git a/docs/qa/v034/img/v034_rotating_total-txs.png b/docs/qa/v034/img/v034_rotating_total-txs.png deleted file mode 100644 index 1ce5f47e9b..0000000000 Binary files a/docs/qa/v034/img/v034_rotating_total-txs.png and /dev/null differ diff --git a/docs/tools/README.md b/docs/tools/README.md deleted file mode 100644 index de29e17f12..0000000000 --- a/docs/tools/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -order: 1 -parent: - title: Tools - order: 6 ---- - -# Overview - -CometBFT has some tools that are associated with it for: - -- [Debugging](./debugging.md) -- [Benchmarking](#benchmarking) - -## Benchmarking - -- <https://github.com/informalsystems/tm-load-test> - -`tm-load-test` is a distributed load testing tool (and framework) for load -testing CometBFT networks. diff --git a/docs/tools/debugging.md b/docs/tools/debugging.md deleted file mode 100644 index 69449a93db..0000000000 --- a/docs/tools/debugging.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -order: 1 ---- - -# Debugging - -## CometBFT debug kill - -CometBFT comes with a `debug` sub-command that allows you to kill a live -CometBFT process while collecting useful information in a compressed archive. -The information includes the configuration used, consensus state, network -state, the node's status, the WAL, and even the stack trace of the process -before exit. These files can be useful to examine when debugging a faulty -CometBFT process. - -```bash -cometbft debug kill <pid> </path/to/out.zip> --home=<home dir> -``` - -will write debug info into a compressed archive.
The archive will contain the -following: - -```sh -├── config.toml -├── consensus_state.json -├── net_info.json -├── stacktrace.out -├── status.json -└── wal -``` - -Under the hood, `debug kill` fetches info from `/status`, `/net_info`, and -`/dump_consensus_state` HTTP endpoints, and kills the process with `-6`, which -catches the go-routine dump. - -## CometBFT debug dump - -Also, the `debug dump` sub-command allows you to dump debugging data into -compressed archives at a regular interval. These archives contain the goroutine -and heap profiles in addition to the consensus state, network info, node -status, and even the WAL. - -```bash -cometbft debug dump </path/to/out> --home=<home dir> -``` - -will perform similarly to `kill`, except it only polls the node and -dumps debugging data every `frequency` seconds to a compressed archive under a -given destination directory. Each archive will contain: - -```sh -├── consensus_state.json -├── goroutine.out -├── heap.out -├── net_info.json -├── status.json -└── wal -``` - -Note: goroutine.out and heap.out will only be written if a profile address is -provided and is operational. This command is blocking and will log any error. diff --git a/docs/tutorials/README.md b/docs/tutorials/README.md deleted file mode 100644 index 8a7fda4ca1..0000000000 --- a/docs/tutorials/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -order: false -parent: - order: 2 ---- - -# Guides - -- [Creating a built-in application in Go](./go-built-in.md) -- [Creating an external application in Go](./go.md)
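
The deleted `docs/tools/debugging.md` above states that `debug kill` gathers its data from the node's `/status`, `/net_info`, and `/dump_consensus_state` RPC endpoints before stopping the process with signal `-6`. The sketch below reproduces that collection step by hand; the RPC address (`localhost:26657` is the usual default), the output paths, and the commented-out `kill` line are assumptions for illustration, not taken from the removed docs.

```bash
#!/usr/bin/env bash
# Minimal sketch of the data collection that `cometbft debug kill` performs,
# based on the RPC endpoints named in the deleted docs/tools/debugging.md.
# Assumes the node's RPC server listens on localhost:26657 (the common default).
set -euo pipefail

RPC="http://localhost:26657"   # assumed RPC listen address
OUT="$(mktemp -d)"             # scratch directory for the collected files

curl -fsS "$RPC/status"               > "$OUT/status.json"
curl -fsS "$RPC/net_info"             > "$OUT/net_info.json"
curl -fsS "$RPC/dump_consensus_state" > "$OUT/consensus_state.json"

# `debug kill` also copies config.toml and the WAL from the node's home
# directory and sends SIGABRT (-6) to capture a goroutine dump. Only do that
# against a node you are willing to stop, e.g.:
#   kill -6 "$(pgrep -x cometbft)"

tar -czf cometbft-debug.tar.gz -C "$OUT" .
echo "wrote cometbft-debug.tar.gz"
```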
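The same deleted page notes that `debug dump` polls the node at a regular interval and only writes `goroutine.out` and `heap.out` when a profiling address is configured and reachable. A rough equivalent of that polling loop is sketched below using Go's standard `net/http/pprof` endpoints; the profiling address and the 30-second interval are assumed values, and the real command additionally bundles the consensus state, network info, node status, and WAL into each archive.

```bash
#!/usr/bin/env bash
# Rough sketch of the periodic profile capture behind `cometbft debug dump`,
# using Go's standard net/http/pprof endpoints. The profiling address and the
# 30-second interval are assumptions for illustration only.
set -euo pipefail

PPROF="http://localhost:6060"  # assumed profiling (pprof) listen address
DEST="./debug-dumps"
mkdir -p "$DEST"

while true; do
  ts="$(date +%Y%m%dT%H%M%S)"
  curl -fsS "$PPROF/debug/pprof/goroutine?debug=2" > "$DEST/goroutine-$ts.out"
  curl -fsS "$PPROF/debug/pprof/heap"              > "$DEST/heap-$ts.out"
  sleep 30
done
```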