Consensus-Full-Nodes - resources usage issue #2935
Comments
Thanks for the issue! A few questions:
I think we ran into this before. Because of all of the IBC memo spam on Mocha, the tx-index will eat gobs of memory. Was the tx-index configured to be on, @jrmanes?
hey guys!
This happens for every node (see the attached image).
We could also see it in Arabica. The problem with reproducing it in other chains is that it mostly happens when the chain already holds a lot of data, so we cannot reproduce it easily in Robusta, for example.
I would say yes, we have it defined here; please let us know if there is something we can tweak. The main problem I see is that this kind of issue is hard to detect in Robusta: we need a chain that already has a lot of data and a node connected to sync it before we can see this scenario.
Based on the lines you linked, it looks like the tx indexer is enabled and set to `kv`.
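For reference, the indexer discussed here is controlled by the `[tx_index]` section of the node's `config.toml`. A minimal sketch, following Tendermint/CometBFT conventions (the value shown is illustrative, not the project's recommended setting):

```toml
[tx_index]
# "kv" (the default) indexes every transaction in an embedded key-value
# store, which is the source of the memory growth discussed above.
# "null" disables tx indexing entirely, trading tx queries for memory.
indexer = "null"
```

Switching to `null` would sidestep the memory pressure, at the cost of losing transaction-by-hash queries on that node.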
If this is purely related to the kv indexer, can we perhaps close this issue and open a new one to improve the kv indexer?
We should be able to close this issue once we can run v2 in production, as that includes a new version of the kv indexer that should remedy the massive amount of memory used by the existing kv store.
Nina backported celestiaorg/celestia-core#1405 to celestia-core v1.38.0-tm-v0.34.29, which was released in celestia-app v1.13.0. Rachid bumped celestia-node to that release in this PR. TL;DR: we don't need to wait until celestia-app v2 is running in production. As soon as celestia-node cuts a release from main (likely v0.15.0), we can use the lightweight tx status work.
hello!
Summary of Bug
Hello team! 👋
I want to report an issue we are facing with the consensus full nodes: we have been unable to run them with less than 20GB of RAM. When the nodes have to sync the chain (for example on Mocha), they cannot sync with less than this amount of resources. Even if we set the maximum to 20GB, they try to consume all the resources of the server until they get OOM-killed and crash.
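For context, the 20GB cap mentioned above is typically enforced as a container memory limit. A minimal sketch, assuming the nodes run on Kubernetes (the values are the ones from this report; the fragment itself is illustrative):

```yaml
# Hypothetical pod spec fragment: without a hard limit, the process grows
# until the kernel OOM-kills it; with the limit set, the container is
# killed as soon as it exceeds 20Gi.
resources:
  requests:
    memory: "8Gi"
  limits:
    memory: "20Gi"
```

This is why the crash manifests as an OOM kill rather than degraded performance: the limit is a hard ceiling, not a throttle.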
This happens whether the nodes have to sync from scratch or only from a few days back.
I believe they should work within the resources they are given, even if that means syncing takes longer; crashing because of resource limits looks like a bug.
I assume the nodes should work even with 8GB, as we have documented here.
cc: @celestiaorg/devops
Version
v1.3.0
Steps to Reproduce
Start a consensus full node and connect it to an existing chain (Mocha, for example); it will then have to sync to catch up.