JVM/Cassandra metrics stopped flowing after some time #861
Comments
The other behavior is that the following metric is stuck at the value 454211. Has anyone seen this before?
otelcol_receiver_accepted_metric_points{receiver="jmx",service_instance_id="***",service_name="refinery",service_version="v0.3.0",transport="grpc"} 454211
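For reference, a minimal sketch of a Prometheus alert rule that would catch this kind of stall from the collector's own telemetry (this assumes the collector's internal metrics are scraped by Prometheus; the rule name, window, and labels are illustrative, not from this issue):

```yaml
groups:
  - name: otelcol-jmx
    rules:
      - alert: JmxReceiverStalled
        # Fires when the jmx receiver has accepted no new metric points for 30 minutes.
        expr: increase(otelcol_receiver_accepted_metric_points{receiver="jmx"}[30m]) == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "JMX receiver stopped accepting metric points"
```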
Found the following error:
2023-05-09T19:19:58.474Z debug subprocess/subprocess.go:287 java.lang.OutOfMemoryError: Java heap space {"kind": "receiver", "name": "jmx", "pipeline": "metrics"}
@junhuangli were you able to increase the heap space and resolve the issue?
Thanks for taking a look at this @trask. This OutOfMemoryError only shows up when I set the collection_interval to 1s. Since the waiting time is long (from 10 hours to 10 days), I am not sure yet whether I would eventually see the same error with a longer collection_interval. The current workaround is to set the collection_interval to 5 minutes; I suspect there might be a leak somewhere. The other tricky part is that I am using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/jmxreceiver, which calls the [OpenTelemetry JMX Metric Gatherer] to get JMX/Cassandra metrics, so I am not sure how I can control the resource usage there.
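For context, the workaround looks roughly like this in the collector config (a sketch only; the jar path and endpoint are placeholders, and the field names follow the jmxreceiver documentation as I understand it):

```yaml
receivers:
  jmx:
    # Path to the OpenTelemetry JMX Metric Gatherer jar (placeholder path).
    jar_path: /opt/opentelemetry-jmx-metrics.jar
    # Cassandra's JMX port on the local node (placeholder endpoint).
    endpoint: localhost:7199
    target_system: jvm,cassandra
    # Workaround: 5m instead of 1s keeps the gatherer from exhausting its heap as quickly.
    collection_interval: 5m
```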
@breedx-splk @dehaansa @Mrod1598 @rmfitzpatrick do you know if it's possible to configure …
From what I can recall, and a quick parse of the source, the collector does not support setting … The most vulnerable part of the JMX receiver, as far as code execution goes, is that it runs the …
I can try, but it is still kind of a workaround. One more piece of info: I am running the receiver in AWS Kubernetes as a sidecar container, and this situation happens consistently.
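For completeness, a minimal sketch of how a sidecar like this can at least bound resource usage with explicit memory requests/limits (image tag, names, and values are illustrative, not the exact ones in the deployment):

```yaml
# Sidecar container entry in the Cassandra pod spec (names and values are illustrative).
- name: otel-collector
  image: otel/opentelemetry-collector-contrib:0.78.0  # use the version you actually run
  args: ["--config=/etc/otelcol/config.yaml"]
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      # The JMX Metric Gatherer java subprocess runs inside this same container,
      # so its heap usage counts against this limit.
      memory: "512Mi"
  volumeMounts:
    - name: otelcol-config
      mountPath: /etc/otelcol
```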
This issue might be related to the JMX Gatherer memory leak issue we are experiencing: #926
Thanks @smamidala1, will follow #926
#949 was merged and should be available in 1.29.0. Among other issues addressed in that PR, some memory leaks were resolved that may affect this behavior. Let us know if this issue persists after that release has been made available.
Thanks @dehaansa!
Description
Steps to reproduce
Deploy and then wait
Expectation
JVM/Cassandra metrics continue flowing
What applicable config did you use?
Relevant Environment Information
NAME="CentOS Linux" VERSION="7 (Core)" ID="centos"
Additional context