This is an odd one; I'm filing it before I have the full picture since I'm balancing a few things.
Reproduced with the following setup:
- Current build of metricbeat and elastic-agent
- Default elastic-agent.yml (a sketch of the relevant part is just below this list)
- Running on Linux
- Run in standalone mode with ./elastic-agent
- After a few seconds, send elastic-agent a SIGINT with ^C
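For context, the part of the default standalone config that matters here looks roughly like this. This is a sketch rather than a verbatim copy; the exact output settings and metricset streams vary by version, but the input id matches the `input_id` in the logs below:

```yaml
# Sketch of the relevant portion of the stock standalone elastic-agent.yml
# (approximate; output settings and streams differ between versions).
outputs:
  default:
    type: elasticsearch
    hosts: [127.0.0.1:9200]
    username: elastic
    password: changeme

inputs:
  - type: system/metrics
    id: unique-system-metrics-input   # matches the input_id in the logs below
    data_stream.namespace: default
    use_output: default
    streams:
      - metricset: cpu
        data_stream.dataset: system.cpu
      - metricset: memory
        data_stream.dataset: system.memory
```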
Instead of shutting down in the usual 3-5 seconds, elastic-agent will continue running for 30 seconds until the agent hard-stops the beat.
Once metricbeat gets the SIGINT, instead of shutting down, it seems to end up in some kind of loop where it continually restarts the reloader with the add_cloud_metadata processor:
{"log.level":"debug","@timestamp":"2023-01-12T13:23:27.516-0800","message":"Generated new processors: add_cloud_metadata={}, add_fields={\"@metadata\":{\"input_id\":\"unique-system-metrics-input\"}}, add_fields={\"data_stream\":{\"dataset\":\"generic\",\"namespace\":\"default\",\"type\":\"metrics\"}}, add_fields={\"event\":{\"dataset\":\"generic\"}}, add_fields={\"elastic_agent\":{\"id\":\"5291b614-c8ba-4c73-8e81-4cc09cfdcc44\",\"snapshot\":false,\"version\":\"8.7.0\"}}, add_fields={\"agent\":{\"id\":\"5291b614-c8ba-4c73-8e81-4cc09cfdcc44\"}}","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"log.logger":"processors","log.origin":{"file.line":121,"file.name":"processors/processor.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2023-01-12T13:23:30.519-0800","message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":81,"file.name":"add_cloud_metadata/provider_aws_ec2.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-01-12T13:23:30.519-0800","message":"add_cloud_metadata: starting to fetch metadata, timeout=3s","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"add_cloud_metadata","log.origin":{"file.line":130,"file.name":"add_cloud_metadata/providers.go"},"ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-01-12T13:23:33.522-0800","message":"add_cloud_metadata: received disposition for gcp after 3.002658035s. result=[provider:gcp, error=failed requesting gcp metadata: Get \"http://169.254.169.254/computeMetadata/v1/?recursive=true&alt=json\": dial tcp 169.254.169.254:80: i/o timeout, metadata={}]","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":167,"file.name":"add_cloud_metadata/providers.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-01-12T13:23:33.522-0800","message":"add_cloud_metadata: timed-out waiting for all responses","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"add_cloud_metadata","log.origin":{"file.line":174,"file.name":"add_cloud_metadata/providers.go"},"ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-01-12T13:23:33.522-0800","message":"add_cloud_metadata: fetchMetadata ran for 3.002800745s","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"log.logger":"add_cloud_metadata","log.origin":{"file.line":133,"file.name":"add_cloud_metadata/providers.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-01-12T13:23:33.522-0800","message":"add_cloud_metadata: hosting provider type not detected.","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"add_cloud_metadata","log.origin":{"file.line":102,"file.name":"add_cloud_metadata/add_cloud_metadata.go"},"ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-01-12T13:23:33.522-0800","message":"Generated new processors: add_cloud_metadata={}, add_fields={\"@metadata\":{\"input_id\":\"unique-system-metrics-input\"}}, add_fields={\"data_stream\":{\"dataset\":\"generic\",\"namespace\":\"default\",\"type\":\"metrics\"}}, add_fields={\"event\":{\"dataset\":\"generic\"}}, add_fields={\"elastic_agent\":{\"id\":\"5291b614-c8ba-4c73-8e81-4cc09cfdcc44\",\"snapshot\":false,\"version\":\"8.7.0\"}}, add_fields={\"agent\":{\"id\":\"5291b614-c8ba-4c73-8e81-4cc09cfdcc44\"}}","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"log.logger":"processors","log.origin":{"file.line":121,"file.name":"processors/processor.go"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}
[the above log lines repeat 3-4 times until the agent hard-stops the beat]
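To make the timing concrete, here is a minimal standalone Go sketch of the pattern the logs suggest. This is illustrative only, not beats/agent source, and the names are made up: each processor rebuild blocks for up to 3s probing the metadata endpoint (the "timeout=3s" in the logs), and if the reload path keeps regenerating processors after SIGINT, shutdown can only complete when the 30s hard-stop fires.

```go
// Illustrative only: not beats/agent code. Shows why repeated processor
// re-initialization after SIGINT stretches shutdown out to the 30s hard-stop.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// buildProcessors stands in for add_cloud_metadata initialization: it probes
// the link-local metadata address and gives up after 3s, matching the
// "timeout=3s" seen in the logs above.
func buildProcessors(ctx context.Context) {
	probeCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
	defer cancel()
	req, _ := http.NewRequestWithContext(probeCtx, http.MethodPut,
		"http://169.254.169.254/latest/api/token", nil)
	if resp, err := http.DefaultClient.Do(req); err != nil {
		fmt.Println("metadata probe failed:", err)
	} else {
		resp.Body.Close()
	}
}

func main() {
	hardStop := time.After(30 * time.Second) // the agent's hard-stop deadline
	for {
		select {
		case <-hardStop:
			fmt.Println("30s elapsed: agent hard-stops the beat")
			return
		default:
			// Suspected bug pattern: after SIGINT the reload path keeps
			// regenerating processors instead of honoring the stop, so each
			// pass burns another ~3s in the metadata probe.
			buildProcessors(context.Background())
		}
	}
}
```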
Commenting out the processors here fixes the issue. This is not reproducible with standalone metricbeat either; the weird init-loop-after-SIGINT only seems to happen while we're running under agent.
Hi!
We just realized that we haven't looked into this issue in a while. We're sorry!
We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:
Thank you for your contribution!