TCP socket poisoned by agent restart #99
Comments
Thanks for the detailed report! The race condition in #100 is not ideal, but reading from the input stream seems to be the best way to detect an unexpected socket closure. We do want to try to reconnect on socket closure instead of throwing an error right away.
After performing a couple more tests I found that the TCP client does indeed reconnect on its own (after a minimum of 2 retries), but it still drops packets in the meantime. I've modified the PR to include a retry queue.
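To illustrate the retry-queue idea only (this is not the PR's actual code, and all class and method names below are hypothetical), a bounded queue can hold events that fail to write and replay them once the connection comes back:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch of a retry queue: buffer events that fail to send
    // and replay them on the next attempt, with a cap to bound memory use.
    public class RetryQueueSketch {
        private static final int MAX_BUFFERED = 100;
        private final Deque<String> pending = new ArrayDeque<>();

        /** Try to send; on failure, keep the event for a later attempt. */
        public void send(String event, SocketWriter writer) {
            try {
                // Replay anything buffered from earlier failures first.
                while (!pending.isEmpty()) {
                    writer.write(pending.peekFirst());
                    pending.removeFirst();
                }
                writer.write(event);
            } catch (Exception e) {
                if (pending.size() < MAX_BUFFERED) {
                    pending.addLast(event);   // retried on the next call
                }
                // else: drop the event to keep memory bounded
            }
        }

        /** Minimal stand-in for the real TCP writer. */
        public interface SocketWriter {
            void write(String event) throws Exception;
        }
    }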
This isn't really something we can avoid -- it will always be possible for us to send a packet that the server receives, only for the connection to be severed before any mechanism can confirm the message was received. Dealing with this is an inherent issue in network communication. We'd need to solve duplicate sends in the application layer with some idempotency mechanism, which isn't available in the EMF specification or the CloudWatch agent protocol.
Problem
When using EMF in an agent-based environment with a TCP endpoint, all metrics are silently dropped if the agent is restarted after the first metric publish.
Reproduction
Minimal Code
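Roughly, the reproduction looks like the following. This is a sketch, not the original snippet, and it assumes the aws-embedded-metrics-java MetricsLogger and EnvironmentConfigurationProvider APIs; the exact setup may differ:

    import software.amazon.cloudwatchlogs.emf.config.EnvironmentConfigurationProvider;
    import software.amazon.cloudwatchlogs.emf.logger.MetricsLogger;

    // Sketch of the reproduction: publish one metric over TCP, pause while the
    // "agent" (netcat) is restarted, then publish a second metric.
    public class Repro {
        public static void main(String[] args) throws InterruptedException {
            // Point the library at the local TCP endpoint served by netcat.
            EnvironmentConfigurationProvider.getConfig()
                    .setAgentEndpoint("tcp://127.0.0.1:25888");

            MetricsLogger logger = new MetricsLogger();
            logger.putMetric("Test", 1);
            logger.flush();                 // first publish: received by netcat

            System.out.println("Sleeping, restart agent socket now...");
            Thread.sleep(15_000);           // Ctrl-C netcat and start it again

            logger.putMetric("Test", 2);
            logger.flush();                 // second publish: silently dropped
        }
    }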
Steps
1. Start a local listener: nc -l 127.0.0.1 25888 -k
2. Run the minimal code above.
3. When "Sleeping, restart agent socket now..." is printed, Ctrl-C the netcat command and run it again: nc -l 127.0.0.1 25888 -k
Expected behavior
Two metric entries are printed in the netcat terminal:
Actual behavior
The second metric entry is never received by netcat:
Details
This appears to be caused by how the TCP communication is handled: there is no mechanism that verifies a given log event was actually delivered to the CloudWatch agent. TCP sockets (and by extension Java TCP sockets) do not make any guarantees about when the bytes written to them are sent.
If the receiving side closes the socket in a disorderly manner, our Socket won't recognize that it is closed until an operation on it goes through all of the TCP retries, which takes upwards of 15 minutes by default.
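A self-contained sketch of this behavior using plain java.net (no EMF involved): the first write after the peer goes away typically succeeds because the bytes only land in the local send buffer, and the failure, if any, only surfaces on a later write.

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Demonstrates that writes to a socket whose peer has closed do not
    // fail immediately; an error only shows up on a later operation.
    public class SilentDropDemo {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(0)) {
                Socket client = new Socket("127.0.0.1", server.getLocalPort());
                Socket accepted = server.accept();
                accepted.close();                     // peer goes away ("agent restart")

                OutputStream out = client.getOutputStream();
                out.write("first\n".getBytes());      // typically succeeds: buffered locally
                out.flush();
                System.out.println("first write did not throw");

                Thread.sleep(500);                    // give the peer's RST time to arrive
                try {
                    out.write("second\n".getBytes()); // now the broken connection may be noticed
                    out.flush();
                    System.out.println("second write did not throw either");
                } catch (Exception e) {
                    System.out.println("second write failed: " + e);
                }
                client.close();
            }
        }
    }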
In general, the recommended way to deal with disorderly disconnects is to have the application protocol acknowledge each message sent. If reading the expected acknowledgement times out (via SO_TIMEOUT), the connection is closed on the client side and handled by the application.
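As a rough sketch of that pattern (the one-byte ACK shown here is purely hypothetical; no such acknowledgement exists in the EMF specification or the CloudWatch agent protocol):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Hypothetical acknowledged-send client illustrating the SO_TIMEOUT-based
    // pattern described above. The protocol (one ACK byte per message) is
    // invented for illustration only.
    public class AckingClient implements AutoCloseable {
        private final Socket socket = new Socket();

        public AckingClient(String host, int port) throws Exception {
            socket.connect(new InetSocketAddress(host, port), 2_000);
            socket.setSoTimeout(2_000);  // bound how long we wait for the ACK
        }

        /** Send one message and wait for a single ACK byte from the server. */
        public void send(String message) throws Exception {
            OutputStream out = socket.getOutputStream();
            out.write((message + "\n").getBytes(StandardCharsets.UTF_8));
            out.flush();

            InputStream in = socket.getInputStream();
            int ack = in.read();  // throws SocketTimeoutException after 2s
            if (ack == -1) {
                throw new java.io.EOFException("server closed the connection");
            }
            // On timeout or EOF the caller should close this client, reconnect,
            // and decide whether to resend (accepting possible duplicates).
        }

        @Override
        public void close() throws Exception {
            socket.close();
        }
    }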
Workaround
To work around this, you can switch to the UDP endpoint by setting AgentEndpoint to udp://127.0.0.1:25888. This still doesn't confirm that events are written; however, it does not get poisoned by severed connections.
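A sketch of the two usual ways to set this, assuming the aws-embedded-metrics-java configuration names (AWS_EMF_AGENT_ENDPOINT and setAgentEndpoint); check the library's configuration docs for the exact names:

    // Option 1: environment variable (name assumed from the EMF libraries' convention)
    //   export AWS_EMF_AGENT_ENDPOINT="udp://127.0.0.1:25888"

    // Option 2: programmatic configuration at startup
    import software.amazon.cloudwatchlogs.emf.config.EnvironmentConfigurationProvider;

    public class UdpEndpointConfig {
        public static void main(String[] args) {
            EnvironmentConfigurationProvider.getConfig()
                    .setAgentEndpoint("udp://127.0.0.1:25888");
            // ... create a MetricsLogger and publish as usual ...
        }
    }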