We expect the agent never to send batches larger than 1MiB, since that is the default behaviour for loki.write. Even so, we have been getting back HTTP 413 Request Entity Too Large from the Loki write endpoint (fronted by NGINX). NGINX allows up to 8MB of payload.
ts=2024-12-18T07:11:39.097614506Z level=error msg="final error sending batch" component_path=/ component_id=loki.write.logs_default component=client host=REDACTED status=413 tenant="" error="server returned HTTP status 413 Request Entity Too Large (413): <html>"
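For reference, the write side is essentially the stock setup; here is a minimal sketch (the URL is a placeholder, and batch_size/batch_wait are just the documented defaults spelled out explicitly):

```
loki.write "logs_default" {
  endpoint {
    url = "https://loki.example.com/loki/api/v1/push" // placeholder

    // Defaults made explicit: flush a batch after 1MiB or 1s, whichever comes first.
    batch_size = "1MiB"
    batch_wait = "1s"
  }
}
```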
We have seen this behaviour when the agent spins up for the first time and has to parse and send a lot of data, i.e. when the positions file is not there and the agent is configured to read from the beginning.
We have experimented with configuring the agent not to read from the beginning, to try to reduce the amount of data. But with endpoint.batch_size at its default value (1MiB), I would expect this never to happen in the first place.
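For context, the "don't read from the beginning" experiment was along these lines (a sketch, not the exact config; the labels and path are placeholders, and I'm assuming loki.source.file's tail_from_end is the relevant knob when no positions file exists):

```
local.file_match "app" {
  path_targets = [{"__path__" = "/var/log/app/*.log"}] // placeholder path
}

loki.source.file "app" {
  targets       = local.file_match.app.targets
  // Only applies when there is no stored position for a file: start tailing
  // from the end instead of replaying the whole file on first startup.
  tail_from_end = true
  forward_to    = [loki.write.logs_default.receiver]
}
```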
Maybe there is something I'm missing?
Appreciate any guidance you can give for troubleshooting this further. Perhaps there are agent logs/metrics I can look at to validate whether batches actually exceed 1MiB, or whether this is related to the ingestion route having a limit I'm not aware of.
Running Alloy v1.4.3 on Linux.