This is a feature request to add result logging from Kolide Fleet directly into an S3 bucket, with a folder structure based on pack / query_name. This cannot be achieved with Kinesis Firehose, as a delivery stream forwards all logs indiscriminately into the same folder of the S3 bucket configured as its destination.
Keying the folder structure on the query name lets data processing tools consume logs with a consistent schema directly from the S3 bucket, and moves the complexity of segregating them to the source of the logs, in this case the Fleet server.
As it stands, Kinesis forwards result logs with different schemas into the same folder, and additional processing is required to split them into folders that each hold a single schema (one osquery query).
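For illustration, here is a minimal sketch of what a direct-to-S3 logger plugin might look like, using aws-sdk-go. The key layout (`results/<pack>/<query>/<timestamp>.json`), the bucket name, and the assumption of an underscore pack delimiter are all hypothetical, not Fleet's actual API:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// resultName holds the one field we need from an osquery result line:
// the "name", e.g. "pack_<pack>_<query>" with an underscore pack delimiter.
type resultName struct {
	Name string `json:"name"`
}

// writeResult puts one result log line under a per-pack/per-query prefix.
func writeResult(svc *s3.S3, bucket string, line []byte) error {
	var r resultName
	if err := json.Unmarshal(line, &r); err != nil {
		return err
	}
	// Assumes an underscore delimiter; adjust if --pack_delimiter differs.
	parts := strings.SplitN(r.Name, "_", 3) // ["pack", "<pack>", "<query>"]
	if len(parts) != 3 || parts[0] != "pack" {
		parts = []string{"", "adhoc", r.Name} // non-pack scheduled queries
	}
	key := fmt.Sprintf("results/%s/%s/%d.json", parts[1], parts[2], time.Now().UnixNano())
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(line),
	})
	return err
}

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	line := []byte(`{"name":"pack_osquery-monitoring_schedule","columns":{"name":"ports","interval":"60"}}`)
	if err := writeResult(svc, "my-fleet-logs", line); err != nil {
		fmt.Println(err)
	}
}
```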
My recommendation for this would be to use something like Logstash to pull the logs from S3 and split them by query.
We could look into supporting Kinesis Streams in addition to Firehose, which would help get the logs directly into Logstash rather than having to read them out of S3.
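For context, this is roughly what that split step does, whether Logstash or a small custom job performs it: read a Firehose-delivered object, group its lines by the osquery "name" field, and rewrite each group under its own prefix. A minimal sketch using aws-sdk-go; the bucket, the `by-query/` prefix, and the delimiter handling are assumptions for illustration:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// splitObject reads one Firehose-delivered object and regroups its lines
// under per-query prefixes, e.g. "by-query/pack/<pack>/<query>/<source-key>".
func splitObject(svc *s3.S3, bucket, key string) error {
	obj, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	defer obj.Body.Close()

	groups := map[string][]byte{}
	scanner := bufio.NewScanner(obj.Body) // very long lines may need scanner.Buffer
	for scanner.Scan() {
		line := append([]byte(nil), scanner.Bytes()...) // copy; Scan reuses its buffer
		var r struct {
			Name string `json:"name"`
		}
		if err := json.Unmarshal(line, &r); err != nil {
			continue // skip malformed lines
		}
		groups[r.Name] = append(append(groups[r.Name], line...), '\n')
	}
	if err := scanner.Err(); err != nil {
		return err
	}

	for name, data := range groups {
		// Turn the pack delimiter into "/" so each query gets its own folder.
		dest := fmt.Sprintf("by-query/%s/%s", strings.ReplaceAll(name, "_", "/"), key)
		if _, err := svc.PutObject(&s3.PutObjectInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(dest),
			Body:   bytes.NewReader(data),
		}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	if err := splitObject(svc, "my-fleet-logs", "firehose/2018/01/01/some-object"); err != nil {
		fmt.Println(err)
	}
}
```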
@zwass tools like AWS Athena can read the logs directly from S3, which is why a Kolide Fleet logger that writes straight to S3 would be beneficial: it avoids post-processing that pulls the logs back out of S3 only to re-upload them, split into folders by query.
The idea is to have a folder structure on S3 so that tools which parse logs directly from S3 can apply a schema per query. You can see comments on the Kolide Slack channel from people who have tried this without success, purely because of the lack of folder structure on S3: different query result logs carry a different schema under the "columns" key.
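To make the schema mismatch concrete, here are two illustrative (made-up) result lines in osquery's result log format. The "columns" keys differ per query, so a single table definition over one shared folder cannot cover both:

```json
{"name":"pack_ossec_listening_ports","hostIdentifier":"host1","unixTime":1514764800,"action":"added","columns":{"pid":"510","port":"22","protocol":"6"}}
{"name":"pack_ossec_usb_devices","hostIdentifier":"host1","unixTime":1514764800,"action":"added","columns":{"vendor":"Yubico","model":"YubiKey","serial":"0"}}
```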