Persist application logs (human-readable) to disk or S3 storage #766

Closed
gauravbist opened this issue Jul 17, 2019 · 16 comments
Labels
component/loki · stale (A stale issue or PR that will automatically be closed) · type/feature (Something new we should do)

Comments

@gauravbist

Is your feature request related to a problem? Please describe.
Currently, logs are written as chunks, which are only readable through Grafana. There is no way to download them as human-readable log files.

Describe the solution you'd like
Loki could, in parallel, write the application logs in a human-readable format (JSON, raw) to disk or S3 storage, so that we can preserve them for a longer time and share the logs with third parties if needed.

@sh0rez added the component/loki and type/feature (Something new we should do) labels on Jul 17, 2019
@sh0rez
Member

sh0rez commented Jul 17, 2019

Currently, logs are written as chunks, which are only readable through Grafana. There is no way to download them as human-readable log files.

This is only partly correct. Loki does have an HTTP API endpoint, which is used by logcli and Grafana to query the logs.

Nevertheless, I see the use case of archiving the logs to a human-readable long-term storage.
However, I would not implement this directly in Loki, but rather use the tailing feature to create a standalone agent (loki-archive) that does exactly that:

  • accept a set of LogQL queries
  • persist each query's results into a separate file on disk, S3, or wherever (a rough sketch follows below)
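A minimal sketch of what such a loki-archive agent could look like, assuming a Python polling loop against Loki's /loki/api/v1/query_range HTTP endpoint (the Loki address, query names, and file paths below are placeholders, not anything that exists yet):

```python
import time
import requests

LOKI_URL = "http://localhost:3100"   # assumed Loki address
QUERIES = {                          # hypothetical set of LogQL queries
    "app-foo": '{app="foo"}',
    "app-bar": '{app="bar"}',
}
POLL_INTERVAL = 60                   # seconds between polls


def fetch(query: str, start_ns: int, end_ns: int):
    """Return (timestamp_ns, line) pairs for a LogQL query over [start, end)."""
    resp = requests.get(
        f"{LOKI_URL}/loki/api/v1/query_range",
        params={"query": query, "start": start_ns, "end": end_ns, "limit": 5000},
        timeout=30,
    )
    resp.raise_for_status()
    lines = []
    for stream in resp.json()["data"]["result"]:
        lines.extend((ts, line) for ts, line in stream["values"])
    return sorted(lines)


def main():
    last = time.time_ns()
    while True:
        now = time.time_ns()
        for name, query in QUERIES.items():
            # Append each query's results to its own plain-text file on disk;
            # an S3 upload could replace this open()/write() step.
            with open(f"{name}.log", "a", encoding="utf-8") as f:
                for ts, line in fetch(query, last, now):
                    f.write(f"{ts} {line}\n")
        last = now
        time.sleep(POLL_INTERVAL)


if __name__ == "__main__":
    main()
```

This polls query_range only to keep the sketch simple; a real agent would more likely use the tailing endpoint and would need to handle pagination, retries, and deduplication.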

@cyriltovena
Contributor

I definitely think this is a nice-to-have; we could indeed create a small tool to uncompress chunks.

@rverma-nikiai

rverma-nikiai commented Jul 17, 2019

We are in pursuit of something similar. What we concluded based on our research is that Loki is much more suitable for what we call hot logs (age < x days, e.g. 7). This is where we want the magic of mixing metrics and logs. Logs older than x days should be persisted in S3.

Our proposed solution is:
Cold logs: Fluent Bit -> Kinesis Firehose -> transform + compress to Parquet using a Glue table -> store to S3 -> query using Athena/EMR
Hot logs: Fluent Bit -> Loki ingester -> store to S3 -> query using Grafana

Our concerns:

  1. We are lacking a good Fluent Bit plugin for Loki. There is some community work at https://github.com/cosmo0920/fluent-bit-go-loki, but it would be awesome if it were maintained with the other plugins at https://github.com/fluent/fluent-bit/tree/master/plugins by the community (us); see Support fluent-bit also in addition/replacement for fluentd #770.
  2. Currently, Loki does discrete indexing on a weekly basis, and it's very optimized. What we are investigating is how we can clean up data/resources older than 2x days.
    We can set up an object expiration rule in S3 to remove S3 data (a sketch is shown after this list).
    Is there any possible solution to run expiration on DynamoDB resources to clean up old tables?
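As an illustration of the S3 expiration mentioned in point 2, a minimal sketch using boto3 (the bucket name, prefix, and retention period are made up; the same rule can also be configured from the S3 console or via Terraform):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-loki-chunks",                 # hypothetical chunk bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-chunks",
                "Filter": {"Prefix": ""},    # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 14},  # e.g. 2x days with x = 7
            }
        ]
    },
)
```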

@cyriltovena
Contributor

What we concluded based on our research is that Loki is much more suitable for what we call hot logs (age < x days, e.g. 7).

I have to disagree. You can use Loki for hot logs, but also for long-term storage.

We are lacking a good Fluent Bit plugin for Loki. There is some community work at https://github.com/cosmo0920/fluent-bit-go-loki, but it would be awesome if it were maintained with the other plugins at https://github.com/fluent/fluent-bit/tree/master/plugins by the community (us).

Yep, I agree, I hope someone in the team will have time for this soon enough.

Currently, Loki does discrete indexing on a weekly basis, and it's very optimized. What we are investigating is how we can clean up data/resources older than 2x days.
We can set up an object expiration rule in S3 to remove S3 data.
Is there any possible solution to run expiration on DynamoDB resources to clean up old tables?

/cc @sandlis he is the king of retention ;)

@rverma-nikiai

rverma-nikiai commented Jul 17, 2019

I do agree that Loki takes care of retrieval and management of logs over long durations. What I meant, though, is not the capability of the tool but the usage. For older logs, we tend to run aggregations. What we observed is that, with a limited index, the kinds of aggregations we can do on logs are very limited. For example, we want to find which services observed the most restarts, or the highest p90-p50 delta, etc.

If this is supported and optimised, we would be happy to use that.

@sandeepsukhani
Contributor

@rverma-nikiai

Is there any possible solution to run expiration on DynamoDB resources to clean up old tables?

We do have retention in place. More details about it here: https://github.com/grafana/loki/blob/master/docs/operations.md#retentiondeleting-old-data

What we observed is that, with a limited index, the kinds of aggregations we can do on logs are very limited. For example, we want to find which services observed the most restarts, or the highest p90-p50 delta, etc.

I guess once this PR from @Kuqd gets merged, you will be able to do that.

@gauravbist
Author

@sh0rez Is there any progress on this?

@stale

stale bot commented Sep 3, 2019

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale (A stale issue or PR that will automatically be closed) label on Sep 3, 2019
@stale stale bot closed this as completed on Sep 10, 2019
@gauravbist
Author

Hi @sh0rez, if the solution has not been built yet, could you please share how to do it via an API call, so that I can study it and create some script or mechanism to back logs up directly to S3 storage?
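For illustration, a rough sketch of such a backup script, assuming Loki's /loki/api/v1/query_range HTTP endpoint and boto3 access to an S3 bucket (the Loki address, LogQL query, bucket, and object key are placeholders):

```python
import time

import boto3
import requests

LOKI_URL = "http://localhost:3100"       # assumed Loki address
QUERY = '{app="foo"}'                    # hypothetical LogQL selector
BUCKET = "my-log-archive"                # hypothetical S3 bucket

# Dump the last 24 hours of raw log lines into a single S3 object.
end_ns = time.time_ns()
start_ns = end_ns - 24 * 3600 * 10**9

resp = requests.get(
    f"{LOKI_URL}/loki/api/v1/query_range",
    params={"query": QUERY, "start": start_ns, "end": end_ns, "limit": 5000},
    timeout=30,
)
resp.raise_for_status()

lines = []
for stream in resp.json()["data"]["result"]:
    lines.extend(line for _, line in stream["values"])

boto3.client("s3").put_object(
    Bucket=BUCKET,
    Key=f"loki/app-foo/{end_ns}.log",
    Body="\n".join(lines).encode("utf-8"),
)
```

A real script would probably page through results in smaller time windows, advance the start timestamp as it goes, and compress each object before uploading.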

@sh0rez
Member

sh0rez commented Sep 19, 2019

Hi, I have paused developing this, mostly because there was no real client library available at that time. This has changed / will change soon because of the API refactor; then I will look into it!

@tiagoasousa

Hi, is there any update on this use case?

@MarjanJordanovski

No updates on this?

@userakhila

Any update on this?

@saurabh-hirani

Is there any solution for the above?

@sandeepsukhani the link in your reply #766 (comment) is https://github.com/grafana/loki/blob/master/docs/operations.md#retentiondeleting-old-data, which is not accessible. Could you please update it?

@cici1111

cici1111 commented Dec 9, 2021

Any update on this? Or any solutions for archiving logs in Loki?

@Meet-S0ni

I'm looking for a similar feature. Is there any update?
