[bitnami/grafana-loki] Grafana loki retention of logs is only 1h #10824
Comments
Hi @puppeteer701, have you tried setting
Tried it as well, but no luck. I am pasting in the Loki config:

```yaml
auth_enabled: false
server:
  http_listen_port: 3100
distributor:
  ring:
    kvstore:
      store: memberlist
memberlist:
  join_members:
    - grafana-loki-gossip-ring
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 1
  chunk_idle_period: 30m
  chunk_block_size: 262144
  chunk_encoding: snappy
  chunk_retain_period: 1m
  max_transfer_retries: 0
  wal:
    dir: /bitnami/grafana-loki/wal
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  max_cache_freshness_per_query: 10m
  split_queries_by_interval: 15m
  retention_period: 24h
  retention_stream:
    - selector: '{namespace="niftyswifty-app"}'
      priority: 1
      period: 24h
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    shared_store: filesystem
    active_index_directory: /bitnami/grafana-loki/loki/index
    cache_location: /bitnami/grafana-loki/loki/cache
    cache_ttl: 168h
  filesystem:
    directory: /bitnami/grafana-loki/chunks
  index_queries_cache_config:
    memcached:
      batch_size: 100
      parallelism: 100
    memcached_client:
      consistent_hash: true
      addresses: dns+grafana-loki-memcachedindexqueries:11211
      service: http
chunk_store_config:
  max_look_back_period: 0s
  chunk_cache_config:
    memcached:
      batch_size: 100
      parallelism: 100
    memcached_client:
      consistent_hash: true
      addresses: dns+grafana-loki-memcachedchunks:11211
table_manager:
  retention_deletes_enabled: true
  retention_period: 672h
query_range:
  align_queries_with_step: true
  max_retries: 5
  cache_results: true
  results_cache:
    cache:
      memcached_client:
        consistent_hash: true
        addresses: dns+grafana-loki-memcachedfrontend:11211
        max_idle_conns: 16
        timeout: 500ms
        update_interval: 1m
frontend_worker:
  frontend_address: grafana-loki-query-frontend:9095
frontend:
  log_queries_longer_than: 5s
  compress_responses: true
  tail_proxy_url: http://grafana-loki-querier:3100
compactor:
  shared_store: filesystem
ruler:
  storage:
    type: local
    local:
      directory: /bitnami/grafana-loki/conf/rules
  ring:
    kvstore:
      store: memberlist
  rule_path: /tmp/loki/scratch
  alertmanager_url: https://alertmanager.xx
  external_url: https://alertmanager.xx
```
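As an aside on the retention settings above (general Loki behaviour that I am assuming applies here, not a confirmed diagnosis of the 1h problem): with the boltdb-shipper store, the `retention_period`/`retention_stream` limits are only enforced by the compactor, and the `table_manager` retention settings do not delete boltdb-shipper data. A minimal sketch of the extra compactor settings such enforcement would need, reusing the paths from the config above (the working directory is an assumed path):

```yaml
# Assumed addition, not part of the pasted config: with boltdb-shipper,
# retention is applied by the compactor, so limits_config.retention_period
# has no effect on stored chunks unless retention is enabled here.
compactor:
  working_directory: /bitnami/grafana-loki/loki/compactor   # assumed path
  shared_store: filesystem
  retention_enabled: true          # turn on retention/deletion processing
  retention_delete_delay: 2h       # grace period before chunks are removed
  compaction_interval: 10m
```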
@puppeteer701 In that case, unfortunately I don't know what the cause of the issue could be. It might be an upstream issue, as we are only packaging the official binaries provided by Grafana and applying basic configuration to them. I would recommend contacting the Grafana Loki developers and explaining the issue.
Will do that, thank you very much.
@puppeteer701 - did you find a solution? I have the same problem.
@ath88 No, unfortunately I have not found a solution yet.
What was the resolution, if any?
Same here! Default values from the chart.
Not sure, but could it be related to this? https://github.com/bitnami/charts/tree/main/bitnami/grafana-loki/#limitation
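To illustrate what that limitation means in practice: each component in the distributed chart gets its own filesystem volume, so chunks flushed by the ingesters are not visible to the queriers, and only recent in-memory data can be queried. A minimal sketch of pointing the same config at a shared S3-compatible store instead (the MinIO-style endpoint, bucket name and credentials below are placeholders, not values from this thread):

```yaml
# Hypothetical override: use a shared, S3-compatible object store so every
# querier can read the chunks and index files uploaded by the ingesters.
storage_config:
  boltdb_shipper:
    shared_store: s3
    active_index_directory: /bitnami/grafana-loki/loki/index
    cache_location: /bitnami/grafana-loki/loki/cache
  aws:
    endpoint: http://minio:9000        # placeholder S3-compatible endpoint
    bucketnames: loki-chunks           # placeholder bucket
    access_key_id: CHANGE_ME
    secret_access_key: CHANGE_ME
    s3forcepathstyle: true
compactor:
  shared_store: s3
```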
Oh, good to know that this is per the documentation.
But this is a huge upset in the default deployment!
I think there was a release that supported local storage.
There are use cases where the local filesystem has more than enough storage and is faster than object storage, so it might be desirable.
For what it's worth, in my case this was a blocker for using Loki.
Ticket at Grafana:
grafana/loki#6513
I ended up using Grafana's own Helm chart for Loki. It seems to have more sensible defaults. https://grafana.com/docs/loki/next/installation/helm/
I got the same issue. Here is my Helm info, using the default values (local filesystem).
Name and Version
bitnami/grafana-loki
What steps will reproduce the bug?
My Config
Are you using any custom parameters or values?
Yes, I enabled tableManager.
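For reference, enabling the table manager in the Bitnami chart is a single values override; a minimal sketch, assuming the chart exposes it as a `tableManager.enabled` flag (check the chart's documented parameters for the exact key):

```yaml
# Assumed values.yaml fragment for the bitnami/grafana-loki chart; the key
# name follows the chart's parameter naming convention, verify it against
# the chart's values documentation.
tableManager:
  enabled: true
```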
What is the expected behavior?
To still be able to query logs of my app older than 1h. I used a time range of 6 hours and I do not see any logs older than 1h.
What do you see instead?
I can only query logs of my app from now-1h; other queries return empty results.