
Logs don't show up unless I expand the window (experimental bolt-db setup) #2700

Closed
silverbp opened this issue Sep 30, 2020 · 2 comments
Labels
stale A stale issue or PR that will automatically be closed.

Comments

silverbp commented Sep 30, 2020

When I execute the following log query, I get no logs (you can replicate this behavior in Grafana as well):

logcli query '{job="una-int-current/aurorab"}' --limit=10 --from=2020-09-29T16:10:00-04:00 --to=2020-09-29T17:10:00-04:00

But if I extend the --to by one hour:

logcli query '{job="una-int-current/aurorab"}' --limit=10 --from=2020-09-29T16:10:00-04:00 --to=2020-09-29T18:10:00-04:00

I get logs that fall inside the first query window:

2020-09-29T15:39:59-05:00 {} 2020-09-29T16:39:59.351-04:00

If I pass --forward, I also get the logs starting at 16:10-04:00.

I've also seen cases where I've had to expand the --to to get all the logs, even though the logs are ALL within the initial window I'm querying; setting a higher limit didn't help.

We are running Loki in an EKS cluster and use the Helm chart to install it. The following is the values.yaml file we use to configure Loki:

image:
  repository: grafana/loki
  tag: 1.6.0
  pullPolicy: IfNotPresent

config:
  auth_enabled: false
  ingester:
    chunk_idle_period: 3m
    chunk_block_size: 262144
    chunk_retain_period: 1m
    max_transfer_retries: 0
    lifecycler:
      ring:
        kvstore:
          store: inmemory
        replication_factor: 1

  limits_config:
    enforce_metric_name: false
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    ingestion_rate_mb: 8
    ingestion_burst_size_mb: 16
    max_entries_limit_per_query: 50000
  schema_config:
    configs:
      - from: 2018-04-15
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: unanet_k8s_loki
          period: 24h        
  server:
    http_listen_port: 3100
  storage_config:
    aws:
      s3: s3://us-east-2/unanet-k8s-loki
    boltdb_shipper:
      active_index_directory: /var/index/boltdb
      shared_store: s3
      cache_location: /var/cache/boltdb
  chunk_store_config:
    # Match this with the retention_period (9576h)
    max_look_back_period: 9576h
replicas: 1

serviceAccount:
  create: false
  name: loki


extraVolumes:
  - name: bolt-cache
    persistentVolumeClaim:
      claimName: bolt-cache
  - name: bolt-index
    persistentVolumeClaim:
      claimName: bolt-index
  
extraVolumeMounts:
  - name: bolt-cache
    mountPath: /var/cache/boltdb
    readOnly: false
  - name: bolt-index
    mountPath: /var/index/boltdb
    readOnly: false
silverbp (author) commented:

We're also using the experimental bolt-db stuff...

@silverbp silverbp changed the title Logs don't show up unless I expand the window Logs don't show up unless I expand the window (experimental bolt-db setup) Oct 1, 2020

stale bot commented Nov 1, 2020

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale A stale issue or PR that will automatically be closed. label Nov 1, 2020
@stale stale bot closed this as completed Nov 8, 2020
cyriltovena pushed a commit to cyriltovena/loki that referenced this issue Jun 11, 2021

* Fix Redis cache error when a query has no chunks to lookup

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Added CHANGELOG entry

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Fixed another case leading to 'wrong number of arguments for 'mget' command'

Signed-off-by: Marco Pracucci <marco@pracucci.com>
cyriltovena added a commit to cyriltovena/loki that referenced this issue Jun 11, 2021
* Fixes an issue in the index chunks/series intersect code.

This was introduced in grafana#2700, more specifically this line https://github.com/cortexproject/cortex/pull/2700/files#diff-10bca0f4f31a2ca1edc507d0289b143dR537

It causes any query whose first label matcher matches nothing to return all matches of all the other labels.
This is a nasty one: the code was relying on an empty slice, and so it would skip nil values instead of returning no matches. I've added a regression test proving this is fixed everywhere. I think in Cortex it can probably affect performance (since you have to download chunks that aren't required) but not read integrity.

I found this with @slim-bean while deploying Loki; all queriers were OOMing because of this.

Signed-off-by: Cyril Tovena <cyril.tovena@gmail.com>

* Update changelog.

Signed-off-by: Cyril Tovena <cyril.tovena@gmail.com>