
Query filter does not work #72

Closed
Neko-Follower opened this issue Sep 6, 2024 · 23 comments
Labels
bug Something isn't working

Comments

@Neko-Follower

Hi. I want to use the query filter, but it doesn't seem to work.
(screenshot)

@dmitryk-dk
Contributor

dmitryk-dk commented Sep 6, 2024

Hi @Neko-Follower! I have checked this on my local setup, and it works. See the examples below:

  1. With all values
Screenshot 2024-09-06 at 10:48:46
  2. With only the stdout value
Screenshot 2024-09-06 at 10:49:50

Could you please share your Grafana version?

Also, as far as I can see, there is a space between the field and the value. Could you remove the space, please?

@dmitryk-dk added the "question" label on Sep 6, 2024
@Neko-Follower
Author

I removed the space. It didn't work.
Screenshot 2024-09-06 at 15:41:06

My Grafana version is 11.1.4.
I saw in issue #71 that you use Grafana version 9.5.21. How do you use it if the datasource requires version >= 10.0.3?
Screenshot 2024-09-06 at 15:46:03

I'll try to downgrade Grafana to 10 and test.

@dmitryk-dk
Contributor

Checked with 11.1.4. It takes some time to return a response; maybe there is a proxy between Grafana and VictoriaLogs that cuts the request off after a while. That's just my assumption, since you're getting a 504 Gateway Timeout error. Could you check whether there is any proxy in between?
Screenshot 2024-09-06 at 13:03:31

@Neko-Follower
Author

You are right, it is a 504 Gateway Timeout error. I checked, and there is no proxy between Grafana and VictoriaLogs.
I also looked at the VictoriaLogs logs:

{"ts":"2024-09-06T11:09:12.526Z","level":"warn","caller":"VictoriaMetrics/app/vlselect/main.go:169","msg":"remoteAddr: \"10.1.3.114:58522, X-Forwarded-For: 10.53.221.31, 10.1.8.74, 10.1.8.74\"; requestURI: /select/logsql/field_values?field=namespace&from=1725562800000&query=%2A&to=1725649199999; the request couldn't be executed in 30.000 seconds; possible solutions: to increase -search.maxQueryDuration=30s; to pass bigger value to 'timeout' query arg"}
{"ts":"2024-09-06T11:09:12.526Z","level":"error","caller":"net/http/server.go:3416","msg":"http: response.Write on hijacked connection from github.com/klauspost/compress/gzhttp.(*GzipResponseWriter).startPlain (compress.go:292)"}
{"ts":"2024-09-06T11:09:53.571Z","level":"info","caller":"VictoriaMetrics/app/vlselect/main.go:161","msg":"client has canceled the request after 30.056 seconds: remoteAddr=\"10.1.3.114:37574, X-Forwarded-For: 10.53.221.31, 10.1.8.74, 10.1.8.74\", requestURI: \"/select/logsql/field_values?field=consumer&from=1725562800000&query=%2A&to=1725649199999\""}

I have a large amount of logs (~200 million entries); maybe the problem is related to that. But I have a loki-stack on another Kubernetes cluster with an even bigger volume of logs, and there is no such problem there.
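The warning above suggests two workarounds: raising -search.maxQueryDuration on the server, or passing a bigger `timeout` query arg with the request. A minimal Go sketch of the second option, calling /select/logsql/field_values directly; the base URL and field name are illustrative only, not taken from this deployment or from the plugin's code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Query params mirror the logged requestURI; "timeout" is the extra arg
	// the warning message recommends passing a bigger value for.
	params := url.Values{}
	params.Set("query", "*")
	params.Set("field", "namespace")
	params.Set("timeout", "60s")

	// Placeholder address; adjust to your VictoriaLogs instance.
	reqURL := "http://victorialogs:9428/select/logsql/field_values?" + params.Encode()

	// Keep the client-side timeout above the server-side one, otherwise the
	// client gives up first and the server logs a canceled request.
	client := &http.Client{Timeout: 90 * time.Second}
	resp, err := client.Get(reqURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```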

@dmitryk-dk
Contributor

Yeah, I think the problem is related to that. We need to discuss it with our team.

@dmitryk-dk
Contributor

field_values

Could you try increasing -search.maxQueryDuration to 60s while I check this problem with our team?

@Neko-Follower
Author

Sure

@Neko-Follower
Author

I set --search.maxQueryDuration=60 for VictoriaLogs and a timeout for the datasource plugin, but the request is still canceled after 30s.
Screenshot 2024-09-06 at 17:12:36

{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:12","msg":"build version: victoria-logs-20240710-011043-tags-v0.28.0-victorialogs-0-gb173f24041"}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:13","msg":"command-line flags"}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:20","msg":"  -envflag.enable=\"true\""}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:20","msg":"  -envflag.prefix=\"VM_\""}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:20","msg":"  -loggerFormat=\"json\""}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:20","msg":"  -retentionPeriod=\"90d\""}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:20","msg":"  -search.maxQueryDuration=\"1m0s\""}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/logger/flag.go:20","msg":"  -storageDataPath=\"/storage\""}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/app/victoria-logs/main.go:42","msg":"starting VictoriaLogs at \"[:9428]\"..."}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/app/vlstorage/main.go:62","msg":"opening storage at -storageDataPath=/storage"}
{"ts":"2024-09-06T12:02:06.352Z","level":"info","caller":"VictoriaMetrics/lib/memory/memory.go:42","msg":"limiting caches to 8326400409 bytes, leaving 5550933607 bytes to the OS according to -memory.allowedPercent=60"}
{"ts":"2024-09-06T12:02:13.841Z","level":"info","caller":"VictoriaMetrics/app/vlstorage/main.go:68","msg":"successfully opened storage in 7.489 seconds; smallParts: 1565; bigParts: 87; smallPartBlocks: 65427; bigPartBlocks: 45591; smallPartRows: 123419855; bigPartRows: 85053177; smallPartSize: 6588406481 bytes; bigPartSize: 4644403000 bytes"}
{"ts":"2024-09-06T12:02:13.841Z","level":"info","caller":"VictoriaMetrics/app/victoria-logs/main.go:50","msg":"started VictoriaLogs in 7.489 seconds; see https://docs.victoriametrics.com/victorialogs/"}
{"ts":"2024-09-06T12:02:13.841Z","level":"info","caller":"VictoriaMetrics/lib/httpserver/httpserver.go:119","msg":"starting server at http://127.0.0.1:9428/"}
{"ts":"2024-09-06T12:02:13.841Z","level":"info","caller":"VictoriaMetrics/lib/httpserver/httpserver.go:120","msg":"pprof handlers are exposed at http://127.0.0.1:9428/debug/pprof/"}
{"ts":"2024-09-06T12:03:03.380Z","level":"info","caller":"VictoriaMetrics/app/vlselect/main.go:161","msg":"client has canceled the request after 30.063 seconds: remoteAddr=\"10.1.3.114:58062, X-Forwarded-For: 10.53.221.31, 10.1.8.74, 10.1.8.74\", requestURI: \"/select/logsql/field_values?field=namespace&from=1725562800000&query=%2A&to=1725649199999\""}
{"ts":"2024-09-06T12:04:30.422Z","level":"info","caller":"VictoriaMetrics/app/vlselect/main.go:161","msg":"client has canceled the request after 30.052 seconds: remoteAddr=\"10.1.3.114:59718, X-Forwarded-For: 10.53.221.31, 10.1.8.74, 10.1.8.74\", requestURI: \"/select/logsql/field_values?field=namespace&from=1725562800000&query=%2A&to=1725649199999\""}

@dmitryk-dk
Contributor

Thank you for the provided information. I think I will be back on Monday with more information. Both of your issues are related to this problem.

@Neko-Follower
Author

Thank you very much. I'll be waiting.

hagen1778 added a commit that referenced this issue Sep 10, 2024
The correct query param names for /field_values are `start` and `end`.
See https://docs.victoriametrics.com/victorialogs/querying/#querying-field-values

Sending the `from` and `to` params meant the time range filter wasn't applied at all,
so VictoriaLogs performed a full data scan. This probably caused the following issues:
* #71
* #72
dmitryk-dk pushed a commit that referenced this issue Sep 10, 2024
The correct query param names for /field_values are `start` and `end`.
See https://docs.victoriametrics.com/victorialogs/querying/#querying-field-values

Sending the `from` and `to` params meant the time range filter wasn't applied at all,
so VictoriaLogs performed a full data scan. This probably caused the following issues:
* #71
* #72
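Based on the commit message above, a minimal sketch of what the corrected request looks like, with the time range passed as `start`/`end` instead of `from`/`to`; the address is a placeholder and the millisecond timestamps are copied from the logged URIs purely for illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	params := url.Values{}
	params.Set("query", "*")
	params.Set("field", "namespace")
	// With `from`/`to` these values were ignored, so every field_values call
	// scanned the whole dataset; with `start`/`end` the time filter is applied.
	params.Set("start", "1725562800000")
	params.Set("end", "1725649199999")

	fmt.Println("http://victorialogs:9428/select/logsql/field_values?" + params.Encode())
}
```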
@dmitryk-dk added the "bug" label and removed the "question" and "waiting for release" labels on Sep 12, 2024
@dmitryk-dk
Contributor

Hi @Neko-Follower! The issue was resolved in the release. Please reopen the issue if you find any problems.

@Neko-Follower
Author

Hi, I checked the new v0.5.0 release. The issue with the timeout is gone; the values are loaded instantly, I like it.

But the issue with the query filter is still relevant.
Screenshot 2024-09-16 at 11:04:51

Grafana v11.2.0
VictoriaLogs v0.28.0
VictoriaLogs datasource plugin v0.5.0

@dmitryk-dk
Contributor

Hi! Could you please share the request from Grafana and the response? Could you also check the VictoriaLogs logs; maybe another error or warning is present there?

@Neko-Follower
Author

Sure. Here are the request and response.
Screenshot 2024-09-16 at 12:46:51

Screenshot 2024-09-16 at 12:47:22

I checked the logs in VictoriaLogs and didn't see any errors or warnings.

@Neko-Follower
Author

I tried several queries. This doesn't seem to work with certain values.
Screenshot 2024-09-16 at 13:16:58
Screenshot 2024-09-16 at 13:18:15

This one works fine:
Screenshot 2024-09-16 at 13:19:54
Screenshot 2024-09-16 at 13:20:12

@dmitryk-dk reopened this on Sep 16, 2024
@dmitryk-dk
Contributor

Hi @Neko-Follower! I have investigated the issue. As far as I can see, there is a problem with the VictoriaLogs API; the datasource works well. We will discuss it, and I will get back to you soon.

valyala added a commit to VictoriaMetrics/VictoriaMetrics that referenced this issue Sep 30, 2024
valyala added a commit to VictoriaMetrics/VictoriaMetrics that referenced this issue Sep 30, 2024
@valyala

valyala commented Sep 30, 2024

The issue with returning unexpected field values has been fixed on the VictoriaLogs server side in commit VictoriaMetrics/VictoriaMetrics@0c0f013. This commit will be included in the next release of VictoriaLogs.

@valyala

valyala commented Sep 30, 2024

FYI, the issue with unexpected field values should be fixed in v0.32.1-victorialogs release.

@dmitryk-dk
Contributor

Hi @Neko-Follower! Can you update VictoriaLogs to the v0.32.1-victorialogs release and check whether the issue is resolved?

@p5i

p5i commented Sep 30, 2024

Hi. I had the same issue with unexpected values returned by 'uniq'; it is fixed now in v0.32.1.
But the same issue remains with 'count_uniq'.
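A rough sketch of the distinction being described, assuming this refers to LogsQL's `uniq` pipe and `count_uniq` stats function queried via /select/logsql/query; the field name and address are placeholders, not taken from p5i's setup:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	queries := []string{
		// distinct field values; the unexpected-values issue here is fixed in v0.32.1
		`* | uniq by (namespace)`,
		// number of distinct field values; reported as still affected in the comment above
		`* | stats count_uniq(namespace) as namespaces`,
	}
	for _, q := range queries {
		params := url.Values{}
		params.Set("query", q)
		fmt.Println("http://victorialogs:9428/select/logsql/query?" + params.Encode())
	}
}
```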

@dmitryk-dk
Contributor

Could you please create an issue in the VictoriaMetrics repository? This problem is mostly related to the VictoriaLogs API.

@Neko-Follower
Author

Hi @dmitryk-dk! I'll try it during the day and let you know the results.

@Neko-Follower
Author

Hi! I updated to v0.32.1-victorialogs. Tested it; it works great. The issue can be closed :)
