This repository has been archived by the owner on Dec 11, 2022. It is now read-only.
Using the auto interval calculation fails for a 30-minute selected time interval.
It's not grouping correctly: the query returns more rows than the LIMIT allows, so the graph is empty at the end.
```sql
SELECT
  TIMESTAMP_SECONDS(DIV(UNIX_SECONDS(`timestamp`), 1) * 1) AS time,
  COUNT(*) AS count
FROM
  ``
WHERE
  `timestamp` BETWEEN TIMESTAMP_MILLIS(1587532641900)
  AND TIMESTAMP_MILLIS(1587534441900)
GROUP BY
  1
ORDER BY
  1
LIMIT
  1742
```
Clearly 1800 one-second buckets can never fit into 1742 rows; 1742 comes from maxDataPoints.
The same problem arises for ranges up to the last 12 hours, where the graph flattens out because it would need larger time slices.
So either we need to skip the maxDataPoints value in these cases and increase the limit, or we need a bucket that isn't a whole second (here it would be about 1.033 seconds, though it's unclear how the SQL expression would support a fractional bucket). Best of all, the query should group by 2 seconds when a 1-second bucket would overflow the limit, instead of the 1 second it's grouped at currently.
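The "round up to 2 seconds" idea can be sketched as follows. This is an illustrative Python sketch, not the plugin's actual code; the function names and the grouping-expression builder are assumptions, only the arithmetic and the BigQuery expression shape come from the issue above.

```python
import math

def bucket_seconds(range_ms: int, max_data_points: int) -> int:
    """Smallest whole-second bucket that fits the range into max_data_points rows."""
    range_s = range_ms // 1000
    # Round UP instead of down: 1800 s / 1742 points -> 2 s, not 1 s.
    return max(1, math.ceil(range_s / max_data_points))

def time_group_expr(column: str, range_ms: int, max_data_points: int) -> str:
    """Build a BigQuery grouping expression for the computed bucket (hypothetical helper)."""
    n = bucket_seconds(range_ms, max_data_points)
    return f"TIMESTAMP_SECONDS(DIV(UNIX_SECONDS(`{column}`), {n}) * {n})"

range_ms = 1587534441900 - 1587532641900
print(bucket_seconds(range_ms, 1742))                  # 2
print(time_group_expr("timestamp", range_ms, 1742))    # groups by 2-second buckets
```

With a 2-second bucket the query returns at most 900 rows, which fits comfortably under the 1742-row limit.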
ptomasroos changed the title from "$__timeGroup(...., auto) fails for shorter time spanns" to "$__timeGroup(...., auto) fails for shorter time spans" on Apr 22, 2020.
Bug Report
Expected Behavior
A full graph in the display
Actual Behavior
Missing data for the last few seconds
Steps to Reproduce the Problem
Specifications
Extra
Here are the two timestamps used in this example.
(1587534441900-1587532641900)/1000/60 = 30 minutes.
(1587534441900-1587532641900)/1000 = 1800 seconds.
The LIMIT passed is 1742.
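The arithmetic above, and exactly how much data the limit cuts off, can be checked directly (plain Python, nothing plugin-specific):

```python
# Timestamps and LIMIT taken from the query in the inspector.
start_ms, end_ms = 1587532641900, 1587534441900
limit = 1742

range_s = (end_ms - start_ms) // 1000
print(range_s)           # 1800 seconds
print(range_s / 60)      # 30.0 minutes

# With 1-second buckets, ORDER BY time ascending plus LIMIT 1742
# drops the newest buckets: the last 58 seconds of the graph are empty.
print(range_s - limit)   # 58
```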
The example query from the inspector is shown above.