
Data alignment feature does not seem to be fully working #1133

Open
yurtesen opened this issue Jan 6, 2021 · 19 comments

Comments

@yurtesen

yurtesen commented Jan 6, 2021

Describe the bug
@alexanderzobnin explained the feature with screenshots here:
#1109 (comment)

However, after upgrading to v4.1.1 the graphs still look like this (there are no gaps for missing points):
[screenshot]

Expected behavior
I expected the graph to look like this:
[screenshot]

Screenshots
See above. I also noticed that when I select "null as zero", only the left part of the graph is connected to zero, but not the right part.

Network data

Software versions

| Grafana | Zabbix | Grafana-Zabbix Plugin |
|---------|--------|-----------------------|
| 7.3.6   | 5.0.7  | 4.1.1                 |
@Shmakovm

Shmakovm commented Jan 11, 2021

I have the same problems after upgrading to 4.1.1. (Grafana 7.3.6, Zabbix 5.0.7)

Screenshot 2021-01-11 141013

@alexanderzobnin
Collaborator

@yurtesen What's the update interval of the item? Is it dynamic?

@Shmakovm

I have a fixed update interval of 1 minute. I rolled back to plugin version 4.0.2; on that version there are no such problems.

@alexanderzobnin
Collaborator

@Shmakovm you can disable the data alignment feature in the plugin config. What's the issue in your case?

@yurtesen
Author

yurtesen commented Jan 20, 2021

@alexanderzobnin The update interval for the item in Zabbix is 5m. But when consecutive values are the same, there is no value in Zabbix because of the threshold/discard option.

How is your plugin retrieving the item update interval? From where?

@alexanderzobnin
Collaborator

It gets it from the Zabbix item (if it's a fixed interval) or tries to autodetect it (if that fails).
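
For context, a minimal sketch of what interval autodetection and snapping could look like, assuming a median-of-consecutive-deltas heuristic (the plugin's actual logic may differ):

```typescript
// Hypothetical sketch: detect a fixed polling interval from raw timestamps
// (median of consecutive deltas) and floor each point to that interval.
type Point = [value: number, timestampMs: number];

function detectIntervalMs(points: Point[]): number {
  if (points.length < 2) {
    throw new Error("need at least two points to detect an interval");
  }
  const deltas = points
    .slice(1)
    .map((p, i) => p[1] - points[i][1])
    .sort((a, b) => a - b);
  // The median delta is robust against a few late or missing samples.
  return deltas[Math.floor(deltas.length / 2)];
}

function alignToInterval(points: Point[], intervalMs: number): Point[] {
  return points.map(([value, ts]): Point => [value, Math.floor(ts / intervalMs) * intervalMs]);
}

// Example with points polled roughly every 5 minutes:
const raw: Point[] = [
  [0.0233, 1612616094000],
  [0.0266, 1612616394000],
  [0.0266, 1612616694000],
];
const intervalMs = detectIntervalMs(raw); // 300000 ms = 5 minutes
console.log(alignToInterval(raw, intervalMs));
```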

@alexanderzobnin
Collaborator

@yurtesen How did you get the second graph (with gaps)?

@alexanderzobnin
Collaborator

Ok, looks like it's just photoshopped.

@alexanderzobnin
Collaborator

@yurtesen make sure data alignment is not disabled in the data source config.

Screenshot from 2021-01-20 15-48-29

@yurtesen
Author

Yes, the 2nd graph is what I expected to see; I edited the image to emulate it.
I am 100% sure that alignment is enabled in the data source, and it is also enabled for the graph.

I guess you are using the "delay" field from the API?
https://www.zabbix.com/documentation/current/manual/api/reference/item/get

Now that I think about it, this item is probably a dependent item. Therefore it does not have its own interval. Could this be the problem? Do you check the interval from the master item (master_itemid) in these cases?
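
For illustration, a hedged sketch of how a dependent item's effective interval could be resolved by following master_itemid through the Zabbix API item.get method; the endpoint URL, token handling, and helper names are placeholders, not plugin code:

```typescript
// Hypothetical helper: resolve the effective update interval of an item.
// Dependent items (type 18) have no delay of their own, so follow
// master_itemid up to the master item and use its delay instead.
interface ZabbixItem {
  itemid: string;
  type: string;
  delay: string;        // e.g. "5m", "60", or "" for dependent items
  master_itemid?: string;
}

async function zabbixRequest<T>(method: string, params: unknown): Promise<T> {
  // Placeholder JSON-RPC call; URL and auth handling are assumptions.
  const res = await fetch("https://zabbix.example.com/api_jsonrpc.php", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", method, params, auth: "TOKEN", id: 1 }),
  });
  return (await res.json()).result as T;
}

async function resolveDelay(itemid: string): Promise<string> {
  const [item] = await zabbixRequest<ZabbixItem[]>("item.get", {
    itemids: [itemid],
    output: ["itemid", "type", "delay", "master_itemid"],
  });
  const DEPENDENT_ITEM = "18";
  if (item.type === DEPENDENT_ITEM && item.master_itemid) {
    return resolveDelay(item.master_itemid); // recurse to the master item
  }
  return item.delay;
}
```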

@yurtesen
Author

Do you need more information?

@alexanderzobnin
Collaborator

Try the latest version - I fixed a bug related to alignment; maybe it affected you as well.

@yurtesen
Author

yurtesen commented Feb 6, 2021

I updated to 4.1.2 and something is still strange... I checked values in Zabbix from 2021-02-06 14:53:23 to 2021-02-06 16:01:09 and there are no missing points:
[screenshot]
If I make the graph without data alignment, everything looks good:
[screenshot]
As soon as I enable data alignment it does this (it moved half of the graph to the left):
[screenshot]
Query inspector says:

{
  "request": {
    "url": "api/tsdb/query",
    "method": "POST",
    "data": {
      "queries": [
        {
          "refId": "A",
          "format": "time_series",
          "datasourceId": 2,
          "rawSql": "SELECT to_char(itemid, 'FM99999999999999999999') AS metric, clock / 2 * 2 AS time, AVG(value) AS value FROM history WHERE itemid IN (481113, 481115, 481112, 481114) AND clock > 1612616004 AND clock < 1612620070 GROUP BY 1, 2 ORDER BY time ASC",
          "maxDataPoints": 10000
        }
      ]
    },
    "hideFromInspector": false
  },
  "response": {
    "results": {
      "A": {
        "refId": "A",
        "meta": {
          "executedQueryString": "SELECT to_char(itemid, 'FM99999999999999999999') AS metric, clock / 2 * 2 AS time, AVG(value) AS value FROM history WHERE itemid IN (481113, 481115, 481112, 481114) AND clock > 1612616004 AND clock < 1612620070 GROUP BY 1, 2 ORDER BY time ASC",
          "rowCount": 0
        },
        "series": [
          {
            "name": "481112",
            "points": [
              [
                0.02330496003901708,
                1612616094000
              ],
              [
                0.026616677997638685,
                1612616394000
              ],
              [
                0.0266272099783115,
                1612616694000
              ],
              [
                0.02326428353363466,
                1612616996000
              ],
              [
                0.026597567472896422,
                1612617296000
              ],
              [
                0.026527094775835893,
                1612617598000
              ],
              [
                0.02319706259168484,
                1612617900000
              ],
              [
                0.02665740349760997,
                1612618200000
              ],
              [
                0.02655293474411828,
                1612618500000
              ],
              [
                0.023254523404619712,
                1612618802000
              ],
              [
                0.026573528048104252,
                1612619102000
              ],
              [
                0.0266538439802312,
                1612619404000
              ],
              [
                0.02314400084368662,
                1612619706000
              ],
              [
                0.02654594495232154,
                1612620006000
              ]
            ]
          },
          {
            "name": "481114",
            "points": [
              [
                1.9909094433331733,
                1612616094000
              ],
              [
                2.2258196975525353,
                1612616394000
              ],
              [
                2.359836484327857,
                1612616694000
              ],
              [
                1.977464100358946,
                1612616996000
              ],
              [
                2.699653098498987,
                1612617296000
              ],
              [
                2.2581189427930304,
                1612617598000
              ],
              [
                2.2202902766326917,
                1612617900000
              ],
              [
                2.4791385252777274,
                1612618200000
              ],
              [
                3.4253285819912582,
                1612618500000
              ],
              [
                2.5845741726848765,
                1612618802000
              ],
              [
                2.318540322197096,
                1612619102000
              ],
              [
                2.2555815468270652,
                1612619404000
              ],
              [
                1.83829492415568,
                1612619706000
              ],
              [
                2.3459978851614163,
                1612620006000
              ]
            ]
          },
          {
            "name": "481113",
            "points": [
              [
                0,
                1612618500000
              ]
            ]
          },
          {
            "name": "481115",
            "points": [
              [
                0,
                1612618500000
              ]
            ]
          }
        ],
        "tables": null,
        "dataframes": null
      }
    }
  }
}
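
For reference, a small sketch of the bucketing math implied by the generated rawSql above: assuming `clock` is an integer column and the backend truncates integer division (as PostgreSQL does), `clock / 2 * 2` floors timestamps to a 2-second bucket, which barely changes them, whereas a 300-second bucket would snap them to the item's 5-minute boundaries. Illustrative values only, not plugin code:

```typescript
// Illustrative bucketing math: the generated SQL's `clock / N * N` (with an
// integer clock column) is equivalent to flooring to an N-second bucket.
function bucket(clockSec: number, intervalSec: number): number {
  return Math.floor(clockSec / intervalSec) * intervalSec;
}

const clock = 1612616094; // one of the raw timestamps above (seconds)
console.log(bucket(clock, 2));   // 1612616094 -> effectively unchanged
console.log(bucket(clock, 300)); // 1612615800 -> snapped to a 5-minute boundary
```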

@yurtesen
Author

yurtesen commented Feb 6, 2021

And in this case it is also removing an existing point:
[screenshot]
[screenshot]

{
  "request": {
    "url": "api/tsdb/query",
    "method": "POST",
    "data": {
      "queries": [
        {
          "refId": "A",
          "format": "time_series",
          "datasourceId": 2,
          "rawSql": "SELECT to_char(itemid, 'FM99999999999999999999') AS metric, clock / 1.9992458521870287 * 1.9992458521870287 AS time, AVG(value) AS value FROM history WHERE itemid IN (481113, 481115, 481112, 481114) AND clock > 1612474124 AND clock < 1612476775 GROUP BY 1, 2 ORDER BY time ASC",
          "maxDataPoints": 10000
        }
      ]
    },
    "hideFromInspector": false
  },
  "response": {
    "results": {
      "A": {
        "refId": "A",
        "meta": {
          "executedQueryString": "SELECT to_char(itemid, 'FM99999999999999999999') AS metric, clock / 1.9992458521870287 * 1.9992458521870287 AS time, AVG(value) AS value FROM history WHERE itemid IN (481113, 481115, 481112, 481114) AND clock > 1612474124 AND clock < 1612476775 GROUP BY 1, 2 ORDER BY time ASC",
          "rowCount": 0
        },
        "series": [
          {
            "name": "481112",
            "points": [
              [
                0.029942432345913244,
                1612474157000
              ],
              [
                0.03325930250056314,
                1612474458000
              ],
              [
                0.029911361491426137,
                1612474759000
              ],
              [
                0.03329146802690592,
                1612475059000
              ],
              [
                0.03328216258879398,
                1612475360000
              ],
              [
                0.033192960182350795,
                1612475661000
              ],
              [
                0.033311501486590574,
                1612475961000
              ],
              [
                0.033202018891756654,
                1612476262000
              ],
              [
                0.10972514080128397,
                1612476563000
              ]
            ]
          },
          {
            "name": "481113",
            "points": [
              [
                0,
                1612474157000
              ]
            ]
          },
          {
            "name": "481114",
            "points": [
              [
                2.9277044960448504,
                1612474157000
              ],
              [
                3.13967815605316,
                1612474458000
              ],
              [
                3.0542823567356248,
                1612474759000
              ],
              [
                3.1194105541210844,
                1612475059000
              ],
              [
                2.865594198895162,
                1612475360000
              ],
              [
                3.1400540332503852,
                1612475661000
              ],
              [
                2.6382709177379735,
                1612475961000
              ],
              [
                3.987562468899974,
                1612476262000
              ],
              [
                13.243491994288304,
                1612476563000
              ]
            ]
          },
          {
            "name": "481115",
            "points": [
              [
                0,
                1612474157000
              ]
            ]
          }
        ],
        "tables": null,
        "dataframes": null
      }
    }
  }
}

@mjtrangoni
Contributor

@alexanderzobnin +1
I see the same effect with Zabbix 4.2.8, Grafana 7.3.7 and plugin 4.1.2 on some graphs, but not all of them. It loses one or two points in the middle of the graph, no matter which range you take. I can see it for 30m, 1h, 3h, up to 1d ranges.

@keeks5

keeks5 commented Aug 3, 2021

Encountered the same problem.
Grafana 7.4.0
Plugin 4.1.5
Zabbix 5.0.8
@alexanderzobnin any updates?

@goreck

goreck commented Sep 15, 2021

Similar problem:
Grafana 8.1.2
Plugin 4.1.5
Zabbix 5.4.4

Data polled at 1m interval.
Consider what happens when an item is polled late and recorded during a subsequent interval, and the next one is polled without delay.

Response data:

{
  "result": [{
      "clock": "1631706954",
      "itemid": "62593",
      "ns": "471772119",
      "value": "786044632"
    }, {
      "clock": "1631707013",
      "itemid": "62593",
      "ns": "900752623",
      "value": "813032080"
    }, {
      "clock": "1631707074",
      "itemid": "62593",
      "ns": "91841664",
      "value": "808997872"
    }, {
      "clock": "1631707136",
      "itemid": "62593",
      "ns": "792429903",
      "value": "811282464"
    }, {
      "clock": "1631707194",
      "itemid": "62593",
      "ns": "562055013",
      "value": "802328296"
    }, {
      "clock": "1631707260",
      "itemid": "62593",
      "ns": "640012802",
      "value": "818647920"
    }, {
      "clock": "1631707316",
      "itemid": "62593",
      "ns": "128444864",
      "value": "775530264"
    }, {
      "clock": "1631707376",
      "itemid": "62593",
      "ns": "579132108",
      "value": "808677400"
    }, {
      "clock": "1631707434",
      "itemid": "62593",
      "ns": "233046613",
      "value": "822326360"
    }, {
      "clock": "1631707495",
      "itemid": "62593",
      "ns": "969142375",
      "value": "776823024"
    }, {
      "clock": "1631707560",
      "itemid": "62593",
      "ns": "407341753",
      "value": "778134936"
    }, {
      "clock": "1631707616",
      "itemid": "62593",
      "ns": "839243307",
      "value": "757824760"
    }, {
      "clock": "1631707676",
      "itemid": "62593",
      "ns": "21640834",
      "value": "834242352"
    }, {
      "clock": "1631707735",
      "itemid": "62593",
      "ns": "701393332",
      "value": "815660384"
    }, {
      "clock": "1631707796",
      "itemid": "62593",
      "ns": "765022133",
      "value": "859856400"
    }
  ]
}
| Unaligned timestamp | Value | Aligned timestamp | Value |
|---|---|---|---|
| 2021-09-15 13:55:54 | 786 044 632 | 2021-09-15 13:55:00 | 786 044 632 |
| 2021-09-15 13:56:53 | 813 032 080 | 2021-09-15 13:56:00 | 813 032 080 |
| 2021-09-15 13:57:54 | 808 997 872 | 2021-09-15 13:57:00 | 808 997 872 |
| 2021-09-15 13:58:56 | 811 282 464 | 2021-09-15 13:58:00 | 811 282 464 |
| 2021-09-15 13:59:54 | 802 328 296 | 2021-09-15 13:59:00 | 802 328 296 |
| 2021-09-15 14:01:00 | 818 647 920 | 2021-09-15 14:00:00 | 810 488 108 |
| 2021-09-15 14:01:56 | 775 530 264 | 2021-09-15 14:01:00 | 818 647 920 |
| 2021-09-15 14:02:56 | 808 677 400 | 2021-09-15 14:02:00 | 775 530 264 |
| 2021-09-15 14:03:54 | 822 326 360 | 2021-09-15 14:03:00 | 808 677 400 |
| 2021-09-15 14:04:55 | 776 823 024 | 2021-09-15 14:04:00 | 822 326 360 |
| 2021-09-15 14:06:00 | 778 134 936 | 2021-09-15 14:06:00 | 776 823 024 |
| 2021-09-15 14:06:56 | 757 824 760 | 2021-09-15 14:07:00 | 778 134 936 |
| 2021-09-15 14:07:56 | 834 242 352 | 2021-09-15 14:08:00 | 757 824 760 |
| 2021-09-15 14:08:55 | 815 660 384 | 2021-09-15 14:09:00 | 834 242 352 |
| 2021-09-15 14:09:56 | 859 856 400 | | |

You'll notice two "late" samples at minutes 14:00 and 14:05.
When data alignment is enabled, the "missing" value for 14:00 is an average of the raw samples at 13:59:54 and 14:01:00. Then 14:01 shows the value of the raw 14:01:00 sample, but the subsequent samples are offset by one, i.e. 14:02 shows the value for 14:01:56, 14:03 shows the value for 14:02:56, and so on.
Then something strange happens at 14:05 - this sample is omitted from the aligned dataset.
The following data points are offset by two: 14:06:00 shows 776 823 024 (the raw 14:04:55 value), 14:07:00 shows the value for 14:06:00, 14:08:00 shows the value for 14:06:56, etc.
You'll also notice that the aligned dataset is one row shorter than the unaligned one. In fact, for n > 1 "late" samples, the number of missing rows in the aligned dataset is n-1.

The resulting graph shows values at the wrong timestamps. If there are multiple stacked datasets and some of them contain such "late" data points, the resulting values are misaligned relative to each other as well.
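
As an illustration of an alignment strategy that would avoid this shift, here is a minimal sketch that snaps each raw sample independently to its nearest interval boundary (round-to-nearest, last value wins on collisions); this is hypothetical code, not the plugin's actual algorithm:

```typescript
// Sketch: snap each sample independently to the nearest interval boundary.
// A "late" sample (e.g. 14:01:00 instead of ~14:00) then lands in its own
// bucket without shifting every subsequent sample by one position.
type Sample = { clock: number; value: number }; // clock in seconds

function alignNearest(samples: Sample[], intervalSec: number): Map<number, number> {
  const buckets = new Map<number, number>();
  for (const { clock, value } of samples) {
    const ts = Math.round(clock / intervalSec) * intervalSec;
    buckets.set(ts, value); // last value wins if two samples hit one bucket
  }
  return buckets;
}

// Timestamps taken from the response data above (the third sample is the "late" one).
const samples: Sample[] = [
  { clock: 1631706954, value: 786044632 },
  { clock: 1631707013, value: 813032080 },
  { clock: 1631707260, value: 818647920 }, // polled ~1 minute late
];
console.log(alignNearest(samples, 60));
```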


This issue has been automatically marked as stale because it has not had activity in the last 2 years. It will be closed in 60 days if no further activity occurs. Please feel free to leave a comment if you believe the issue is still relevant. Thank you for your contributions!

@github-actions github-actions bot added the stale label Aug 13, 2024
@iqt4

iqt4 commented Dec 4, 2024

The thread is quite old, but let me report another issue with data alignment in the current Grafana Cloud implementation.

My data consists of a status expressed as integer values. Data is polled at a 1m interval, but due to the nature of the setup there is some fluctuation in the timestamps. If I align the data, the plugin "calculates" something like an average, i.e. if there are two data points (status 1 and 2) within one minute, the aligned data point comes out as 1.5, which makes no sense.

Is there a way to stop the calculation and instead use e.g. the last or first value, and, if a data point is missing, just repeat the previous or following data point?
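
For illustration, a minimal sketch of bucketing that keeps the last value per interval instead of averaging (two status points 1 and 2 within one minute then yield 2, not 1.5) and forward-fills empty buckets from the previous one; hypothetical code, not an existing plugin option:

```typescript
type Point = { ts: number; value: number }; // ts in milliseconds

// Bucket by interval, keep the last value per bucket, forward-fill gaps.
function lastValueAlign(points: Point[], intervalMs: number): Point[] {
  const byBucket = new Map<number, number>();
  for (const p of [...points].sort((a, b) => a.ts - b.ts)) {
    byBucket.set(Math.floor(p.ts / intervalMs) * intervalMs, p.value); // last wins
  }
  const buckets = [...byBucket.keys()].sort((a, b) => a - b);
  const result: Point[] = [];
  let prev: number | undefined;
  for (let ts = buckets[0]; ts <= buckets[buckets.length - 1]; ts += intervalMs) {
    const value = byBucket.get(ts) ?? prev; // repeat the previous value for gaps
    if (value !== undefined) result.push({ ts, value });
    prev = value;
  }
  return result;
}

// Two status samples in the same minute -> the bucket reports 2, not 1.5;
// the empty minute in between is filled with the previous value.
console.log(lastValueAlign(
  [{ ts: 60_010, value: 1 }, { ts: 60_045, value: 2 }, { ts: 180_020, value: 1 }],
  60_000,
));
```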
