Charge level regression line #4187
Conversation
✅ Deploy Preview for teslamate ready!
@JakobLichterfeld - still experimenting a little with some added lines to add value to the charge level dashboard. Could you please advise on what is favored: regression via plugin, or showing moving percentiles within the graph? I'll update the PR afterwards based on your preference (mine would be the percentiles, not the regression line). I figured out pure PostgreSQL-based solutions - see the outcome and concepts used in the links below. Making use of TimescaleDB Hyperfunctions would have made the query much easier to write and read, but I guess switching our DB is not an option currently.
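For reference, PostgreSQL's `percentile_cont(f) WITHIN GROUP (ORDER BY ...)` (used throughout the queries below) computes a continuous percentile with linear interpolation between neighboring values. A minimal Python sketch of that semantics, with made-up battery levels for illustration:

```python
def percentile_cont(values, fraction):
    # Continuous percentile with linear interpolation, mirroring PostgreSQL's
    # percentile_cont(fraction) WITHIN GROUP (ORDER BY ...).
    s = sorted(values)
    pos = fraction * (len(s) - 1)   # fractional row position in the sorted set
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

# made-up battery levels for illustration
levels = [40, 55, 60, 72, 80]
median = percentile_cont(levels, 0.5)   # 60.0, the exact middle value
band = (percentile_cont(levels, 0.075), percentile_cont(levels, 0.925))
```

The 0.075/0.925 pair is what the dashboard query uses to draw a band around the median.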
Before updating the PR - could someone run the query below (6 months+ history) and note the query time, please?

with sample_set as (
select date, battery_level from positions p where p.car_id = 2 and p.battery_level is not null
-- union all
-- select date, battery_level, usable_battery_level from charges c inner join charging_processes cp on c.charging_process_id = cp.id and cp.car_id = 2
), date_series as (
select generate_series('2024-03-08T15:19:38.536Z', '2024-09-08T14:19:38.536Z', (10800000 / 1 || ' milliseconds')::INTERVAL) as series_id
), date_series_bucketing as (
select series_id, lead(series_id) over (order by series_id asc) as next_series_id from date_series
), bucketing as (
select
series_id, avg(battery_level) as battery_level
from date_series_bucketing
left join sample_set on
sample_set.date >= date_series_bucketing.series_id
and sample_set.date < date_series_bucketing.next_series_id
group by series_id
), locf_intermediate as (
select series_id, battery_level, count(battery_level) over (order by series_id) as i from bucketing
), bucketing_gapfill as (
select series_id, max(battery_level) over (partition by i) as battery_level from locf_intermediate
), trick as (
select series_id, battery_level, array_agg(battery_level) over (rows between 125 preceding and 125 following) as arr from bucketing_gapfill
)
select series_id,
case when battery_level is null then null else(SELECT percentile_cont(0.075) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p0075,
case when battery_level is null then null else(SELECT percentile_cont(0.5) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p050,
case when battery_level is null then null else(SELECT percentile_cont(0.925) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p0925
from trick;
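The chain of CTEs above can be modeled end-to-end in plain Python: bucket the samples onto the time series, gap-fill with last-observation-carried-forward (what the `count()`/`max()` window trick achieves in SQL), then take a moving percentile over a window of buckets. This is an illustrative sketch with toy data, not the production query; the bucket edges and window size here are arbitrary:

```python
def percentile_cont(values, fraction):
    # Continuous percentile with linear interpolation (PostgreSQL semantics).
    s = sorted(values)
    pos = fraction * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def bucket_avg(samples, edges):
    # "bucketing" CTE: average battery_level per [edge, next_edge) interval;
    # empty buckets yield None (NULL in SQL).
    out = []
    for lo, hi in zip(edges, edges[1:]):
        hits = [v for t, v in samples if lo <= t < hi]
        out.append(sum(hits) / len(hits) if hits else None)
    return out

def locf(values):
    # "locf_intermediate" + "bucketing_gapfill" CTEs: carry the last non-NULL
    # bucket forward. In SQL, count(x) over (order by t) gives every NULL row
    # the group id of the preceding value, and max() over (partition by i)
    # copies that value into the gap.
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

def moving_percentile(values, fraction, k):
    # "trick" CTE + final select: percentile over a +/- k bucket window
    # (the SQL uses 125 preceding/following; assumes no leading NULLs here,
    # which the SQL handles with its CASE WHEN battery_level IS NULL).
    return [percentile_cont(values[max(0, i - k):i + k + 1], fraction)
            for i in range(len(values))]

# toy data: (minute, battery_level) samples, 10-minute buckets
samples = [(1, 80), (4, 78), (22, 70), (45, 55), (48, 90)]
edges = [0, 10, 20, 30, 40, 50]
gapfilled = locf(bucket_avg(samples, edges))
p50 = moving_percentile(gapfilled, 0.5, k=1)
```

The empty second bucket is filled with the previous bucket's average before the percentile window runs over it, which is exactly why the SQL needs the gap-fill step: otherwise sparse driving days would punch holes in the percentile band.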
Thanks for your suggestion and effort!
Personally, I don't think we need a forecast for the charge level (which would be a benefit of using regression). I think moving percentiles are well suited as we want more transparency about the distribution of the charge states and detailed insights into the charging behavior.
As TimescaleDB Hyperfunctions is an extension to PostgreSQL, I could imagine doing the migration effort if we do need the additional performance. I personally do not yet have extensive knowledge of TimescaleDB Hyperfunctions.
I changed it to
Hi @JakobLichterfeld - the query should output 1472 rows even if you are selecting a non-existing car right now. Could you run the query again and append an
Sorry for not providing the non-existent error message in the first place :-)
Super strange... what version of Postgres are you using? @DrMichael, could you maybe run the query?
PostgreSQL 16
Please terminate the query with
😀 That's what I was looking for with
I changed your comment with the query above with

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on trick (cost=112.45..219.45 rows=200 width=32) (actual time=34.540..346.705 rows=1472 loops=1)
-> WindowAgg (cost=112.45..120.45 rows=200 width=72) (actual time=34.533..344.951 rows=1472 loops=1)
-> Subquery Scan on bucketing_gapfill (cost=112.45..117.95 rows=200 width=40) (actual time=34.022..37.424 rows=1472 loops=1)
-> WindowAgg (cost=112.45..115.95 rows=200 width=48) (actual time=34.019..36.203 rows=1472 loops=1)
-> Sort (cost=112.45..112.95 rows=200 width=48) (actual time=30.719..31.332 rows=1472 loops=1)
Sort Key: locf_intermediate.i
Sort Method: quicksort Memory: 90kB
-> Subquery Scan on locf_intermediate (cost=55.28..104.80 rows=200 width=48) (actual time=5.984..28.824 rows=1472 loops=1)
-> WindowAgg (cost=55.28..104.80 rows=200 width=48) (actual time=5.980..27.491 rows=1472 loops=1)
-> GroupAggregate (cost=55.28..101.80 rows=200 width=40) (actual time=5.946..20.159 rows=1472 loops=1)
Group Key: (generate_series('2024-03-08 15:19:38.536+00'::timestamp with time zone, '2024-09-08 14:19:38.536+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))
-> Nested Loop Left Join (cost=55.28..94.30 rows=1000 width=10) (actual time=5.905..15.620 rows=1472 loops=1)
Join Filter: ((p.date >= (generate_series('2024-03-08 15:19:38.536+00'::timestamp with time zone, '2024-09-08 14:19:38.536+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))) AND (p.date < (lead((generate_series('2024-03-08 15:19:38.536+00'::timestamp with time zone, '2024-09-08 14:19:38.536+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))) OVER (?))))
-> WindowAgg (cost=54.85..72.35 rows=1000 width=16) (actual time=2.687..9.706 rows=1472 loops=1)
-> Sort (cost=54.85..57.35 rows=1000 width=8) (actual time=2.659..3.492 rows=1472 loops=1)
Sort Key: (generate_series('2024-03-08 15:19:38.536+00'::timestamp with time zone, '2024-09-08 14:19:38.536+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))
Sort Method: quicksort Memory: 56kB
-> ProjectSet (cost=0.00..5.02 rows=1000 width=8) (actual time=0.363..1.611 rows=1472 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.005 rows=1 loops=1)
-> Materialize (cost=0.43..4.46 rows=1 width=10) (actual time=0.003..0.003 rows=0 loops=1472)
-> Index Scan using positions_car_id_index on positions p (cost=0.43..4.45 rows=1 width=10) (actual time=3.193..3.193 rows=0 loops=1)
Index Cond: (car_id = 2)
Filter: (battery_level IS NOT NULL)
SubPlan 1
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (never executed)
-> Function Scan on unnest trick_1 (cost=0.00..0.10 rows=10 width=32) (never executed)
SubPlan 2
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (never executed)
-> Function Scan on unnest trick_2 (cost=0.00..0.10 rows=10 width=32) (never executed)
SubPlan 3
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (never executed)
-> Function Scan on unnest trick_3 (cost=0.00..0.10 rows=10 width=32) (never executed)
Planning Time: 36.497 ms
Execution Time: 349.111 ms
(34 rows)
and without

         series_id          | p0075 | p050 | p0925
----------------------------+-------+------+-------
2024-03-08 15:19:38.536+00 | | |
2024-03-08 18:19:38.536+00 | | |
2024-03-08 21:19:38.536+00 | | |
2024-03-09 00:19:38.536+00 | | |
2024-03-09 03:19:38.536+00 | | |
2024-03-09 06:19:38.536+00 | | |
2024-03-09 09:19:38.536+00 | | |
(...)
2024-09-01 21:19:38.536+00 | | |
2024-09-02 00:19:38.536+00 | | |
2024-09-02 03:19:38.536+00 | | |
2024-09-02 06:19:38.536+00 | | |
2024-09-02 09:19:38.536+00 | | |
2024-09-02 12:19:38.536+00 | | |
2024-09-02 15:19:38.536+00 | | |
2024-09-02 18:19:38.536+00 | | |
2024-09-02 21:19:38.536+00 | | |
2024-09-03 00:19:38.536+00 | | |
2024-09-03 03:19:38.536+00 | | |
2024-09-03 06:19:38.536+00 | | |
2024-09-03 09:19:38.536+00 | | |
2024-09-03 12:19:38.536+00 | | |
2024-09-03 15:19:38.536+00 | | |
2024-09-03 18:19:38.536+00 | | |
2024-09-03 21:19:38.536+00 | | |
2024-09-04 00:19:38.536+00 | | |
2024-09-04 03:19:38.536+00 | | |
2024-09-04 06:19:38.536+00 | | |
2024-09-04 09:19:38.536+00 | | |
2024-09-04 12:19:38.536+00 | | |
2024-09-04 15:19:38.536+00 | | |
2024-09-04 18:19:38.536+00 | | |
2024-09-04 21:19:38.536+00 | | |
2024-09-05 00:19:38.536+00 | | |
2024-09-05 03:19:38.536+00 | | |
2024-09-05 06:19:38.536+00 | | |
2024-09-05 09:19:38.536+00 | | |
2024-09-05 12:19:38.536+00 | | |
2024-09-05 15:19:38.536+00 | | |
2024-09-05 18:19:38.536+00 | | |
2024-09-05 21:19:38.536+00 | | |
2024-09-06 00:19:38.536+00 | | |
2024-09-06 03:19:38.536+00 | | |
2024-09-06 06:19:38.536+00 | | |
2024-09-06 09:19:38.536+00 | | |
2024-09-06 12:19:38.536+00 | | |
2024-09-06 15:19:38.536+00 | | |
2024-09-06 18:19:38.536+00 | | |
2024-09-06 21:19:38.536+00 | | |
2024-09-07 00:19:38.536+00 | | |
2024-09-07 03:19:38.536+00 | | |
2024-09-07 06:19:38.536+00 | | |
2024-09-07 09:19:38.536+00 | | |
2024-09-07 12:19:38.536+00 | | |
2024-09-07 15:19:38.536+00 | | |
2024-09-07 18:19:38.536+00 | | |
2024-09-07 21:19:38.536+00 | | |
2024-09-08 00:19:38.536+00 | | |
2024-09-08 03:19:38.536+00 | | |
2024-09-08 06:19:38.536+00 | | |
2024-09-08 09:19:38.536+00 | | |
2024-09-08 12:19:38.536+00 | | |
(1472 rows)
Can you change car_id to 1 again to get results?
For sure; aborted one run after 9 minutes just now, as I do not want the RPi to hang while the car is charging. Will redo later today.
Sure - it takes less than 1 second on my instance with 2 cars, 25k km in total. Let's see what EXPLAIN ANALYZE returns; hopefully I can find a way to improve speed for lower-spec instances.
I've executed your query a few times and it took about 21 to 24 seconds every time (with car_id=1)
The RPi reboots during the query (not a memory issue, but the CPU gets too hot), so no output. Will try with a shorter timeframe.
Tried with:

EXPLAIN ANALYZE with sample_set as (
select date, battery_level from positions p where p.car_id = 1 and p.battery_level is not null
-- union all
-- select date, battery_level, usable_battery_level from charges c inner join charging_processes cp on c.charging_process_id = cp.id and cp.car_id = 1
), date_series as (
select generate_series('2024-08-01T00:00:01.000Z', '2024-09-13T23:59:59.000Z', (10800000 / 1 || ' milliseconds')::INTERVAL) as series_id
), date_series_bucketing as (
select series_id, lead(series_id) over (order by series_id asc) as next_series_id from date_series
), bucketing as (
select
series_id, avg(battery_level) as battery_level
from date_series_bucketing
left join sample_set on
sample_set.date >= date_series_bucketing.series_id
and sample_set.date < date_series_bucketing.next_series_id
group by series_id
), locf_intermediate as (
select series_id, battery_level, count(battery_level) over (order by series_id) as i from bucketing
), bucketing_gapfill as (
select series_id, max(battery_level) over (partition by i) as battery_level from locf_intermediate
), trick as (
select series_id, battery_level, array_agg(battery_level) over (rows between 125 preceding and 125 following) as arr from bucketing_gapfill
)
select series_id,
case when battery_level is null then null else(SELECT percentile_cont(0.075) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p0075,
case when battery_level is null then null else(SELECT percentile_cont(0.5) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p050,
case when battery_level is null then null else(SELECT percentile_cont(0.925) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p0925
from trick;

And aborted after 1h11m33s.
@JakobLichterfeld - can you try the following two queries? (Sorry, but it's hard to improve runtimes without knowing the timings and what is slowing it down in your system; query execution is 300ms here...)

select count(*) from positions p where p.car_id = 1 and date between '2024-09-01T00:00:01.000Z' and '2024-09-13T23:59:59.000Z' and p.battery_level is not null;

EXPLAIN ANALYZE with sample_set as (
select date, battery_level from positions p where p.car_id = 1 and date between '2024-09-01T00:00:01.000Z' and '2024-09-13T23:59:59.000Z' and p.battery_level is not null
-- union all
-- select date, battery_level, usable_battery_level from charges c inner join charging_processes cp on c.charging_process_id = cp.id and cp.car_id = 1
), date_series as (
select generate_series('2024-09-01T00:00:01.000Z', '2024-09-13T23:59:59.000Z', (10800000 / 1 || ' milliseconds')::INTERVAL) as series_id
), date_series_bucketing as (
select series_id, lead(series_id) over (order by series_id asc) as next_series_id from date_series
), bucketing as (
select
series_id, avg(battery_level) as battery_level
from date_series_bucketing
left join sample_set on
sample_set.date >= date_series_bucketing.series_id
and sample_set.date < date_series_bucketing.next_series_id
group by series_id
), locf_intermediate as (
select series_id, battery_level, count(battery_level) over (order by series_id) as i from bucketing
), bucketing_gapfill as (
select series_id, max(battery_level) over (partition by i) as battery_level from locf_intermediate
), trick as (
select series_id, battery_level, array_agg(battery_level) over (rows between 125 preceding and 125 following) as arr from bucketing_gapfill
)
select series_id,
case when battery_level is null then null else(SELECT percentile_cont(0.075) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p0075,
case when battery_level is null then null else(SELECT percentile_cont(0.5) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p050,
case when battery_level is null then null else(SELECT percentile_cont(0.925) within group (ORDER BY s)
FROM unnest(arr) trick(s)) end as p0925
from trick;
Yeah sorry, I was thinking about splitting the query to find the root cause as well.
with

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=127766.34..127766.35 rows=1 width=8) (actual time=47428.881..49319.385 rows=1 loops=1)
-> Gather (cost=1000.00..127766.33 rows=1 width=0) (actual time=47230.977..49274.355 rows=108817 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on positions p (cost=0.00..126766.23 rows=1 width=0) (actual time=42907.622..42979.595 rows=36272 loops=3)
Filter: ((battery_level IS NOT NULL) AND (date >= '2024-09-01 00:00:01'::timestamp without time zone) AND (date <= '2024-09-13 23:59:59'::timestamp without time zone) AND (car_id = 1))
Rows Removed by Filter: 2077526
Planning Time: 11.502 ms
JIT:
Functions: 11
Options: Inlining false, Optimization false, Expressions true, Deforming true
Timing: Generation 29.442 ms, Inlining 0.000 ms, Optimization 480.674 ms, Emission 2836.354 ms, Total 3346.471 ms
Execution Time: 49342.482 ms
(13 rows)

And

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on trick (cost=127874.33..127981.33 rows=200 width=32) (actual time=64148.879..64302.192 rows=104 loops=1)
-> WindowAgg (cost=127874.33..127882.33 rows=200 width=72) (actual time=64137.677..64139.951 rows=104 loops=1)
-> Subquery Scan on bucketing_gapfill (cost=127874.33..127879.83 rows=200 width=40) (actual time=63213.098..63214.196 rows=104 loops=1)
-> WindowAgg (cost=127874.33..127877.83 rows=200 width=48) (actual time=63213.077..63214.097 rows=104 loops=1)
-> Sort (cost=127874.33..127874.83 rows=200 width=48) (actual time=63212.904..63213.531 rows=104 loops=1)
Sort Key: locf_intermediate.i
Sort Method: quicksort Memory: 21kB
-> Subquery Scan on locf_intermediate (cost=1054.85..127866.69 rows=200 width=48) (actual time=35064.298..63212.815 rows=104 loops=1)
-> WindowAgg (cost=1054.85..127866.69 rows=200 width=48) (actual time=35064.266..63212.419 rows=104 loops=1)
-> GroupAggregate (cost=1054.85..127863.69 rows=200 width=40) (actual time=34725.644..63209.998 rows=104 loops=1)
Group Key: (generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))
-> Nested Loop Left Join (cost=1054.85..127856.19 rows=1000 width=10) (actual time=33650.833..63134.761 rows=108895 loops=1)
Join Filter: ((p.date >= (generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))) AND (p.date < (lead((generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))) OVER (?))))
Rows Removed by Join Filter: 11208151
-> WindowAgg (cost=54.85..72.35 rows=1000 width=16) (actual time=0.458..2.327 rows=104 loops=1)
-> Sort (cost=54.85..57.35 rows=1000 width=8) (actual time=0.360..0.661 rows=104 loops=1)
Sort Key: (generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))
Sort Method: quicksort Memory: 18kB
-> ProjectSet (cost=0.00..5.02 rows=1000 width=8) (actual time=0.166..0.263 rows=104 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.024..0.028 rows=1 loops=1)
-> Materialize (cost=1000.00..127766.34 rows=1 width=10) (actual time=323.460..423.510 rows=108817 loops=104)
-> Gather (cost=1000.00..127766.33 rows=1 width=10) (actual time=33638.189..33901.109 rows=108817 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on positions p (cost=0.00..126766.23 rows=1 width=10) (actual time=31140.078..31208.287 rows=36272 loops=3)
Filter: ((battery_level IS NOT NULL) AND (date >= '2024-09-01 00:00:01'::timestamp without time zone) AND (date <= '2024-09-13 23:59:59'::timestamp without time zone) AND (car_id = 1))
Rows Removed by Filter: 2077526
SubPlan 1
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=0.575..0.575 rows=1 loops=104)
-> Function Scan on unnest trick_1 (cost=0.00..0.10 rows=10 width=32) (actual time=0.153..0.205 rows=104 loops=104)
SubPlan 2
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=0.487..0.487 rows=1 loops=104)
-> Function Scan on unnest trick_2 (cost=0.00..0.10 rows=10 width=32) (actual time=0.073..0.125 rows=104 loops=104)
SubPlan 3
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=0.478..0.479 rows=1 loops=104)
-> Function Scan on unnest trick_3 (cost=0.00..0.10 rows=10 width=32) (actual time=0.072..0.122 rows=104 loops=104)
Planning Time: 15.421 ms
JIT:
Functions: 65
Options: Inlining false, Optimization false, Expressions true, Deforming true
Timing: Generation 47.192 ms, Inlining 0.000 ms, Optimization 157.808 ms, Emission 1451.909 ms, Total 1656.909 ms
Execution Time: 64347.231 ms
(42 rows)

Ready to test more on the weekend.
@JakobLichterfeld - there is an index on positions over date that should have been used for the first query - I wonder why it's not in your case... Could you run:

SELECT indexname AS index_name,
tablename AS table_name
FROM pg_indexes
WHERE schemaname = 'public' and tablename = 'positions';

If the result shows an index named
I think you may have hit on something there. We had a user who reported that the positions table was not restored correctly from the backup; I could not find the issue/PR at the moment. I do not have the

teslamate=# SELECT indexname AS index_name,
tablename AS table_name
FROM pg_indexes
WHERE schemaname = 'public' and tablename = 'positions';
index_name | table_name
------------------------+------------
positions_pkey | positions
positions_car_id_index | positions
(2 rows)
I'll prepare an overview of all indexes and a SQL script to add all missing indexes later today. If others are affected, we could add it to the docs section. I restored from a backup last week (testing pg17) - it worked without any issue here. Let's see how the query performs afterwards; I tried some other indexes and see enough potential to speed it up and get this feature over the line! I have more time tomorrow and may ping you with some queries, if that's ok for you.
I'll plug https://github.com/jheredianet/Teslamate-CustomGrafanaDashboards/raw/main/dashboards/DatabaseDashboadInfo.json for details about the database
@JakobLichterfeld - looks like you are missing 8 indexes when comparing to my instance. Could you execute the SQL below and afterwards run the two select statements mentioned here again? @coreGreenberet / @sdwalker - can you confirm a total of 36 indexes within your environments of teslamate? Corrective SQL:

CREATE UNIQUE INDEX IF NOT EXISTS addresses_osm_id_osm_type_index ON public.addresses USING btree (osm_id, osm_type);
CREATE UNIQUE INDEX IF NOT EXISTS addresses_pkey ON public.addresses USING btree (id);
CREATE UNIQUE INDEX IF NOT EXISTS car_settings_pkey ON public.car_settings USING btree (id);
CREATE UNIQUE INDEX IF NOT EXISTS cars_eid_index ON public.cars USING btree (eid);
CREATE UNIQUE INDEX IF NOT EXISTS cars_pkey ON public.cars USING btree (id);
CREATE UNIQUE INDEX IF NOT EXISTS cars_settings_id_index ON public.cars USING btree (settings_id);
CREATE UNIQUE INDEX IF NOT EXISTS cars_vid_index ON public.cars USING btree (vid);
CREATE UNIQUE INDEX IF NOT EXISTS cars_vin_index ON public.cars USING btree (vin);
CREATE INDEX IF NOT EXISTS charges_charging_process_id_index ON public.charges USING btree (charging_process_id);
CREATE INDEX IF NOT EXISTS charges_date_index ON public.charges USING btree (date);
CREATE UNIQUE INDEX IF NOT EXISTS charges_pkey ON public.charges USING btree (id);
CREATE INDEX IF NOT EXISTS charging_processes_address_id_index ON public.charging_processes USING btree (address_id);
CREATE INDEX IF NOT EXISTS charging_processes_car_id_index ON public.charging_processes USING btree (car_id);
CREATE UNIQUE INDEX IF NOT EXISTS charging_processes_pkey ON public.charging_processes USING btree (id);
CREATE INDEX IF NOT EXISTS charging_processes_position_id_index ON public.charging_processes USING btree (position_id);
CREATE INDEX IF NOT EXISTS drives_end_geofence_id_index ON public.drives USING btree (end_geofence_id);
CREATE INDEX IF NOT EXISTS drives_end_position_id_index ON public.drives USING btree (end_position_id);
CREATE INDEX IF NOT EXISTS drives_start_geofence_id_index ON public.drives USING btree (start_geofence_id);
CREATE INDEX IF NOT EXISTS drives_start_position_id_index ON public.drives USING btree (start_position_id);
CREATE INDEX IF NOT EXISTS trips_car_id_index ON public.drives USING btree (car_id);
CREATE INDEX IF NOT EXISTS trips_end_address_id_index ON public.drives USING btree (end_address_id);
CREATE UNIQUE INDEX IF NOT EXISTS trips_pkey ON public.drives USING btree (id);
CREATE INDEX IF NOT EXISTS trips_start_address_id_index ON public.drives USING btree (start_address_id);
CREATE UNIQUE INDEX IF NOT EXISTS geofences_pkey ON public.geofences USING btree (id);
CREATE INDEX IF NOT EXISTS positions_car_id_index ON public.positions USING btree (car_id);
CREATE INDEX IF NOT EXISTS positions_date_index ON public.positions USING btree (date);
CREATE INDEX IF NOT EXISTS positions_drive_id_date_index ON public.positions USING btree (drive_id, date);
CREATE UNIQUE INDEX IF NOT EXISTS positions_pkey ON public.positions USING btree (id);
CREATE UNIQUE INDEX IF NOT EXISTS schema_migrations_pkey ON public.schema_migrations USING btree (version);
CREATE UNIQUE INDEX IF NOT EXISTS settings_pkey ON public.settings USING btree (id);
CREATE UNIQUE INDEX IF NOT EXISTS "states_car_id__end_date_IS_NULL_index" ON public.states USING btree (car_id, ((end_date IS NULL))) WHERE (end_date IS NULL);
CREATE INDEX IF NOT EXISTS states_car_id_index ON public.states USING btree (car_id);
CREATE UNIQUE INDEX IF NOT EXISTS states_pkey ON public.states USING btree (id);
CREATE UNIQUE INDEX IF NOT EXISTS tokens_pkey ON public.tokens USING btree (id);
CREATE INDEX IF NOT EXISTS updates_car_id_index ON public.updates USING btree (car_id);
CREATE UNIQUE INDEX IF NOT EXISTS updates_pkey ON public.updates USING btree (id);
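To check an instance against this list without eyeballing 36 rows, the pg_indexes output can be diffed against the expected names. A small sketch of that comparison in plain Python (only the positions indexes are included for brevity, and `reported` is the two-index result posted earlier; the database wiring via psql/psycopg2 is omitted):

```python
# Expected index names taken from the corrective SQL above
# (positions table only, for brevity).
EXPECTED_POSITIONS_INDEXES = {
    "positions_car_id_index",
    "positions_date_index",
    "positions_drive_id_date_index",
    "positions_pkey",
}

def missing_indexes(expected, existing):
    # Return expected index names absent from the pg_indexes result.
    return sorted(expected - set(existing))

# pg_indexes output reported above for the affected instance:
reported = ["positions_pkey", "positions_car_id_index"]
print(missing_indexes(EXPECTED_POSITIONS_INDEXES, reported))
# prints ['positions_date_index', 'positions_drive_id_date_index']
```

For a full check, the `existing` list would come from `SELECT indexname FROM pg_indexes WHERE schemaname = 'public'` and the expected set from all 36 names above.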
Yeah, with a quick look into the migrations, I could not find one for

Creating the indexes right now and will store a backup without the indexes for potential future testing.
teslamate=# select count(*) from positions p where p.car_id = 1 and date between '2024-09-01T00:00:01.000Z' and '2024-09-13T23:59:59.000Z' and p.battery_level is not null;
count
--------
108817
(1 row)

and

QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on trick (cost=129128.92..129235.92 rows=200 width=32) (actual time=1295.137..1444.189 rows=104 loops=1)
-> WindowAgg (cost=129128.92..129136.92 rows=200 width=72) (actual time=1293.132..1294.773 rows=104 loops=1)
-> Subquery Scan on bucketing_gapfill (cost=129128.92..129134.42 rows=200 width=40) (actual time=753.184..753.685 rows=104 loops=1)
-> WindowAgg (cost=129128.92..129132.42 rows=200 width=48) (actual time=753.163..753.590 rows=104 loops=1)
-> Sort (cost=129128.92..129129.42 rows=200 width=48) (actual time=753.020..753.078 rows=104 loops=1)
Sort Key: locf_intermediate.i
Sort Method: quicksort Memory: 21kB
-> Subquery Scan on locf_intermediate (cost=55.28..129121.28 rows=200 width=48) (actual time=5.183..752.722 rows=104 loops=1)
-> WindowAgg (cost=55.28..129121.28 rows=200 width=48) (actual time=5.164..752.541 rows=104 loops=1)
-> GroupAggregate (cost=55.28..129118.28 rows=200 width=40) (actual time=5.030..751.688 rows=104 loops=1)
Group Key: (generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))
-> Nested Loop Left Join (cost=55.28..123350.78 rows=1153000 width=10) (actual time=4.805..680.663 rows=108895 loops=1)
-> WindowAgg (cost=54.85..72.35 rows=1000 width=16) (actual time=4.543..5.374 rows=104 loops=1)
-> Sort (cost=54.85..57.35 rows=1000 width=8) (actual time=4.457..4.569 rows=104 loops=1)
Sort Key: (generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))
Sort Method: quicksort Memory: 18kB
-> ProjectSet (cost=0.00..5.02 rows=1000 width=8) (actual time=4.259..4.350 rows=104 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.027..0.028 rows=1 loops=1)
-> Index Scan using positions_date_index on positions p (cost=0.43..111.75 rows=1153 width=10) (actual time=0.110..5.777 rows=1046 loops=104)
Index Cond: ((date >= (generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))) AND (date < (lead((generate_series('2024-09-01 00:00:01+00'::timestamp with time zone, '2024-09-13 23:59:59+00'::timestamp with time zone, ('10800000 milliseconds'::cstring)::interval))) OVER (?))) AND (date >= '2024-09-01 00:00:01'::timestamp without time zone) AND (date <= '2024-09-13 23:59:59'::timestamp without time zone))
Filter: ((battery_level IS NOT NULL) AND (car_id = 1))
SubPlan 1
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=0.471..0.471 rows=1 loops=104)
-> Function Scan on unnest trick_1 (cost=0.00..0.10 rows=10 width=32) (actual time=0.072..0.121 rows=104 loops=104)
SubPlan 2
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=0.472..0.472 rows=1 loops=104)
-> Function Scan on unnest trick_2 (cost=0.00..0.10 rows=10 width=32) (actual time=0.072..0.121 rows=104 loops=104)
SubPlan 3
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=0.475..0.476 rows=1 loops=104)
-> Function Scan on unnest trick_3 (cost=0.00..0.10 rows=10 width=32) (actual time=0.072..0.121 rows=104 loops=104)
Planning Time: 15.153 ms
JIT:
Functions: 57
Options: Inlining false, Optimization false, Expressions true, Deforming true
Timing: Generation 22.986 ms, Inlining 0.000 ms, Optimization 32.070 ms, Emission 507.753 ms, Total 562.809 ms
Execution Time: 4231.983 ms
(36 rows)
Which is more than 3x faster than without the index, innit? :-) And without

       series_id        |       p0075        |       p050        |       p0925
------------------------+--------------------+-------------------+-------------------
2024-09-01 00:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 03:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 06:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 09:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 12:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 15:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 18:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-01 21:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
2024-09-02 00:00:01+00 | 51.863056885892945 | 63.16666666666667 | 76.03816586777607
 ... (rows continue every 3 h through 2024-09-13 21:00:01+00, all with the same three values) ...
(104 rows)
|
I assume this one should normally create the index (introduced by #3186): |
Btw. love the speed on the Rpi 3B with the 6 additional indexes 😃 |
great to hear, I wonder if I should switch to slower hardware and look for additional improvements by adding an index here and there 😆 I finalized the query for charge level - it works great here. @JakobLichterfeld - could you first add an index: CREATE INDEX "positions_car_id_date_ideal_battery_range_km_IS_NOT_NULL_index" ON public.positions USING btree (car_id, date, ((ideal_battery_range_km IS NOT NULL))) WHERE (ideal_battery_range_km IS NOT NULL) and afterwards run this query: -- To be able to calculate percentiles for unevenly sampled values, we are bucketing & gap-filling values before running calculations
explain analyze with positions_filtered as (
select
date,
battery_level
from
positions p
where
p.car_id = '1'
-- p.ideal_battery_range_km condition is added to reduce overall amount of data and avoid data biases while driving (unevenly sampled data)
and p.ideal_battery_range_km is not null
),
gen_date_series as (
select
-- series is used to bucket data and avoid gaps in series used to determine percentiles
generate_series(to_timestamp(1710497281), to_timestamp(1726391281), concat(7200, ' seconds')::interval) as series_id
),
date_series as (
select
series_id,
-- before joining, get beginning of next series to be able to left join `positions_filtered`
lead(series_id) over (order by series_id asc) as next_series_id
from
gen_date_series
),
positions_bucketed as (
select
series_id,
-- simple average can result in loss of accuracy, see https://www.timescale.com/blog/what-time-weighted-averages-are-and-why-you-should-care/ for details
avg(battery_level) as battery_level
from
date_series
left join positions_filtered on
positions_filtered.date >= date_series.series_id
and positions_filtered.date < date_series.next_series_id
group by
series_id
),
-- PostgreSQL cannot IGNORE NULLS via Window Functions LAST_VALUE - therefore use natural behavior of COUNT & MAX, see https://www.reddit.com/r/SQL/comments/wb949v/comment/ii5mmmi/ for details
positions_bucketed_gapfilling_locf_intermediate as (
select
series_id,
battery_level,
count(battery_level) over (order by series_id) as i
from
positions_bucketed
),
positions_bucketed_gapfilled_locf as (
select
series_id,
max(battery_level) over (partition by i) as battery_level
from
positions_bucketed_gapfilling_locf_intermediate
),
-- PostgreSQL cannot use PERCENTILE_DISC as Window Function - therefore use ARRAY_AGG and UNNEST, see https://stackoverflow.com/a/72718604 for details
positions_bucketed_gapfilled_locf_percentile_intermediate as (
select
series_id,
battery_level,
array_agg(battery_level) over w as arr,
avg(battery_level) over w as battery_level_avg
from
positions_bucketed_gapfilled_locf
window w as (rows between (1296000 / 7200) preceding and (1296000 / 7200) following)
)
select
series_id,
case when battery_level is null then null else (select percentile_cont(0.075) within group (order by s) from unnest(arr) trick(s)) end as "30 Day Moving 7,5% Percentile",
case when battery_level is null then null else (battery_level_avg) end as "30 Day Moving Average",
case when battery_level is null then null else (select percentile_cont(0.5) within group (order by s) from unnest(arr) trick(s)) end as "30 Day Moving Median",
case when battery_level is null then null else (select percentile_cont(0.925) within group (order by s) from unnest(arr) trick(s)) end as "30 Day Moving 92,5% Percentile"
from
positions_bucketed_gapfilled_locf_percentile_intermediate; |
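The gap-filling step above leans on a COUNT/MAX workaround because PostgreSQL's LAST_VALUE cannot IGNORE NULLS. As a minimal Python sketch of that same trick (illustrative only, not part of the dashboard; the function name is made up), assuming empty two-hour buckets come back as None:

```python
# Reproduces the SQL pattern: a running COUNT(value) OVER (ORDER BY ...) of
# non-NULL rows assigns a group id, and MAX(value) OVER (PARTITION BY group)
# carries each observation forward, because every group starts with exactly
# one non-None row.
def locf_via_count_max(values):
    # running count of non-None values (the window COUNT)
    groups = []
    i = 0
    for v in values:
        if v is not None:
            i += 1
        groups.append(i)
    # MAX per group: the single non-None row in each group wins
    filled = []
    last = {}
    for v, g in zip(values, groups):
        if v is not None:
            last[g] = v
        filled.append(last.get(g))
    return filled

series = [None, 62.0, None, None, 58.5, None]
print(locf_via_count_max(series))
# prints [None, 62.0, 62.0, 62.0, 58.5, 58.5]
```

Note that a leading gap (before any observation) stays None, which matches the behavior of the SQL version before the first sample.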
😄 I only found a slow query after that, see #4199
Created with
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on positions_bucketed_gapfilled_locf_percentile_intermediate (cost=1556588.22..1556695.72 rows=200 width=64) (actual time=7442.816..17583.645 rows=2208 loops=1)
-> WindowAgg (cost=1556588.22..1556596.72 rows=200 width=104) (actual time=7442.779..8347.987 rows=2208 loops=1)
-> Subquery Scan on positions_bucketed_gapfilled_locf (cost=1556588.22..1556593.72 rows=200 width=40) (actual time=3659.174..3675.169 rows=2208 loops=1)
-> WindowAgg (cost=1556588.22..1556591.72 rows=200 width=48) (actual time=3659.142..3672.381 rows=2208 loops=1)
-> Sort (cost=1556588.22..1556588.72 rows=200 width=48) (actual time=3658.844..3660.370 rows=2208 loops=1)
Sort Key: positions_bucketed_gapfilling_locf_intermediate.i
Sort Method: quicksort Memory: 155kB
-> Subquery Scan on positions_bucketed_gapfilling_locf_intermediate (cost=55.28..1556580.57 rows=200 width=48) (actual time=8.954..3653.613 rows=2208 loops=1)
-> WindowAgg (cost=55.28..1556580.57 rows=200 width=48) (actual time=8.934..3650.616 rows=2208 loops=1)
-> GroupAggregate (cost=55.28..1556577.57 rows=200 width=40) (actual time=8.802..3632.143 rows=2208 loops=1)
Group Key: (generate_series('2024-03-15 10:08:01+00'::timestamp with time zone, '2024-09-15 09:08:01+00'::timestamp with time zone, (concat(7200, ' seconds'))::interval))
-> Nested Loop Left Join (cost=55.28..1392872.85 rows=32740444 width=10) (actual time=8.643..3598.592 rows=22537 loops=1)
-> WindowAgg (cost=54.85..72.35 rows=1000 width=16) (actual time=4.922..24.914 rows=2208 loops=1)
-> Sort (cost=54.85..57.35 rows=1000 width=8) (actual time=4.778..7.353 rows=2208 loops=1)
Sort Key: (generate_series('2024-03-15 10:08:01+00'::timestamp with time zone, '2024-09-15 09:08:01+00'::timestamp with time zone, (concat(7200, ' seconds'))::interval))
Sort Method: quicksort Memory: 99kB
-> ProjectSet (cost=0.00..5.03 rows=1000 width=8) (actual time=0.207..3.379 rows=2208 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.026..0.028 rows=1 loops=1)
-> Index Scan using "positions_car_id_date_ideal_battery_range_km_IS_NOT_NULL_index" on positions p (cost=0.42..1065.40 rows=32740 width=10) (actual time=0.070..1.600 rows=9 loops=2208)
Index Cond: ((car_id = '1'::smallint) AND (date >= (generate_series('2024-03-15 10:08:01+00'::timestamp with time zone, '2024-09-15 09:08:01+00'::timestamp with time zone, (concat(7200, ' seconds'))::interval))) AND (date < (lead((generate_series('2024-03-15 10:08:01+00'::timestamp with time zone, '2024-09-15 09:08:01+00'::timestamp with time zone, (concat(7200, ' seconds'))::interval))) OVER (?))))
SubPlan 1
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=1.418..1.419 rows=1 loops=2187)
-> Function Scan on unnest trick (cost=0.00..0.10 rows=10 width=32) (actual time=0.229..0.397 rows=348 loops=2187)
SubPlan 2
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=1.393..1.393 rows=1 loops=2187)
-> Function Scan on unnest trick_1 (cost=0.00..0.10 rows=10 width=32) (actual time=0.232..0.399 rows=348 loops=2187)
SubPlan 3
-> Aggregate (cost=0.16..0.17 rows=1 width=8) (actual time=1.389..1.389 rows=1 loops=2187)
-> Function Scan on unnest trick_2 (cost=0.00..0.10 rows=10 width=32) (actual time=0.230..0.396 rows=348 loops=2187)
Planning Time: 30.443 ms
JIT:
Functions: 57
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 23.334 ms, Inlining 775.072 ms, Optimization 1706.078 ms, Emission 1301.190 ms, Total 3805.675 ms
Execution Time: 21119.696 ms
(35 rows)
and the output seems reasonable |
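For intuition on what the SubPlans are computing: the window clause spans 1296000 / 7200 = 180 two-hour buckets preceding and following each row, i.e. a 30-day centered window, and PERCENTILE_CONT interpolates linearly between closest ranks. A rough Python sketch of that moving percentile (illustrative function names, edge windows simply clipped):

```python
def percentile_cont(sorted_vals, p):
    # linear interpolation between closest ranks, mirroring the behavior of
    # PostgreSQL's PERCENTILE_CONT(p) WITHIN GROUP (ORDER BY ...)
    if not sorted_vals:
        return None
    k = p * (len(sorted_vals) - 1)
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (k - lo) * (sorted_vals[hi] - sorted_vals[lo])

def moving_percentile(values, p, n):
    # centered window: n rows preceding and n rows following, clipped at edges
    out = []
    for i in range(len(values)):
        window = sorted(values[max(0, i - n):i + n + 1])
        out.append(percentile_cont(window, p))
    return out

levels = [50, 60, 70, 80, 90]
print(moving_percentile(levels, 0.5, 1))
# prints [55.0, 60.0, 70.0, 80.0, 85.0]
```

The per-row sort is what the ARRAY_AGG/UNNEST trick pays for in the plan: each of the ~2200 output rows re-aggregates its ~360-element window three times (one SubPlan per percentile).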
Yes
|
closed in favor of #4200 @JakobLichterfeld - should we take any action for potentially missing indexes on other instances? |
Yeah, let's track it in #4201 |
actually I have 640 😁 - Edit: 48 without the partitioned positions_YYMM tables |
Ok, that doesn't help. If there are indexes that are beneficial for all, please feel free to open an issue / PR! Why have you partitioned the positions table - also to speed up queries? @JakobLichterfeld this would be another argument for switching to TimescaleDB ;) partitions by default ... |
@swiffer I had overlooked your new index "positions_car_id_date_ideal_battery_range_km_IS_NOT_NULL_index", but that didn't really speed up the process, and for whatever reason the query now takes ~1:30 minutes. Yes, I've partitioned it for performance reasons. ATM positions has 15232196 entries, which are ~6.4 GB of data
|
@coreGreenberet - could you try the updated dashboard provided in the Grafana image of #4200? |
@coreGreenberet thanks, that seems reasonable when comparing to the timing from Jakob |
this is an enhancement for the charge level dashboard and is built on top of #4186
by enabling the regression transformation feature toggle in Grafana, it allows showing a regression line for the charge level
details on the feature and on feature toggles in general:
grafana/grafana#78457
https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/feature-toggles/
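For intuition on what the transformation overlays: in its simplest (linear) case a regression line is an ordinary least-squares fit over the series. The sketch below is for illustration only (the function name is made up, and Grafana's actual implementation may differ, e.g. it also supports polynomial models):

```python
def linear_fit(xs, ys):
    # ordinary least squares for y = a + b*x: b is the covariance of x and y
    # divided by the variance of x, and a places the line through the means
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# e.g. charge level samples rising ~2 %-points per bucket
a, b = linear_fit([0, 1, 2, 3], [51, 53, 55, 57])
print(a, b)
# prints 51.0 2.0
```

Plotting a + b*x across the time range gives exactly the kind of trend line the feature toggle enables on top of the raw charge-level series.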