diff --git a/charts/timescaledb-single/values.yaml b/charts/timescaledb-single/values.yaml
index ee030ca..91582e8 100644
--- a/charts/timescaledb-single/values.yaml
+++ b/charts/timescaledb-single/values.yaml
@@ -1,7 +1,7 @@
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
-replicaCount: 3
+replicaCount: 1
# To prevent very long names, we override the name, otherwise it would default to
# timescaledb-single (the name of the chart)
@@ -170,7 +170,7 @@ patroni:
autovacuum_vacuum_cost_limit: 500
autovacuum_vacuum_scale_factor: 0.05
log_autovacuum_min_duration: 1min
- hot_standby: 'on'
+ hot_standby: 'off'
log_checkpoints: 'on'
log_connections: 'on'
log_disconnections: 'on'
@@ -178,7 +178,7 @@ patroni:
log_lock_waits: 'on'
log_min_duration_statement: '1s'
log_statement: ddl
- max_connections: 100
+ max_connections: 30
max_prepared_transactions: 150
shared_preload_libraries: timescaledb,pg_stat_statements
ssl: 'on'
@@ -186,12 +186,14 @@ patroni:
ssl_key_file: '/etc/certificate/tls.key'
tcp_keepalives_idle: 900
tcp_keepalives_interval: 100
- temp_file_limit: 1GB
+ temp_file_limit: 20GB
+ max_files_per_process: 10000
timescaledb.passfile: '../.pgpass'
unix_socket_directories: "/var/run/postgresql"
unix_socket_permissions: '0750'
wal_level: hot_standby
wal_log_hints: 'on'
+ work_mem: 16MB
use_pg_rewind: true
use_slots: true
retry_timeout: 10
@@ -211,8 +213,6 @@ patroni:
no_master: true
basebackup:
- waldir: "/var/lib/postgresql/wal/pg_wal"
- recovery_conf:
- restore_command: /etc/timescaledb/scripts/pgbackrest_archive_get.sh %f "%p"
callbacks:
on_role_change: /etc/timescaledb/scripts/patroni_callback.sh
on_start: /etc/timescaledb/scripts/patroni_callback.sh
@@ -228,7 +228,7 @@ patroni:
pg_hba:
- local all postgres peer
- local all all md5
- - hostnossl all,replication all all reject
+ - hostnossl all,replication all all md5
- hostssl all all 127.0.0.1/32 md5
- hostssl all all ::1/128 md5
- hostssl replication standby all md5
@@ -303,7 +303,7 @@ persistentVolumes:
# https://www.postgresql.org/docs/current/creating-cluster.html#CREATING-CLUSTER-MOUNT-POINTS
data:
enabled: true
- size: 2Gi
+ size: 150Gi
## database data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -321,7 +321,7 @@ persistentVolumes:
# volume for the WAL files should just work for new pods.
wal:
enabled: true
- size: 1Gi
+ size: 50Gi
subPath: ""
storageClass:
# When changing this mountPath ensure you also change the following key to reflect this:
@@ -353,15 +353,15 @@ fullWalPrevention:
readWriteFreePercent: 8
readWriteFreeMB: 128
-resources: {}
+resources:
# If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
- # limits:
- # cpu: 100m
- # memory: 128Mi
- # requests:
- # cpu: 100m
- # memory: 128Mi
+ limits:
+ cpu: 2000m
+ memory: 16384Mi
+ requests:
+ cpu: 1500m
+ memory: 8192Mi
sharedMemory:
# By default Kubernetes only provides 64MB to /dev/shm
@@ -376,7 +376,7 @@ sharedMemory:
# No space left on device
#
# you may wish to use a mount to Memory, by setting useMount to true
- useMount: false
+ useMount: true
# timescaledb-tune will be run with the Pod resources requests or - if not set - its limits.
# This should give a reasonably tuned PostgreSQL instance.
@@ -534,7 +534,7 @@ serviceAccount:
# Setting unsafe to true will generate some random credentials. This is meant
# for development or first evaluation of the Helm Charts. It should *not* be
# used for anything beyong the evaluation phase.
-unsafe: false
+unsafe: true
debug:
# This setting is mainly for during development, debugging or troubleshooting.
Reproduced both on Amazon and in a local minikube.
When memory load is too high, the server crashes.
Example 1.
Hypertable with 78M+ records, structured like:
create table my_table
(
id bigserial not null,
field1 varchar(64) not null,
field2 smallint,
field3 varchar(16) not null,
field4 varchar(16) not null,
field5 integer not null,
field6 double precision not null,
field7 double precision not null,
field8 double precision not null,
field9 double precision not null,
field10 double precision not null,
field11 boolean not null,
record_date integer not null,
capture_date integer not null,
created_at timestamp(0) not null
);
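For reference, the table above has been converted to a hypertable partitioned on created_at. The original conversion script is not included in the report; a minimal sketch, assuming the 7-day chunk interval shown by the dimensions query:

```sql
-- Hypothetical conversion step (not shown in the original report);
-- chunk_time_interval matches the 7-day time dimension reported for my_table.
SELECT create_hypertable('my_table', 'created_at',
                         chunk_time_interval => INTERVAL '7 days');
```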
SELECT * FROM timescaledb_information.dimensions where hypertable_name like 'my_table' order by hypertable_name, dimension_number;
hypertable_schema | hypertable_name | dimension_number | column_name | column_type | dimension_type | time_interval | integer_interval | integer_now_func | num_partitions
-------------------+-----------------+------------------+-------------+-----------------------------+----------------+---------------+------------------+------------------+----------------
public | my_table | 1 | created_at | timestamp without time zone | Time | 7 days | | |
(1 row)
When trying to execute the following, it fails:
create index if not exists my_table_record_date_idx_2 on my_table(record_date);
ohlcdb_prod=# create index if not exists my_table_record_date_idx_2 on my_table(record_date);
LOG: statement: create index if not exists my_table_record_date_idx_2 on my_table(record_date);
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Time: 319954.963 ms (05:19.955)
!>
Settings
show shared_buffers;
shared_buffers
----------------
4GB
(1 row)
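Note that CREATE INDEX is bounded by maintenance_work_mem rather than shared_buffers, so the other memory settings are worth inspecting too; a quick diagnostic sketch:

```sql
-- Memory settings most relevant to a crash during index creation:
SHOW maintenance_work_mem;  -- memory budget for CREATE INDEX / VACUUM
SHOW work_mem;              -- per sort/hash memory for ordinary queries
SHOW max_connections;       -- each backend can consume multiples of work_mem
```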
Did you check whether the OOM killer is killing the postgres process? If this is a crash and not an out-of-memory condition, you could try to gather a core dump.
If it is easy for you to generate this workload, you could also test the latest version and see whether the problem still occurs.
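One way to check for OOM-killer activity is to scan the node's kernel log for kill events that mention postgres. A sketch (the helper name is made up, and dmesg may require elevated privileges on the node):

```shell
# oom_hits: read kernel log text on stdin and print OOM-killer lines
# that mention a postgres process (helper name is hypothetical).
oom_hits() {
  grep -iE 'out of memory|oom-killer|killed process' | grep -i 'postgres' || true
}

# On the node running the database pod (may require root):
#   dmesg -T | oom_hits
```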
@npakudin thank you for the bug report. We haven't managed to reproduce this from the information provided, so I will close it for now, but we can always come back to it if there's more information that can help investigate this.
What type of bug is this?
Crash
What subsystems and features are affected?
Query executor
What happened?
Reproduced in k8s, using
https://github.com/timescale/timescaledb-kubernetes/blob/v0.8.2/charts/timescaledb-single/values.yaml
with the git diff shown at the top of this report.
top command a few seconds before failure
TimescaleDB version affected
2.0.1
PostgreSQL version used
PostgreSQL 12.6 (Debian 12.6-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
What operating system did you use?
Linux timescaledb-0 4.19.202 #1 SMP Wed Oct 27 22:52:27 UTC 2021 x86_64 GNU/Linux
What installation method did you use?
Docker, Other
What platform did you run on?
Amazon Web Services (AWS), Other
Relevant log output and stack trace
How can we reproduce the bug?
Data contains 78M+ rows. If you're ready to debug version 2.0.1, I will generate a script that fills this table.
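Such a fill script could be sketched with generate_series (a hypothetical workload, not the reporter's actual data; scale the series bounds toward 78M rows as needed):

```sql
-- Hypothetical bulk fill for my_table; column values are placeholders
-- chosen only to satisfy the declared types.
INSERT INTO my_table (field1, field2, field3, field4, field5, field6,
                      field7, field8, field9, field10, field11,
                      record_date, capture_date, created_at)
SELECT md5(i::text), (i % 100)::smallint, 'AAA', 'BBB', i,
       random(), random(), random(), random(), random(), i % 2 = 0,
       i, i, now() - (i || ' seconds')::interval
FROM generate_series(1, 1000000) AS i;
```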