Daily scripts used in multi-cluster environments with many hosts

dbajuliano/postgres-dba-scripts

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

43 Commits
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Welcome to my public notes and scripts about Postgres


🔐 .pgpass

❗ Most of the scripts require credentials stored in the .pgpass file to access the databases
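For reference, a minimal sketch of such a file; the hostnames and passwords below are placeholders, not real credentials. libpq reads `~/.pgpass` by default, or the path in `PGPASSFILE`, and silently ignores the file unless its mode is 0600:

```shell
# Hypothetical .pgpass entries for the hosts used in the cron examples below.
# Format: hostname:port:database:username:password  ('*' is a wildcard)
PGPASSFILE="$(mktemp)"   # in real use this would be ~/.pgpass or a fixed path
cat > "$PGPASSFILE" <<'EOF'
primary.production.localhost:5432:*:replication:secret
single.test.localhost:5433:*:replication:secret
EOF
chmod 600 "$PGPASSFILE"  # libpq ignores the file unless it is 0600
export PGPASSFILE
```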


Cron syntax:

mm hh dd month weekday /path/to/script.sh hostname port username

The example below runs at midnight every day, as the replication cron user, on two different hosts:

00 00 * * * /opt/bin/pg_backup.sh primary.production.localhost 5432 replication
00 00 * * * /opt/bin/pg_backup.sh single.test.localhost 5433 replication

Alternatively, we can use /etc/crontab as in the example below, pointing PGPASSFILE at the .pgpass credentials:

00 00 * * * postgres PGPASSFILE=/home/replication/.pgpass /opt/bin/pg_backup.sh localhost 5432 replication

Output:

[2020-05-10 - 13:12:02] : /opt/bin/pg_backup.sh : Performing base backup on new dir /backup/postgres
[2020-05-10 - 13:24:33] : /opt/bin/pg_backup.sh : Created compressed backup directory /backup/2020.05.10-13.12
[2020-05-10 - 13:24:33] : /opt/bin/pg_backup.sh : Compressing /backup/postgres into /backup/2020.05.10-13.12
[2020-05-10 - 13:33:49] : /opt/bin/pg_backup.sh : Cleaning compressed backup directories older than 4 days
[2020-05-10 - 13:33:50] : /opt/bin/pg_backup.sh : Deleting archives
[2020-05-10 - 13:33:50] : /opt/bin/pg_backup.sh : Finished

ls -l /backup/
2020.05.08-00.00
2020.05.09-00.00
2020.05.10-00.00
2020.05.11-00.00
backup_logs/
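The logging and retention steps shown in that output could be sketched as follows. This is only a sketch: the directory naming and the 4-day window come from the log above, while the actual pg_basebackup and compression calls are left as comments, since the real pg_backup.sh is the authority. A throwaway directory stands in for /backup so the sketch is safe to run:

```shell
# Hypothetical sketch of pg_backup.sh's logging and 4-day retention.
log() { echo "[$(date '+%Y-%m-%d - %H:%M:%S')] : $0 : $*"; }

BACKUP_ROOT="$(mktemp -d)"          # the real script uses /backup
STAMP="$(date '+%Y.%m.%d-%H.%M')"   # directory name format from the listing above

# pg_basebackup -h "$1" -p "$2" -U "$3" -D "$BACKUP_ROOT/postgres" ...
# tar -czf "$BACKUP_ROOT/$STAMP.tar.gz" -C "$BACKUP_ROOT" postgres

# Simulate an old backup directory, then prune anything older than 4 days.
mkdir "$BACKUP_ROOT/2020.05.01-00.00"
touch -d '10 days ago' "$BACKUP_ROOT/2020.05.01-00.00"
log "Cleaning compressed backup directories older than 4 days"
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -name '20*' -mtime +4 -exec rm -rf {} +
log "Finished"
```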

All you need is to provide the username to search for as a parameter, e.g.: /opt/bin/find_user_multiple_hosts.sh juliano

Output:

Host: primary.production.local Port: 5432
user
juliano_readonly
juliano_admin
(2 rows)
------------------------------

Host: single.test.local Port: 5433
user
ro_juliano
su_juliano
(2 rows)
------------------------------

Host: demo.local Port: 5434
user
(0 rows)
------------------------------
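A sketch of what such a script might look like. The host list, the postgres login role, and the LIKE pattern are assumptions, and the `command -v` guard lets the sketch run even where psql is absent:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of find_user_multiple_hosts.sh: look up role names
# matching the given pattern on every known host:port pair.
pattern="${1:-juliano}"   # username (or fragment) to search for
hosts=("primary.production.local:5432" "single.test.local:5433" "demo.local:5434")

for hp in "${hosts[@]}"; do
  host="${hp%%:*}" port="${hp##*:}"
  echo "Host: $host Port: $port"
  if command -v psql >/dev/null; then
    psql -w -h "$host" -p "$port" -U postgres -d postgres --no-psqlrc -c \
      "SELECT usename AS \"user\" FROM pg_user WHERE usename LIKE '%${pattern}%';"
  fi
  echo "------------------------------"
done
```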

A cron entry with a customized port, in case you are running multiple Postgres instances:

00 2 * * * /opt/bin/refresh_materialized_view.sh localhost 5432
00 2 * * * /opt/bin/refresh_materialized_view.sh demo.local 5434

Output:

[2020-05-07 - 02:00:00] : /opt/bin/refresh_materialized_view.sh : Started

REFRESHING public.mview_queries
Timing is on.
REFRESH MATERIALIZED VIEW
Time: 315.307 ms

REFRESHING public.mview_trx
Timing is on.
REFRESH MATERIALIZED VIEW
Time: 574.408 ms

REFRESHING public.mview_tests
Timing is on.
REFRESH MATERIALIZED VIEW
Time: 142.555 ms

[2020-05-07 - 02:00:01] : /opt/bin/refresh_materialized_view.sh : Finished
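The loop behind that output might look like this sketch, assuming the script enumerates pg_matviews and refreshes each view in turn; the postgres login role and the `-w` (never prompt) flag are assumptions:

```shell
# Hypothetical sketch of refresh_materialized_view.sh: refresh every
# materialized view found on the given host/port.
host="${1:-localhost}" port="${2:-5432}"
psql_base=(psql -w -h "$host" -p "$port" -U postgres -d postgres --no-psqlrc -qtA)

if command -v psql >/dev/null; then
  "${psql_base[@]}" -c "SELECT schemaname || '.' || matviewname FROM pg_matviews" |
  while read -r mv; do
    echo "REFRESHING $mv"
    # \timing set in the first -c stays on for the second within the same session
    "${psql_base[@]}" -c '\timing on' -c "REFRESH MATERIALIZED VIEW $mv"
  done
fi
```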

-------------------------------------------------

Put psql directly in the cron job to find connections "idle in transaction" (running every 2 minutes below):

*/2 * * * * psql -h localhost -U postgres -d postgres -p 5432 --no-psqlrc -q -t -A -c "SELECT TO_CHAR(NOW(), 'DD-MM-YYYY HH:MI:SS'),pid,STATE,usename,client_addr,application_name,query FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND STATE IN ( 'idle in transaction' ,'idle in transaction (aborted)', 'disabled' ) AND state_change < current_timestamp - '2 minutes'::INTERVAL" >> /var/log/pg_idle_tx_conn.log
  • You can change the redirection from ">>" to ">" if you want the log file overwritten on each run instead of appended to; otherwise it is recommended to set up a log rotation plan
  • A good strategy is to configure your alerting tool to read the log file on every change and send a notification
  • Don't forget to create the log file with the correct permissions
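One way to implement the log rotation plan mentioned above is a logrotate drop-in; this is a sketch, and the file name, retention, and ownership are assumptions:

```
# /etc/logrotate.d/pg_idle_tx_conn  (hypothetical)
/var/log/pg_idle_tx_conn.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 0640 postgres postgres
}
```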

Output:

20-07-2020 09:52:01|14779|idle in transaction|juliano|192.168.0.1|psql|SELECT 'Juliano detection test';

This script maps all database instances and ports using k8s port-forward from each remote environment to your local machine (localhost or 127.0.0.1). It can also be used to access AWS RDS Aurora Postgres and MySQL instances.

If you don’t want to use this bash script you can use kubefwd instead.

Before starting, list the service names and ports running on Kubernetes with: kubectl --context=context-name-here -n dba get services

  1. Edit the script with your instance addresses (they must already be configured for port-forward on k8s as a prerequisite), then run it: ./port-forward_k8s.sh

Once the script is running you can use your preferred CLI or GUI against localhost.

  2. To stop it, list the open forwards with ps -ef | grep kubectl, then kill the one you want (kill -9 pid_here) or all of them (pkill -9 kubectl)
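The forwarding loop could be sketched like this. The service names, local ports, and the dba namespace are assumptions based on the kubectl command above, and the actual kubectl calls are commented out so the sketch is inert:

```shell
# Hypothetical sketch of port-forward_k8s.sh: one kubectl port-forward per
# service, mapped to the same port on localhost.
declare -A services=(
  ["pg-staging01"]=5432
  ["mysql-staging01"]=3306
)

for svc in "${!services[@]}"; do
  port="${services[$svc]}"
  echo "Forwarding $svc -> localhost:$port"
  # kubectl --context=staging -n dba port-forward "svc/$svc" "$port:$port" &
done
# wait   # keep the forwards alive until killed
```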

Script to quickly automate a direct connection to an AWS RDS instance through K8s pods. Run ./k8s_sql_connect.sh -h or --help to see how it works:

Usage:
     ./psql_k8s.sh -c [context]  -i [instance]

E.g. ./psql_k8s.sh -c staging -i pg-staging01

     -c, --context    -->    -i, --instance
     1. staging       -->    pg-staging01 | pg-staging02 | mysql-staging01
     2. production    -->    pg-production01 | pg-production02 | mysql-production01
 
     -h, --help        This message
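The option handling behind that help text might look like this sketch; the kubectl invocation is an assumption and is left as a comment:

```shell
# Hypothetical sketch of the -c/--context and -i/--instance parsing.
context="" instance=""
while [ $# -gt 0 ]; do
  case "$1" in
    -c|--context)  context="$2"; shift 2 ;;
    -i|--instance) instance="$2"; shift 2 ;;
    -h|--help)     echo "Usage: $0 -c [context] -i [instance]"; exit 0 ;;
    *)             echo "Unknown option: $1" >&2; exit 1 ;;
  esac
done
echo "Connecting to instance '$instance' using context '$context'"
# kubectl --context="$context" -n dba run psql-client --rm -it --image=postgres -- \
#   psql -h "$instance" -U postgres
```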

License: GPL v3

Buy Me Coffee
