
PostgreSQL Administration

The OmniTI Way

Presented by Keith Fiske / @keithf4

Database Administrator @ OmniTI

Follow along at http://slides.keithf4.com/pgtraining_centos

OmniTI, Inc

  • Full-stack support for high-traffic websites & applications
    • Millions of users
    • Terabytes of data
    • Gilt, Etsy, Ora.TV, Freelotto
  • Surge Conference

What is PostgreSQL?

  • Open Source RDBMS
  • Started at UC Berkeley in 1986, open sourced in 1996
  • BSD-type License
  • Follows SQL Standard very closely
  • Third-party Plugin Support
    • Procedural Languages
      • C, Java, Python, Perl, JavaScript, PHP, R, Ruby, etc
    • Extensions
    • Background Workers (>=9.3)
  • Massive online community (mailing lists, irc, conferences)

Training VM

  • Login: training / postgres
  • training user has sudo
    • All commands shown with "sudo" are run by the training user
  • Shortcut to terminal along top bar
  • Internet should work within VM (if it works externally)
  • Clipboard should work between host & VM
  • Take snapshot now & as many times as you'd like along the way

Installing PostgreSQL

Add the PGDG yum repository
sudo yum install http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm
Install PostgreSQL 9.3 & contrib modules
sudo yum install postgresql93-server postgresql93-contrib
No cluster is created automatically and there is no automatic startup (Red Hat policy)
sudo service postgresql-9.3 initdb
sudo chkconfig postgresql-9.3 on
sudo service postgresql-9.3 start
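A quick sanity check that the cluster came up (a minimal sketch; output wording varies by version):
sudo service postgresql-9.3 status
sudo su - postgres -c 'psql -c "SELECT version();"'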

Create Role & Database

  • Become postgres system user
sudo su - postgres
Log into database (default postgres role & database already created)
psql
  • \? to see all available commands
Create a role & database for yourself
CREATE ROLE training WITH LOGIN SUPERUSER;
CREATE DATABASE training;
You should now be able to log into PostgreSQL as your training user. Create a replication role for later:
CREATE ROLE replication WITH LOGIN REPLICATION;
  • Set password ("password")
\password replication
Recommend keeping a second terminal open logged in as the postgres user. Avoids having to exit and re-sudo throughout training. Demo trying to connect before the training user & database are created.
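Once the role & database exist, a minimal connectivity check (run as the training system user):
psql -c '\conninfo'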

Configuration - pg_hba.conf

training=# show data_directory;
  • Open the pg_hba.conf file located in the data directory (must be postgres user)
  • All authentication into the cluster is controlled by this file
  • Evaluated in order, top-down
  • Default only allows local system users with matching role names (peer)
  • Avoid "trust" if at all possible

Configuration - pg_hba.conf

  • Add the following line for later:
host    replication    replication    127.0.0.1/32    md5
Only requires a reload to put new HBA settings into place
training=# select pg_reload_conf();
    OR
sudo service postgresql-9.3 reload
Check log file for SIGHUP
  • "pg_log" directory in the data directory

Configuration - postgresql.conf

  • Main configuration file
  • Important initial settings review (most require restart)
    • listen_addresses (* - all IPs, whitelist your app servers)
    • max_connections (monitor active connections, affected by work_mem)
    • shared_buffers (8GB good starting point)
    • work_mem (2-5MB good starting point. The more RAM the better.)
    • maintenance_work_mem (1GB good starting point)
    • wal_level = hot_standby
    • effective_cache_size (50% RAM good starting point)
    • archive_mode = on
    • archive_command = '/bin/true'
    • archive_timeout = 60 (for demo purposes, usually higher)
    • max_wal_senders = 3
    • wal_keep_segments = 30
    • logging_collector = on
    • autovacuum_freeze_max_age = 1000000000 (no higher!)
Discuss WAL files and what they are: every change is written to the write-ahead log (WAL) before the data files, which provides crash recovery and is the basis for archiving & replication

Configuration - postgresql.conf

  • Important secondary settings review (all only require reload)
    • checkpoint_segments = 30 (good starting point, check logs for warnings)
    • checkpoint_timeout = 5min (default, good starting point, check logs for warnings)
    • checkpoint_completion_target = 0.9
    • log_filename = 'postgresql-%Y-%m-%d.log'
    • log_min_duration_statement = 5000 (milliseconds, i.e. 5s; beware setting too low, can fill logs fast)
    • log_connections = on
    • log_disconnections = on
    • log_line_prefix = '%m [%r] [%p]: [l-%l] user=%u,db=%d,e=%e '
      • %x & %v can be good for transaction tracking
    • log_temp_files = 10240 (good starting point, 10MB)
    • autovacuum_vacuum_threshold = 500 (default too aggressive)
    • autovacuum_analyze_threshold = 500 (default too aggressive)
    • autovacuum_vacuum_scale_factor = 0.1 (default too mild)
    • autovacuum_analyze_scale_factor = 0.05 (default too mild)
  • Restart postgresql to put all new settings into place (as training user)
sudo service postgresql-9.3 restart
If restart unsuccessful, check postgresql log files
Mention that log_rotation_size & log_filename are related in how log file reuse happens. The default (%a) would reuse an old log file when that day of the week comes around again. If a file matching log_filename already exists when the log rolls over, it will be overwritten. If rotating daily and the file could roll over within one day, add more precision to log_filename (hours, minutes, seconds).
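For example, a daily file name extended with time-of-day so an intra-day rollover gets a fresh file rather than overwriting (an illustrative setting):
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'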

Vacuums & Freezing

  • http://www.postgresql.org/docs/9.3/static/routine-vacuuming.html
  • Multi-Version Concurrency Control (MVCC)
    • Updated/deleted rows are not actually removed, just marked unavailable. This dead space leads to bloat.
    • Vacuum marks unavailable rows as re-usable space.
    • VACUUM FULL recovers disk space but locks table
    • pg_repack - Extension to reclaim disk space with minimal lock
    • Don’t overuse VACUUM FULL or pg_repack.
      • Reusable space can be more efficient than reallocating new pages
  • Transaction ID Wraparound
    • Every row has transaction id (XID) value
    • Every new write transaction increments cluster-wide XID
    • Determines visibility to current transactions
    • 32-bit number, so wraparound is possible after 4 billion transactions
      • 2 billion transactions newer than current & 2 billion older
    • VACUUM replaces sufficiently old row XIDs with the reserved FrozenXID where possible.
    • The reserved FrozenXID is always considered older than all normal XIDs
      • VACUUM FREEZE tables not written to anymore so that vacuum never needs to run on them again
    • autovacuum_freeze_max_age - when table XID value reaches this, a VACUUM is forced (even if autovac turned off).
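A quick way to see how close databases and tables are to autovacuum_freeze_max_age, using standard catalog queries (a minimal sketch):
SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;
SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r' ORDER BY 2 DESC LIMIT 10;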

Setup Replication - Basic

  • Ensure line exists in master pg_hba.conf and reload master
host        replication     replication        127.0.0.1/32            md5
Use pg_basebackup to do a backup of master (as postgres system user)
pg_basebackup -h 127.0.0.1 -U replication -D /var/lib/pgsql/9.3/slave -R -Xs -P -v
Edit slave postgresql.conf
port = 5488 (value here doesn't matter when using the CentOS init.d script, which sets the port itself via PGPORT)
hot_standby = on
Edit slave recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=127.0.0.1 port=5432 user=replication password=password'
trigger_file = '/var/lib/pgsql/9.3/slave/finish.recovery'
recovery_target_timeline='latest'
Mention delay settings and what they’re for near the hot_standby setting

Setup Replication - Basic

  • Production method for CentOS
  • Create new /etc/init.d/postgresql-9.3 config file for slave
sudo cp /etc/init.d/postgresql-9.3 /etc/init.d/postgresql-9.3-5488
Edit config file to change (as root):
PGPORT=5488 
PGDATA=/var/lib/pgsql/9.3/slave
PGLOG=/var/lib/pgsql/9.3/pgstartup-5488.log
PGUPLOG=/var/lib/pgsql/$PGMAJORVERSION/pgupgrade-5488.log
Register service, start up slave & ensure it connects to master (as training user)
sudo chkconfig postgresql-9.3-5488 on
sudo service postgresql-9.3-5488 start

Setup Replication - Basic

  • Check slave log to ensure it connected
started streaming WAL from primary at 0/3000000 on timeline 1
Check from master
select * from pg_stat_replication;
Create an object and make sure it appears on slave (as training user)
training=# CREATE TABLE testing (
                id serial primary key, 
                stuff text, 
                inserted_at timestamptz default now());
Connect to slave
psql -p 5488
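A simple end-to-end check (assumes the testing table created above): write a row on the master, then read it back on the slave.
training=# INSERT INTO testing (stuff) VALUES ('hello from master');
psql -p 5488 -d training -c 'SELECT * FROM testing;'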

Setup Replication - Advanced/Hybrid

  • Create a new data directory (as postgres system user)
mkdir /var/lib/pgsql/9.3/omnislave
chmod 700 /var/lib/pgsql/9.3/omnislave
Edit master postgresql.conf to set archive_command with omnipitr-archive (ensure it's a single line). A clone of the git repo is already in /opt/omnipitr - https://github.com/omniti-labs/omnipitr
archive_command = '/opt/omnipitr/bin/omnipitr-archive -D /var/lib/pgsql/9.3/data -dl gzip=/var/lib/pgsql/9.3/backups/walarchive -l /var/lib/pgsql/9.3/data/pg_log/omnipitr-archive-^Y-^m-^d.log -v -s /var/lib/pgsql/9.3/data/pg_log/state "%p"'
Create a destination folder for the WAL archives
mkdir /var/lib/pgsql/9.3/backups/walarchive
Create state folder for omnipitr (used when there are multiple destinations)
mkdir /var/lib/pgsql/9.3/data/pg_log/state
CentOS does not install the Time::HiRes perl module by default (used by omnipitr)
sudo yum install perl-Time-HiRes.x86_64
Reload master to activate the new archive_command. Check the omnipitr & postgres logs, and check the WAL archive folder for arriving WAL files.
Mention pigz as an alternative to the normal gzip command to improve performance

Setup Replication - Advanced/Hybrid

  • Use omnipitr-synch to create new slave
    • Does the same as pg_basebackup, but can send to multiple locations in parallel
    • Quicker rebuilding of old master after failover using rsync
    • On-the-fly compression/decompression to preserve bandwidth if needed
    • Uses ssh (no local copy option). sshd server running on VM.
    • Create keys for postgres system user
ssh-keygen -t rsa  (accept all defaults, no password)
cd ~/.ssh
cp id_rsa.pub authorized_keys
ssh localhost (yes)
exit
/opt/omnipitr/bin/omnipitr-synch -o localhost:/var/lib/pgsql/9.3/omnislave -l /var/lib/pgsql/9.3/data/pg_log/omnipitr-synch-^Y-^m-^d.log -v
Confirm with all caps "YES". Then edit the new slave's postgresql.conf:
  • port = 5466 (matters because we're NOT using init.d)

Setup Replication - Advanced/Hybrid

  • Copy the other recovery.conf file but add a restore_command & archive_cleanup_command using omnipitr (as postgres system user)
cp /var/lib/pgsql/9.3/slave/recovery.conf /var/lib/pgsql/9.3/omnislave/
restore_command = '/opt/omnipitr/bin/omnipitr-restore -D /var/lib/pgsql/9.3/omnislave -s gzip=/var/lib/pgsql/9.3/backups/walarchive -l /var/lib/pgsql/9.3/omnislave/pg_log/omnipitr-restore-^Y-^m-^d.log -f /var/lib/pgsql/9.3/omnislave/finish.recovery -v -sr %f %p'
archive_cleanup_command = '/opt/omnipitr/bin/omnipitr-cleanup -a gzip=/var/lib/pgsql/9.3/backups/walarchive -l /var/lib/pgsql/9.3/omnislave/pg_log/omnipitr-cleanup-^Y-^m-^d.log -v %r'
Also update trigger file to point to a different location (just in case)
trigger_file = '/var/lib/pgsql/9.3/omnislave/finish.recovery'

Setup Replication - Advanced/Hybrid

  • Start new cluster using the manual method instead of a new init.d script
    • As postgres system user
/usr/pgsql-9.3/bin/pg_ctl start -D /var/lib/pgsql/9.3/omnislave
  • Will not autostart on boot
  • Cannot connect to this slave (hot_standby = off in postgresql.conf)
  • Should see two entries for streaming slaves when checking from master now
  • After a few minutes, should start seeing archived WAL files cleaned up automatically
  • Check logs
  • Replays WAL archives first using restore_command
  • Connects to primary for streaming replication
  • Sees it’s streaming replication, so the omnipitr-restore -sr option allows restore_command to exit cleanly despite the error that it can't find the next WAL file. Otherwise it would be stuck in a loop until restore_command successfully replayed a WAL file.
Pause for questions after the Check Logs point above, to allow time for the archive_cleanup_command to do its work

Non-streaming Replication

  • Comment out “primary_conninfo” line from omnislave recovery.conf
  • Remove “-sr” flag from restore_command
  • Restart slave (as postgres user)
/usr/pgsql-9.3/bin/pg_ctl restart -m fast -D /var/lib/pgsql/9.3/omnislave/
  • archive_timeout on the master determines how far behind the slave could possibly be
  • The --recovery-delay (-w) option to omnipitr-restore can force the slave to lag behind by a specified amount of time
    • A delayed recovery slave lets you quickly undo mistakes
  • This was the only method of replication pre-9.0
UNDO THE ABOVE CHANGES. Restart the slave again and ensure it connects to the primary.

A Note on SELinux

  • Enabled by default in CentOS 6.5
  • omnipitr uses rsync
  • SELinux blocks the archive_command using rsync
    • Obscure error 3072 in log
  • Configure SELinux to allow postgres to rsync
sudo yum install policycoreutils-python-2.0.83-19.39.el6.x86_64
sudo semanage fcontext -a -t bin_t "/usr/bin/rsync"
sudo restorecon -R -v /usr/bin/rsync
sudo setsebool -P postgresql_can_rsync on
Just disable SELinux if you don’t need it (did this in VM already)
  • Edit /etc/sysconfig/selinux
  • SELINUX=disabled
  • Restart server

Backup - pg_dump/all

  • pg_dumpall
    • Dumps all databases in cluster in plaintext format
    • Can dump only role data
    • Use psql to restore backup
    • Restores entire cluster. No object filtering options.
    • Dump only roles and/or tablespaces (cluster-wide objects)
pg_dumpall -g -f globals.sql -v
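For a full-cluster plaintext dump & restore, a minimal sketch (file name is illustrative):
pg_dumpall -f cluster.sql -v
psql -f cluster.sql postgres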
pg_dump
  • Backs up individual databases in cluster
  • Does not back up roles or tablespaces, but does back up object privileges of those roles.
  • Use pg_restore to restore binary backups
  • Provides binary backup format that can make restore easier
  • Can restore individual schemas or tables instead of entire cluster
pg_dump -Fc training -f training.pgr -v

Backup - File System

  • Basic file system backup (essentially what we did to create the omnipitr slave; minimal sketch after this list)
    • pg_start_backup() -> copy all files -> pg_stop_backup()
  • Omnipitr
    • Set the dst-backup (-db) option in the master archive_command (the location does not need to exist yet)
archive_command = '/opt/omnipitr/bin/omnipitr-archive -D /var/lib/pgsql/9.3/data -dl gzip=/var/lib/pgsql/9.3/backups/walarchive -l /var/lib/pgsql/9.3/data/pg_log/omnipitr-archive-^Y-^m-^d.log -v -s /var/lib/pgsql/9.3/data/pg_log/state -db /var/lib/pgsql/9.3/backups/dst-backup "%p"'
  • When backup is run, this dir is created to hold WAL files needed for consistent backup
  • Same use as -X option to pg_basebackup
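Back on the basic method, a minimal sketch (run as the postgres system user; the backup target path is illustrative, and in practice you would exclude pg_xlog and rely on archived WAL):
psql -c "SELECT pg_start_backup('manual_backup');"
rsync -a /var/lib/pgsql/9.3/data/ /var/lib/pgsql/9.3/backups/manual_data/
psql -c "SELECT pg_stop_backup();"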
Reload the config with pg_reload_conf() to pick up the new archive_command. Then install the Perl SHA library, used by the -dg digest option (as training user):
sudo yum install perl-Digest-SHA.x86_64
Run omnipitr-backup-master (as postgres system user)
/opt/omnipitr/bin/omnipitr-backup-master -D /var/lib/pgsql/9.3/data -x /var/lib/pgsql/9.3/backups/dst-backup -dl gzip=/var/lib/pgsql/9.3/backups -l /var/lib/pgsql/9.3/data/pg_log/omnipitr-backup-master-^Y-^m-^d.log -v -dg SHA-512

Restore - pg_restore

  • As training user, create new database in home folder
mkdir mydb
/usr/pgsql-9.3/bin/initdb -D /home/training/mydb
Edit port in postgresql.conf - port = 5444
/usr/pgsql-9.3/bin/pg_ctl start -D /home/training/mydb
initdb automatically created a role matching the OS user ("training"). psql is used to restore pg_dumpall output or the plaintext version of pg_dump.
  • Restore roles before restoring database so permissions are set properly
psql -p 5444 -d postgres -f globals.sql -a
pg_restore can be used to restore binary dump of pg_dump
psql -p 5444 postgres
postgres=# create database new_training;
pg_restore -p 5444 -d new_training -v training.pgr
Binary dumps mean smaller backups & more flexible restores, but all indexes & constraints must be recreated during restore. Stop this instance:
/usr/pgsql-9.3/bin/pg_ctl stop -m fast -D /home/training/mydb

Restore - File System

  • As postgres system user, create a new data directory
mkdir /var/lib/pgsql/9.3/omnirestore
Untar the backup to it
cd /var/lib/pgsql/9.3/omnirestore
tar xvzpf ../backups/localhost.localdomain-data-YYYY-MM-DD.tar.gz
tar xvzpf ../backups/localhost.localdomain-xlog-YYYY-MM-DD.tar.gz
chmod 700 data
cd data
Change the port in postgresql.conf
port = 5444
Start it up
/usr/pgsql-9.3/bin/pg_ctl start -D /var/lib/pgsql/9.3/omnirestore/data
  • Much faster disaster recovery
  • Less flexible restore options (all or nothing)

Monitoring

  • Slave lag
    • Monitor from slave (seconds behind since last WAL replay)
SELECT extract(epoch from now() - pg_last_xact_replay_timestamp()) AS slave_lag;
Monitor from master if streaming (bytes behind)
SELECT client_hostname
    , client_addr
    , pg_xlog_location_diff(pg_stat_replication.sent_location, 
        pg_stat_replication.replay_location) AS byte_lag
FROM pg_stat_replication;
Use omnipitr-monitor to monitor a non-streaming, non-hot-standby slave
  • Create state directory owned by postgres - /var/log/omnipitr
/opt/omnipitr/bin/omnipitr-monitor -c last-restore-age -l /var/lib/pgsql/9.3/omnislave/pg_log/omnipitr-restore-^Y-^m-^d.log -s /var/log/omnipitr/
Backup monitor if using omnipitr-backup
/opt/omnipitr/bin/omnipitr-monitor -c last-backup-age -l /var/lib/pgsql/9.3/data/pg_log/omnipitr-backup-master-^Y-^m-^d.log -s /var/log/omnipitr/

Monitoring

  • Active/Idle connections. Idle in transaction session times.
  • Table statistics
    • Sequential scans vs index scans
    • Insert/Update/Delete rate
  • Max oldest autovac freeze age
  • Transactions
    • Commits vs Rollbacks
  • Database size
    • Total Table Size vs Total Index Size
  • Locks
  • WAL file count (expected vs current)
  • Nagios
    • check_postgres.pl (example after this list)
    • Disk space monitoring, since you can't predetermine disk usage like in Oracle
  • Log/query analysis - pgbadger
  • Critical functions - pg_jobmon
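Illustrative check_postgres.pl invocations (thresholds are examples only; see its documentation for the full action list):
./check_postgres.pl --action=backends --warning=80% --critical=90%
./check_postgres.pl --action=txn_wraparound --warning=1300000000 --critical=1800000000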

Connections

  • pg_stat_activity only shows all column data to superusers. For other roles it censors the data of other sessions. Create a function as superuser using SECURITY DEFINER and grant execute to a monitoring role.
create or replace function pg_stat_activity() 
    returns setof pg_catalog.pg_stat_activity as 
$$begin return query(select * from pg_catalog.pg_stat_activity); end$$ 
language plpgsql security definer;

revoke all on function pg_stat_activity() from public;
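Then grant execute to the monitoring role ("monitor" here is a hypothetical role name):
grant execute on function pg_stat_activity() to monitor;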
                                
select max_connections
    , total_used
    , coalesce(round(100*(total_used/max_connections)),0) as pct_used
    , idle
    , idle_in_txn
    , ((total_used - idle) - idle_in_txn) as active
    , (select coalesce(extract(epoch from (max(now() - query_start))),0) from pg_stat_activity() where state = 'idle in transaction') as max_idle_in_txn
    , (select coalesce(extract(epoch from (max(now() - query_start))),0) from pg_stat_activity() where state <> 'idle') as max_txn_time
        from (select count(*) as total_used
                , coalesce(sum(case when state = 'idle' then 1 else 0 end),0) as idle
                , coalesce(sum(case when state = 'idle in transaction' then 1 else 0 end),0) as idle_in_txn 
                from pg_stat_activity()) 
        x join (select setting::float AS max_connections FROM pg_settings WHERE name = 'max_connections') xx ON (true);
                                

Table Statistics

  • Not cluster-wide. Run this on each database in the cluster.
  • Graph updates, deletes, scans & fetches as rate of change
select sum(n_tup_ins) as inserts
    , sum(n_tup_upd) as updates
    , sum(n_tup_del) as deletes
    , sum(idx_scan)  as index_scans
    , sum(seq_scan) as seq_scans
    , sum(idx_tup_fetch) as index_tup_fetch
    , sum(seq_tup_read) as seq_tup_read 
    , coalesce(extract(epoch from now() - max(last_autovacuum)), 0) as max_last_autovacuum
    , coalesce(extract(epoch from now() - max(last_vacuum)), 0) as max_last_vacuum
    , coalesce(extract(epoch from now() - max(last_autoanalyze)), 0) as max_last_autoanalyze
    , coalesce(extract(epoch from now() - max(last_analyze)), 0) as max_last_analyze
from pg_stat_all_tables;
                                

Autovac Freeze

SELECT datname
    , txns as "age/txn"
    , wrap
    , ROUND(100*(txns/wrap::float)) as wrap_perc 
    , freez
    , ROUND(100*(txns/freez::float)) AS perc
FROM (
    SELECT foo.wrap::int
        , foo.freez::int
        , age(datfrozenxid) AS txns
        , datname 
        FROM pg_database d 
        JOIN (SELECT 2000000000 AS wrap
                , setting AS freez 
              FROM pg_settings 
              WHERE name = 'autovacuum_freeze_max_age') AS foo 
            ON (true) 
        WHERE d.datallowconn) AS foo2 ORDER BY 6 DESC, 1 ASC;
                            

Transactions

  • Graph all columns as rate of change
select txid_snapshot_xmax(txid_current_snapshot()) as xmax
    , commits
    , rollback 
from (
    select sum(xact_commit) as commits
        , sum(xact_rollback) as rollback 
    from pg_stat_database) as x;
                            

Database size

  • Returns size of each database in cluster
  • Graph as both actual value & rate of change
select datname as name
    , pg_database_size(datname) as size 
from pg_catalog.pg_database;
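For human-readable output when checking by hand, wrap the size in pg_size_pretty():
select datname as name
    , pg_size_pretty(pg_database_size(datname)) as size 
from pg_catalog.pg_database;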
                                

Total Table & Index Sizes

SELECT sum(pg_relation_size(c.oid)) as size_table 
FROM pg_class c, pg_namespace n 
WHERE relkind NOT IN ('i', 'c', 'v') AND n.oid = c.relnamespace;
                            
SELECT sum(pg_relation_size(c.oid)) as size_index 
FROM pg_class c, pg_namespace n 
WHERE (relkind = 'i') AND n.oid = c.relnamespace;
                            

Locks

select count(*) as total
    , count(nullif(granted,true)) as waiting
    , count(nullif(mode ilike '%exclusive%',false)) as exclusive 
from pg_locks;
                            

WAL Metrics

  • pg_ls_dir() requires superuser privileges. Do same trick as with pg_stat_activity()
  • Can only see files in data directory
  • Warn if it goes above 100%. Probably a more serious issue at >= 200%
-- Note: the text argument is accepted but ignored; the path is hard-coded to pg_xlog
create or replace function pg_ls_dir_sec_def(text) 
    returns setof text as 
$$begin return query(select pg_catalog.pg_ls_dir('pg_xlog')); end$$ 
language plpgsql security definer;

revoke all on function pg_ls_dir_sec_def(text) from public;
                            
SELECT count(*) as total
    , (2 + current_setting('checkpoint_completion_target')::float) * 
        current_setting('checkpoint_segments')::int + 1 + 
        current_setting('wal_keep_segments')::int as expected
    , ((count(*) / ((2 + current_setting('checkpoint_completion_target')::float) * 
        current_setting('checkpoint_segments')::int + 1 + 
        current_setting('wal_keep_segments')::int)) * 100)::int as pct
FROM
    pg_ls_dir_sec_def('pg_xlog')
WHERE
    pg_ls_dir_sec_def ~ '^[0-9A-F]{24}$';
                            

Upgrading

  • Dump/Restore
    • Works on all versions
    • Use the pg_dump binary of the upgrade target
    • 8.4+ has parallel pg_restore
    • 9.3+ has parallel pg_dump
    • Same caveats mentioned previously (smaller dump/longer restore)
  • pg_upgrade
    • In-place upgrade of data files
    • Works only for 8.4.7+ (all previous versions must dump/restore)
    • OS w/ hard link support can greatly decrease upgrade time
      • Ex: 700GB upgrade in under 5 minutes
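Illustrative commands for both approaches (version numbers, paths, and job counts are examples only):
pg_dump -Fd -j 4 -f training_dir training (parallel dump to directory format, 9.3+)
pg_restore -j 4 -d new_training training_dir (parallel restore, 8.4+)
/usr/pgsql-9.4/bin/pg_upgrade -b /usr/pgsql-9.3/bin -B /usr/pgsql-9.4/bin -d /var/lib/pgsql/9.3/data -D /var/lib/pgsql/9.4/data --link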

Failover

  • Will be failing over to the first slave we made (/var/lib/pgsql/9.3/slave/)
  • Ensure failover slave has same archive_command set as the original master but with appropriate path settings (would typically be the same if on different servers).
    • Make new state folder for omnipitr
mkdir /var/lib/pgsql/9.3/slave/pg_log/state
archive_command = '/opt/omnipitr/bin/omnipitr-archive -D /var/lib/pgsql/9.3/slave -dl gzip=/var/lib/pgsql/9.3/backups/walarchive -l /var/lib/pgsql/9.3/slave/pg_log/omnipitr-archive-^Y-^m-^d.log -v -s /var/lib/pgsql/9.3/slave/pg_log/state -db /var/lib/pgsql/9.3/backups/dst-backup "%p"'
Reload slave config to turn on new archive command
psql -p 5488
select pg_reload_conf();
Check that it's in place
show archive_command;
The archive_command is only called while PostgreSQL is running as a master

Failover

  • Stop the master database or deny access (avoid split-brain)
sudo service postgresql-9.3 stop
Check the slave logs. Should see connection errors. Touch the failover trigger file for the first slave we made:
touch /var/lib/pgsql/9.3/slave/finish.recovery
Edit recovery.conf in omnislave to point to new master
primary_conninfo = 'user=replication password=password host=127.0.0.1 port=5488'
Restart omnislave
/usr/pgsql-9.3/bin/pg_ctl restart -D /var/lib/pgsql/9.3/omnislave
Check omnislave log to see if it reconnected to new primary
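To verify the failover end-to-end (assumes the testing table from earlier), confirm the new master accepts writes and sees the omnislave streaming:
psql -p 5488 -d training -c "INSERT INTO testing (stuff) VALUES ('after failover');"
psql -p 5488 -c 'SELECT * FROM pg_stat_replication;'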

Final Questions?
