
Releases: splendiddata/puppet_pure_repmgr

Added custom fact

04 Sep 01:55
Pre-release
  • A new custom Ruby script adds a 'pure_postgres_node_count' fact on nodes where postgres is installed (see the sketch below).
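For illustration, a minimal sketch of how such a fact could be produced. The release's actual implementation is a Ruby script; the version below is a hypothetical Facter external fact (an executable in facts.d that prints key=value lines), written in Python for consistency with the other sketches here, and the repmgr_pure.repl_nodes table it queries is an assumption, not the module's real source for the count.

```python
#!/usr/bin/env python3
# Hypothetical external-fact sketch; the release's real fact is a Ruby script.
# Facter runs executables from facts.d and parses key=value lines on stdout.
import subprocess

try:
    # Assumption: the node count comes from repmgr's metadata table.
    out = subprocess.check_output(
        ['psql', '-At', '-c', 'SELECT count(*) FROM repmgr_pure.repl_nodes'],
        stderr=subprocess.DEVNULL)
    print('pure_postgres_node_count=%s' % out.strip().decode())
except Exception:
    # No postgres on this node: print nothing, so the fact stays undefined.
    pass
```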

Changed behaviour of generating new nodeid.

06 Jul 16:08
  • The previous solution cloned with nodeid 100, which was also used in the replication slot name that ended up in recovery.conf.
    That didn't work with more than one standby, since both standbys tried to replicate through the same replication slot.
    The new solution is a more complex python script that finds the master, detects a free nodeid, registers it immediately, and then uses that nodeid for cloning.
    The new solution works with multiple standbys too (see the sketch below).
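A minimal sketch of the free-nodeid idea, not the module's actual script; the repmgr_pure.repl_nodes table, its columns, and claim_free_nodeid are assumptions, and the real script also finds the master and performs the clone.

```python
# Sketch only: claim the lowest free nodeid on the master so that two
# standbys cloning at the same time cannot pick the same id.
import psycopg2

def claim_free_nodeid(master_dsn, name, conninfo):
    conn = psycopg2.connect(master_dsn)
    cur = conn.cursor()
    # Serialize concurrent claims with a table lock (held until commit).
    cur.execute('LOCK TABLE repmgr_pure.repl_nodes IN EXCLUSIVE MODE')
    cur.execute('SELECT id FROM repmgr_pure.repl_nodes ORDER BY id')
    taken = [row[0] for row in cur.fetchall()]
    # Lowest positive integer that is not taken yet.
    nodeid = next(i for i in range(1, len(taken) + 2) if i not in taken)
    # Register the id immediately so the next standby sees it as taken.
    cur.execute(
        "INSERT INTO repmgr_pure.repl_nodes (id, cluster, name, conninfo) "
        "VALUES (%s, 'pure', %s, %s)", (nodeid, name, conninfo))
    conn.commit()
    conn.close()
    return nodeid
```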

Cleanup

05 Jul 14:03
  • Added comments to templates/pure_cluster_logger.epp
  • New version of the barman module

Deploy without Round Robin DNS, cleanup, and a small fix in the pure_repmgr_facts script

03 Jul 15:01
  • The cluster logger no longer requires Round Robin DNS.
    From now on, deploying a new cluster can finish without DNS being properly set up.
  • Moved some manifests to subfolders for clarity
  • Puppet lint
  • Fixed a small issue in the pure_repmgr_facts script: added 'Exception as e' so that exceptions are properly output in debug mode (see the sketch below).
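Illustratively, the kind of change this describes; collect_facts and debug are hypothetical stand-ins for the script's own code.

```python
def debug(msg):
    print('DEBUG: %s' % msg)

def collect_facts():
    raise RuntimeError('demo failure')  # stand-in for the real facts code

# The fix: bind the exception with 'as e' so debug mode can show what failed.
try:
    collect_facts()
except Exception as e:
    debug('collecting facts failed: %s' % e)
```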

Cleanup, release notes, commenting, less dependent on facts, and copyrights

23 Jun 17:05
  • Cleanup with puppet lint

  • Added a line to some files and templates stating that the file is managed by puppet

  • Added a copyright statement to manifests, files and templates

  • Added release notes. They are shipped by puppet to the node
    so that operators know which version of the puppet module is currently managing this node.

  • Changed the dependencies on facts.
    Previously, one huge script ran to collect all kinds of facts:
    • it read DNS,
    • it connected to other hosts,
    • it read the ssh authorized keys for postgres,
    • etc.
    Since this release, many of these solutions have changed:
    • instead of DNS info, exported resources are used to collect the data,
    • the ssh data comes from a small facts snippet,
    • config.pp works quite differently, so connectivity between hosts is no longer required,
    • etc.
  • Details:

    • Removed the facts script and ini file that collected cluster config from DNS.
    • Changed the location of the logger's ini file.
    • Added a dns= setting to the logger's ini file.
    • config.pp and install.pp don't rely on nodeid anymore.
    • A python script reads info from postgres (local->finds master->generates free nodeid),
      then builds a repmgr.conf file and calls the repmgr register command. It is idempotent (see the sketch after this list).
    • A python script prints facts on the nodeid and the replication role of the node.
    • The primary_network parameter is replaced by the initial_standby parameter.
      Setting this to on makes that node do an initdb if cloning didn't work properly.
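A rough sketch of that build-and-register flow. The configuration keys follow repmgr 3.x conventions, and the --force flag is what makes re-registration safe to repeat; paths and names are placeholders, not the module's actual values.

```python
# Sketch: write repmgr.conf only when it changed, then (re)register the node.
import os
import subprocess

def write_repmgr_conf(path, cluster, nodeid, name, conninfo):
    content = ("cluster=%s\n"
               "node=%s\n"
               "node_name=%s\n"
               "conninfo='%s'\n" % (cluster, nodeid, name, conninfo))
    # Idempotent: leave the file alone if the content is already correct.
    if not os.path.exists(path) or open(path).read() != content:
        with open(path, 'w') as f:
            f.write(content)

def register_standby(path):
    # --force lets an already registered node re-register without error,
    # so repeated puppet runs converge instead of failing.
    subprocess.check_call(
        ['repmgr', '-f', path, 'standby', 'register', '--force'])
```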

Added heartbeat feature

12 Jun 10:51

This release adds a heartbeat feature.
The heartbeat feature basically consists of a table in the postgres database and some additions to the pure_cluster_logger python script (see the sketch below). The script:

  • creates the table if it doesn't exist
  • adds a record for the server it is running on (if it doesn't exist)
  • updates the record (sets the [updated] column to the current time with the now() function) on every check run (basically every second).
    This does two things:
    1: You now have a single place to check whether the scripts are running on all servers. Furthermore, some additional information is available, like the previous servers it was running on and when a script stopped running.
    2: You now have a very small replication stream going on, even when the application modifies nothing. This improves the lag_sec value, which shows the delta between the time on the master and the latest commit that was applied on the standby. Previously, the latest commit on a standby could be old, even in a properly functioning replication setup. With this heartbeat feature, in a properly functioning setup it is at most about a second old.
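A minimal sketch of the heartbeat loop described above, assuming PostgreSQL 9.5+ for ON CONFLICT; the table definition and psycopg2 usage are assumptions, not the pure_cluster_logger's actual code.

```python
# Sketch: create the heartbeat table, then touch this server's row each second
# so a tiny replication stream flows even when the application is idle.
import socket
import time
import psycopg2

conn = psycopg2.connect('dbname=postgres')
conn.autocommit = True
cur = conn.cursor()
me = socket.gethostname()

cur.execute("""CREATE TABLE IF NOT EXISTS heartbeat (
                   server  text PRIMARY KEY,
                   updated timestamptz NOT NULL)""")

while True:
    # First run inserts the row for this server; later runs update it.
    cur.execute("""INSERT INTO heartbeat (server, updated)
                   VALUES (%s, now())
                   ON CONFLICT (server) DO UPDATE SET updated = now()""", (me,))
    time.sleep(1)   # one check run per second, like the cluster logger
```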

Clusterlogger inconsistencies and autorestart

01 Jun 10:01

Fixed some minor inconsistencies in the clusterlogger.
Added a feature for enabling/disabling autorestart.

Added barman support and modified the postgres service to better fit the puppet way

22 May 14:02

I have added a parameter barman_server. If you set this to the fqdn of a barman server, then puppet will add what is required for barman support in the replicated cluster setup.
Furthermore, the ssh module is split in two and the ssh key part is moved to the pure_postgres module.
The ssh key part must be called with a list of ssh keys that should be added to the known hosts of the server.
Last but not least, the service part is modified to better fit the puppet way of resource management.

  • All the service stuff is moved to the pure_postgres::service module (to fit the proper module layout)
  • An init parameter sets whether the service should be managed by the module or not.
    • Managed: pure_postgres starts the service
    • Unmanaged: pure_postgres::start can be notified, but will not be started by default
  • pure_postgres::started is now a definition. Both pure_postgres::start and pure_postgres::restart use it to check that postgres is up after the class is finished
  • pure_postgres::reload and pure_postgres::restart are refreshonly
  • pure_postgres::service now initializes the services rather than taking action
  • This new setup fits better into pure_repmgr::config. Required changes are applied as needed.

Final release for phase 1

13 May 06:45

Also fixed the workaround for the 'repmgr switchover not handling a separate replication user well' issue.

Fixed cluster logger issue reporting weird lag

28 Apr 16:23
  • Added a feature to the cluster logger: it now validates lag info. If the replay time has travelled into the past, the value is invalid. On an instance restart, the previous replay time is reset (see the sketch after this list).
  • repmgrd and automatic failover are not implemented yet, but I started to prepare for it.
    • Added a config file to add repmgrd automatic failover functionality to a postgres cluster with repmgrd.
    • Added a systemd unit file for repmgrd.
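A sketch of the replay-time sanity check from the first bullet; the function and the way it is called are hypothetical, not the cluster logger's actual code.

```python
# Sketch: reject lag samples whose replay time travelled into the past,
# and forget the previous replay time when the instance restarted.
last_replay = None

def validate_replay(replay_time, instance_restarted):
    """Return replay_time if plausible, else None (invalid lag sample)."""
    global last_replay
    if instance_restarted:
        last_replay = None          # reset previous replay time on restart
    if last_replay is not None and replay_time < last_replay:
        return None                 # time travelled backwards: invalid
    last_replay = replay_time
    return replay_time
```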