PostgreSQL Cluster Replication RTO Test
The PostgreSQL Cluster Replication RTO test helps assess the failover readiness of standby nodes in a high-availability cluster. It measures the replication lag durations to indicate how quickly a standby node can take over if the primary fails. High lag values can lead to longer failover times, increasing the risk of data loss and downtime. By identifying nodes with excessive or inconsistent lag, the test helps pinpoint replication bottlenecks. Monitoring these metrics ensures that administrators can proactively resolve issues and meet recovery time objectives. Therefore, this test is essential for maintaining seamless continuity and minimizing disruption in production environments.
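To illustrate what the lag duration represents: on a standby, PostgreSQL exposes the replay timestamp of the last applied transaction (via the recovery function pg_last_xact_replay_timestamp()), and the lag is the difference between the current time and that timestamp. The sketch below is a minimal illustration using hypothetical timestamp values, mirroring the "Current time" and "Last transaction replayed time" fields in this test's detailed diagnosis; it is not the actual query the eG agent runs.

```python
from datetime import datetime, timezone

def replication_lag_seconds(current_time: datetime, last_replayed: datetime) -> float:
    """Lag duration = current wall-clock time minus the replay timestamp of
    the last transaction applied on the standby, clamped at zero."""
    return max((current_time - last_replayed).total_seconds(), 0.0)

# Hypothetical sample values for illustration:
now = datetime(2024, 5, 1, 10, 0, 30, tzinfo=timezone.utc)
last_replayed = datetime(2024, 5, 1, 10, 0, 12, tzinfo=timezone.utc)
print(replication_lag_seconds(now, last_replayed))  # → 18.0
```

A persistently large value of this difference is exactly the condition the test flags as a failover-readiness risk.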
Target of the test: A PostgreSQL Cluster
Agent deploying the test: An external agent
Outputs of the test: One set of results for each node of the target PostgreSQL cluster being monitored.
| Parameter | Description |
|---|---|
| Test period | How often should the test be executed. |
| Host | The IP address of the host for which this test is to be configured. |
| Port | The port on which the server is listening. The default port is 5432. |
| Username | To monitor a PostgreSQL cluster, you must manually create a dedicated database user account on each PostgreSQL instance that you wish to monitor. To know how to create such a user based on where the target PostgreSQL cluster is installed (whether on-premises or hosted on the cloud), refer to How does eG Enterprise Monitor PostgreSQL Server?. |
| Password | The password associated with the above Username (can be 'NULL'). Here, 'NULL' means that the user does not have a password. |
| Confirm Password | Confirm the Password (if any) by retyping it here. |
| DB Name | The name of the target database to connect to. The default is "postgres". |
| SSL | This flag indicates whether the eG agent should communicate with the PostgreSQL cluster over an SSL-encrypted connection. By default, this flag is set to No, since the target PostgreSQL database is not SSL-enabled by default. If the target cluster is SSL-enabled, set this flag to Yes. |
| Verify CA | If the eG agent is required to establish an encrypted connection with the target PostgreSQL cluster by authenticating the server's identity through verifying the server CA certificate, set the Verify CA flag to Yes. By default, this flag is set to No. |
| CA Cert File | This parameter is applicable only if the target PostgreSQL cluster is SSL-enabled. The certificate file is a public-key certificate following the X.509 standard. It contains information about the identity of the server, such as its name, geolocation, and public key; essentially, it is a certificate that the server presents to connecting clients to prove that it is what it claims to be. Each node of the target cluster can have an individual certificate file, or a single certificate can be used to access all the nodes in the cluster. Therefore, specify the full path to the server root certificate or certificate file, signed by the CA and in .crt format, for all/each node in the CA Cert File text box. By default, this parameter is set to none. The specification differs according to the type of cluster and its configuration: if a certificate file is available for each node of the PostgreSQL Cluster, provide a comma-separated list of full paths to the certificates in the CA Cert File text box, for example: C:\app\eGurkha\JRE\lib\security\postgresql-test-ca.crt,C:\app\eGurkha\JRE\lib\security\postgresql-test-ca2.crt,C:\app\eGurkha\JRE\lib\security\postgresql-test-ca3.crt. If a single certificate is used to access all nodes, specify the full path to the certificate file of the target PostgreSQL Cluster, for example: C:\app\eGurkha\JRE\lib\security\postgresql-test-ca.crt |
| Client Cert File | This parameter is applicable only if the target PostgreSQL Cluster is SSL-enabled. To collect metrics from the target PostgreSQL Cluster, the eG agent requires a client certificate in .p12 format. Hence, specify the full path to the client certificate file in .p12 format in the Client Cert File text box. For example, the location of this file may be: C:\app\eGurkha\JRE\lib\security\test-client.p12 |
| Client Key File | A client key file contains the private key corresponding to the public key used by the client. Provide the full path to the file containing the client key. |
| Include Available Nodes | In the Include Available Nodes text box, provide a comma-separated list of all the available nodes to be included for monitoring. This way, the test monitors and collects metrics from all the available nodes in the cluster. By default, this parameter is set to none. The format of this configuration is HOSTNAME:PORT, for example: 172.16.8.136:5432,172.16.8.139:5432 |
| DD Frequency | Refers to the frequency with which detailed diagnosis measures are to be generated for this test. The default is 1:1. This indicates that, by default, detailed measures will be generated every time this test runs, and also every time the test detects a problem. You can modify this frequency, if you so desire. Also, if you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD frequency. |
| Detailed Diagnosis | To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, click on the Off option. The option to selectively enable/disable the detailed diagnosis capability will be available only if the pre-requisite conditions for detailed diagnosis are fulfilled. |
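The connection-related parameters above (Host, Port, DB Name, SSL, Verify CA, and the certificate paths) map naturally onto a PostgreSQL connection specification. The sketch below is illustrative only: it assumes libpq-style keyword names (host, port, dbname, sslmode, sslrootcert, sslcert, sslkey), whereas the eG agent's actual connection mechanism (likely a JDBC driver, given the .p12 client-certificate format) is not documented here.

```python
def build_conninfo(host, port, dbname, user, ssl=False, verify_ca=False,
                   ca_cert=None, client_cert=None, client_key=None):
    """Assemble a libpq-style connection string from the test parameters.
    sslmode mirrors the SSL / Verify CA flags: 'disable' when SSL is No,
    'require' when SSL is Yes, 'verify-ca' when Verify CA is also Yes."""
    parts = [f"host={host}", f"port={port}", f"dbname={dbname}", f"user={user}"]
    if not ssl:
        parts.append("sslmode=disable")
    else:
        parts.append("sslmode=verify-ca" if verify_ca else "sslmode=require")
        if ca_cert:
            parts.append(f"sslrootcert={ca_cert}")
        if client_cert:
            parts.append(f"sslcert={client_cert}")
        if client_key:
            parts.append(f"sslkey={client_key}")
    return " ".join(parts)

# Hypothetical monitoring user and CA path, for illustration only:
print(build_conninfo("172.16.8.136", 5432, "postgres", "egmonitor",
                     ssl=True, verify_ca=True,
                     ca_cert="postgresql-test-ca.crt"))
```

Note how Verify CA only has an effect when SSL is enabled, which matches the parameter descriptions above: the CA certificate is consulted only for an encrypted connection whose server identity must be verified.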
| Measurement | Description | Measurement Unit | Interpretation |
|---|---|---|---|
| Replication lag duration | Indicates the time difference in execution between the primary and this standby node; i.e., the amount of time the replica is lagging behind the current state of the primary instance. | Seconds | This measure is applicable only for standby nodes. A higher lag duration implies that the standby node is taking longer to apply changes, which may impact failover readiness. Ideally, this value should remain low to ensure minimal data loss and quick recovery during failover. The detailed diagnosis of this measure gives the Current time and the Last transaction replayed time. |
| Maximum replication lag duration | Indicates the maximum replication time lag noticed between the primary and the standby nodes. | Seconds | This measure is reported only for the Summary descriptor, and represents the maximum replication lag across all the nodes in the cluster. A high maximum lag indicates that at least one standby node is significantly behind the primary, posing a risk to data consistency and high availability. Monitoring this helps isolate lagging nodes and take corrective action. Using the detailed diagnosis, the node currently experiencing the maximum replication lag in the cluster can be identified. |
| Minimum replication lag duration | Indicates the minimum replication time lag noticed between the primary and the standby nodes. | Seconds | This measure is reported only for the Summary descriptor, and represents the minimum replication lag across all the nodes in the cluster. Using the detailed diagnosis, the node currently experiencing the minimum replication lag in the cluster can be identified. |
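The Summary-descriptor measures can be understood as simple aggregates over the per-standby lag values, with the detailed diagnosis naming the node responsible for each extreme. A minimal sketch with hypothetical node names and lag values (the test's own aggregation is internal to the eG agent):

```python
def summarize_lag(lag_by_node):
    """Given per-standby lag durations in seconds, return (node, lag) pairs
    for the maximum and minimum lag, as reported under the Summary
    descriptor and identified by its detailed diagnosis."""
    max_node = max(lag_by_node, key=lag_by_node.get)
    min_node = min(lag_by_node, key=lag_by_node.get)
    return (max_node, lag_by_node[max_node]), (min_node, lag_by_node[min_node])

# Hypothetical standby nodes (HOSTNAME:PORT) and lag values in seconds:
lags = {"172.16.8.136:5432": 2.4, "172.16.8.139:5432": 11.7, "172.16.8.141:5432": 0.9}
(worst, worst_lag), (best, best_lag) = summarize_lag(lags)
print(worst, worst_lag)  # the node with the maximum replication lag
print(best, best_lag)    # the node with the minimum replication lag
```

In this example the 11.7-second standby is the one that would threaten the recovery time objective, so it is the node the Summary descriptor's detailed diagnosis would single out for corrective action.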