PostgreSQL Cluster Uptime Test
The PostgreSQL Cluster Uptime Test tracks and reports the uptime of each node in a PostgreSQL cluster, helping administrators assess the availability and stability of their database infrastructure. By monitoring key metrics such as server restarts, current uptime, total uptime, and maintenance status, this test allows for early detection of unexpected server crashes, frequent restarts, or extended downtimes. It also helps distinguish between planned maintenance and unplanned outages. In a high-availability environment, where continuous access to data is critical, this test plays a vital role in ensuring that all cluster nodes are running reliably, thereby supporting consistent performance and minimizing the risk of service disruptions.
Target of the test: A PostgreSQL Cluster
Agent deploying the test: An external agent
Outputs of the test: One set of results for each node of the target PostgreSQL cluster being monitored.
| Parameter | Description |
|---|---|
| Test period | How often should the test be executed. |
| Host | The IP address of the host for which this test is to be configured. |
| Port | The port on which the server is listening. The default port is 5432. |
| Username | To monitor a PostgreSQL cluster, you must manually create a dedicated database user account on each PostgreSQL instance that you wish to monitor. To know how to create such a user based on where the target PostgreSQL cluster is installed (whether on-premises or hosted on the cloud), refer to How does eG Enterprise Monitor PostgreSQL Server?. |
| Password | The password associated with the above Username (can be 'NULL'). Here, 'NULL' means that the user does not have a password. |
| Confirm Password | Confirm the Password (if any) by retyping it here. |
| DB Name | The name of the target database to connect to. The default is "postgres". |
| SSL | This flag indicates whether the eG agent should communicate with the PostgreSQL cluster over an SSL-encrypted connection or not. By default, this flag is set to No, as the target PostgreSQL database is not SSL-enabled by default. If the target cluster is SSL-enabled, set this flag to Yes. |
| Verify CA | If the eG agent is required to establish an encrypted connection with the target PostgreSQL cluster and authenticate the server's identity by verifying the server CA certificate, set the Verify CA flag to Yes. By default, this flag is set to No. |
| CA Cert File | This parameter is applicable only if the target PostgreSQL cluster is SSL-enabled. The certificate file is a public-key certificate following the X.509 standard. It contains information about the identity of the server, such as its name, geolocation, and public key; essentially, it is the certificate that the server presents to connecting clients to prove that it is what it claims to be. Each node of the target cluster can have an individual certificate file, or a single certificate can be used to access all the nodes in the cluster. Specify the full path to the server root certificate or CA-signed certificate file, in .crt format, in the CA Cert File text box. For example, the location of this file may be: C:\app\eGurkha\JRE\lib\security\postgresql-test-ca.crt. By default, this parameter is set to none. The specification differs according to the type of cluster and configuration: if each node of the PostgreSQL cluster has its own certificate file, provide a comma-separated list of full paths, for example: C:\app\eGurkha\JRE\lib\security\postgresql-test-ca.crt,C:\app\eGurkha\JRE\lib\security\postgresql-test-ca2.crt,C:\app\eGurkha\JRE\lib\security\postgresql-test-ca3.crt. If a single certificate is used to access all nodes, specify the full path to that one certificate file, for example: C:\app\eGurkha\JRE\lib\security\postgresql-test-ca.crt |
| Client Cert File | This parameter is applicable only if the target PostgreSQL cluster is SSL-enabled. To collect metrics from the target PostgreSQL cluster, the eG agent requires a client certificate in .p12 format. Therefore, specify the full path to the client certificate file, in .p12 format, in the Client Cert File text box. For example, the location of this file may be: C:\app\eGurkha\JRE\lib\security\test-client.p12. |
| Client Key File | A client key file contains the private key that corresponds to the public key used by the client. Provide the full path to the file containing the client key. |
| Include Available Nodes | In the Include Available Nodes text box, provide a comma-separated list of all the available nodes to be included for monitoring. This way, the test monitors and collects metrics from all the available nodes in the cluster. By default, this parameter is set to none. The format of this configuration is HOSTNAME:PORT, for example, 172.16.8.136:5432,172.16.8.139:5432 |
| DD Frequency | Refers to the frequency with which detailed diagnosis measures are to be generated for this test. The default is 1:1. This indicates that, by default, detailed measures will be generated every time this test runs, and also every time the test detects a problem. You can modify this frequency, if you so desire. Also, if you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD Frequency. |
| Detailed Diagnosis | To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, click on the Off option. The option to selectively enable/disable the detailed diagnosis capability will be available only if the following conditions are fulfilled: |
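To make the interplay of the connection parameters above concrete, here is a hedged sketch of how a monitoring script might assemble them into a connection string and query a node's uptime. The `build_dsn` helper is hypothetical (not part of eG Enterprise); the option names `sslmode`, `sslrootcert`, `sslcert`, and `sslkey` are standard libpq connection parameters, and `pg_postmaster_start_time()` is a built-in PostgreSQL function.

```python
# Illustrative sketch only: builds a libpq-style connection string from the
# parameters described in the table above. build_dsn is a hypothetical
# helper; the SSL option names follow the standard libpq convention.

# pg_postmaster_start_time() returns the time the server was started;
# subtracting it from now() yields the node's uptime.
UPTIME_SQL = ("SELECT pg_postmaster_start_time() AS started_at, "
              "now() - pg_postmaster_start_time() AS uptime;")

def build_dsn(host, port=5432, dbname="postgres", user="eg_monitor",
              password=None, ssl=False, verify_ca=False,
              ca_cert=None, client_cert=None, client_key=None):
    """Assemble a libpq connection string from the test parameters."""
    parts = [f"host={host}", f"port={port}", f"dbname={dbname}", f"user={user}"]
    if password:
        parts.append(f"password={password}")
    if ssl:
        # 'verify-ca' encrypts AND authenticates the server against the CA
        # certificate; plain 'require' encrypts without identity verification.
        parts.append("sslmode=verify-ca" if verify_ca else "sslmode=require")
        if ca_cert:
            parts.append(f"sslrootcert={ca_cert}")    # CA Cert File
        if client_cert:
            parts.append(f"sslcert={client_cert}")    # Client Cert File
        if client_key:
            parts.append(f"sslkey={client_key}")      # Client Key File
    return " ".join(parts)

print(build_dsn("172.16.8.136", ssl=True, verify_ca=True,
                ca_cert="/etc/ssl/postgresql-test-ca.crt"))
```

Note that the eG agent itself is Java-based and connects via JDBC, which is why the table above asks for the client certificate in .p12 format; the libpq form shown here is simply the most compact way to illustrate how the SSL, Verify CA, and certificate parameters relate to one another.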
|
| Measurement | Description | Measurement Unit | Interpretation |
|---|---|---|---|
| Has Postgres server been restarted? | Indicates whether this node has been rebooted during the last measurement period or not. | | This measure is not reported for the Summary descriptor. If the value of this measure is No, it indicates that the node has not restarted. The value Yes, on the other hand, implies that the target node has indeed restarted. The values reported by this measure and their numeric equivalents are mentioned in the table below. Note: By default, this measure reports the value Yes or No to indicate whether the node has restarted. The graph of this measure, however, represents the same using the numeric equivalents 0 or 1. Use the detailed diagnosis to find out the shutdown date, restart date, shutdown duration (minutes), and whether the server is under maintenance. |
| Uptime since last measure | Indicates the time period that this node has been up since the last time this test ran. | Seconds | This measure is not reported for the Summary descriptor. If this node has not been restarted during the last measurement period and the agent has been running continuously, this value will be equal to the measurement period. If this node was restarted during the last measurement period, this value will be less than the measurement period of the test. For example, if the measurement period is 300 secs and the node was restarted 120 secs back, this metric will report a value of 120 seconds. The accuracy of this metric depends on the measurement period: the smaller the measurement period, the greater the accuracy. |
| Uptime | Indicates the total time that this node has been up since its last reboot. | Minutes | This measure is not reported for the Summary descriptor. Administrators may wish to be alerted if a node has been running without a reboot for a very long period. Setting a threshold for this metric allows administrators to detect such conditions. |
| Maximum uptime | Indicates the maximum uptime of the nodes in the cluster. | Minutes | This measure is reported only for the Summary descriptor; it reports the node with the maximum uptime in the cluster. A high value indicates a stable and reliable node. Use the detailed diagnosis of this measure to find out the details of the node with the maximum uptime. |
| Is under maintenance? | Indicates whether this node is under maintenance or not. | | This measure is not reported for the Summary descriptor. The values reported by this measure and their numeric equivalents are mentioned in the table below. Note: This measure reports the Measure Values listed in the table above to indicate whether the target node is under maintenance. However, in the graph, this measure is indicated using the corresponding Numeric Values. |
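The restart flag and the two uptime measures above can all be derived from consecutive samples of the server's start time. The sketch below is illustrative logic only (not eG's actual implementation): a changed start time between two samples flags a restart, in which case "uptime since last measure" is only the time elapsed since the new start, exactly as the Interpretation column describes.

```python
from datetime import datetime, timedelta

def evaluate_uptime(prev_start, curr_start, measurement_period_secs, now):
    """Derive (restarted, uptime since last measure in seconds, total uptime
    in minutes) from two consecutive samples of the server's start time,
    e.g. as returned by PostgreSQL's pg_postmaster_start_time()."""
    restarted = curr_start != prev_start  # start time moved => server rebooted
    if restarted:
        # Node came back up mid-period: it has only been up since curr_start.
        uptime_since_last = (now - curr_start).total_seconds()
    else:
        # No restart: the node was up for the whole measurement period.
        uptime_since_last = float(measurement_period_secs)
    total_uptime_minutes = (now - curr_start).total_seconds() / 60.0
    return restarted, uptime_since_last, total_uptime_minutes

# Reproduce the example from the Interpretation column: a 300-second
# measurement period with a restart 120 seconds ago reports 120 seconds.
now = datetime(2024, 1, 1, 12, 0, 0)
curr = now - timedelta(seconds=120)
prev = curr - timedelta(hours=6)
restarted, since_last, total_min = evaluate_uptime(prev, curr, 300, now)
print(restarted, since_last, total_min)  # True 120.0 2.0
```

This also makes the accuracy caveat visible: a restart is only detected at the next test run, so the shorter the measurement period, the more closely "uptime since last measure" tracks the actual downtime.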