K8s Persistent Volumes Test
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Typically, a user creates a PersistentVolumeClaim with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, checks if any static PV (a PV manually created by the administrator) matches the new PVC, and binds them together. When none of the static PVs the administrator created matches a user’s PVC, the cluster may try to dynamically provision a volume specially for the PVC. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
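The matching step described above - comparing a claim's requested size and access modes against each static PV - can be sketched in a few lines of Python. This is an illustrative simplification, not the actual Kubernetes controller logic (which also considers storage class, selectors, and volume mode):

```python
def parse_gi(size: str) -> int:
    """Parse a Kubernetes-style size such as '50Gi' into GiB (sketch handles Gi only)."""
    assert size.endswith("Gi"), "sketch handles Gi only"
    return int(size[:-2])

def pv_matches_claim(pv: dict, pvc: dict) -> bool:
    """A PV can satisfy a PVC only if it is large enough and
    supports all of the access modes the claim requests."""
    if parse_gi(pv["capacity"]) < parse_gi(pvc["request"]):
        return False
    return set(pvc["accessModes"]).issubset(set(pv["accessModes"]))

# A cluster with only 50Gi PVs cannot satisfy a 100Gi claim; the PVC stays unbound.
pvs = [{"name": "pv-a", "capacity": "50Gi", "accessModes": ["ReadWriteOnce"]}]
pvc = {"request": "100Gi", "accessModes": ["ReadWriteOnce"]}
print(any(pv_matches_claim(pv, pvc) for pv in pvs))   # False

# Once a 100Gi PV is added, the claim can be bound.
pvs.append({"name": "pv-b", "capacity": "100Gi", "accessModes": ["ReadWriteOnce"]})
print(any(pv_matches_claim(pv, pvc) for pv in pvs))   # True
```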
When a user is done with their volume, they can delete the PVC objects from the API which allows reclamation of the resource.
If there are many unfulfilled PVCs, an administrator may want to quickly check the status of the existing PVs to determine why they could not be bound to any of the PVCs: are the PVs already bound? Have the PVs been released, but not yet reclaimed? Or has reclamation failed for many PVs? The Kube Persistent Volumes test provides answers to these questions!
This test auto-discovers PVs and reports the bind status of each PV, thereby pointing administrators to the PVs that are unbound, bound, or released, and to those that could not be reclaimed. This way, administrators can figure out whether the bind/reclamation status of a PV is the reason it could not be bound to a PVC. Also, if there are one or more available (unbound) PVs, administrators can use this test to verify the configuration of such PVs - i.e., their access mode and storage capacity. This will reveal whether those PVs remain unbound because their configuration does not match any open PVC. In addition, this test reveals the space utilization of the PVs and alerts administrators to those PVs that are facing severe space contention.
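The per-PV bind status this test reports can be illustrated by summarizing the `status.phase` field of each PV returned by the Kubernetes API's `/api/v1/persistentvolumes` endpoint. Below is a minimal sketch over a hypothetical, trimmed API response (the field names follow the Kubernetes API; the sample data is invented):

```python
from collections import Counter

def summarize_pv_phases(pv_list: dict) -> Counter:
    """Count PVs by status.phase (Available, Bound, Released, Failed),
    given a response shaped like GET /api/v1/persistentvolumes."""
    return Counter(item["status"]["phase"] for item in pv_list["items"])

# Hypothetical API response, trimmed to the fields this sketch uses.
sample = {"items": [
    {"metadata": {"name": "pv-1"}, "status": {"phase": "Bound"}},
    {"metadata": {"name": "pv-2"}, "status": {"phase": "Released"}},
    {"metadata": {"name": "pv-3"}, "status": {"phase": "Available"}},
    {"metadata": {"name": "pv-4"}, "status": {"phase": "Failed"}},
]}
print(summarize_pv_phases(sample))
```

A Released or Failed count greater than zero is the kind of condition an administrator would then drill into via the detailed diagnosis of the Status measure.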
Target of the test : A Kubernetes/OpenShift Cluster
Agent deploying the test : A remote agent
Outputs of the test : One set of results for each Persistent Volume in the Kubernetes/OpenShift cluster being monitored
Parameter | Description |
---|---|
Test Period | How often should the test be executed. |
Host | The IP address of the host for which this test is to be configured. |
Port | Specify the port at which the specified Host listens. By default, this is 6443. |
Load Balancer / Master Node IP | To run this test and report metrics, the eG agent needs to connect to the Kubernetes API on the master node and run API commands. To enable this connection, the eG agent has to be configured with either the IP address of the master node or the IP address of the load balancer that fronts the master nodes. By default, this parameter will display the Load Balancer / Master Node IP that you configured when manually adding the Kubernetes/OpenShift cluster for monitoring, using the Kubernetes Cluster Preferences page in the eG admin interface (see Figure 3). The steps for managing the cluster using the eG admin interface are discussed elaborately in How to Monitor the Kubernetes/OpenShift Cluster Using eG Enterprise? Whenever the eG agent runs this test, it uses the IP address displayed (by default) against this parameter to connect to the Kubernetes API. If this IP address changes at a later point in time, make sure you update this parameter by overriding its default setting. |
K8s Cluster API Prefix | By default, this parameter is set to none. Do not disturb this setting if you are monitoring a Kubernetes/OpenShift cluster. To run this test and report metrics for Rancher clusters, the eG agent needs to connect to the Kubernetes API on the master node of the Rancher cluster and run API commands. The Kubernetes API of Rancher clusters is of the format http(s)://{IP address of Kubernetes}/{api endpoints}. The Server section of the kubeconfig.yaml file downloaded from the Rancher console helps in identifying the Kubernetes API of the cluster. For example, https://{IP address of Kubernetes}/k8s/clusters/c-m-bznxvg4w/ is typically the URL of the Kubernetes API of a Rancher cluster. For the eG agent to connect to the master node of a Rancher cluster and pull metrics, it should be made aware of the API endpoints in the Kubernetes API of that cluster. To aid this, specify the API endpoints available in the Kubernetes API of the Rancher cluster against this parameter. In our example, this parameter would be specified as /k8s/clusters/c-m-bznxvg4w/. |
SSL | By default, the Kubernetes/OpenShift cluster is SSL-enabled. This is why the eG agent, by default, connects to the Kubernetes API via an HTTPS connection. Accordingly, this flag is set to Yes by default. If the cluster is not SSL-enabled in your environment, set this flag to No. |
Authentication Token | The eG agent requires an authentication bearer token to access the Kubernetes API, run API commands on the cluster, and pull metrics of interest. The steps for generating this token have been detailed in How Does eG Enterprise Monitor a Kubernetes/OpenShift Cluster? Typically, once you generate the token, you can associate it with the target Kubernetes/OpenShift cluster when manually adding that cluster for monitoring using the eG admin interface. The steps for managing the cluster using the eG admin interface are discussed elaborately in How to Monitor the Kubernetes/OpenShift Cluster Using eG Enterprise? By default, this parameter will display the Authentication Token that you provided in the Kubernetes Cluster Preferences page of the eG admin interface when manually adding the cluster for monitoring (see Figure 3). Whenever the eG agent runs this test, it uses the token displayed (by default) against this parameter to access the API and pull metrics. If, for any reason, you generate a new authentication token for the target cluster at a later point in time, make sure you update this parameter with the change: copy the new token and paste it against this parameter. |
Proxy Host | If the eG agent connects to the Kubernetes API on the master node via a proxy server, provide the IP address of the proxy server here. If no proxy is used, the default setting - none - of this parameter need not be changed. |
Proxy Port | If the eG agent connects to the Kubernetes API on the master node via a proxy server, provide the port number at which that proxy server listens here. If no proxy is used, the default setting - none - of this parameter need not be changed. |
Proxy Username, Proxy Password, Confirm Password | These parameters are applicable only if the eG agent uses a proxy server to connect to the Kubernetes/OpenShift cluster, and that proxy server requires authentication. In this case, provide a valid user name and password against the Proxy Username and Proxy Password parameters, respectively. Then, confirm the password by retyping it in the Confirm Password text box. If no proxy server is used, or if the proxy server used does not require authentication, the default setting - none - of these parameters need not be changed. |
DD Frequency | Refers to the frequency with which detailed diagnosis measures are to be generated for this test. The default is 1:1. This indicates that, by default, detailed measures will be generated every time this test runs, and also every time the test detects a problem. You can modify this frequency if you so desire. Also, if you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD Frequency. |
Detailed Diagnosis | To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, click the Off option. The option to selectively enable/disable the detailed diagnosis capability will be available only if certain pre-requisite conditions are fulfilled. |
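To see how the connection parameters above fit together, here is a hypothetical sketch of how a client might assemble the API URL from the Host, Port, SSL, and K8s Cluster API Prefix settings, with the Authentication Token sent as a bearer header. This is an illustration only, not eG's actual implementation; `/api/v1/persistentvolumes` is the standard Kubernetes endpoint for listing PVs:

```python
def build_api_url(host: str, port: int, ssl: bool, api_prefix: str = "none") -> str:
    """Assemble the base URL from the test parameters above.
    api_prefix is 'none' for plain Kubernetes/OpenShift clusters; for a Rancher
    cluster it is the endpoint path, e.g. '/k8s/clusters/c-m-bznxvg4w/'."""
    scheme = "https" if ssl else "http"           # the SSL flag picks the scheme
    prefix = "" if api_prefix.lower() == "none" else "/" + api_prefix.strip("/")
    return f"{scheme}://{host}:{port}{prefix}"

url = build_api_url("10.0.0.5", 6443, ssl=True) + "/api/v1/persistentvolumes"
headers = {"Authorization": "Bearer <token>"}     # the configured Authentication Token
print(url)   # https://10.0.0.5:6443/api/v1/persistentvolumes
```

A request through a proxy would additionally route this URL via the configured Proxy Host and Proxy Port, authenticating with the Proxy Username/Password if required.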
Measurement | Description | Measurement Unit | Interpretation |
---|---|---|---|
Status | Indicates the current status of this PV. | | This measure can report any of the following values: Available, Bound, Released, or Failed. In the graph of this measure, however, the state is indicated using numeric equivalents of these values only. Using the detailed diagnosis of this measure, you can determine the namespace to which a PV belongs, the PVC that binds the PV (in case the PV is Bound), the reclaim policy configured for the PV, and the storage class (in case the PV is dynamically provisioned). |
Time since storage creation | Indicates how old this PV is. | | The value of this measure is expressed in number of days, hours, and minutes. |
Access modes | Indicates the access modes configured for this PV. | | A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. The access modes are ReadWriteOnce (the volume can be mounted as read-write by a single node), ReadOnlyMany (the volume can be mounted as read-only by many nodes), and ReadWriteMany (the volume can be mounted as read-write by many nodes). These access modes also represent the values that this measure can report. In the graph of this measure, however, the access mode is indicated using numeric equivalents of these values only. |
Storage assigned | Indicates the storage capacity configured for this PV. | GB | |
Total capacity | Indicates the total capacity of this PV. | GB | |
Used space | Indicates the amount of space utilized in this PV. | GB | A value close to the value of the Total capacity measure indicates that the PV is currently running out of space. |
Free space | Indicates the amount of space that is available for use in this PV. | GB | A high value is desired for this measure. |
Percent usage | Indicates the percentage of space utilized in this PV. | Percent | A value close to 100 percent indicates that the PV is running out of space. Compare the value of this measure across Persistent Volumes to identify the Persistent Volume that is frequently running out of space. |
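For reference, the Percent usage measure is simply the ratio of the used space to the total capacity of the PV; a minimal sketch (the rounding convention here is an assumption):

```python
def percent_usage(used_gb: float, total_gb: float) -> float:
    """Space utilization of a PV as a percentage of its total capacity."""
    return round(100.0 * used_gb / total_gb, 2)

# A PV with 45 GB used out of 50 GB is nearing capacity.
print(percent_usage(45.0, 50.0))   # 90.0
```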
The detailed diagnosis of the Status measure reveals the namespace to which a PV belongs, the PVC that binds the PV (in case the PV is Bound), the reclaim policy configured for the PV, and the storage class (in case the PV is dynamically provisioned). If a PV is in the Released (but not reclaimed) state or in the Failed state, then you can use the detailed diagnosis to identify what reclaim policy applies to that PV, so you can easily troubleshoot the failure.
Figure 1 : The detailed diagnosis of the Status measure reported by the Kube Persistent Volumes test