NetApp Block I/O Protocol Test
Volumes are data containers. Clients can access the data in volumes through the access protocols supported by Data ONTAP. These protocols include Network File System (NFS), Common Internet File System (CIFS), HyperText Transfer Protocol (HTTP), Web-based Distributed Authoring and Versioning (WebDAV), Fibre Channel Protocol (FCP), and Internet SCSI (iSCSI).
If one or more of these protocols is suddenly rendered unavailable, clients will no longer be able to access critical data through it. Moreover, whenever request-processing delays are noticed, administrators need to determine which protocol took the longest to perform read/write operations, so that slow protocol services can be identified. The NetApp Block I/O Protocol test provides these protocol-centric insights. For every protocol used to access data volumes, this test reports the availability of the protocol service, the rate of I/O operations performed through the protocol, and the time the protocol takes to process read/write requests, so that problem-prone protocols can be accurately identified.
Target of the test : A NetApp Unified Storage
Agent deploying the test : An external/
Outputs of the test : One set of results for each protocol that is active on the NetApp storage system being monitored.
| Parameters | Description |
|---|---|
| Test Period | How often should the test be executed. |
| Host | The host for which the test is to be configured. |
| Port | The port at which the specified host listens. By default, this is NULL. |
| User | Specify the name of a user who possesses the following privileges: login-http-admin,api-aggr-check-spare-low,api-aggr-list-info,api-aggr-mediascrub-list-info,api-aggr-scrub-list-info,api-cifs-status,api-clone-list-status,api-disk-list-info,api-fcp-adapter-list-info,api-fcp-adapter-stats-list-info,api-fcp-service-status,api-file-get-file-info,api-file-read-file,api-iscsi-connection-list-info,api-iscsi-initiator-list-info,api-iscsi-service-status,api-iscsi-session-list-info,api-iscsi-stats-list-info,api-lun-config-check-alua-conflicts-info,api-lun-config-check-cfmode-info,api-lun-config-check-info,api-lun-config-check-single-image-info,api-lun-list-info,api-nfs-status,api-perf-object-get-instances-iter*,api-perf-object-instance-list-info,api-quota-report-iter*,api-snapshot-list-info,api-vfiler-list-info,api-volume-list-info-iter*. If such a user does not pre-exist, you can create one using the steps detailed in Creating a New User with the Privileges Required for Monitoring the NetApp Unified Storage. |
| Password | Specify the password corresponding to the above-mentioned User. |
| Confirm Password | Confirm the Password by retyping it here. |
| Authentication Mechanism | To collect metrics from the NetApp Unified Storage system, the eG agent connects to the ONTAP management APIs over HTTP or HTTPS. By default, this connection is authenticated using the LOGIN_PASSWORD authentication mechanism, which is why LOGIN_PASSWORD is displayed as the default authentication mechanism. |
| Use SSL | Set the Use SSL flag to Yes if SSL (Secure Sockets Layer) is to be used to connect to the NetApp Unified Storage system, and No if it is not. |
| API Port | By default, the NetApp Unified Storage system listens only on port 80 (if not SSL-enabled) or port 443 (if SSL-enabled). Accordingly, the eG agent connects to port 80 if the Use SSL flag is set to No, and to port 443 if it is set to Yes; this is why the API Port parameter is set to default. In some environments, however, these default ports might not apply. In such a case, specify against the API Port parameter the exact port at which the NetApp Unified Storage system in your environment listens, so that the eG agent communicates with that port when collecting metrics. |
| vFilerName | A vFiler is a virtual storage system created using MultiStore, which enables you to partition the storage and network resources of a single storage system so that it appears as multiple storage systems on the network. If the NetApp Unified Storage system is partitioned into a set of vFilers, specify the name of the vFiler you wish to monitor in the vFilerName text box. If the system is not partitioned, it is monitored as a single vFiler, and the default value of none is displayed in this text box. |
| Timeout | Specify the duration (in seconds) beyond which the test will time out if no response is received from the device. The default is 120 seconds. |
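On Data ONTAP 7-Mode, a user holding the capabilities listed against the User parameter is typically provisioned with the `useradmin` CLI. The sketch below is illustrative only: the role, group, and user names are assumptions, and the capability list is abbreviated — supply the full list from the User parameter description.

```shell
# Sketch only: create a role carrying the monitoring capabilities
# (capability list abbreviated here; use the full list from the User parameter)
useradmin role add eg_monitor_role -a login-http-admin,api-aggr-list-info,api-volume-list-info-iter*

# Attach the role to a group, then create the monitoring user in that group
# (eg_monitor_group and eg_user are illustrative names)
useradmin group add eg_monitor_group -r eg_monitor_role
useradmin user add eg_user -g eg_monitor_group
```

The user created this way can then be supplied against the User and Password parameters of this test.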
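The connection behaviour described by the Authentication Mechanism, Use SSL, and API Port parameters can be sketched as follows. This is an illustrative Python sketch, not eG agent code: the host name and credentials are placeholders, the endpoint path follows the 7-Mode ONTAPI (ZAPI) convention, and LOGIN_PASSWORD is modelled here as HTTP Basic authentication.

```python
import base64

def build_request(host, use_ssl=False, api_port=None, user="eg_user", password="secret"):
    """Frame an ONTAPI (ZAPI) request; nothing is sent over the network."""
    scheme = "https" if use_ssl else "http"
    # The 'default' behaviour of the API Port parameter: 443 when SSL is on, else 80
    port = api_port or (443 if use_ssl else 80)
    url = f"{scheme}://{host}:{port}/servlets/netapp.servlets.admin.XMLrequest_filer"
    body = (
        "<?xml version='1.0' encoding='utf-8'?>"
        "<netapp version='1.1' xmlns='http://www.netapp.com/filer/admin'>"
        "<iscsi-service-status/>"   # one of the APIs the User must be able to call
        "</netapp>"
    )
    # LOGIN_PASSWORD is modelled as HTTP Basic authentication in this sketch
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {auth}", "Content-Type": "text/xml"}
    return url, headers, body

url, headers, body = build_request("filer01", use_ssl=True)
```

In a real agent, the HTTP call issued with these pieces would also be bounded by the Timeout parameter (120 seconds by default).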
| Measurement | Description | Measurement Unit | Interpretation |
|---|---|---|---|
| Is service available? | Indicates whether this protocol service is currently available. | | This measure reports the value Yes if this protocol service is currently available, and the value No if it is not. The values reported by this measure and their numeric equivalents are available in the table below. Note: this measure reports the Measure Values listed in that table while indicating whether the protocol service is currently available; in the graph of this measure, however, the state is indicated using only the corresponding Numeric Values. |
| Operations rate | Indicates the rate at which read/write operations were performed by users through this block protocol. | Ops/Sec | |
| Latency | Indicates the average time taken to perform operations through this protocol. | Millisecs | A low value is desired for this measure. When users complain of slowdowns while accessing data volumes, compare the value of this measure across protocols to determine which protocol took the longest to perform read/write operations. |
| Read operations rate | Indicates the rate at which read operations are performed across all LUNs of this storage system through this protocol. | Ops/Sec | Very high values for the latency measures indicate roadblocks to rapid reading/writing by the storage device. By observing the variations in these measures over time, you can determine whether the latencies are sporadic or consistent. Consistent delays in reading/writing could indicate persistent bottlenecks in the storage device to speedy I/O processing. |
| Read latency | Indicates the average time taken to perform read operations across all LUNs through this protocol. | Millisecs | |
| Data read | Indicates the rate at which data is read from this storage system through this protocol. | Bytes/Sec | |
| Write operations rate | Indicates the rate at which write operations were performed across all LUNs of this storage system through this protocol. | Ops/Sec | |
| Write latency | Indicates the average time taken to perform write operations across all LUNs through this protocol. | Millisecs | |
| Data written | Indicates the rate at which data is written to this storage system through this protocol. | Bytes/Sec | |
| Partner read latency | Indicates the average time taken to perform read operations across all LUNs of the partner system (i.e., either the master/slave in a cluster setup of this storage system) through this protocol. | Millisecs | Very high values for these measures indicate roadblocks to rapid reading/writing by the storage device. By observing the variations in these measures over time, you can determine whether the latencies are sporadic or consistent. Consistent delays in reading/writing could indicate persistent bottlenecks in the storage device to speedy I/O processing. |
| Partner write latency | Indicates the average time taken to perform write operations on the LUNs of the partner system (i.e., either the master/slave in a cluster setup of this storage system) through this protocol. | Millisecs | |
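As the Latency interpretation above suggests, comparing that measure across protocols pinpoints the slowest protocol service. A minimal sketch, using made-up sample values:

```python
# Hypothetical per-protocol average latencies (Millisecs), as reported by the
# Latency measure of this test; the figures are illustrative sample data only.
latency_ms = {"iscsi": 4.2, "fcp": 1.8}

# The protocol that took the longest to process read/write requests
slowest = max(latency_ms, key=latency_ms.get)
print(slowest)  # -> iscsi
```

In practice these values come from the per-protocol result sets this test reports, one set for each active block protocol.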