EMC RAID Arrays Test
This test monitors the current state, overall health, and load-balancing capability of each storage array in the EMC CLARiiON storage system. With the help of this test, administrators can be proactively alerted to potential array failures, slowdowns, and overload conditions. Irregularities in the distribution of I/O load across arrays also come to light, prompting administrators to fine-tune the load-balancing algorithm.
This test is disabled by default. To enable the test, go to the enable/disable tests page using the menu sequence: Agents -> Tests -> Enable/Disable, pick EMC CLARiiON SAN as the desired Component type, set Performance as the Test type, choose the test from the DISABLED TESTS list, and click the < button to move the test to the ENABLED TESTS list. Finally, click the Update button.
Target of the test: An EMC CLARiiON storage device
Agent deploying the test: A remote agent
Outputs of the test: One set of results for each RAID array on the storage system.
Parameter | Description
---|---
Test Period | How often should the test be executed.
Host | The IP address of the storage device for which this test is to be configured.
Port | The port number at which the storage device listens. The default is NULL.
User Name and Password | The SMI-S Provider is paired with the EMC CIM Object Manager Server to provide an SMI-compliant interface for CLARiiON arrays. Against the User Name and Password parameters, specify the credentials of a user who has been assigned Monitor access to the EMC CIM Object Manager Server paired with the EMC CLARiiON's SMI-S provider.
Confirm Password | Confirm the password by retyping it here.
SSL | Set this flag to Yes if the storage device being monitored is SSL-enabled.
IsEmbedded | By default, this flag is set to False for an EMC CLARiiON device. Do not disturb this default setting.
SerialNumber | If the SMI-S provider has been implemented as a proxy, it can be configured to manage multiple storage devices. You will therefore have to explicitly specify which storage system the eG agent should monitor. Since each storage system is uniquely identified by its serial number, specify that number here. The serial number of an EMC CLARiiON device is of the format FCNMM094900059.
NameSpace | Specify the namespace that uniquely identifies the profiles specific to the provider in use. For EMC CLARiiON, this parameter is set to root/emc by default.
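To see how these parameters fit together, the sketch below uses the open-source pywbem Python library (not part of eG Enterprise) to open the same kind of SMI-S connection the test requires. The URL, credentials, and the EMC-specific Clar_StorageSystem class name are illustrative assumptions, not values prescribed by this test.

```python
# Minimal sketch of an SMI-S client connection, assuming the pywbem library.
# The URL, credentials, and EMC-specific class name below are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://192.168.10.5:5989",    # Host and Port; https because SSL is Yes
    ("monitor_user", "secret"),     # a user with Monitor access
    default_namespace="root/emc",   # the NameSpace parameter
    no_verification=True,           # skip certificate checks (lab use only)
)

# A proxy provider can manage several arrays; listing the storage systems
# shows the serial numbers from which the SerialNumber parameter is chosen.
for system in conn.EnumerateInstances("Clar_StorageSystem"):
    print(system["Name"])           # e.g. a name embedding FCNMM094900059
```

Note that 5989 is the conventional port for SSL-enabled CIM-XML providers (5988 without SSL), so the Port and SSL parameters must agree with each other.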
Measurement | Description | Measurement Unit | Interpretation
---|---|---|---
Operational status | Indicates the current operational state of this RAID array. | | The values that this measure can report and their corresponding numeric values are discussed in the table below. Note: By default, this measure reports the Measure Values discussed above to indicate the operational state of a storage array. In the graph of this measure, however, operational states are represented using the numeric equivalents only.
Detailed operational state | Describes the current operational state of this RAID array. | | This measure is reported only if the API provides a detailed operational state. Typically, the detailed state describes why the storage array is in a particular operational state. For instance, if the Operational status measure reports the value Stopping for a storage array, this measure explains why that storage array is being stopped. The values that this measure can report and their corresponding numeric values are discussed in the table below. Note: By default, this measure reports the Measure Values discussed above to indicate the detailed operational state of an array. In the graph of this measure, however, detailed operational states are represented using the numeric equivalents only.
Data transmitted | Indicates the rate at which data was transmitted by this RAID array. | MB/Sec |
IOPS | Indicates the rate at which I/O operations were performed on this RAID array. | IOPS | Compare the value of this measure across storage arrays to know which storage array handled the maximum number of I/O requests and which handled the least. If the gap between the two is very wide, it indicates serious irregularities in load balancing across storage arrays. You may then want to take a look at the Reads and Writes measures to understand what to fine-tune: the load-balancing algorithm for read requests or the one for write requests.
Reads | Indicates the rate at which read operations were performed on this RAID array. | Reads/Sec | Compare the value of this measure across storage arrays to know which storage array handled the maximum number of read requests and which handled the least.
Writes | Indicates the rate at which write operations were performed on this RAID array. | Writes/Sec | Compare the value of this measure across storage arrays to know which storage array handled the maximum number of write requests and which handled the least.
Data reads | Indicates the rate at which data is read from this RAID array. | MB/Sec | Compare the values of this measure and the Data written measure across storage arrays to identify the slowest storage array in terms of servicing read and write requests, respectively.
Data written | Indicates the rate at which data is written to this RAID array. | MB/Sec |
Average read size | Indicates the amount of data read from this RAID array per I/O operation. | MB/Op | Compare the values of this measure and the Average write size measure across storage arrays to identify the slowest storage array in terms of servicing read and write requests, respectively.
Average write size | Indicates the amount of data written to this RAID array per I/O operation. | MB/Op |
Read hit | Indicates the percentage of read requests that were serviced by the cache of this RAID array. | Percent | A high value is desired for this measure. A very low value is a cause for concern, as it indicates that cache usage is very poor; this in turn implies that direct storage array accesses, which are expensive operations, are high.
Write hit | Indicates the percentage of write requests that were serviced by the cache of this RAID array. | Percent | A high value is desired for this measure. A very low value is a cause for concern, as it indicates that cache usage is very poor; this in turn implies that direct storage array accesses, which are resource-intensive operations, are high.
EFD data flushed SPA | Indicates the amount of data flushed to the EFDs from the write cache of this RAID array through storage processor A. | KB | One of the key features of EMC storage systems is the availability of Enterprise Flash Drives (EFDs). With this capability, EMC creates ultra-high-performance “Tier 0” storage that removes the performance limitations of magnetic disk drives. EFDs increase the performance of latency-sensitive applications, and are ideal for applications with high transaction rates and those requiring the fastest possible storage and retrieval. EMC CLARiiON storage arrays support both enabled and disabled read/write caches; the default recommendation is to turn off both read and write caches on all LUNs that reside on EFDs. If the read and write caches are disabled, this measure and the EFD data flushed SPB measure will not report any values.
EFD data flushed SPB | Indicates the amount of data flushed to the EFDs from the write cache of this RAID array through storage processor B. | KB |
EFD dirty cache SPA | Indicates the percentage of pages in the write cache that have received new data from hosts but have not yet been flushed to the EFDs through storage processor A. | Percent | A reasonably high percentage of dirty pages is desirable, as it increases the chance of a read being served from the cache or of additional writes to the same block of data being absorbed by the cache. An I/O served from the cache performs better than one that must retrieve data from disk. That is why the default watermarks are usually around 60/80% or 70/90%. Dirty pages should not reach 100%; a healthy cache fluctuates between the high and low watermarks. Periodic spikes or drops outside the watermarks are acceptable, but consistently hitting 100% indicates that the write cache is overstressed.
EFD dirty cache SPB | Indicates the percentage of pages in the write cache that have received new data from hosts but have not yet been flushed to the EFDs through storage processor B. | Percent |
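The rate and ratio measures above are derived from cumulative I/O counters sampled at consecutive measurement periods. As a rough illustration of the arithmetic (an assumption about the underlying computation, not the eG agent's actual implementation), the sketch below uses the standard CIM_BlockStorageStatisticalData counters ReadIOs, WriteIOs, KBytesRead, KBytesWritten, and ReadHitIOs from the SMI-S Block Server Performance subprofile:

```python
# Illustrative sketch: deriving the per-array measures from two samples of
# cumulative SMI-S counters (CIM_BlockStorageStatisticalData). This shows
# the arithmetic only; it is not eG's actual code.

def derive_measures(prev: dict, curr: dict, interval_secs: float) -> dict:
    reads = (curr["ReadIOs"] - prev["ReadIOs"]) / interval_secs            # Reads/Sec
    writes = (curr["WriteIOs"] - prev["WriteIOs"]) / interval_secs         # Writes/Sec
    mb_read = (curr["KBytesRead"] - prev["KBytesRead"]) / 1024 / interval_secs          # MB/Sec
    mb_written = (curr["KBytesWritten"] - prev["KBytesWritten"]) / 1024 / interval_secs  # MB/Sec
    read_ios = curr["ReadIOs"] - prev["ReadIOs"]
    read_hits = curr["ReadHitIOs"] - prev["ReadHitIOs"]
    return {
        "IOPS": reads + writes,
        "Reads": reads,
        "Writes": writes,
        "Data reads": mb_read,
        "Data written": mb_written,
        "Average read size": mb_read / reads if reads else 0.0,            # MB/Op
        "Read hit": 100.0 * read_hits / read_ios if read_ios else 0.0,     # Percent
    }
```

For example, if an array's ReadIOs counter grew by 12,000 and its KBytesRead counter by 614,400 over a 300-second period, the test would report 40 Reads/Sec, 2 MB/Sec of Data reads, and an Average read size of 0.05 MB/Op.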