AWS Simple Storage Service (S3) Test

Amazon Simple Storage Service is storage for the Internet. Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.

To upload data to the cloud, you first create a bucket in one of the AWS Regions. A bucket is a container for data stored in Amazon S3. Once a bucket is created, you can then upload any number of objects to the bucket. Objects are the fundamental entities stored in Amazon S3, and consist of object data and metadata. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is addressable using the URL http://johnsmith.s3.amazonaws.com/photos/puppy.jpg

To create objects in a bucket and to manipulate these objects (say, to retrieve objects from a bucket or delete them), administrators often make Amazon S3 REST requests over HTTP - e.g., HTTP GET, PUT, LIST, DELETE, etc. By monitoring the HTTP requests to Amazon S3 and their responses, operational issues can be quickly detected. This is exactly what administrators can achieve using the AWS Simple Storage Service (S3) - Request Statistics test!
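
For illustration, the following is a minimal sketch (in Python, using the boto3 SDK) of the request types this test tracks; the bucket and object key reuse the sample values above and are placeholders only.

    # Minimal sketch of the S3 request types monitored by this test (boto3).
    # Bucket name and object key are the sample values used above.
    import boto3

    s3 = boto3.client("s3")          # credentials resolved from the environment
    bucket, key = "johnsmith", "photos/puppy.jpg"

    s3.put_object(Bucket=bucket, Key=key, Body=b"...image bytes...")   # PUT request
    obj = s3.get_object(Bucket=bucket, Key=key)                        # GET request
    data = obj["Body"].read()                                          # counted as data downloaded
    listing = s3.list_objects_v2(Bucket=bucket, Prefix="photos/")      # LIST request
    s3.delete_object(Bucket=bucket, Key=key)                           # DELETE request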

This test monitors the HTTP requests to each bucket, promptly captures error responses, and brings them to the notice of administrators. In addition, the test also measures the time taken by S3 to service the requests, and in the process, warns administrators of an impending processing slowdown.

Note:

For this test to report metrics, you need to enable Resource Metrics collection for every S3 bucket in AWS that you want to monitor. To know how to achieve this, refer to Enabling Resource Metrics for S3 Bucket.
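
For reference, the sketch below shows one way a per-bucket request-metrics configuration can be created programmatically with boto3; the bucket name and the filter Id ("EntireBucket") are placeholders, and the steps in the Enabling Resource Metrics for S3 Bucket topic remain the authoritative procedure.

    # Hedged sketch: enable a CloudWatch request-metrics configuration on a bucket.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_metrics_configuration(
        Bucket="johnsmith",                            # placeholder bucket name
        Id="EntireBucket",                             # placeholder filter Id
        MetricsConfiguration={"Id": "EntireBucket"},   # no Filter => whole bucket
    )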

Target of the test: Amazon Cloud

Agent deploying the test : A remote agent

Outputs of the test : One set of results for each bucket in each AWS region

First-level descriptor: AWS Region

Second-level descriptor: Bucket name

Configurable parameters for the test
Parameter Description

Test Period

How often should the test be executed.

Host

The host for which the test is to be configured.

Access Type

eG Enterprise monitors the AWS cloud using AWS API. By default, the eG agent accesses the AWS API using a valid AWS account ID, which is assigned a special role that is specifically created for monitoring purposes. Accordingly, the Access Type parameter is set to Role by default. Furthermore, to enable the eG agent to use this default access approach, you will have to configure the eG tests with a valid AWS Account ID to Monitor and the special AWS Role Name you created for monitoring purposes.

Some AWS cloud environments, however, may not support the role-based approach. Instead, they may allow cloud API requests only if such requests are signed by a valid Access Key and Secret Key. When monitoring such a cloud environment, therefore, you should change the Access Type to Secret. Then, you should configure the eG tests with a valid AWS Access Key and AWS Secret Key.

Note that the Secret option may not be ideal when monitoring high-security cloud environments. This is because such environments may enforce a security mandate that requires administrators to change the Access Key and Secret Key frequently. Because of this dynamic nature of the key-based approach, Amazon recommends the Role-based approach for accessing the AWS API.
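
For context, the Role-based pattern corresponds to the standard AWS assume-role flow, sketched below in Python with boto3; the account ID and role name are placeholders, and this illustrates the general mechanism only, not the eG agent's internal implementation.

    # Hedged sketch of role-based access: assume a monitoring role via STS and
    # use the temporary credentials for subsequent API calls.
    import boto3

    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/MonitoringRole",  # placeholder ARN
        RoleSessionName="monitoring-session",
    )
    creds = resp["Credentials"]

    cloudwatch = boto3.client(
        "cloudwatch",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )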

AWS Account ID to Monitor

This parameter appears only when the Access Type parameter is set to Role. Specify the AWS Account ID that the eG agent should use for connecting and making requests to the AWS API. To determine your AWS Account ID, follow the steps below:

  • Log in to the AWS management console with your credentials.

  • Click on your IAM user/role on the top right corner of the AWS Console. You will see a drop-down menu containing the Account ID (see Figure 1).

    Figure 1 : Identifying the AWS Account ID
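
Alternatively, if working AWS credentials are already at hand, the Account ID can be looked up programmatically; the sketch below uses the STS GetCallerIdentity API via boto3 and is purely illustrative.

    # Hedged sketch: print the 12-digit Account ID of the active credentials.
    import boto3

    print(boto3.client("sts").get_caller_identity()["Account"])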

AWS Role Name

This parameter appears when the Access Type parameter is set to Role. Specify the name of the role that you have specifically created on the AWS cloud for monitoring purposes. The eG agent uses this role and the configured Account ID to connect to the AWS Cloud and pull the required metrics. To know how to create such a role, refer to Creating a New Role.

AWS Access Key, AWS Secret Key, Confirm AWS Access Key, Confirm AWS Secret Key

These parameters appear only when the Access Type parameter is set to Secret. To monitor an Amazon cloud instance using the Secret approach, the eG agent has to be configured with the access key and secret key of a user with a valid AWS account. For this purpose, we recommend that you create a special user on the AWS cloud, obtain the access and secret keys of this user, and configure this test with these keys. The procedure for this has been detailed in the Obtaining an Access key and Secret key topic. Make sure you reconfirm the access and secret keys you provide here by retyping them in the corresponding Confirm text boxes.
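
For context, the Secret approach maps to the standard key-pair authentication supported by the AWS SDKs; a minimal boto3 sketch follows, with placeholder key values.

    # Hedged sketch of key-based access: build a session from an explicit key pair.
    import boto3

    session = boto3.Session(
        aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",                          # placeholder Access Key
        aws_secret_access_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # placeholder Secret Key
        region_name="us-east-1",
    )
    s3 = session.client("s3")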

Proxy Host and Proxy Port

In some environments, all communication with the AWS cloud and its regions could be routed through a proxy server. In such environments, you should make sure that the eG agent connects to the cloud via the proxy server and collects metrics. To enable metrics collection via a proxy, specify the IP address of the proxy server and the port at which the server listens against the Proxy Host and Proxy Port parameters. By default, these parameters are set to none, indicating that the eG agent does not communicate via a proxy.
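
For context, routing AWS API calls through a proxy corresponds to the proxy support built into the AWS SDKs; the sketch below uses botocore's Config object with a placeholder proxy address and assumes an unauthenticated proxy.

    # Hedged sketch: direct S3 API calls through a proxy server.
    import boto3
    from botocore.config import Config

    proxy_config = Config(proxies={"https": "http://192.168.10.5:3128"})  # placeholder host:port
    s3 = boto3.client("s3", config=proxy_config)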

Proxy User Name, Proxy Password, and Confirm Password

If the proxy server requires authentication, then specify a valid proxy user name and password in the Proxy User Name and Proxy Password parameters, respectively. Then, confirm the password by retyping it in the Confirm Password text box. By default, these parameters are set to none, indicating that the proxy server does not require authentication.

Proxy Domain and Proxy Workstation

If a Windows NTLM proxy is to be configured for use, then you will additionally have to configure the Windows domain name and the Windows workstation name required for the same against the Proxy Domain and Proxy Workstation parameters. If the environment does not support a Windows NTLM proxy, set these parameters to none.

Exclude Region

Here, you can provide a comma-separated list of region names or patterns of region names that you do not want to monitor. For instance, to exclude regions with names that contain 'east' and 'west' from monitoring, your specification should be: *east*,*west*
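
The wildcards here behave like shell-style patterns; the sketch below is a hedged illustration of how such an exclude list could be applied to region names, assuming fnmatch-style matching.

    # Hedged sketch: filter out regions matching the exclude patterns.
    from fnmatch import fnmatch

    exclude = "*east*,*west*".split(",")
    regions = ["us-east-1", "us-west-2", "eu-central-1", "ap-south-1"]
    monitored = [r for r in regions if not any(fnmatch(r, p) for p in exclude)]
    print(monitored)   # ['eu-central-1', 'ap-south-1']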

Measurements made by the test
Measurement Description Measurement Unit Interpretation

All requests

Indicates the total number of HTTP requests made to this bucket.

Number

This is a good indicator of the workload of a bucket.

Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates.
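
For reference, per-bucket S3 request metrics are published to Amazon CloudWatch; the sketch below shows one way the AllRequests count can be read with boto3. The bucket name and FilterId are placeholders (the FilterId must match the bucket's request-metrics configuration), and this illustrates the underlying AWS API only, not how the eG agent collects its data.

    # Hedged sketch: sum a bucket's AllRequests metric over the last 10 minutes.
    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    now = datetime.now(timezone.utc)

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="AllRequests",
        Dimensions=[
            {"Name": "BucketName", "Value": "johnsmith"},      # placeholder bucket
            {"Name": "FilterId", "Value": "EntireBucket"},     # placeholder filter Id
        ],
        StartTime=now - timedelta(minutes=10),
        EndTime=now,
        Period=60,                 # request metrics are published at 1-minute granularity
        Statistics=["Sum"],
    )
    total_requests = sum(p["Sum"] for p in stats["Datapoints"])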

Get requests

Indicates the number of HTTP GET requests made for objects in this bucket.

Number

Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate.

Put requests

Indicates the number of HTTP PUT requests made for objects in this bucket.

Number

Delete requests

Indicates the number of HTTP DELETE requests made for objects in this bucket. This also includes delete multiple objects requests.

Number

List requests

Indicates the number of HTTP requests made to this bucket that list the contents of the bucket.

Number

Head requests

Indicates the number of HTTP HEAD requests made to this bucket.

Number

Post requests

Indicates the number of HTTP POST requests made to this bucket.

Number

Data downloaded

Indicates the amount of data downloaded from this bucket.

KB

Data uploaded

Indicates the amount of data uploaded to this bucket.

KB

HTTP 4XX client errors

Indicates the number of HTTP requests to this bucket that returned the HTTP 4XX client error status code.

Number

This class of status code is intended for situations in which the error seems to have been caused by the client.

Ideally, the value of this measure should be 0.

HTTP 5XX server errors

Indicates the number of HTTP requests to this bucket that returned an HTTP 5xx server error status code.

Number

Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request.

Ideally, the value of this measure should be 0.

First byte latency

Indicates the time that elapses between when this bucket receives a complete request and when it starts returning a response to it.

Seconds

A low value is desired for this measure.

Total request latency

Indicates the time that elapses, per request to this bucket, from the first byte received to the last byte sent.

Seconds

This metric includes the time taken to receive the request body and send the response body, which is not included in the First byte latency measure.

If the value of this measure is very high for a bucket, then you may want to follow the best practices guidelines discussed below to ensure low latency access to and better performance of Amazon S3. These guidelines vary with the type of workload - i.e., a workload with a mix of request types and a GET-intensive workload.

  • Workload with a mix of request types: If your requests are typically a mix of GET, PUT, DELETE, or GET Bucket (list objects), choosing appropriate key names for your objects will ensure better performance by providing low-latency access to the Amazon S3 index. It will also ensure scalability regardless of the number of requests you send per second.
  • Workloads that are GET-intensive: If the bulk of your workload consists of GET requests, we recommend using the Amazon CloudFront content delivery service.