Oracle RAC Timed Workload Test

Workload analysis for an Oracle database server involves:

  • Determining the number of transactions that applications execute on each node/database instance of the Oracle cluster at any given point in time;
  • Understanding the type of database cluster operations these transactions trigger – executes? updates? reads? writes? rollbacks? parses?
  • Knowing how many users are active on each node/instance of the Oracle cluster at a given point in time;
  • Determining how quickly the server processes this load and how much processing power is spent in doing so.

This not only reveals the current workload of the database server, but also highlights the processing ability of the server and pinpoints where processing bottlenecks lie. To perform such detailed workload analysis on each node in an Oracle cluster, administrators can use the Oracle RAC Timed Workload test. This test reports the current CPU usage of the server to indicate its current load. In addition, the test reveals the number and type of transactions the server processes every second, so that administrators can understand how well the server handles the load and can accurately identify where bottlenecks lie. By comparing the CPU usage of the server with its processing ability, administrators can determine whether the server requires additional CPU resources for improved performance.

Target of the test : Oracle RAC

Agent deploying the test : An internal agent

Outputs of the test : One set of results for every node in the Oracle cluster

Configurable parameters for the test
  1. TEST PERIOD - How often should the test be executed.
  2. Host – The host for which the test is to be configured.
  3. Port - The port on which the server is listening.
  4. orasid - The SID (system identifier) of the Oracle instance to be monitored.
  5. service name - A ServiceName exists for the entire Oracle RAC system. When clients connect to an Oracle cluster using the ServiceName, the cluster routes the request to any available database instance in the cluster. By default, the service name is set to none. In this case, the test connects to the cluster using the orasid and pulls metrics from the database instance that corresponds to that orasid. If a valid service name is specified instead, then the test connects to the cluster using that service name, and is able to pull metrics from any available database instance in the cluster.

    To know the ServiceName of a cluster, execute the following query on any node in the target cluster:

    select name, value from v$parameter where name = 'service_names';

  6. User – In order to monitor an Oracle database server, a special database user account has to be created in every Oracle database instance that requires monitoring. A Click here hyperlink is available in the test configuration page, using which a new Oracle database user can be created. Alternatively, you can manually create the special database user. When doing so, ensure that this user is vested with the select_catalog_role and create session privileges.

    The sample script we recommend for user creation (in Oracle database server versions before 12c) for eG monitoring is:

    create user oraeg identified by oraeg;

    create role oratest;

    grant create session to oratest;

    grant select_catalog_role to oratest;

    grant oratest to oraeg;

    The sample script we recommend for user creation (in Oracle database server 12c) for eG monitoring is:

    alter session set container=<Oracle_service_name>;

    create user <user_name> identified by <user_password> container=current default tablespace <name_of_default_tablespace> temporary tablespace <name_of_temporary_tablespace>;

    grant create session to <user_name>;

    grant select_catalog_role to <user_name>;

    The name of this user has to be specified here.
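
    After running either script, the grants can be cross-checked by a DBA. The queries below are an illustrative sketch against the standard DBA_ROLE_PRIVS and DBA_SYS_PRIVS views; the user and role names assume the pre-12c sample script above:

    ```sql
    -- Confirm the monitoring user holds the custom role,
    -- and that the role carries the required privileges
    select grantee, granted_role from dba_role_privs where grantee = 'ORAEG';
    select grantee, privilege    from dba_sys_privs  where grantee = 'ORATEST';
    ```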

  7. Password – Password of the specified database user
  8. Confirm password – Confirm the password by retyping it here.
Measurements made by the test
Measurement | Description | Measurement Unit | Interpretation

Database CPU usage

Indicates the percentage of CPU used by the server.

Percent

A value close to 100% is indicative of excessive CPU usage. This in turn indicates that the server is using up all its processing power to service its current workload. It could be because the load is very high. It could also be owing to a few resource-intensive transactions executing on the server. In case of the former, you may want to allocate more CPU resources to the server, so as to enhance its processing ability.  
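
For a manual cross-check of per-node CPU usage, a query along these lines can be run against Oracle's GV$SYSMETRIC view (this is an illustrative sketch using Oracle's published metric names, not the test's own query):

```sql
-- Recent per-instance CPU metrics from the system metric repository
select inst_id, metric_name, round(value, 2) as value, metric_unit
from   gv$sysmetric
where  metric_name in ('Host CPU Utilization (%)', 'Database CPU Time Ratio')
and    group_id = 2   -- typically the 60-second (long duration) interval group
order  by inst_id, metric_name;
```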

CPU time

Indicates the time for which the server consumed CPU resources since the last measurement period.

Secs

A consistent increase in the value of this measure could indicate a steady increase in the workload of the server.
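
The underlying counter can be inspected directly; as a sketch, the cumulative CPU statistic is available per instance from GV$SYSSTAT (a delta between two samples, converted from centiseconds, gives the time-based figure reported here):

```sql
-- Cumulative CPU time per instance, in centiseconds since instance startup
select inst_id, name, value
from   gv$sysstat
where  name = 'CPU used by this session'
order  by inst_id;
```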

Redo size

Indicates the rate at which modifications were written to the redo logs since the last measurement period.

MB/Sec

If the value of this measure keeps growing, it could indicate that data is changing rapidly in the databases. A steady drop in this value could indicate that changes are not written to the redo logs as quickly as they occur.

Logical reads

Indicates the rate at which logical reads were performed by the server.

Reads/Sec

These measures are good indicators of the level of activity on the database server and how well the server handles these activity levels. In the event of a slowdown, you can compare the values of these measures to know where the slowdown may have originated – when modifying data, when reading, or when writing.

Block changes

Indicates the rate at which database blocks were changed.

Blocks/Sec

Physical reads

Indicates the rate at which the server performed physical reads.

Reads/Sec

Physical writes

Indicates the rate at which the server performed physical writes.

Writes/Sec

User calls

Indicates the rate at which the server made user calls.

Calls/Sec
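
The throughput measures above correspond to cumulative counters in GV$SYSSTAT; as an illustrative sketch, sampling the query below twice and dividing each delta by the interval length yields the per-second rates:

```sql
-- Cumulative activity counters per instance (statistic names as
-- published by Oracle); rates are deltas between two samples
select inst_id, name, value
from   gv$sysstat
where  name in ('redo size', 'session logical reads', 'db block changes',
                'physical reads', 'physical writes', 'user calls')
order  by inst_id, name;
```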

 

Parses

Indicates the rate at which the server parsed SQL statements.

Parses/Sec

Parsing is one stage in the processing of a SQL statement. When an application issues a SQL statement, the application makes a parse call to Oracle Database. During the parse call, Oracle Database:

  • Checks the statement for syntactic and semantic validity.
  • Determines whether the process issuing the statement has privileges to run it.
  • Allocates a private SQL area for the statement.

 

If the value of this measure keeps increasing consistently, it could indicate that many SQL statements are being executed on the server, thus generating more parses every second. If the value of this measure drops consistently, it could indicate a bottleneck in parsing. 

Hard parses

Indicates the rate at which the server hard parsed SQL statements.

Parses/Sec

As opposed to a soft parse, which reuses a statement already cached in the shared pool, a hard parse requires Oracle to parse the statement afresh, optimize it, and generate a new execution plan. If the value of this measure is decreasing steadily, it could mean that hard parsing is taking too long. It could also mean that very few hard parses are actually performed.
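
Both parse counters can be read side by side from GV$SYSSTAT; the sketch below pivots them per instance (a persistently high hard-to-total ratio often points at statements not using bind variables):

```sql
-- Cumulative parse counters per instance
select inst_id,
       max(decode(name, 'parse count (total)', value)) as total_parses,
       max(decode(name, 'parse count (hard)',  value)) as hard_parses
from   gv$sysstat
where  name in ('parse count (total)', 'parse count (hard)')
group  by inst_id
order  by inst_id;
```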

WA memory processed

Indicates the rate at which work area memory is used by the server.

MB/Sec

Oracle Database reads and writes information in the PGA on behalf of the server process. An example of such information is the run-time area of a cursor. Each time a cursor is executed, a new run-time area is created for that cursor in the PGA memory region of the server process executing that cursor. For complex queries (such as decision support queries), a big portion of the run-time area is dedicated to work areas allocated by memory intensive operators, including:

  • Sort-based operators, such as ORDER BY, GROUP BY, ROLLUP, and window functions
  • Hash-join
  • Bitmap merge
  • Bitmap create
  • Write buffers used by bulk load operations

For example, a sort operator uses a work area (sometimes called the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (also called the hash area) to build a hash table from its left input. If the amount of data to be processed by these two operators does not fit into a work area, then the input data is divided into smaller pieces. This allows some data pieces to be processed in memory while the rest are spilled to temporary disk storage to be processed later. 

A consistent increase in the value of this measure is indicative of excessive usage of the work area. This could indicate that the workload is characterized by complex queries that use memory intensive operators such as sort, hash-join, etc. You may want to fine-tune the work area size in order to enable it to handle the memory-intensive load better.
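
Work area throughput and efficiency can be examined per instance via the GV$PGASTAT view; the following is an illustrative query using Oracle's published row names (a low cache hit percentage suggests work areas are spilling to temporary disk storage):

```sql
-- PGA work area throughput and efficiency per instance
select inst_id, name, value, unit
from   gv$pgastat
where  name in ('bytes processed', 'extra bytes read/written',
                'cache hit percentage')
order  by inst_id, name;
```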

Logons

Indicates the rate at which users log on to the database server.

Logons/Sec

A steady rise in this value is indicative of a steady increase in user activity on the server.
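
As a quick manual check of logon activity, both the cumulative and current logon counters are exposed per instance in GV$SYSSTAT (sketch below; the rate reported here is the per-second delta of the cumulative counter):

```sql
-- Logon counters per instance
select inst_id, name, value
from   gv$sysstat
where  name in ('logons cumulative', 'logons current')
order  by inst_id, name;
```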

Executes

Indicates the rate at which executions are performed by the server.

Executions/Sec

 

Rollbacks

Indicates the rate at which the server performs rollbacks.

Rollbacks/Sec

Ideally, the value of this measure should be low, because rollbacks are expensive operations and should be avoided wherever possible. A consistent increase in the value of this measure is hence a cause for concern.

Transactions

Indicates the rate at which transactions were executed by the server.

Trans/Sec

A steady increase in the value of this measure could indicate an increase in the transaction load on the server. A consistent and notable drop in the value of this measure could indicate a bottleneck in transaction processing.
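
The transaction rate is conventionally derived from the commit and rollback counters; as an illustrative sketch against GV$SYSSTAT, the per-second delta of user commits plus user rollbacks gives the transactions-per-second figure:

```sql
-- Transaction counters per instance; the transaction rate is the
-- per-second delta of user_commits + user_rollbacks between samples
select inst_id,
       max(decode(name, 'user commits',   value)) as user_commits,
       max(decode(name, 'user rollbacks', value)) as user_rollbacks
from   gv$sysstat
where  name in ('user commits', 'user rollbacks')
group  by inst_id
order  by inst_id;
```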