Oracle DataFile Activity Test

The average read and write time metrics of this test report the time spent on each read and write operation against a datafile. Comparing read and write times across multiple datafiles shows which datafiles are slower than others and helps you identify the hot files among them.

Note:

This test should be configured to run every 10 minutes or more.

This test is disabled by default. To enable the test, go to the enable/disable tests page using the menu sequence: Agents -> Tests -> Enable/Disable, pick Oracle Database as the Component type, Performance as the Test type, choose this test from the disabled tests list, and click the << button to move the test to the ENABLED TESTS list. Finally, click the Update button.

Target of the test: An Oracle 10g server

Agent deploying the test: An internal agent

Outputs of the test: One set of results for every datafile on the Oracle server.

Configurable parameters for the test
  1. TEST PERIOD - How often should the test be executed
  2. Host – The host for which the test is to be configured
  3. Port - The port on which the server is listening
  4. User – To monitor an Oracle database server, a special database user account has to be created in every Oracle database instance that requires monitoring. A Click here hyperlink is available in the test configuration page, using which a new Oracle database user can be created. Alternatively, you can manually create the special database user. When doing so, ensure that this user is vested with the select_catalog_role and create session privileges.

    The sample script we recommend for user creation (in Oracle database server versions before 12c) for eG monitoring is:

    create user oraeg identified by oraeg;

    create role oratest;

    grant create session to oratest;

    grant select_catalog_role to oratest;

    grant oratest to oraeg;

    The sample script we recommend for user creation (in Oracle database server 12c) for eG monitoring is:

    alter session set container=<Oracle_service_name>;

    create user <user_name> identified by <user_password> container=current default tablespace <name_of_default_tablespace> temporary tablespace <name_of_temporary_tablespace>;

    grant create session to <user_name>;

    grant select_catalog_role to <user_name>;

    The name of this user has to be specified here.
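Once the user is created, you may want to confirm that the account can log in and read the dynamic views this test relies on. A minimal check, run while connected as the new monitoring user, might look like the following; v$datafile is one of the catalog views covered by the select_catalog_role grant:

```sql
-- Connect as the monitoring user and confirm catalog access.
-- A row count equal to the number of datafiles indicates the
-- views this test queries are visible to the account.
select count(*) from v$datafile;
```

If this query fails with ORA-00942 (table or view does not exist), the select_catalog_role grant is missing or has not taken effect.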

  5. Password – Password of the specified database user

    This login information is required to query Oracle's internal dynamic views and fetch the current status/health of the various database components.

  6. Confirm password – Confirm the password by retyping it here.
  7. ISPASSIVE – If the value chosen is yes, then the Oracle server under consideration is a passive server in an Oracle cluster. No alerts will be generated if the server is not running. Measures will be reported as “Not applicable” by the agent if the server is not up.
  8. show datafile path – This test reports a set of results for each datafile on the target Oracle database server, meaning that every datafile is a descriptor of this test. By default, when displaying the descriptors of this test, the eG monitoring console does not prefix the datafile names with the full path to the datafiles; accordingly, the show datafile path flag is set to No by default. If you want the datafile names to be prefixed with their full path, set the show datafile path flag to Yes.
Measurements made by the test
Measurement Description Measurement Unit Interpretation

Average read time:

This measure indicates the average time taken to read from each datafile.



Disk read times might be high due to the following reasons:

  • Executing inefficient queries for retrieving data; this could increase the frequency of full table scans and disk sorts, and can delay reads considerably;
  • Frequent insert and update operations on datafiles could cause data fragmentation.

Building efficient SQL queries can significantly increase the speed of your read operations. If fragmented data is the cause for the consistent slow-down in the read operations, then you might want to consider re-organizing the database objects to address this issue.

Average write time:

This measure indicates the average time taken to write to each datafile.
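
While the exact query this test issues is not documented here, per-datafile read and write times of this kind are typically derived from Oracle's v$filestat dynamic view joined to v$datafile. A sketch of such a query, under the assumption that cumulative file statistics are being collected:

```sql
-- Average read/write time per datafile, in hundredths of a second.
-- Requires TIMED_STATISTICS=TRUE; the v$filestat counters are
-- cumulative since instance startup, so an interval-based monitor
-- would difference two successive samples.
select d.name,
       f.readtim  / nullif(f.phyrds, 0)  as avg_read_time,
       f.writetim / nullif(f.phywrts, 0) as avg_write_time
  from v$filestat f
  join v$datafile d on d.file# = f.file#;
```

The nullif guards avoid division-by-zero errors for datafiles that have seen no physical reads or writes since startup.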