JEUS Applications Test

This test automatically discovers all applications installed on the JEUS server and, at regular intervals, reports performance data pertaining to each of the applications.

Target of the test : A JEUS web application server

Agent deploying the test : An internal agent

Outputs of the test : One set of results for each deployment type:application on the target server.

Configurable parameters for the test
Parameter Description

Test period

How often should the test be executed

Host

The host for which the test is to be configured.

Port

The port at which the specified host listens. By default, this is 9736.

Username and Password

To enable the eG agent to communicate with and continuously monitor the target JEUS server, the eG agent should be configured with the credentials of an admin user on the server. In highly secure environments, administrators may not want to expose the credentials of a user possessing administrator privileges. In such environments, administrators can instead create a new user on the JEUS server and assign the administrator privilege to that user for monitoring the JEUS application server. The steps for creating a new user with administrator privilege are explained in the Creating a User with Administrator Privileges topic. This user should also be granted permission to access the JNDI objects on the server, so that the eG agent can pull performance metrics from the target server. To know how to grant permission to access these resources, refer to the Granting the Administrator Role access to JNDI Binding Objects topic.

Confirm Password

Confirm the Password by retyping it here.

Listener Port

To collect metrics from the target server, the eG agent should be configured to use JMX to connect to the JRE used by the target server and pull out performance metrics. By default, JMX support is enabled for this JRE, and the JMX connector listens on port 9736. Therefore, type 9736 as the Listener Port. However, if the host is configured with multiple server instances, you should specify the port at which the JMX connector listens in your environment. Ensure that you specify the same port that you configured while creating the listener (if required) using the JEUS WebAdmin Console. For details on the listener port, refer to the Enabling JMX Support for the JEUS Web Application Server topic.
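As a rough sketch of how a JMX client would address such a listener, the snippet below builds a JMX service URL for an RMI connector exported under a JNDI name. The host, the port 9736, and the export name "JeusMBeanServer" are illustrative assumptions, not values confirmed by this document; substitute the listener port and export name configured on your own JEUS server.

```java
import javax.management.remote.JMXServiceURL;

public class JeusJmxUrl {
    // Builds a JMX service URL for an RMI connector registered in JNDI.
    // exportName is the JNDI reference name of the RMI connector (see the
    // Export name parameter); host/port identify the JMX listener.
    static JMXServiceURL serviceUrl(String host, int port, String exportName) {
        try {
            return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/" + exportName);
        } catch (java.net.MalformedURLException e) {
            throw new IllegalArgumentException("Bad JMX URL parts", e);
        }
    }
}
```

A monitoring client would then pass such a URL to `JMXConnectorFactory.connect(...)` along with the configured username and password to reach the server's MBeans.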

Export name

The export name is the reference name of the RMI connector that is to be used as a JMX connector. The procedure for obtaining the export name is detailed in the Enabling JMX Support for the JEUS Web Application Server topic. Specify the name of the export against this parameter.

Server name

Provide the name of the server instance that is being monitored in the Server Name text box. Also, ensure that JVM monitoring is enabled for the target server. To obtain the name of the server instance, refer to the Enabling JMX Support for the JEUS Web Application Server topic.

DD Frequency

Refers to the frequency with which detailed diagnosis measures are to be generated for this test. The default is 1:1. This indicates that, by default, detailed measures will be generated every time this test runs, and also every time the test detects a problem. You can modify this frequency, if you so desire. Also, if you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD frequency.
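One hypothetical way to interpret such a "normal:abnormal" frequency string is sketched below; the eG agent's actual parsing logic is internal and not described in this document, so treat this purely as an illustration of the 1:1 and none conventions described above.

```java
public class DdFrequency {
    // Illustrative parser for a DD frequency value such as "1:1" (assumption:
    // the first number is the normal frequency, the second the abnormal one).
    // Returns null when detailed diagnosis is disabled via "none".
    static int[] parse(String value) {
        if (value == null || value.trim().equalsIgnoreCase("none")) {
            return null;  // detailed diagnosis disabled
        }
        String[] parts = value.split(":");
        return new int[] {
            Integer.parseInt(parts[0].trim()),
            Integer.parseInt(parts[1].trim())
        };
    }
}
```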

Detailed Diagnosis

To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, click the Off option.

The option to selectively enable/disable the detailed diagnosis capability will be available only if the following conditions are fulfilled:

  • The eG manager license should allow the detailed diagnosis capability
  • Both the normal and abnormal frequencies configured for the detailed diagnosis measures should not be 0.
Measurements made by the test
Measurement Description Measurement Unit Interpretation

Status

Indicates the current status of this application.

 

The values reported by this measure and its numeric equivalents are mentioned in the table below:

Measure Value Numeric Value
Running 10
Distributed 9

Note:

By default, this measure reports the current status of each application. The graph of this measure, however, is represented using the numeric equivalents only - 9 or 10.

The detailed diagnosis of this measure lists the names of the contexts in each application, the total number of requests received by the application for each context, the number of successful/failed requests, and the average time taken to process the requests.
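The status-to-number mapping in the table above can be expressed as a small lookup, which may help when post-processing exported graph data:

```java
import java.util.Map;

public class JeusAppStatus {
    // Numeric equivalents used when graphing the Status measure,
    // exactly as listed in the measure value table above.
    static final Map<String, Integer> NUMERIC =
        Map.of("Running", 10, "Distributed", 9);

    static int numericValue(String status) {
        Integer v = NUMERIC.get(status);
        if (v == null) {
            throw new IllegalArgumentException("Unknown status: " + status);
        }
        return v;
    }
}
```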

Active sessions

Indicates the number of sessions that are currently active for this application.

Number

 

Requests

Indicates the total number of requests received by this application during the last measurement period.

Number

This measure is a good indicator of the load on each application. Compare the value of this measure across applications to identify which application is experiencing very high load.

Request rate

Indicates the rate at which the requests were received by this application.

Requests/sec
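A rate measure of this kind is conventionally derived from two samples of a cumulative counter divided by the measurement interval. The sketch below shows that arithmetic under that assumption; the document does not specify how the eG agent computes it internally.

```java
public class RequestRate {
    // Assumed derivation: rate = (current count - previous count) / interval.
    // previousCount/currentCount are two samples of a cumulative request
    // counter; intervalSeconds is the time between the samples.
    static double rate(long previousCount, long currentCount, long intervalSeconds) {
        if (intervalSeconds <= 0) {
            throw new IllegalArgumentException("interval must be positive");
        }
        return (currentCount - previousCount) / (double) intervalSeconds;
    }
}
```

For example, 300 new requests observed over a 300-second test period would yield 1 request/sec.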

 

Successful requests

Indicates the number of requests to this application that were processed successfully during the last measurement period.

Number

Ideally, the value of this measure is desired to be high.

Successful requests rate

Indicates the rate at which the requests to this application were processed successfully.

Requests/sec

 

Unsuccessful requests

Indicates the number of requests to this application that failed during the last measurement period.

Number

Ideally, the value of this measure should be very low (zero). A sudden/gradual increase in the value of this measure indicates a processing bottleneck on the server.

Unsuccessful requests rate

Indicates the rate at which the requests to this application failed.

Requests/sec

 

Average process time

Indicates the average time taken by this application to process the requests.

Milliseconds

A low value is desired for this measure. A consistent rise in the value of this measure could indicate a processing bottleneck, which in turn may affect application performance. Compare the value of this measure across applications to identify the application that is least responsive to user requests.