WebLogic Queues Test

A JMS queue represents the point-to-point (PTP) messaging model, which enables one application to send a message to another. PTP messaging applications send and receive messages using named queues. A queue sender (producer) sends a message to a specific queue. A queue receiver (consumer) receives messages from a specific queue.

This test auto-discovers the queues on a WebLogic server, and monitors each queue for the size, number, and type of messages it holds, so that impending overloads and probable delivery bottlenecks can be proactively isolated and corrected.

Target of the test : A WebLogic Application Server

Agent deploying the test : An internal agent

Outputs of the test : One set of results for every queue auto-discovered on the monitored WebLogic server.

Configurable parameters for the test
Parameter Description

Test Period

How often should the test be executed.

Host

The IP address of the host for which this test is to be configured.

Port

The port at which the specified host listens. By default, this is NULL.

UseWarFile

This flag indicates whether or not monitoring is to be done using a Web archive (WAR) file deployed on the WebLogic server (in which case, HTTP/HTTPS is used by the eG agent to connect to the server). If this flag is set to No, the agent connects directly to the WebLogic server using the T3 protocol (no additional file needs to be deployed on the WebLogic server for this to work). Note that T3 protocol-based support is available only for WebLogic servers of version 9 and above. Also, if the UseWarFile parameter is set to No, make sure that the EncryptPass parameter is set to No as well.

Particularly when monitoring a WebLogic server deployed on a Unix platform, if the UseWarFile parameter is set to No, make sure that the eG agent install user is added to the WebLogic users group.

AdminServerHost and AdminServerPort

In some highly secured environments, the eG agent may not be able to collect certain critical JDBC-related metrics from a managed WebLogic server. In such cases, to enable the eG agent to collect the required metrics, specify the IP address and port of the WebLogic admin server with which the managed WebLogic server is associated. This enables the eG agent to connect to the WebLogic admin server and collect the required metrics pertaining to the managed WebLogic server. Specify the IP address and port of the WebLogic admin server in the AdminServerHost and AdminServerPort text boxes. By default, these parameters are set to none.

JSPTimeOut

Specify the duration (in seconds) within which the eG agent should receive a response from the eGurkha WAR file deployed on the WebLogic server. By default, this is set to 120 seconds.

User

The admin user name of the WebLogic server being monitored.

Password

The password of the specified admin user.

Confirm Password

Confirm the password by retyping it here.

EncryptPass

If the specified password needs to be encrypted, set the EncryptPass flag to Yes. Otherwise, set it to No. By default, this flag is set to Yes.

Note:

If the UseWarFile flag is set to No, then make sure that the EncryptPass flag is also set to No.

SSL

Indicate whether SSL (Secure Sockets Layer) is to be used to connect to the WebLogic server.

Server

The name of the specific server instance to be monitored on the WebLogic server. The default value is "localhome".

URL

The URL to be accessed to collect metrics pertaining to the WebLogic server. By default, this test connects to a managed WebLogic server and attempts to obtain the metrics of interest by accessing the local Mbeans of the server. This parameter can be changed to a value of http://<adminserverIP>:<adminserverPort>. In this case, the test connects to the WebLogic admin server to collect metrics pertaining to the managed server (specified by the Host and Port). The URL setting provides the administrator with the flexibility of determining the WebLogic monitoring configuration to use.

Note:

If the admin server is to be used for collecting measures for all the managed WebLogic servers, then it is mandatory that the eGurkha WAR file is deployed on the admin server, and that the admin server is up and running.

Version

The Version text box indicates the version of the WebLogic server to be managed. The default value is "none", in which case the test auto-discovers the WebLogic version. If the value of this parameter is not "none", the test uses the value provided (e.g., 7.0) as the WebLogic version (i.e., it does not auto-discover the WebLogic server version). This parameter has been added to address cases where the eG agent is unable to discover the WebLogic server version.

WebLogicJARLocation

Specify the location of the WebLogic server's Java archive (JAR) file. If the UseWarFile flag is set to No, then the weblogic.jar file specified here is used to connect to the corresponding WebLogic server using the T3 protocol. Note that T3 protocol-based support is available only for WebLogic servers of version 9 and above.
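
When the UseWarFile flag is set to No, the agent's T3-based connection is, conceptually, a JMX connection to WebLogic's runtime MBean server. The sketch below only constructs the service URL; the host and port are placeholders, and an actual connection would additionally require the weblogic.jar specified above (which supplies the t3 JMX protocol provider) on the classpath:

```java
// Sketch only: builds the JMX service URL for WebLogic's runtime MBean server
// over T3 (WebLogic version 9 and above). The host and port are hypothetical;
// connecting for real requires weblogic.jar on the classpath.
import javax.management.remote.JMXServiceURL;
import java.net.MalformedURLException;

public class T3UrlSketch {
    // Well-known JNDI path of the per-server runtime MBean server.
    static final String RUNTIME_JNDI = "/jndi/weblogic.management.mbeanservers.runtime";

    public static JMXServiceURL runtimeMBeanServerUrl(String host, int port)
            throws MalformedURLException {
        return new JMXServiceURL("t3", host, port, RUNTIME_JNDI);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runtimeMBeanServerUrl("wls-host", 7001));
    }
}
```

A real client would then pass this URL, along with the admin credentials, to JMXConnectorFactory.connect(); the sketch stops at URL construction so that it stands alone.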

Measurements made by the test
Measurement Description Measurement Unit Interpretation

Messages count

Indicates the current number of messages in this queue.

Number

This count does not include the messages that are pending.

Messages pending count

Indicates the number of pending messages in this queue.

Number

A message is considered to be in the pending state when it is:

  • sent in a transaction but not committed;
  • received but not acknowledged;
  • received but not committed;
  • subject to a redelivery delay (as of WebLogic JMS 6.1 or later);
  • subject to a delivery time (as of WebLogic JMS 6.1 or later).

While momentary spikes in the number of pending messages in a queue are normal, if the number is allowed to grow consistently over time, it is bound to increase the total number of messages in the queue. Typically, the sum of the values of the Messages count and the Messages pending count measures equals the total number of messages in the queue. If this sum is equal to or very close to the Messages Maximum setting for the quota resource that is mapped to this queue, it implies that the queue has filled up or is rapidly filling up with messages and cannot handle any more. When this happens, JMS prevents further sends with a ResourceAllocationException. Furthermore, such quota failures will force multiple producers to contend for space in the queue, thereby degrading application performance. To avoid this, you can do one or more of the following:

  • Increase the Messages Maximum setting of the quota resource mapped to the queue;
  • If a quota has not been configured for the queue, then increase the quota of the JMS server where the queue is deployed;
  • Regulate the flow of messages into the queue using one or more of the following configurations:

    • Blocking senders during quota conditions: The Send Timeout feature provides more control over message send operations by giving message producers the option of waiting a specified length of time until space becomes available on a destination.
    • Specifying a Blocking Send Policy on JMS Servers: The Blocking Send policies enable you to define whether the JMS server delivers smaller messages before larger ones when multiple message producers are competing for space on a destination that has exceeded its message quota.
    • Using the Flow Control feature: With the Flow Control feature, you can direct a JMS server or destination to slow down message producers when it determines that it is becoming overloaded. Specifically, when either a JMS server or its destinations exceeds its specified byte or message threshold, it becomes armed and instructs producers to limit their message flow (messages per second). Producers will limit their production rate based on a set of flow control attributes configured for producers via the JMS connection factory. Starting at a specified flow maximum number of messages, a producer evaluates whether the server/destination is still armed at prescribed intervals (for example, every 10 seconds for 60 seconds). If at each interval the server/destination is still armed, then the producer continues to move its rate down to its prescribed flow minimum amount. As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.
    • By tuning Message Performance Preference: The Messaging Performance Preference tuning option on JMS destinations enables you to control how long a destination should wait (if at all) before creating full batches of available messages for delivery to consumers.

      At the minimum value, batching is disabled. Tuning above the default value increases the amount of time a destination is willing to wait before batching available messages. The maximum message count of a full batch is controlled by the JMS connection factory’s Messages Maximum per Session setting. It may take some experimentation to find out which value works best for your system. For example, if you have a queue with many concurrent message consumers, by selecting the Administration Console’s Do Not Batch Messages value (or specifying “0” on the DestinationBean MBean), the queue will make every effort to promptly push messages out to its consumers as soon as they are available.

      Conversely, if you have a queue with only one message consumer that does not require fast response times, by selecting the console’s High Waiting Threshold for Message Batching value (or specifying “100” on the DestinationBean MBean), you can ensure that the queue only pushes messages to that consumer in batches.
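
The quota arithmetic described above (Messages count plus Messages pending count measured against the quota's Messages Maximum) can be sketched as a simple headroom check. This is an illustration only; the class and method names, the warning threshold, and the sample values are hypothetical, not values read from WebLogic or eG defaults:

```java
// Sketch: flag a queue that is at or near its quota, using the two measures
// described above. All names and numbers here are illustrative.
public class QueueQuotaCheck {
    /** Returns true when total messages reach the given fraction of the quota. */
    static boolean nearQuota(long current, long pending, long messagesMaximum,
                             double warnFraction) {
        long total = current + pending;  // total messages in the queue
        return total >= (long) (messagesMaximum * warnFraction);
    }

    public static void main(String[] args) {
        // e.g., 850 current + 120 pending against a quota of 1000, warning at 90%
        System.out.println(nearQuota(850, 120, 1000, 0.9));  // prints true
    }
}
```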

Bytes count

Indicates the current size of the messages stored in the queue destination.

KB

This count does not include the pending bytes.

Bytes pending count

Indicates the current size of the pending messages stored in the queue destination.

KB

While momentary spikes in the size of pending messages in a queue are acceptable, if the size is allowed to grow consistently over time, it is bound to increase the total size of all messages in the queue. Typically, the sum of the values of the Bytes count and the Bytes pending count measures indicates the total size of all messages in the queue. If this sum is equal to or very close to the Bytes Maximum setting for the quota resource that is mapped to this queue, it implies that the queue has filled up or is rapidly filling up with messages and cannot handle any more. When this happens, JMS prevents further sends with a ResourceAllocationException. Furthermore, such quota failures will force multiple producers to contend for space in the queue, thereby degrading application performance. To avoid this, you can do one or more of the following:

  • Increase the Bytes Maximum setting of the quota resource mapped to the queue;
  • If a quota has not been configured for the queue, then increase the quota of the JMS server where the queue is deployed;
  • Regulate the flow of messages into the queue using one or more of the following configurations:

    • Blocking senders during quota conditions: The Send Timeout feature provides more control over message send operations by giving message producers the option of waiting a specified length of time until space becomes available on a destination.
    • Specifying a Blocking Send Policy on JMS Servers: The Blocking Send policies enable you to define whether the JMS server delivers smaller messages before larger ones when multiple message producers are competing for space on a destination that has exceeded its message quota.
    • Using the Flow Control feature: With the Flow Control feature, you can direct a JMS server or destination to slow down message producers when it determines that it is becoming overloaded. Specifically, when either a JMS server or its destinations exceeds its specified byte or message threshold, it becomes armed and instructs producers to limit their message flow (messages per second). Producers will limit their production rate based on a set of flow control attributes configured for producers via the JMS connection factory. Starting at a specified flow maximum number of messages, a producer evaluates whether the server/destination is still armed at prescribed intervals (for example, every 10 seconds for 60 seconds).

      If at each interval the server/destination is still armed, then the producer continues to move its rate down to its prescribed flow minimum amount. As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.

    • By tuning the MessagesMaximum configuration: WebLogic JMS pipelines messages that are delivered to asynchronous consumers (otherwise known as message listeners) or prefetch-enabled synchronous consumers.

      The message backlog (the size of the pipeline) between the JMS server and the client is tunable by configuring the MessagesMaximum setting on the connection factory. In some circumstances, tuning this setting may improve performance dramatically, such as when the JMS application defers acknowledges or commits. In this case, BEA suggests setting the MessagesMaximum value to: 2 * (ack or commit interval) + 1. For example, if the JMS application acknowledges 50 messages at a time, set the MessagesMaximum value to 101. You may also need to configure WebLogic clients, in addition to the WebLogic Server instance, when sending and receiving large messages.

    • By compressing messages: You may improve the performance of sending large messages traveling across JVM boundaries and help conserve disk space by specifying the automatic compression of any messages that exceed a user-specified threshold size. Message compression can help reduce network bottlenecks by automatically reducing the size of messages sent across network wires. Compressing messages can also conserve disk space when storing persistent messages in file stores or databases.
    • By paging out messages: With the message paging feature, JMS servers automatically attempt to free up virtual memory during peak message load periods. This feature can greatly benefit applications with large message spaces.
    • By tuning the Message Buffer Size: The Message Buffer Size option specifies the amount of memory that will be used to store message bodies in memory before they are paged out to disk.

      The default value of Message Buffer Size is approximately one-third of the maximum heap size for the JVM, or a maximum of 512 megabytes. The larger this parameter is set, the more memory JMS will consume when many messages are waiting on queues or topics. Once this threshold is crossed, JMS may write message bodies to the directory specified by the Paging Directory option in an effort to reduce memory usage below this threshold.
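
Two of the tuning rules quoted above reduce to simple arithmetic: the suggested MessagesMaximum of 2 * (ack or commit interval) + 1, and the default Message Buffer Size of roughly one-third of the maximum heap, capped at 512 MB. The small sketch below works both out; the class and method names are illustrative, not a WebLogic API:

```java
// Illustrative arithmetic for two tuning rules quoted in the text above.
public class JmsTuningCalc {
    // Suggested MessagesMaximum when the app defers acks/commits: 2 * interval + 1.
    static int recommendedMessagesMaximum(int ackOrCommitInterval) {
        return 2 * ackOrCommitInterval + 1;
    }

    // Default Message Buffer Size: about one-third of max heap, capped at 512 MB.
    static long defaultMessageBufferBytes(long maxHeapBytes) {
        long cap = 512L * 1024 * 1024;
        return Math.min(maxHeapBytes / 3, cap);
    }

    public static void main(String[] args) {
        // The worked example from the text: acking 50 messages at a time -> 101.
        System.out.println(recommendedMessagesMaximum(50));  // prints 101
        // A 3 GB heap: one-third is 1 GB, so the 512 MB cap applies.
        System.out.println(defaultMessageBufferBytes(3L * 1024 * 1024 * 1024));  // prints 536870912
    }
}
```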

Messages deleted count

Indicates the number of messages that have been deleted from this queue.

Number

While you can design a QueueBrowser on your JMS server to view and delete specific queue messages, some messages are automatically deleted by the server. For instance, one-way messages that exceed quota are silently deleted without immediately throwing exceptions back to the client.

Messages moved count

Indicates the number of messages that have been moved from one queue destination to another.

Number

Consumers count

Indicates the current number of consumers accessing the queue destination.

Number