Lumira Server Performance Test
One of the key factors influencing the performance of the Lumira server is the usage of its JVM memory heap. The Lumira server is a pure Java-based process, configured with an initial Java heap size. A shortage of free JVM memory, frequent or faulty garbage collections, and JVM deadlocks can therefore all have an adverse impact on the health of the Lumira Server. Likewise, if critical services hosted on the Lumira Server are not correctly configured to handle the requests they receive, Lumira Server performance will degrade. This is why the eG agent periodically runs the Lumira Server Performance Test. This test enables administrators to measure JVM health and verify the configuration of the critical Lumira services, thus helping them rapidly detect dips in Lumira Server performance and their possible causes.
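The JVM health indicators this test reports (free heap, heap percentage, GC time) are the standard metrics exposed by the `java.lang.management` API. The sketch below is not part of the eG test itself; it is a minimal, self-contained illustration of how such heap and garbage-collection figures can be read from within any JVM:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        // getMax() may return -1 when no maximum is defined; fall back to the
        // currently committed size in that case.
        long max = heap.getMax() > 0 ? heap.getMax() : heap.getCommitted();
        long free = max - heap.getUsed();
        double freePct = 100.0 * free / max;
        System.out.printf("Free heap: %d bytes (%.1f%%)%n", free, freePct);

        // Cumulative time spent in garbage collection, summed over all collectors
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            gcMillis += gc.getCollectionTime();
        }
        System.out.println("Cumulative GC time: " + gcMillis + " ms");
    }
}
```

A monitoring agent samples these values periodically and derives rates (such as full GCs per second over the last 5 minutes) from successive readings.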
Target of the test : A SAP BOBI Node
Agent deploying the test : An internal/remote agent
Outputs of the test : One set of results for the Lumira server running on the monitored node.
Parameter | Description |
---|---|
Test period |
How often should the test be executed. |
Host |
Host name of the server for which the test is to be configured. |
Port |
Enter the port to which the specified host listens. This should be the port at which the web application server hosting SAP BOBI listens. |
Monitoring Hosts |
In large SAP environments, SAP BOBI installations can comprise several BI platform server hosts working together in a cluster. A node is a collection of BI platform servers running on the same host and managed by the same Server Intelligence Agent (SIA); one or more nodes can reside on a single host. An Adaptive Processing Server (APS) hosts many BI services and processes non-Object/post-processing requests. Multiple APSes may be defined on multiple nodes within a deployment, with a single APS acting as the primary server that processes requests for the services it hosts. In some environments, eG Enterprise failed to collect metrics from the target BOBI continuously: when the APSes failed over, the APS acting as the primary server belonged to a different BOBI node, and the eG agent could not establish a connection with the APS that was currently processing the service requests. To ensure that metrics are collected seamlessly from the target BOBI, the eG agent should be equipped with the IP address of the target BOBI node as well as the IP addresses of all the Adaptive Processing Servers associated with that node. For this purpose, specify a comma-separated list of the APS IP addresses in the MONITORING HOSTS text box. |
JMX Remote Port |
Specify the RMI port number of the BOBI monitoring application. To know the RMI port number of the monitoring application, refer to Enabling the Monitoring Application of the SAP BOBI Platform. |
JNDI Name |
Specify the lookup name for connecting to the JMX connector of the BOBI monitoring application. To know the JNDI name, refer to Enabling the Monitoring Application of the SAP BOBI Platform. |
JMX User and JMX Password |
Enter the credentials of an enterprise authenticated BOBI user belonging to the default monitoring users group. |
Confirm Password |
Confirm the password by retyping it here. |
Node Name |
Specify the name of the BOBI node being monitored. |
Provider |
This parameter appears only if the Mode is set to JMX. This test uses a JMX Provider to access the MBean attributes of the target Java application and collect metrics. Specify the package name of this JMX Provider in the PROVIDER text box. By default, this parameter is set to default, indicating that this test automatically discovers the JMX provider and reports metrics. |
Timeout |
Specify the maximum duration (in seconds) for which the test will wait for a response from the server in the TIMEOUT textbox. The default TIMEOUT period is 1000 seconds. |
Detailed Diagnosis |
To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, choose the Off option. The option to selectively enable/disable the detailed diagnosis capability will be available only if the following conditions are fulfilled:
|
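The JMX Remote Port, JNDI Name, and JMX User/Password parameters together describe a standard JMX-over-RMI endpoint. The sketch below shows how such a connection is typically formed with the stock `javax.management.remote` API; the host name, port, JNDI name, and credentials are hypothetical placeholders, not values from this document:

```java
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BobiJmxProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- substitute the host, JMX Remote Port and
        // JNDI Name configured for your BOBI monitoring application.
        String host = "bobi-node.example.com";
        int rmiPort = 9300;
        String jndiName = "jmxrmi";

        // The JNDI lookup name becomes the path component of the service URL.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + rmiPort + "/" + jndiName);

        // The JMX User / JMX Password parameters map onto standard JMX credentials.
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"monitorUser", "secret"});

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("Registered MBeans: " + mbsc.getMBeanCount());
        }
    }
}
```

Once connected, an agent can query individual MBean attributes (heap usage, GC counters, session counts) over this connection instead of running inside the monitored JVM.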
Measurement | Description | Measurement Unit | Interpretation |
---|---|---|---|
Free JVM memory |
Indicates the amount of memory available to the JVM for allocating new objects. |
GB |
Ideally, the value of this measure should be high. |
Free JVM memory percentage |
Indicates the percentage of memory available to the JVM for allocating new objects. |
Percent |
A value close to 0% is a cause for concern, as it indicates rapid erosion of the JVM memory heap. Without sufficient memory, the Lumira Server and its services will not be able to operate optimally. |
CPU usage in last 5 mins |
Indicates the percentage of time the CPU was used by the Lumira Server during the last 5 mins. |
Percent |
This measure considers all processors allocated to the JVM. A value close to 100% indicates excessive CPU usage, probably owing to CPU-intensive operations performed on the JVM. If more processing power is not allocated to the JVM, the Lumira Server may hang. |
Stopped system time during GC in last 5 mins |
Indicates the percentage of time that Lumira services were stopped for Garbage Collection in the last 5 minutes. |
Percent |
A critical stage of garbage collection requires exclusive access, and all Lumira services are halted during this time. This value should always be less than 10; a value of 10 or above indicates a low-throughput issue and requires further investigation. |
Number of page faults during GC in last 5 mins |
Indicates the number of page faults that occurred while garbage collection was running during the last five minutes. |
Number |
Any value greater than 0 indicates a system under heavy load and low memory conditions. |
JVM lock contentions |
Indicates the current number of JVM lock contentions. |
Number |
This represents the number of synchronized objects that have threads waiting for access. The average value of this measure should be 0. Consistently high values indicate threads that may never get to run. You may want to take a thread dump to investigate such issues. |
Deadlocked threads |
Indicates the number of threads that are deadlocked. |
Number |
These threads are indefinitely waiting on each other for a common set of resources. The average value of this measure should be 0. Consistently higher values warrant further investigation using thread dumps. |
Session count |
Indicates the number of active sessions to Design Studio. |
Number |
Design Studio is an application for building executive dashboards. |
Total JVM memory |
Indicates the amount of total memory available to the JVM for allocating new objects. |
GB |
Ideally, the value of this measure should be high. |
Current number of auditing events queued |
Indicates the number of auditing events that the Lumira Server has recorded, but which have not yet been retrieved by the CMS Auditor. |
Number |
If this number increases without bound, it could indicate that auditing has not been configured properly, or that the system is heavily loaded and generating auditing events faster than the auditor can retrieve them. When stopping servers, it is advisable to disable them first and wait until all auditing events have been retrieved and this queue is empty. Otherwise, the remaining events will be retrieved only after the server has been restarted and the CMS polls for them. |
Full GCs rate |
Indicates the rate of full garbage collections performed in the last 5 minutes. |
GCs/sec |
A rapid increase in this value may indicate a system under low memory conditions. |
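The Deadlocked threads measure above corresponds to a capability built into every JVM: the thread MXBean can report threads that are mutually blocked. As a minimal sketch (again using the standard `java.lang.management` API, not eG-specific code), a check for deadlocks and the locks involved looks like this:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Returns null when no threads are deadlocked.
        long[] ids = threads.findDeadlockedThreads();
        if (ids == null) {
            System.out.println("No deadlocked threads");
        } else {
            // For each deadlocked thread, report which lock it is blocked on.
            for (ThreadInfo info : threads.getThreadInfo(ids)) {
                System.out.println(info.getThreadName()
                        + " blocked on " + info.getLockName());
            }
        }
    }
}
```

A non-zero Deadlocked threads value from this test means such a cycle exists inside the Lumira Server JVM; a thread dump (for example via `jstack`) then shows the full stack of each thread in the cycle.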