Page Types Test

Page types can be classified as follows:

  • Page: This refers to the plain vanilla pages – i.e., the normal base pages.
  • iFrame: iFrames allow a visual HTML Browser window to be split into segments, each of which can show a different document.
  • AJAX: AJAX (Asynchronous JavaScript and XML) allows web pages to be updated asynchronously by exchanging small amounts of data with the server behind the scenes. This means that using AJAX, it is possible to update parts of a web page, without reloading the whole page.

A single web page in a web site/web application may contain more than one page type – in other words, it can contain a base page, one/more iFrames, and AJAX pages. Contrary to popular notion, base pages may not always be responsible for slowing down user accesses to a web site/web application. More often than not, it is slow-loading iFrame URLs and inefficient AJAX code that cause users to experience slowness or errors when accessing a web site/web application. This is why, when users complain that their web site/web application is responding slowly to requests, administrators need to rapidly determine whether the slowness is owing to the base pages themselves, or because of iFrames and/or AJAX pages operating within the base pages. Accurate identification of the problem source enables administrators to figure out exactly what should be done to enhance the performance of the web site/web application – should the base pages be re-engineered? should the iFrame URLs be pulled up for scrutiny? or should the AJAX code be cleaned up?

The Page Types test looks ‘under-the-hood’ of a web site/web application, discovers the page types that are in use, and reports the user experience per page type, so that administrators can instantly identify which page type is the least responsive or is error-prone. Detailed diagnostics of the test also lead you to the precise pages of a type that are slow, and what is causing the slowness – is it a slow front end? latent network? or a busy backend? The actual JavaScript errors that occurred in the pages of each type are also available as part of the detailed diagnosis, so as to facilitate easy and effective troubleshooting.   

Target of the test : A web site/web application managed as a Real User Monitor

Agent deploying the test : A remote agent

Outputs of the test : One set of results for each page type

Configurable parameters for the test
Parameter Description

Test Period

How often should the test be executed.

Proxy Host

If the eG agent communicates with the RUM collector via a proxy, then specify the IP address/fully-qualified host name of the proxy server here. By default, this is set to none, indicating that the eG agent does not communicate with the collector via a proxy.

Proxy Port

If the eG agent communicates with the RUM collector via a proxy, then specify the port at which the proxy server listens for requests from the eG agent. By default, this is set to none, indicating that the eG agent does not communicate with the collector via a proxy.

Proxy Username and Proxy Password

If the eG agent communicates with the RUM collector via a proxy server, and if the proxy server requires authentication, then specify valid credentials for authentication against Proxy Username and Proxy Password. If no proxy server is used, or if the proxy server used does not require authentication, then set the Proxy Username and Proxy Password to none.

Confirm Password

Confirm the Proxy Password by retyping it here.

Note:

If you reconfigure the test later to change the values of the Proxy Username, Proxy Password, and/or Confirm Password parameters, then such changes will take effect only after the eG remote agent monitoring the Real User Monitor component is restarted.

Do you want to limit the page views?

By default, the eG RUM monitors all requests to a managed web site. This is why this flag is set to No by default. However, in the case of web sites that receive thousands of hits every day, monitoring each page view may add significantly to the overhead of the eG agent and may also increase the size of the eG database considerably. To reduce the strain on both the eG agent and the eG backend, you may want to restrict the monitoring scope of this test to a limited number of page visits. To achieve this, first set this flag to Yes. This will invoke the option depicted by the figure below.

Configuring the number of allowed page visits

By default, the Maximum allowed page visits per day is set to 100000. This implies that the test will consider only the first 100000 requests in a day for monitoring. All page visits beyond 100000 will by default be excluded from the test’s monitoring purview. You can increase or decrease this limit, if you so need.

Consider JavaScript errors in Apdex calculations

The formula for computing the Apdex score is as follows:

[Satisfied requests + (Tolerating requests/2)]/Total samples

By default, the count of Satisfied and Tolerating requests used for Apdex score computation includes requests that experienced JavaScript errors. This is why the Consider JavaScript errors in Apdex calculations flag is set to Yes by default. Since JavaScript errors also impact user experience, this default setting results in an Apdex score that presents a near-accurate picture of the performance of a web site/web application.

If you do not want the Apdex score to be tainted by error requests, then set this flag to No. In this case, requests with JavaScript errors will not be considered for Apdex calculations, even if such requests have a response time that is well within the Slow Transaction Cutoff (satisfied) or in violation of it (tolerating).

URL Segments to be used as Grouped URL

This parameter is applicable to the Page Groups test alone. The Page Groups test groups URLs based on the URL segments configured for monitoring and reports aggregated response time metrics for every group.

Using this parameter, you can specify a comma-separated list of URL segment numbers based on which the pages are to be grouped.

URL segments are the parts of a URL (after the base URL) or path delimited by slashes. So if you had the URL: http://www.eazykart.com/web/shopping/login.jsp, then http://www.eazykart.com will be the base URL or domain, /web will be the first URL segment, /shopping will be the second URL segment, and /login.jsp will be the third URL segment.

By default, this parameter is set to 1,2. This default setting, when applied to the sample URL provided above, implies that the eG agent will aggregate request and response time metrics to all instrumented web pages (i.e., web pages with the code snippet) under the URL /web/shopping. Here, /web corresponds to the specification 1 (URL segment 1) and /shopping corresponds to the specification 2 (URL segment 2) in the default value 1,2. This in turn means that, if the web site consists of pages such as http://www.eazykart.com/web/shopping/products.jsp, http://www.eazykart.com/web/shopping/products/travel/bags.jsp, http://www.eazykart.com/web/shopping/payment.jsp, etc., then the eG agent will track the requests to and responses from all these web pages, aggregate the results, and present the aggregated metrics for the descriptor /web/shopping. This way, the test will create different page groups based on each of the second-level URL segments in the managed web site – e.g., /web/movies, /web/travel, /web/reservations, /partner/contacts, etc. – and will report aggregated metrics for each group so created.

If you want, you can override the default setting by providing different URL segment numbers here. For instance, your specification can be just 2. In this case, for the URL http://www.eazykart.com/web/shopping/login.jsp, the test will report metrics for the descriptor /shopping. You can even set this parameter to 1,3. For a web site that contains URLs such as http://www.eazykart.com/web/shopping/products.jsp, http://www.eazykart.com/web/shopping/products/travel/bags.jsp, and http://www.eazykart.com/web/shopping/payment.jsp, this specification will result in the following descriptors: /web/products, /web/products.jsp, and /web/payment.jsp.  
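The segment-number examples above can be verified with a short Python sketch. This is purely illustrative – the function name and logic are hypothetical, as the eG agent's actual grouping implementation is not documented:

```python
from urllib.parse import urlparse

def group_descriptor(url, segments=(1, 2)):
    # Split the URL path into slash-delimited segments (segment 1 is the
    # first part after the domain), then keep only the configured numbers.
    parts = [p for p in urlparse(url).path.split("/") if p]
    picked = [parts[n - 1] for n in segments if n <= len(parts)]
    return "/" + "/".join(picked) if picked else "/"

# Default specification 1,2:
group_descriptor("http://www.eazykart.com/web/shopping/login.jsp")             # "/web/shopping"
# Specification 1,3:
group_descriptor("http://www.eazykart.com/web/shopping/products.jsp", (1, 3))  # "/web/products.jsp"
```

Note that segment numbers beyond the depth of a given URL are simply skipped, which matches the descriptors shown in the examples above.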

URL Patterns to be Ignored from Monitoring

By default, this test does not track requests to the following URL patterns: *.js,*.css,*.jpeg,*.jpg,*.png. If required, you can remove one/more patterns from this default list, so that such patterns are monitored, or can append more patterns to this list in order to exclude them from monitoring. For instance, to additionally ignore URLs that end with .gif and .bmp when monitoring, you need to alter the default specification as follows: *.js,*.css,*.jpeg,*.jpg,*.png,*.gif,*.bmp
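The exclusion list is a set of simple wildcards. A minimal Python sketch of how such filtering could behave (illustrative only; the agent's and collector's real matching logic may differ):

```python
from fnmatch import fnmatch

# Default exclusion list, extended with the .gif and .bmp patterns from the example
ignored_patterns = ["*.js", "*.css", "*.jpeg", "*.jpg", "*.png", "*.gif", "*.bmp"]

def should_ignore(url):
    # True if the URL matches any configured wildcard exclusion pattern
    return any(fnmatch(url.lower(), pattern) for pattern in ignored_patterns)

should_ignore("http://www.eazykart.com/static/app.js")   # True
should_ignore("http://www.eazykart.com/web/login.jsp")   # False
```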

Note:

The URL patterns configured here are not just used by the eG agent to filter out unimportant performance data during metrics collection; these patterns are also used by the RUM collector to determine which beacons should be accepted and which ones need to be discarded.

Typically, the very first time this test runs and polls the RUM collector for metrics, the eG agent executing this test searches the performance records stored in the collector for data that pertains to the URL patterns configured for exclusion. If such data is found, the agent then ignores that data during metrics collection. This means that during the very first test execution, URL filtering is performed only by the eG agent. During this time, the RUM collector downloads the URL patterns configured against this parameter from the eG agent. Armed with this information, the collector then scans all beacons that browsers send subsequently, and determines if there are any beacons for the excluded URL patterns. If such beacons are found, the collector discards them instantly. Filtering URLs at the collector-level significantly reduces the load on the collector and conserves storage space on the collector; it also minimizes the workload of the eG agent.  By additionally filtering URLs at the agent-level, eG makes sure that even if beacons pertaining to excluded URL patterns find their way into the RUM collector, they are captured and siphoned out by the eG agent.

JavaScript Errors to be Ignored

By default, this test alerts administrators to all JavaScript errors that occur in the monitored web site/web application. This is why this parameter is set to none by default. Sometimes however, administrators may not want to be notified when certain types of JavaScript errors occur – this could be because such errors are harmless or are a normal occurrence in their environment. In such circumstances, you can instruct the eG agent to ignore these errors when monitoring. For this, specify the JavaScript error messages to be ignored in the JavaScript Errors to be Ignored text box, in the following format: <JavaScript error message>:<URL of the page/file where the message originated>.

For instance, say that the login.html page in your web site runs a few JavaScripts that throw Object expected errors, which you want the eG agent to ignore. In this case, your error specification can be as follows: Object expected:http://www.eazykart.com/web/login.html. Alternatively, you can provide only the text string with which the error message begins – e.g., Object:http://www.eazykart.com/web/login.html. Moreover, instead of the complete URL, you can specify just the name of the HTML/jsp/aspx page in which the error is to be ignored – example: Object:login.html

Sometimes, the individual web pages in your web site may not run any JavaScript directly. Instead, these web pages may include links to JavaScript files that run the JavaScript and return the output to the web pages. If you want the eG agent to ignore certain errors thrown by such a JavaScript file, then your error pattern specification should include the URL of the JavaScript file and not the web page that points to it. This is because, in this case, the file is where the error message originates. For instance, in the same example above, if the login.html page points to a validate.js file, and you want to ignore the Object expected errors that this JS file throws, your error pattern specification will be either Object expected:validate.js or Object:validate.js.

Multiple error message-URL combinations can also be provided as a comma-separated list. The format of your specification will be:

<JavaScript error message 1>:<Originating URL 1>,<JavaScript error message 2>:<Originating URL 2>,...

For example, to ignore the Object expected and Uncaught TypeError errors in the login.html page, use the following specification:

Object:login.html,Uncaught:login.html

Likewise, to ignore the Object expected error in the login.html page and the Uncaught TypeError in the validate.js file, your specification will be:

Object:login.html,Uncaught:validate.js

If you want to ignore the Uncaught TypeError across all the pages of the web site, your specification will be as follows:

Uncaught TypeError:All
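The matching behavior described above – a case-sensitive prefix match on the message, paired with a full URL, a bare file name, or All – can be sketched in Python as follows. The helper is hypothetical; the eG agent's actual parser is not documented:

```python
def is_ignored(error_message, page_url, spec):
    # spec is a comma-separated list of <message>:<origin> pairs; since the
    # message itself may not contain ':' or ',', the first colon separates
    # the message prefix from its origin.
    for entry in spec.split(","):
        message, _, origin = entry.partition(":")
        # Case-sensitive prefix match on the message, as described above
        if error_message.startswith(message) and (
            origin == "All" or page_url == origin or page_url.endswith(origin)
        ):
            return True
    return False

spec = "Object:login.html,Uncaught TypeError:All"
is_ignored("Object expected", "http://www.eazykart.com/web/login.html", spec)   # True
is_ignored("Uncaught TypeError: x is undefined", "http://site/any.html", spec)  # True
```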

Note:

When specifying the <JavaScript error message> to be ignored, take care of the following:

  • The error message should not contain any special characters – in particular, the ':' (colon) and ',' (comma) characters should be avoided.
  • The case of the actual error message and the one specified as part of your error specification should match. This is because the eG agent performs case-sensitive pattern matching.

Include Page Type Categorization in Top-N DD

By default, the detailed diagnosis of this test reports the top-n transactions (in terms of Avg page load time) across page types. Accordingly, the Include Page Type Categorization in Top-N DD flag is set to No by default. If you want detailed diagnosis to report the top-n transactions per page type, then set this flag to Yes. In this case, if, say, the Maximum Healthy Transactions, Maximum Slow Transactions, and Maximum Error Transactions parameters are each set to 5, then the detailed diagnosis of this test will report the top-5 healthy, top-5 slow, and top-5 error transactions for Ajax pages, iFrames, Virtual pages, and base pages. As this may generate a large volume of data and increase the load on the eG database, set the Include Page Type Categorization in Top-N DD flag to Yes only if your eG database is sized sufficiently.

Maximum Healthy Transactions in DD

By default, this parameter is set to 5 indicating that the detailed diagnosis of this test will collect and display metrics related to the top-5 normal/healthy page views, in terms of the Average page load time, for each measurement period in the timeline chosen for detailed diagnosis. To identify the top 5, the eG agent sorts all healthy transactions in a measurement period in the ascending order of their Average page load time, and picks the first 5 transactions from this sorted list for display in the Detailed Diagnosis page and for storing in the eG database.

You can however, increase or decrease this value depending upon how many healthy transactions you want to see in your detailed diagnosis, and how well-tuned or well-sized the eG database is.   

If you do not want the detailed diagnosis to include any healthy transaction, set the value of this parameter to 0. To view all healthy transactions, set the value of this parameter to all. Before setting the value of this parameter to all, make sure that you have a well-sized eG database in place, as this setting will store details of every transaction that registers an Avg page load time value that is lower than the Slow Transaction Cutoff specification. On a good day, this can result in numerous transactions, and can consume considerable space in the eG database.

Maximum Slow Transactions in DD

By default, this parameter is set to 5, indicating that the detailed diagnosis of this test will display metrics related to the top-5 slow transactions, in terms of the Average page load time, for each measurement period in the timeline chosen for detailed diagnosis. To identify the top 5, the eG agent sorts all slow transactions in a measurement period in the descending order of their Average page load time, and picks the first 5 transactions from this sorted list for display in the Detailed Diagnosis page and for storing in the eG database.

You can however, increase or decrease this value depending upon how many slow transactions you want to see in your detailed diagnosis, and how well-tuned or well-sized the eG database is.   

If you do not want the detailed diagnosis to include any slow transaction, set the value of this parameter to 0. To view all slow transactions, set the value of this parameter to all. Before setting the value of this parameter to all, make sure that you have a well-sized eG database in place, as this setting will store details of every transaction that registers an Avg page load time value that is higher than the Slow Transaction Cutoff specification. On a bad day, this can result in numerous transactions, and can consume considerable space in the eG database.

Maximum Error Transactions in DD

By default, this parameter is set to 5, indicating that the detailed diagnosis of this test will display metrics related to the top-5 transactions (per measurement period) that encountered JavaScript errors, based on when those errors occurred. To identify the top 5, the eG agent sorts all error transactions in a measurement period in the descending order of the date/time at which the errors occurred, and picks the first 5 transactions from this sorted list for display in the detailed diagnosis page and for storing in the eG database.

You can however, increase or decrease this value depending upon how many error transactions you want to see in your detailed diagnosis, and how well-tuned or well-sized the eG database is.

If you do not want the detailed diagnosis to include any error transaction, set the value of this parameter to 0. To view all error transactions, set the value of this parameter to all. Before setting the value of this parameter to all, make sure that you have a well-sized eG database in place, as this setting will store details of every transaction that encounters a JavaScript error. On a bad day, this can result in numerous transactions, and can consume considerable space in the eG database.

Slow Transaction Cutoff (Ms)

This test reports the count of slow page views and also pinpoints the pages that are slow. To determine whether or not a page is slow, this test uses the Slow Transaction Cutoff parameter. By default, this parameter is set to 4000 milliseconds (i.e., 4 seconds). This means that, if a page takes more than 4 seconds to load, this test will consider that page a slow page by default. You can increase or decrease this slow transaction cutoff according to what is 'slow' and what is 'normal' in your environment.

Note:

The default value of this parameter is the same as the default Maximum threshold setting of the Avg page load time measure – i.e., both are set to 4000 milliseconds by default. While the former helps eG distinguish between slow and healthy page views for the purpose of providing detailed diagnosis, the latter tells eG when to generate an alarm on Avg page load time. For best results, it is recommended that both these settings are configured with the same value at all times. Therefore, if you change the value of one of these configurations, make sure you update the value of the other as well. For instance, if the Slow Transaction Cutoff is changed to 6000 milliseconds, change the Maximum Threshold of the Avg page load time measure to 6000 milliseconds as well.

Tolerating Cutoff

This test reports the count of tolerating page views and also pinpoints the pages with tolerating page views. To determine whether a page view is a tolerating page view, this test uses the Tolerating Cutoff parameter. By default, this parameter is set to 4 times the default value of the Slow Transaction Cutoff parameter. Since the default Slow Transaction Cutoff is 4000 milliseconds, the Tolerating Cutoff is set to 16000 milliseconds (4 * 4000) – i.e., 16 seconds – by default. This means that, if a page takes anywhere between 4 and 16 seconds to load, this test will consider that page a tolerating page view by default. You can increase or decrease this Tolerating Cutoff according to what is 'tolerating', what is 'slow', and what is 'normal' in your environment.
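Taken together, the two cutoffs bucket page views the way an Apdex calculation does. The following Python sketch uses the default values; the function name and thresholds are illustrative, not eG's actual implementation:

```python
SLOW_CUTOFF_MS = 4000         # default Slow Transaction Cutoff
TOLERATING_CUTOFF_MS = 16000  # default Tolerating Cutoff (4 x 4000)

def classify_page_view(load_time_ms):
    # Satisfied: within the slow cutoff; tolerating: between the two
    # cutoffs; frustrated: beyond the tolerating cutoff.
    if load_time_ms <= SLOW_CUTOFF_MS:
        return "satisfied"
    if load_time_ms <= TOLERATING_CUTOFF_MS:
        return "tolerating"
    return "frustrated"

classify_page_view(2500)   # "satisfied"
classify_page_view(9000)   # "tolerating"
classify_page_view(20000)  # "frustrated"
```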

Page Types to be Included in Dashboard

By default, the eG RUM Dashboard displays details of base page views only. You can optionally include Ajax, Virtual Page, and/or iFrame views as well in the dashboard, by selecting the relevant options from this list box.

Send Zero Values when there is No Traffic

By default, this flag is set to No. This implies that, if there is no traffic to a monitored web site/web application – i.e., if all measures of this test return only the value 0 - then the eG agent will not report these metrics to the eG manager. This also means that, by default, users to the eG monitoring console will not know that there is no traffic to the web site/web application.

You can however, ensure that users to the eG monitoring console are informed of the absence of any user activity on the web site/web application. For this, set this flag to Yes. If this is done, then the eG agent will report all the metrics of this test to the eG manager, despite the fact that their value is 0. These zero values will clearly indicate to users that there is no traffic to the monitored web site/web application.

Mask Public IP

Many high-security environments consider public IP addresses as 'classified information', as in the wrong hands, such information can cause serious damage to data security and integrity. This is why, by default, eG Enterprise hides/masks the last octet of public IP addresses displayed in detailed diagnosis using the 'x' character. Accordingly, the Mask Public IP flag of this test is set to Yes by default. If you want, you can 'unmask' the last octet of public IP addresses, so that the entire IP address is visible in clear text in the detailed diagnostics. For this, set the Mask Public IP flag of this test to No.

Mask Private IP

Many high-security environments consider private IP addresses as 'classified information', as in the wrong hands, such information can cause serious damage to data security and integrity. This is why, by default, eG Enterprise hides/masks the last octet of private IP addresses displayed in detailed diagnosis using the 'x' character. Accordingly, the Mask Private IP flag of this test is set to Yes by default. If you want, you can 'unmask' the last octet of private IP addresses, so that the entire private IP address is visible in clear text in the detailed diagnostics. For this, set the Mask Private IP flag of this test to No.

IP Masking Character

This parameter is applicable only if the Mask Public IP and/or Mask Private IP flags are set to Yes.

By default, this parameter is set to 'x'. This means that, by default, the last octet of private and public IP addresses in detailed diagnosis is masked using the 'x' character. You can override this default value by specifying any other character that you may want to use as a masking character of IP addresses - e.g., *, ? etc.
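The effect of the masking character on an address can be sketched in Python. This is illustrative only – how eG actually renders the masked octet (e.g., how many masking characters it prints) may differ:

```python
def mask_ip(ip, mask_char="x"):
    # Replace every digit of the last octet with the masking character
    octets = ip.split(".")
    octets[-1] = mask_char * len(octets[-1])
    return ".".join(octets)

mask_ip("203.0.113.42")       # "203.0.113.xx"
mask_ip("203.0.113.42", "*")  # "203.0.113.**"
```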

Mask URL Params

Sometimes, sensitive information - e.g., passwords - may be transmitted in 'clear text' as values of certain URL parameters. To make sure that miscreants have no access to such confidential information, eG Enterprise, by default, uses the * (asterisk) character to hide/mask all parameter values in the URLs displayed in detailed diagnosis. This is why, the Mask URL Params flag is set to Yes by default. If you want, you can unmask all URL parameter values in the detailed diagnosis of this test by setting this flag to No.
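The kind of masking described here – parameter names stay visible, values are hidden – can be sketched in Python. The helper is hypothetical and only approximates the behavior; eG's actual masking implementation is not documented:

```python
from urllib.parse import urlsplit, parse_qsl, urlunsplit

def mask_url_params(url, mask="*"):
    # Replace every query-parameter value with the masking character,
    # keeping the parameter names visible
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query, keep_blank_values=True)
    masked = "&".join(f"{key}={mask}" for key, _ in pairs)
    return urlunsplit(parts._replace(query=masked))

mask_url_params("http://www.eazykart.com/login.jsp?user=jdoe&pwd=secret")
# "http://www.eazykart.com/login.jsp?user=*&pwd=*"
```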

DD Frequency

Refers to the frequency with which detailed diagnosis measures are to be generated for this test. The default is 1:1. This indicates that, by default, detailed measures will be generated every time this test runs, and also every time the test detects a problem. You can modify this frequency, if you so desire. Also, if you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD frequency.

Detailed Diagnosis

To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, click on the Off option.

The option to selectively enable/disable the detailed diagnosis capability will be available only if the following conditions are fulfilled:

  • The eG manager license should allow the detailed diagnosis capability
  • Both the normal and abnormal frequencies configured for the detailed diagnosis measures should not be 0.
Measurements made by the test
Measurement	Measurement Unit	Interpretation

Page views

Indicates the total number of times pages of this type were viewed by users to the web site/web application.

Number

This is a good measure of the traffic to a specific page type.

Sudden but significant spikes in the page view count could be a cause for concern, as they could be owing to a malicious virus attack or an unscrupulous attempt to hack your web site/web application.

Note:

An abnormally high value for this measure may not always be a cause for concern; nor would it always indicate a genuine increase in traffic to the web site/web application. 

If the eG agent monitoring the Real User Monitor component is stopped for some time (say, for maintenance purposes) and then started, or if the eG agent-collector connection breaks and is restored after a while, the eG agent will pull all the metrics that the collector stored locally during the period of its absence, aggregate them, and then display the aggregated values in the eG monitoring console as metrics that pertain to the current measurement period. In reality, these metrics pertain to the entire time period when the eG agent was unavailable. Because of this, the Page views measure may indicate a sudden and significant surge in traffic.

Apdex score

Indicates the apdex score of the web site/web application based on the experience of users to this page type.

Number

Apdex (Application Performance Index) is an open standard developed by an alliance of companies. It defines a standard method for reporting and comparing the performance of software applications in computing. Its purpose is to convert measurements into insights about user satisfaction, by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations.

The Apdex method converts many measurements into one number on a uniform scale of 0-to-1 (0 = no users satisfied, 1 = all users satisfied). The resulting Apdex score is a numerical measure of user satisfaction with the performance of enterprise applications. This metric can be used to report on any source of end-user performance measurements for which a performance objective has been defined.

The Apdex formula is:

Apdext = [Satisfied Count + (Tolerating Count / 2)] / Total Samples

This is nothing but the number of satisfied samples plus half of the tolerating samples plus none of the frustrated samples, divided by all the samples.

A score of 1.0 means all responses were satisfactory. A score of 0.0 means none of the responses were satisfactory. Tolerating responses half satisfy a user. For example, if all responses are tolerating, then the Apdex score would be 0.50.

Ideally therefore, the value of this measure should be 1.0. A value less than 1.0 indicates that the experience of users to this page type has been less than satisfactory.  

Average page load time

Indicates the average time taken by the pages of this type to load completely on the browser.

ms

This is the average interval between the time that a user initiates a request and the completion of the page load of the response in the user's browser. In the context of an Ajax request, it ends when the response has been completely processed.

By comparing the value of this measure across page types, you will be able to tell if the page load time is significantly higher for any one type of page – this could be the page type that is causing the slowness.

You may want to compare the values of the Average browser initial request time, Average network time, Average server time, Average content download time, and Average browser render time measures for that page type, to know exactly what is causing pages of that type to load slowly.

If the Average browser initial request time is the highest, it can be attributed to a slowdown in redirection, AppCaching, or because the request spent too much time waiting on the browser. If the Average network time is the highest, it denotes that the network is the problem source. This in turn can be caused by TCP connection latencies, delays in domain look up, and slowdown in SSL handshake. If the Average content download time measure reports an abnormally high value, then it could be owing to slowness in downloading the HTML document or in building the DOM. On the other hand, if the Average server time measure registers the highest value, it indicates that the problem lies with the web site/web application backend – i.e., the web/web application server that is hosting the web site/web application being monitored.

To know which pages of the type are slow, use the detailed diagnosis of this measure.   

Unique user session

Indicates the number of distinct users who are currently accessing pages of this type in the web site/web application.

Number

HttpOnly is a flag that can be set on cookies to prevent their contents from being accessed by JavaScript. This is often done for session cookies to hide the session identifier, as a security measure.

If the HttpOnly flag is set for the session cookies of the monitored web site, then the eG RUM JavaScript will not be able to read the session identifier from the cookies. In such cases therefore, the eG agent will not be able to report the count of unique sessions.

Page views per minute

Indicates the number of times the pages of this type were viewed per minute.

Number

An unusually high value for this measure may require investigation.

Average page unload time

Indicates the average time taken for an unload event to complete.

Milliseconds

The unload event occurs when the user navigates away from the page – for instance, when a link to leave the page is clicked or a new URL is typed in the address bar.

If an unload event takes too long to complete, it can adversely impact page load time. This in turn can mar user experience with the web site/application. This is why a high value for this measure is a cause for concern.

Average page processing time

Indicates the elapsed time between DOM loading and the start of the page load event.

Milliseconds

A high value of this measure can adversely impact page load time.

Average page onload event duration

Indicates the average duration of actions triggered by the OnLoad event to complete.

Milliseconds

The onload event occurs when an object has been loaded. onload is most often used within the <body> element to execute a script once a web page has completely loaded all content (including images, script files, CSS files, etc.).

Sometimes, actions triggered via the OnLoad event or any other child events can take a while to complete execution.

Such delays can slow down page loading, thereby delivering a sub-par experience to users. Ideally therefore, the value of these measures should be low.

Average children event load time

Indicates the average duration of actions triggered by child events.

Milliseconds

Average first paint time

Indicates the average time between navigation and when the browser first renders pixels to the screen, rendering anything that is visually different from the default background color of the body.

Milliseconds

If first paint occurs late, users see no visual change while the page loads, and may perceive the page as unresponsive.

If first paint or first contentful paint is high, recommended best practices include improving time to first byte from the server, eliminating render-blocking resources, avoiding script-based elements above the fold, avoiding lazy loading of above-the-fold images, and optimizing DOM size.

Average first contentful paint time

Indicates when the browser first rendered any text, image (including background images), video, canvas that had been drawn into, or non-empty SVG.

Milliseconds

Slow contentful paint will affect the perception of the user regarding page speed. A slow first contentful paint will cause users to think that the web page is slow, even if it loads in a short time thereafter.

If first paint or first contentful paint is high, recommended best practices include improving time to first byte from the server, eliminating render-blocking resources, avoiding script-based elements above the fold, avoiding lazy loading of above-the-fold images, and optimizing DOM size.

Normal page view percentage

Indicates the percentage of page views of this type that delivered a satisfactory experience to users.

Percent

The value of this measure indicates the percentage of page views of this type in which users have neither experienced any slowness, nor encountered any JavaScript errors.

Ideally, the value of this measure should be 100%. A value that is slightly less than 100% indicates that the user experience with pages of this type has not been up to the mark. A value less than 50% is indicative of a serious problem, where most of the page views of this type are either slow or have encountered JavaScript errors. Under such circumstances, to know what exactly is affecting the experience of users to this page type, compare the value of the Slow page view percentage with that of the JavaScript error view percentage for that page type. This will reveal the reason for the poor user experience – slow pages? or JavaScript errors?

If slow pages are the problem, use the detailed diagnosis of the Slow page views measure to know which pages of that type are slow and where these pages are losing time.

If JavaScript errors are the problem, use the detailed diagnosis of the JavaScript error view percentage measure to know what errors occurred in which pages of the type. 

Slow page view percentage

Indicates the percentage of page views of this type that are slow in loading.

Percent

Ideally, the value of this measure should be 0. A value over 50% implies that you are in a spot of bother, with over half of the page views being slow. Use the detailed diagnosis of the Slow page views measure to identify the slow pages and isolate the root-cause of the slowness – is it the front end? the network? or the backend?

JavaScript error view percentage

Indicates the percentage of page views of this type that have encountered JavaScript errors.

Percent

Ideally, the value of this measure should be 0. A value over 50% implies that you are in a spot of bother, with over half of the page views of this type experiencing JavaScript errors.

Slow page views (Tolerating & Frustrated)

Indicates the number of page views of this type that were slow.

Number

A page view is considered to be slow when the average time taken to load that page exceeds the Slow Transaction Cutoff configured for this test.

Ideally, a page should load quickly. The value 0 is hence desired for this measure. If the value of this measure is high, it indicates that users frequently experienced slowness when accessing pages of this type. To know which page views of this type are slow and why, use the detailed diagnosis of this measure.

JavaScript error page views

Indicates the number of times JavaScript errors occurred when viewing pages of this type.

Number

Ideally, the value of this measure should be 0. A high value indicates that many JavaScript errors are occurring when viewing pages of this type. Use the detailed diagnosis of this measure to identify the error pages and to know which JavaScript error has occurred in which page. This will greatly aid troubleshooting!

Satisfied page views

Indicates the number of times pages of this type were viewed without any slowness. 

Number

A page view is considered to be slow when the average time taken to load that page exceeds the Slow Transaction Cutoff configured for this test. If this Slow Transaction Cutoff is not exceeded, then the page view is deemed to be ‘satisfactory’. To know which page views are satisfactory, use the detailed diagnosis of this measure.

Ideally, the value of this measure should be the same as that of the Page views measure. If not, then it indicates that one/more page views are slow – i.e., have violated the Slow Transaction Cutoff.

If the value of this measure is much lower than the value of the Tolerating page views and the Frustrated page views measures, it is a clear indicator that the user experience with this page type has been below par. In such a case, use the detailed diagnosis of the Tolerating page views and Frustrated page views measures to know which pages are slow and why.

Tolerating page views

Indicates the number of tolerating page views of this type.

Number

If the Average page load time of a page exceeds the Tolerating Cutoff configuration of this test, then such a page view is considered to be a Tolerating page view.

Ideally, the value of this measure should be 0. A value higher than that of the Satisfied page views measure is a cause for concern, as it implies that the overall user experience with this page type is less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure. The detailed metrics will also enable you to accurately isolate what is causing the tolerating page views – a problem with the front end? network? or backend?

Frustrated page views

Indicates the number of frustrated page views of this type.

Number

If the Average page load time of a page is over 4 times the Slow Transaction Cutoff configuration of this test (i.e., > 4 * Slow Transaction Cutoff ), then such a page view is considered to be a Frustrated page view.

Ideally, the value of this measure should be 0. A value higher than that of the Satisfied page views measure is a cause for concern, as it implies that the experience of users to this page type has been less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure. The detailed metrics will also enable you to accurately isolate what is causing the frustrated page views – a problem with the front end? network? or backend?
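The Satisfied/Tolerating/Frustrated classification described above can be sketched as follows. This is a minimal illustration, not eG's actual implementation: the cutoff values are invented, and treating the Tolerating Cutoff and the Slow Transaction Cutoff as carrying the same value is our assumption – in eG, both come from the test configuration.

```javascript
// Illustrative cutoffs (ms); in eG these come from the test configuration.
const TOLERATING_CUTOFF = 4000;
const SLOW_TRANSACTION_CUTOFF = 4000;

// A view is Frustrated when load time exceeds 4x the Slow Transaction
// Cutoff, Tolerating when it exceeds the Tolerating Cutoff, and
// Satisfied otherwise.
function classifyPageView(loadTimeMs) {
  if (loadTimeMs > 4 * SLOW_TRANSACTION_CUTOFF) return 'Frustrated';
  if (loadTimeMs > TOLERATING_CUTOFF) return 'Tolerating';
  return 'Satisfied';
}

console.log(classifyPageView(2500));  // 'Satisfied'
console.log(classifyPageView(6000));  // 'Tolerating'
console.log(classifyPageView(20000)); // 'Frustrated'
```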

Average browser initial request time

Indicates the average time spent by requests to pages of this type in following redirects, being processed in the AppCache, and waiting on the browser, before the domain lookup begins.

ms

In Figure 1, the Average browser initial request time is the time spent between the navigationStart event and the domainLookupStart event. This also includes the time spent by the browser waiting for one event to end and the next to begin. In short, this measure is the sum of the Average redirection time, Average AppCache time, and Average browser wait time measures. This means that if a request takes too long to follow a redirection, or if the AppCache takes too long to process the request, or if the request spends too much time waiting on the browser for the previous request to complete, then the value of this measure will significantly increase. This in turn will impact user experience, and consequently, the Apdex score.

This is why, if this measure reports an abnormal value for a page type, it is important that you compare the values of the Average redirection time, Average AppCache time, and Average browser wait time measures for the same page type, to figure out where the request spent the most time – in redirection? in the AppCache? or when waiting on the browser?
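One way to derive this breakdown from Navigation Timing time stamps is sketched below. The time stamp values are illustrative, and treating fetchStart as the boundary between browser wait and AppCache processing is our assumption about the decomposition described above:

```javascript
// Illustrative Navigation Timing time stamps (ms since navigationStart).
const t = {
  navigationStart: 0,
  redirectStart: 0,
  redirectEnd: 120,
  fetchStart: 150,        // assumed: AppCache check begins here
  domainLookupStart: 180, // DNS lookup begins; AppCache check is done
};

const redirectTime = t.redirectEnd - t.redirectStart;    // 120 ms
const browserWaitTime = t.fetchStart - t.redirectEnd;    //  30 ms
const appCacheTime = t.domainLookupStart - t.fetchStart; //  30 ms
const initialRequestTime = t.domainLookupStart - t.navigationStart;

// The three components sum to the browser initial request time.
console.log(initialRequestTime ===
  redirectTime + browserWaitTime + appCacheTime); // true
```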

Average server time

Indicates the interval between the start of processing of a request on the browser for this page type and when the response is received.

ms

In Figure 1, the Average server time is the time spent between the requestStart event and responseStart event.

Ideally, a low value is desired for this measure, as high values will certainly hurt the Apdex score of the web site/web application.

The key factor that can influence the value of this measure is the request processing ability of the web server/web application server that is hosting the web site/web application being monitored.

Any slowdown in the backend web server/web application server – caused by inadequate processing power or improper configuration of the backend server – can significantly delay request processing by the server. As a result, the Average server time will increase, leaving users with an unsatisfactory experience with the web site/web application.

Note:

This test uses the Navigation Timing API to measure web site performance. The Navigation Timing API typically exposes several properties that offer information about the time at which different page load events happen (see Figure 1) - eg., the requestStart event, the responseStart event, etc. This test uses the time stamps provided by the Navigation Timing API to compute and report the duration of page load events, so you can accurately identify where page loading is bottlenecked.

The Navigation Timing API on Internet Explorer (IE) v11 reports an incorrect time stamp for the requestStart event of the page loading process (see Figure 1). As a result, for page view requests initiated from IE 11 browsers alone, eG RUM will report incorrect values for this measure.

This issue was noticed in IE 11 in April 2019. It is recommended that you track hot fixes/patches released by Microsoft post April 2019, study the release notes of such fixes/patches, and determine if this bug has been fixed in any. If so, then you are advised to apply that fix/patch on IE 11 to resolve the issue.

Until then, we recommend that you use the following workaround to accurately measure the Average server time of a page view request.

  • Deploy eG BTM (Java/.NET, as the case may be) on the backend server hosting the target web site/web application.
  • Use eG BTM to trace the path of transactions to the target web site/web application and enable the (Java or .NET) Business Transactions test to capture metrics on transaction performance.
  • Next, use the detailed diagnostics reported by eG RUM to identify the page view requests coming from the IE 11 browser. Make a note of the values in the Request time, URL and Query Params columns of detailed diagnosis.
  • Then, search the detailed diagnostics of the (Java or .NET) Business Transactions test for transactions with the same URL, Request time, and Query Params as reported by eG RUM.
  • The response time that eG BTM reports for each of these transactions is the Average server time of those transactions.

Note that this workaround applies only for those transaction URLs that are captured and reported as part of detailed diagnostics.

Average network time

Indicates the elapsed time between when a user initiates a request to pages of this type and the start of fetching the response document.

ms

In Figure 1, the time spent between navigationStart and requestStart makes up the Average network time. This includes the time to perform DNS lookups, the time to establish a TCP connection with the server, and the time to perform an SSL handshake. In other words, the value of this measure is the sum of the Average DNS lookup time, Average TCP connection time, and Average SSL handshake time measures.

Ideally, the value of this measure should be low. A very high value will often end up delaying page loading and degrading the quality of the web site service. If the network time is high therefore, simply compare the values of the Average DNS lookup time, Average TCP connection time, and Average SSL handshake time measures to know what this delay can be attributed to – a delay in domain name resolution? a poor network connection to the server? or slowness in SSL negotiations between the browser and the server?
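The three components can be derived from Navigation Timing time stamps as sketched below. The values are illustrative; splitting the connectStart-to-connectEnd interval at secureConnectionStart into TCP connection time and SSL handshake time is the conventional interpretation, assumed here:

```javascript
// Illustrative Navigation Timing time stamps (ms since navigationStart).
const t = {
  domainLookupStart: 180,
  domainLookupEnd: 220,       // DNS lookup done
  connectStart: 220,
  secureConnectionStart: 260, // TCP connected; SSL negotiation begins
  connectEnd: 320,            // SSL handshake done
};

const dnsLookupTime = t.domainLookupEnd - t.domainLookupStart;      // 40 ms
const tcpConnectionTime = t.secureConnectionStart - t.connectStart; // 40 ms
const sslHandshakeTime = t.connectEnd - t.secureConnectionStart;    // 60 ms

const networkTime = dnsLookupTime + tcpConnectionTime + sslHandshakeTime;
console.log(networkTime); // 140 (ms)
```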

Note:

This test uses the Navigation Timing API to measure web site performance. The Navigation Timing API typically exposes several properties that offer information about the time at which different page load events happen (see Figure 1) - eg., the requestStart event, the responseStart event, etc. This test uses the time stamps provided by the Navigation Timing API to compute and report the duration of page load events, so you can accurately identify where page loading is bottlenecked.

The Navigation Timing API on Internet Explorer (IE) v11 reports an incorrect time stamp for the requestStart event of the page loading process (see Figure 1). As a result, for page view requests initiated from IE 11 browsers alone, eG RUM will report incorrect values for this measure.

This issue was noticed in IE 11 in April 2019. It is recommended that you track hot fixes/patches released by Microsoft post April 2019, study the release notes of such fixes/patches, and determine if this bug has been fixed in any. If so, then you are advised to apply that fix/patch on IE 11 to resolve the issue.

Average content download time

Indicates the time to make the complete HTML document (DOM) available for JavaScript to apply rendering logic on the pages of this type.

ms

This is the time spent between the responseStart event (shown in Figure 1) and the domContentLoadedEventStart event (not shown in Figure 1). The domContentLoadedEventStart event is typically fired just before the domContentLoaded event, which occurs just after the browser has finished downloading and parsing all the scripts that had the defer attribute set and no async attribute.

In summary, the Average content download time measure is the sum of the values of the Average DOM download time and Average DOM processing time measures.

This means that an abnormal increase in any of the above-mentioned time values will increase the value of this measure.

If content downloading takes unusually long for a request, then you must compare the values of the Average DOM download time and Average DOM processing time measures to figure out what is causing the delay - is it because of the poor responsiveness of server, cache, or local resource? or is it because DOM processing took too long?
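The composition described above can be sketched from Navigation Timing time stamps as follows. The values are illustrative, and the standard property name domContentLoadedEventStart is used for the event marking the end of DOM processing:

```javascript
// Illustrative Navigation Timing time stamps (ms since navigationStart).
const t = {
  responseStart: 400,
  responseEnd: 650,                // document fully downloaded
  domContentLoadedEventStart: 900, // DOM built, deferred scripts parsed
};

const domDownloadTime = t.responseEnd - t.responseStart;                // 250 ms
const domProcessingTime = t.domContentLoadedEventStart - t.responseEnd; // 250 ms

// Content download time is the sum of the two components.
const contentDownloadTime = domDownloadTime + domProcessingTime;
console.log(contentDownloadTime); // 500 (ms)
```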

Average browser render time

Indicates the time taken to complete the download of remaining resources, including images, and to finish rendering the pages of this type.

ms

A high value of this measure indicates that page rendering is taking too long. This can be attributed to a suboptimal HTML document architecture, complex CSS (e.g., deeply nested rules, slow selectors, complicated effects such as rounded borders, gradients, etc.), and large images.

If the Average page load time measure reports an abnormally high value, then you may want to compare the value of this measure with that of the Average browser initial request time, Average network time, Average server time, and Average content download time measures to nail the exact source of the bottleneck.

Average DOM download time

Indicates the time taken by the browser to download the complete HTML document content of the pages of this type.

ms

The value of this measure is the time that elapsed between the responseStart and responseEnd events in Figure 1.

The higher the download time of the document, the longer it takes to make the document available for page rendering. As a result, the overall user experience with the web site/web application will be affected! This is why a low value is desired for this measure at all times.

Average DOM processing time

Indicates the time taken by the browser to build the Document Object Model (DOM) for the pages of this type and make it available for JavaScript to apply rendering logic.

ms

An unusually high value for this measure is a clear indicator that DOM building is taking longer than normal. Consequently, content download will be delayed, thus adversely impacting user experience with the web site/web application. Ideally therefore, the value of this measure should be low.

Average browser wait time

Indicates the time that requests to this page type spent on the browser, waiting for another request to complete.

ms

This is the sum of the time between every two consecutive events, starting with the navigationStart event till the requestStart event in Figure 1.

Typically, web browsers limit the number of active connections for each domain. Most modern browsers (eg., Chrome) support only six simultaneous requests/connections. In this case therefore, when the seventh request comes in, that request waits on the browser until the six requests sent previously are processed. The waiting time of the seventh request is the browser wait time.
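The effect of this per-domain connection limit can be illustrated with a toy simulation: several equal-length requests arrive at once, and any request beyond the connection limit must wait for a slot to free up. The request count, duration, and limit below are illustrative:

```javascript
// Toy model: `requestCount` requests arrive at t=0, each taking
// `duration` ms, with at most `maxConnections` in flight per domain.
// Returns the browser wait time of each request, in arrival order.
function browserWaitTimes(requestCount, duration, maxConnections) {
  const waits = [];
  // When each connection slot next becomes free.
  const slotFreeAt = new Array(maxConnections).fill(0);
  for (let i = 0; i < requestCount; i++) {
    const earliest = Math.min(...slotFreeAt);
    waits.push(earliest); // time this request spends queued on the browser
    const slot = slotFreeAt.indexOf(earliest);
    slotFreeAt[slot] = earliest + duration;
  }
  return waits;
}

// Seven simultaneous 100 ms requests against a six-connection limit:
// the seventh waits a full 100 ms before it can even start.
console.log(browserWaitTimes(7, 100, 6)); // [0, 0, 0, 0, 0, 0, 100]
```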

High browser wait time can prolong the browser's initial request time, thus adversely impacting the overall responsiveness of the web site/application. This is why, if the Average browser initial request time measure reports an abnormally high value, you will have to compare the values of the Average browser wait time, Average redirection time, and Average AppCache time measures to determine whether/not the initial request delay observed on the browser is because requests have been waiting on the browser for too long.

Some of the means by which you can reduce browser waits are briefly discussed below:

  • Browsers such as Mozilla Firefox support up to 10 parallel requests. You may want to recommend such browsers for your web site/web application users, so that more requests are processed and fewer requests are queued on the browser, thus reducing browser wait time.
  • Web site/application developers can try domain sharding - i.e., split content across multiple domains. Typically, when a user connects to a web page, his or her browser scans the resulting HTML for resources to download. Normally these resources are supplied by a single domain - the domain providing the web page or a domain created specifically for resources. With domain sharding, the user’s browser connects to two or more different domains to simultaneously download the resources needed to render the web page. This allows the web site/application to be delivered faster to users, as they do not have to wait for the previous set of requests to end before beginning the next set.

Average redirection time

Indicates the time that requests to pages of this type spent in redirection before fetching the pages.

ms

In Figure 1, this is the elapsed time between the redirectStart and redirectEnd events.

URL redirection, also known as URL forwarding, is a technique to give a page, a form, or a whole web site/application, more than one URL address. Usually, web site administrators use URL redirection to:

  • Redirect users to the mobile version of the site
  • Redirect users to secured pages
  • Redirect users to the latest version of the resource/content
  • Redirect users to pages specific to their geo location
  • Redirect canonical URLs

Though redirects are useful, they have to be kept at a minimum, as each redirect on a page adds latency to the overall page load time. This is because, when a user enters a domain into the browser and hits enter, the DNS resolution process is triggered and the domain is resolved to its corresponding IP address in a few milliseconds. If the landing page has another redirect, then the browser repeats the entire DNS resolution process once again to guide the user to the correct web page. The multiple redirect requests are taxing on the browser resources and slow down the page load.

Web page load time is also affected by internal redirects; for example, if the page tries to load content from a URL that has been redirected to newer or updated content, then the browser must create additional requests to fetch the valid content. These redirects result in additional round trips between the browser and the web server which pushes the load time higher; the perceived performance is degraded every time the browser encounters a redundant redirect.

Web site/application performance is also impacted if redirects are not implemented correctly. Some of the common redirect errors are:

  • Multiple redirects: The higher the number of redirects on a page, the higher is its page load time.
  • Invalid redirects: There are often instances where the web site administrator sets up bulk redirects without verifying the validity of the redirects. The site may also have old redirects that were never cleaned up. This can cause several issues on the site like broken links and 404s.
  • Redirect loop: When there are several redirects on the page that are linked to each other, it creates a chain of redirects which may loop back to the same URL that initiated the redirect. This results in a redirect loop error and the user will not be able to access the site.
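Redirect chains can be checked for excessive length and loops before deployment. A sketch of such a check, where the chain is modelled as a map from source URL to target URL – the URLs are hypothetical:

```javascript
// Follow a redirect chain, counting hops and flagging loops.
function followRedirects(chain, startUrl) {
  const visited = new Set();
  let url = startUrl;
  let hops = 0;
  while (chain[url] !== undefined) {
    if (visited.has(url)) return { hops, loop: true };
    visited.add(url);
    url = chain[url];
    hops++;
  }
  return { hops, loop: false, finalUrl: url };
}

const chain = {
  'http://example.com/old': 'http://example.com/new',
  'http://example.com/new': 'https://example.com/new',
};
console.log(followRedirects(chain, 'http://example.com/old'));
// { hops: 2, loop: false, finalUrl: 'https://example.com/new' }

// A redirect loop: a -> b -> a
const looping = { 'http://a.example': 'http://b.example',
                  'http://b.example': 'http://a.example' };
console.log(followRedirects(looping, 'http://a.example').loop); // true
```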

Therefore, if you find that the value of the Average page load time measure is abnormally high owing to an unusually high value for the Average redirection time measure, then make sure you follow the best practices outlined below while implementing redirects, so you can significantly reduce page load time and improve user experience:

  • Avoid redundant redirects: It’s recommended to avoid redirects where possible and to use this method only when absolutely needed. This will cut down unnecessary overhead and improve the perceived performance of the page.
  • Avoid chained redirects: When a URL redirects to another URL, which in turn redirects again, this creates a redirect chain. Each URL added to the chain adds latency to the page. Chained redirects have a negative impact not only on page speed, but also on SEO.
  • Clean up redirects: You may have hundreds of redirects on your website, and these could be one of the main factors affecting page speed. Old redirects may conflict with new URLs, and stale backlinks can cause odd errors on the page. It is recommended to verify all the redirects you have set up and to remove the ones that are no longer needed. Retain the old links that have major referral traffic, while those that are rarely accessed can be removed. This exercise will help improve page speed significantly.

Average AppCache time

Indicates the time taken to check whether/not the requests to this page type can be serviced by the AppCache.

ms

HTML5 provides an application caching mechanism that lets web-based applications run offline. Developers can use the Application Cache (AppCache) interface to specify resources that the browser should cache and make available to offline users. Applications that are cached load and work correctly even if users click the refresh button when they are offline. Using an application cache gives an application the following benefits:

  • Offline browsing: users can navigate a site even when they are offline.
  • Speed: cached resources are local, and therefore load faster.
  • Reduced server load: the browser only downloads resources that have changed from the server.

To enable the application cache for an application, you must include the manifest attribute in the <html> element in your application's pages. The manifest attribute references a cache manifest file, which is a text file that lists the resources (files) that the browser should cache for your application. The browser does not cache pages that do not contain the manifest attribute, unless such pages are explicitly listed in the manifest file itself. You do not need to list all the pages you want cached in the manifest file; the browser implicitly adds every page that the user visits and that has the manifest attribute set to the application cache.
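A minimal sketch of such a setup – the file and resource names below are purely illustrative. The page declares <html manifest="offline.appcache">, and the referenced manifest file might look like this:

```
CACHE MANIFEST
# Change this comment (e.g., bump a version number) to force the
# browser to re-fetch every cached resource.

CACHE:
index.html
styles.css
app.js
logo.png

NETWORK:
*
```

The NETWORK: * entry allows all resources not listed under CACHE: to be fetched normally when the user is online.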

When the browser visits a document that includes the manifest attribute, if no application cache exists, the browser loads the document and then fetches all the entries listed in the manifest file, creating the first version of the application cache.

Subsequent visits to that document cause the browser to load the document and other assets specified in the manifest file from the application cache.

If the manifest file has changed, all the files listed in the manifest – as well as those already added to the cache – are fetched into a temporary cache.

Once all the files have been successfully retrieved, they are moved into the real offline cache automatically. Since the document has already been loaded into the browser from the cache, the updated document will not be rendered until the document is reloaded (either manually or programmatically).

A high value of this measure signifies that requests are spending too much time in the AppCache. This also introduces page loading latencies, which have an adverse effect on user-perceived performance of a web site/application. Common reasons for AppCaching issues and their practical solutions are detailed below:

  • If the media type is not set, then AppCache will not work. To avoid this, make sure that the manifest file is always served under the correct media type of text/cache-manifest.
  • If the manifest file is not served to the web browser from the same origin as the host page, the manifest file will fail to load. To avoid this, make sure that the manifest file is always served from the same origin as the host page. However, note that the manifest file can hold reference to resources to be cached from other domains.
  • The relative URLs that you mention in the manifest are relative to the manifest file, and not to the document where you reference the manifest file. If you make this error when the manifest and the reference are not in the same path, the resources will fail to load, and in turn the manifest file will not be loaded. This will stall AppCaching.
  • Any change made to the manifest file will cause the entire set of files to be downloaded again. Moreover, if a manifest file is added to an HTML file, it forces all resources to be downloaded synchronously as soon as the manifest file is downloaded. As a result, resources that may not yet be required, such as JavaScript or an image below the fold, will be downloaded at the start of the page. This can increase page load time significantly. The solution to this is to load the Application Cache from a simple HTML file loaded in an iframe. This not only avoids caching dynamic HTML, but also allows the Application Cache to be downloaded asynchronously after the page load has completed.

Average DNS lookup time

Indicates the time taken by requests to this page type to perform the domain lookup for connecting to the web site/web application.

ms

A high value for this measure will not only affect DNS lookup, but will also impact the Average network time and Average page load time of the web site/web application. This naturally will have a disastrous effect on user experience.

Average TCP connection time

Indicates the time taken by requests to this page type to establish a TCP connection with the server.

ms

A bad network connection between the browser client and the server can delay TCP connections to the server. As a result, the Average network time too will increase, thus impacting page load time and overall user experience with the web site/web application.

Average SSL handshake time

Indicates the time taken by requests to this page type to complete the SSL handshake.

ms

An SSL handshake happens when a browser makes a secure request for content, also known as an encrypted HTTPS connection. The user’s browser and server negotiate encrypted keys and certificates to establish a secure connection between each other. Because this SSL negotiation requires exchanges between the browser and your server, it increases the time spent by the request on the network (i.e., it adds to the value of the Average network time measure). This in turn increases page load time.

In fact, an SSL handshake, along with the DNS lookup and the TCP handshake, adds three round trips to the page load time.

A quick fix to this is to use HTTP/2. HTTP/2 can use caching to reduce SSL setup to only one round trip.