Total live sessions
|
Indicates the total number of sessions that are currently live.
|
Number
|
This is a good indicator of the current session load on the target web site/web application.
To know which users' sessions are currently live on the web site/web application, use the detailed diagnosis of this measure.
|
Average session duration
|
Indicates the average duration for which sessions were alive.
|
Mins
|
|
Average session apdex score
|
Indicates the average Apdex score across all sessions.
|
|
Apdex (Application Performance Index) is an open standard developed by an alliance of companies. It defines a standard method for reporting and comparing the performance of software applications in computing. Its purpose is to convert measurements into insights about user satisfaction, by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations.
The Apdex method converts many measurements into one number on a uniform scale of 0-to-1 (0 = no users satisfied, 1 = all users satisfied). The resulting Apdex score is a numerical measure of user satisfaction with the performance of enterprise applications. This metric can be used to report on any source of end-user performance measurements for which a performance objective has been defined.
The Apdex formula is:
Apdex_t = (Satisfied Count + (Tolerating Count / 2)) / Total Samples
In other words, it is the number of satisfied samples, plus half of the tolerating samples, plus none of the frustrated samples, divided by the total number of samples.
A score of 1.0 means all responses were satisfactory. A score of 0.0 means none of the responses were satisfactory. Tolerating responses half-satisfy a user. For example, if all responses are tolerating, then the Apdex score would be 0.50.
Ideally therefore, the value of this measure should be 1.0. A value less than 1.0 indicates that the user experience with the web site/web application has been less than satisfactory.
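For illustration, the calculation can be expressed in a few lines of code. The sketch below is a minimal TypeScript rendering of the formula above; the function name and the sample counts are hypothetical and are not part of eG RUM.

```typescript
// Minimal sketch of the Apdex formula described above.
// The inputs are hypothetical sample counts collected over a reporting period.
function apdexScore(satisfied: number, tolerating: number, frustrated: number): number {
  const totalSamples = satisfied + tolerating + frustrated;
  if (totalSamples === 0) {
    return 1; // no samples yet; treating this as "no dissatisfied users" is an assumption
  }
  // Satisfied samples count fully, tolerating samples count half,
  // frustrated samples count nothing.
  return (satisfied + tolerating / 2) / totalSamples;
}

// Example from the text: if all responses are tolerating, the score is 0.50.
console.log(apdexScore(0, 100, 0));   // 0.5
// All responses satisfactory -> 1.0; none satisfactory (all frustrated) -> 0.0.
console.log(apdexScore(100, 0, 0));   // 1
console.log(apdexScore(0, 0, 100));   // 0
```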
|
Healthy live sessions
|
Indicates the number of live sessions with a 'healthy' user experience - i.e., sessions that did not experience slowness or errors.
|
Number
|
|
Slow live sessions
|
Indicates the number of live sessions that are experiencing slowness in page loading.
|
Number
|
Ideally, the value of this measure should be 0. A high value is a cause for concern, as it indicates that many sessions are experiencing slow page loads.
|
Erroneous live sessions
|
Indicates the number of live sessions that have encountered errors.
|
Number
|
Ideally, the value of this measure should be 0. A high value implies that many sessions have encountered errors.
|
Poor user experience live sessions
|
Indicates the number of live sessions with sub-par user experience.
|
Number
|
Ideally, the value of this measure should be 0. A non-zero value implies that one/more users are experiencing slowness or errors when interacting with the target web site/web application.
|
Slow session percentage
|
Indicates the percentage of sessions that experienced slowness.
|
Percent
|
Ideally, the value of this measure should be 0. A value over 50% is a cause for concern, as it means that over half of the sessions are experiencing slowness. Use the detailed diagnosis of this measure to figure out which users' sessions are slow and isolate the root cause of the slowness – is it a slow frontend? bad network? or a malfunctioning backend?
|
Error session percentage
|
Indicates the percentage of sessions that encountered JavaScript errors.
|
Percent
|
Ideally, the value of this measure should be 0. A value over 50% implies that JavaScript errors are common in many sessions. Use the detailed diagnosis of this measure to figure out which user sessions are erroneous.
|
Average page unload time
|
Indicates the average time taken for an unload event to complete.
|
Milliseconds
|
The unload event occurs when the user navigates away from the page. It is triggered when a link to leave the page is clicked, or when a new URL is typed in the address bar.
If an unload event takes too long to complete, it can adversely impact page load time. This in turn can mar user experience with the web site/application. This is why a high value for this measure is a cause for concern.
|
Average page processing time
|
Indicates the elapsed time between DOM loading and the start of the page load event.
|
Milliseconds
|
A high value of this measure can adversely impact page load time.
|
Average page onload event duration
|
Indicates the average time taken for actions triggered by the onload event to complete.
|
Milliseconds
|
The onload event occurs when an object has been loaded. onload is most often used within the <body> element to execute a script once a web page has completely loaded all content (including images, script files, CSS files, etc.).
Sometimes, actions triggered via the onload event or any other child events can take a while to complete execution.
Such delays can slow down page loading, thereby delivering a sub-par experience to users. Ideally therefore, the value of these measures should be low.
|
Average children event load time
|
Indicates the average duration of actions triggered by child events.
|
Milliseconds
|
|
Average first paint time
|
Indicates the average time between navigation and when the browser first renders pixels to the screen, rendering anything that is visually different from the default background color of the body.
|
Milliseconds
|
If first paint is slow or late, the user perceives no visual change for a long time while the web page loads.
If first paint or first contentful paint is high, recommended best practices include improving time to first byte from the server, eliminating render-blocking resources, avoiding script-based elements above the fold, avoiding lazy loading of above-the-fold images, and optimizing DOM size.
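For reference, browsers that implement the Paint Timing API expose both of these timestamps to page scripts. The sketch below is one way to read them yourself in a supporting browser (an illustrative assumption; it is not the eG RUM collection code).

```typescript
// Minimal sketch: read first paint and first contentful paint from the
// Paint Timing API. Timestamps are in milliseconds relative to navigation start.
const paintEntries = performance.getEntriesByType("paint");
for (const entry of paintEntries) {
  // entry.name is "first-paint" or "first-contentful-paint".
  console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
}
```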
|
Average first contentful paint time
|
Indicates when the browser first rendered any text, image (including background images), video, canvas that had been drawn into, or non-empty SVG.
|
Milliseconds
|
Slow contentful paint will affect the perception of the user regarding page speed. A slow first contentful paint will cause users to think that the web page is slow, even if it loads in a short time thereafter.
If first paint or first contentful paint is high, recommended best practices include improving time to first byte from the server, eliminating render-blocking resources, avoiding script-based elements above the fold, avoiding lazy loading of above-the-fold images, and optimizing DOM size.
|
Unique user session
|
Indicates the number of distinct users who are currently accessing this web site/web application.
|
Number
|
HttpOnly is a flag that can be set on cookies to prevent their contents from being accessed by JavaScript. This is often done for session cookies to hide the session identifier, as a security measure.
If the HttpOnly flag is set for the session cookies of the monitored web site, then the eG RUM JavaScript will not be able to read the session identifier from the cookies. In such cases therefore, the eG agent will not be able to report the count of unique sessions.
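To illustrate the point, the minimal sketch below shows that page JavaScript reads cookies through document.cookie, and cookies flagged HttpOnly simply do not appear in that string. The cookie name JSESSIONID is hypothetical, and this is not the eG RUM collection code.

```typescript
// Minimal sketch: scripts in the page can only see cookies set WITHOUT HttpOnly.
// If the session cookie (hypothetically named JSESSIONID) is marked HttpOnly,
// it is absent here, which is why the session identifier cannot be read.
const visibleCookies = document.cookie;              // e.g. "theme=dark; lang=en"
console.log(visibleCookies.includes("JSESSIONID"));  // false when the cookie is HttpOnly
```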
|
Normal page view percentage
|
Indicates the percentage of page views with a normal user experience.
|
Percent
|
The value of this measure indicates the percentage of page views in which users have neither experienced any slowness, nor encountered any JavaScript errors.
Ideally, the value of this measure should be 100%. A value less than 100% indicates the existence of one/more slow/error-prone pages in the web site. A value less than 50% is indicative of a serious problem, where most of the page views are either slow or have encountered JavaScript errors. Under such circumstances, to know what exactly is affecting user experience, compare the value of the Slow page view percentage with that of the JavaScript error view percentage. This will reveal the reason for the poor user experience with the web site/web application – slow pages? or JavaScript errors?
|
Slow page view percentage
|
Indicates the percentage of page views that are slow in loading.
|
Percent
|
Ideally, the value of this measure should be 0. A value over 50% is a cause for concern, as it means that over half of the page views are slow. Use the detailed diagnosis of the Slow page views measure to identify the slow pages and isolate the root cause of the slowness – is it a slow frontend? bad network? or a malfunctioning backend?
|
JavaScript error view percentage
|
Indicates the percentage of page views that have encountered JavaScript errors.
|
Percent
|
Ideally, the value of this measure should be 0. A value over 50% is a cause for concern, as it means that over half of the page views are encountering JavaScript errors.
|
Slow page views (Tolerating & Frustrated)
|
Indicates the number of times pages in this web site/web application took too long to load.
|
Number
|
A page view is considered to be slow when the average time taken to load that page exceeds the Slow Transaction Cutoff configured for this test.
Ideally, a page should load quickly. The value 0 is hence desired for this measure. If the value of this measure is high, it indicates that users frequently experienced slowness when accessing pages in the web site/web application. To know which page views are slow and why, use the detailed diagnosis of this measure.
|
JavaScript error page views
|
Indicates the number of times JavaScript errors occurred when viewing the pages in this web site/web application.
|
Number
|
Ideally, the value of this measure should be 0. A high value indicates that many JavaScript errors are occurring when viewing pages in the web site/web application. Use the detailed diagnosis of this measure to identify the error pages and to know which JavaScript error occurred in which page. This will greatly aid troubleshooting!
|
Satisfied page views
|
Indicates the number of times pages were viewed in the web site without any slowness.
|
Number
|
A page view is considered to be slow when the average time taken to load that page exceeds the Slow Transaction Cutoff configured for this test. If this Slow Transaction Cutoff is not exceeded, then the page view is deemed to be ‘satisfactory’. To know which page views are satisfactory, use the detailed diagnosis of this measure.
Ideally, the value of this measure should be the same as that of the Page views measure. If not, then it indicates that one/more page views are slow – i.e., have violated the Slow Transaction Cutoff.
If the value of this measure is much lower than the value of the Tolerating page views and the Frustrated page views, it is a clear indicator that web site performance is below par. In such a case, use the detailed diagnosis of the Tolerating page views and Frustrated page views measures to know which pages are slow and why.
|
Tolerating page views
|
Indicates the number of tolerating page views to the web site/web application.
|
Number
|
If the Average page load time of a page exceeds the Tolerating Cutoff configuration of this test, then such a page view is considered to be a Tolerating page view.
Ideally, the value of this measure should be 0. A value higher than that of the Satisfied page views measure is a cause for concern, as it implies that the overall user experience with the pages in the web site is less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure. The detailed metrics will also enable you to accurately isolate what is causing the tolerating page views – a problem with the frontend? network? or backend?
|
Frustrated page views
|
Indicates the number of frustrated page views to this web site/web application.
|
Number
|
If the Average page load time of a page is over 4 times the Slow Transaction Cutoff configuration of this test (i.e., > 4 * Slow Transaction Cutoff), then such a page view is considered to be a Frustrated page view.
Ideally, the value of this measure should be 0. A value higher than that of the Satisfied page views measure is a cause for concern, as it implies that the overall user experience with the pages in the web site is less than satisfactory. To know which pages are contributing to this sub-par experience, use the detailed diagnosis of this measure. The detailed metrics will also enable you to accurately isolate what is causing the frustrated page views – a problem with the frontend? network? or backend?
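For illustration, the sketch below shows an Apdex-style classification of page views, assuming the Slow Transaction Cutoff plays the role of the Apdex threshold T (satisfied at or below the cutoff, frustrated above 4 times the cutoff, tolerating in between). The cutoff value used here is hypothetical; the actual classification is governed by the cutoffs configured for this test.

```typescript
// Minimal sketch of an Apdex-style page view classification.
// Assumption: Slow Transaction Cutoff acts as the Apdex threshold T.
type PageViewClass = "satisfied" | "tolerating" | "frustrated";

function classifyPageView(loadTimeMs: number, slowTransactionCutoffMs: number): PageViewClass {
  if (loadTimeMs <= slowTransactionCutoffMs) {
    return "satisfied";
  }
  if (loadTimeMs > 4 * slowTransactionCutoffMs) {
    return "frustrated";
  }
  return "tolerating";
}

// With a hypothetical cutoff of 3000 ms:
console.log(classifyPageView(1200, 3000));   // "satisfied"
console.log(classifyPageView(5000, 3000));   // "tolerating"
console.log(classifyPageView(15000, 3000));  // "frustrated"
```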
|
Desktop page views
|
Indicates the number of times this web site/web application was accessed from client desktops.
|
Number
|
To know which pages in the web site were accessed by desktop users and to evaluate the experience of the desktop users with each of these pages, use the detailed diagnosis of this measure. In the process, slow pages can be identified and the reason for the slowness can be pinpointed.
|
Mobile page views
|
Indicates the number of times this web site/web application was accessed from mobile phones.
|
Number
|
To know which pages in the web site were accessed by mobile phone users and to evaluate the experience of these users with each of the pages, use the detailed diagnosis of this measure. In the process, slow pages can be identified and the reason for the slowness can be pinpointed.
|
Tablet page views
|
Indicates the number of times this web site/web application was accessed from tablets.
|
Number
|
To know which pages in the web site were accessed by tablet users and to evaluate the experience of these users with each of the pages, use the detailed diagnosis of this measure. In the process, slow pages can be identified and the reason for the slowness can be pinpointed.
|
Average browser initial request time
|
Indicates the time between when a request to this web site is initiated by the browser and when the domain lookup begins; this includes the time taken to follow redirects and to process the request in the AppCache.
|
ms
|
The Average browser initial request time is the time spent between the navigationStart event and the domainLookupStart event. This also includes the time spent by the browser waiting for one event to end and the next to begin. In short, this measure is the sum of the Average redirection time, Average AppCache time, and Average browser wait time measures. This means that if a request takes too long to follow a redirection, or if the AppCache takes too long to process the request, or if the request spends too much time on the browser waiting for the previous request to complete, then the value of this measure will significantly increase. This in turn will impact user experience, and consequently, the Apdex score.
This is why, if this measure reports an abnormal value, it is important that you compare the values of the Average redirection time, Average AppCache time, and Average browser wait time measures, to figure out where the request spent the maximum time - in redirection? in the AppCache? or when waiting on the browser?
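For reference, the sketch below shows how such an interval can be derived from the Navigation Timing properties named above. It uses the legacy performance.timing interface (now deprecated in favour of PerformanceNavigationTiming, but it exposes exactly these event names) and is illustrative only; it is not the eG RUM collection code.

```typescript
// Minimal sketch, using the legacy Navigation Timing interface.
const t = performance.timing;

// Browser initial request time: navigationStart -> domainLookupStart,
// which spans redirects, AppCache lookup, and any wait on the browser.
const initialRequestTimeMs = t.domainLookupStart - t.navigationStart;

// One of its components, the redirection time: redirectStart -> redirectEnd
// (both are 0 when no redirect occurred, so guard against that case).
const redirectTimeMs = t.redirectEnd > 0 ? t.redirectEnd - t.redirectStart : 0;

console.log(`Initial request time: ${initialRequestTimeMs} ms (redirects: ${redirectTimeMs} ms)`);
```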
|
Average server time
|
Indicates the interval between when the browser starts sending a request and when it begins receiving the response from the server.
|
ms
|
The Average server time is the time spent between the requestStart event and the responseStart event.
Ideally, a low value is desired for this measure, as high values will certainly hurt the Apdex score of the web site/web application.
The key factor that can influence the value of this measure is the request processing ability of the web server/web application server that is hosting the web site/web application being monitored.
Any slowdown in the backend web server/web application server – caused by a lack of adequate processing power or an improper configuration of the backend server - can significantly delay request processing by the server. In its aftermath, the Average server time will increase, leaving users with an unsatisfactory experience with the web site/web application.
Note:
This test uses the Navigation Timing API to measure web site performance. The Navigation Timing API typically exposes several properties that offer information about the time at which different page load events happen - e.g., the requestStart event, the responseStart event, etc. This test uses the time stamps provided by the Navigation Timing API to compute and report the duration of page load events, so you can accurately identify where page loading is bottlenecked.
The Navigation Timing API on Internet Explorer (IE) v11 reports an incorrect time stamp for the requestStart event of the page loading process. As a result, for page view requests initiated from IE 11 browsers alone, eG RUM will report incorrect values for this measure.
This issue was noticed in IE 11 in April 2019. It is recommended that you track hot fixes/patches released by Microsoft post April 2019, study the release notes of such fixes/patches, and determine if this bug has been fixed in any. If so, then you are advised to apply that fix/patch on IE 11 to resolve the issue.
Until then, we recommend that you use the following workaround to accurately measure the Average server time of a page view request.
- Deploy eG BTM (Java/.NET, as the case may be) on the backend server hosting the target web site/web application.
- Use eG BTM to trace the path of transactions to the target web site/web application and enable the (Java or .NET) Business Transactions test to capture metrics on transaction performance.
- Next, use the detailed diagnostics reported by eG RUM to identify the page view requests coming from the IE 11 browser. Make a note of the values in the Request time, URL and Query Params columns of detailed diagnosis.
- Then, search the detailed diagnostics of the (Java or .NET) Business Transactions test for transactions with the same URL, Request time, and Query Params as reported by eG RUM.
- The response time that eG BTM reports for each of these transactions is the Average server time of those transactions.
Note that this workaround applies only for those transaction URLs that are captured and reported as part of detailed diagnostics.
|
Average network time
|
Indicates the elapsed time between when a user initiates a request and when the browser starts fetching the response document from the server or application.
|
ms
|
The time spent between the navigationStart and requestStart events makes up the Average network time. This includes the time to perform DNS lookups, the time to establish a TCP connection with the server, and the time to perform an SSL handshake. In other words, the value of this measure is nothing but the sum of the Average DNS lookup time, Average TCP connection time, and Average SSL handshake time measures.
Ideally, the value of this measure should be low. A very high value will often end up delaying page loading and damaging the quality of the web site service. In the event that the network time is high therefore, simply compare the values of the Average DNS lookup time, Average TCP connection time, and Average SSL handshake time measures to know to what this delay can be attributed – a delay in domain name resolution? a poor network connection to the server? or slowness in SSL negotiations between the browser and the server?
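For reference, the sketch below shows one way to read the three components named above from the legacy Navigation Timing interface; it is illustrative only and is not the eG RUM collection code.

```typescript
// Minimal sketch: decompose the network time into DNS, TCP, and SSL components.
const nt = performance.timing;
const dnsLookupMs = nt.domainLookupEnd - nt.domainLookupStart;
const tcpConnectMs = nt.connectEnd - nt.connectStart;     // includes the SSL handshake
const sslHandshakeMs = nt.secureConnectionStart > 0
  ? nt.connectEnd - nt.secureConnectionStart               // 0 for plain HTTP pages
  : 0;
console.log({ dnsLookupMs, tcpConnectMs, sslHandshakeMs });
```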
Note:
This test uses the Navigation Timing API to measure web site performance. The Navigation Timing API typically exposes several properties that offer information about the time at which different page load events happen - e.g., the requestStart event, the responseStart event, etc. This test uses the time stamps provided by the Navigation Timing API to compute and report the duration of page load events, so you can accurately identify where page loading is bottlenecked.
The Navigation Timing API on Internet Explorer (IE) v11 reports an incorrect time stamp for the requestStart event of the page loading process. As a result, for page view requests initiated from IE 11 browsers alone, eG RUM will report incorrect values for this measure.
This issue was noticed in IE 11 in April 2019. It is recommended that you track hot fixes/patches released by Microsoft post April 2019, study the release notes of such fixes/patches, and determine if this bug has been fixed in any. If so, then you are advised to apply that fix/patch on IE 11 to resolve the issue.
|
Average content download time
|
Indicates the time to make the complete HTML document (DOM) available for JavaScript to apply rendering logic.
|
ms
|
This is the time spent between the responseStart event and the domContentLoadedEventStart event. The domContentLoadedEventStart timestamp is recorded just before the DOMContentLoaded event fires, which is just after the browser has finished downloading and parsing all the scripts that had the defer attribute set and no async attribute.
In summary, the Average content download time measure is the sum of the values of the Average DOM download time and Average DOM processing time measures.
This means that an abnormal increase in any of the above-mentioned time values will increase the value of this measure.
If content downloading takes unusually long for a request, then you must compare the values of the Average DOM download time and Average DOM processing time measures to figure out what is causing the delay - is it because of the poor responsiveness of the server, cache, or local resource? or is it because DOM processing took too long?
|
Average browser render time
|
Indicates the time taken to complete the download of remaining resources, including images, and to finish rendering the page.
|
ms
|
A high value for this measure indicates that page rendering is taking too long. This can be attributed to a suboptimal HTML document architecture, complex CSS (e.g., deeply nested rules, slow selectors, complicated effects such as rounded borders, gradients, etc.), and large images.
If the Average page load time measure reports an abnormally high value, then you may want to compare the value of this measure with that of the Average browser initial request time, Average network time, Average server time, and Average content download time measures to nail the exact source of the bottleneck.
|
Average DOM download time
|
Indicates the time for the browser to download the complete HTML document content.
|
ms
|
The value of this measure is the time that elapsed between the responseStart and responseEnd events.
The higher the download time of the document, the longer it takes to make the document available for page rendering. As a result, the overall user experience with the web site/web application will be affected. This is why a low value is desired for this measure at all times.
|
Average DOM processing time
|
Indicates the time taken to build the Document Object Model (DOM) and make it available for JavaScript to apply rendering logic.
|
ms
|
An unusually high value for this measure is a clear indicator that DOM building is taking longer than normal. Consequently, content download will be delayed, thus adversely impacting user experience with the web site/web application. Ideally therefore, the value of this measure should be low.
|
Average browser wait time
|
Indicates the time spent by a request on the browser, waiting for another request to complete.
|
ms
|
This is the sum of the time between every two consecutive events, starting with the navigationStart event and ending with the requestStart event.
Typically, web browsers limit the number of active connections for each domain. Most modern browsers (e.g., Chrome) support only six simultaneous requests/connections per domain. In this case therefore, when a seventh request comes in, that request waits on the browser until one of the six previously sent requests completes. The waiting time of the seventh request is the browser wait time.
A high browser wait time can prolong the browser's initial request time, thus adversely impacting the overall responsiveness of the web site/application. This is why, if the Average browser initial request time measure reports an abnormally high value, you will have to compare the values of the Average browser wait time, Average redirection time, and Average AppCache time measures to determine whether/not the initial request delay observed on the browser is because requests have been waiting on the browser for too long.
Some of the means by which you can reduce browser waits are briefly discussed below:
- Browsers such as Mozilla Firefox support up to 10 parallel requests. You may want to recommend such browsers for your web site/web application users, so that more requests are processed and fewer requests are queued on the browser, thus reducing browser wait time.
- Web site/application developers can try domain sharding - i.e., splitting content across multiple domains. Typically, when a user connects to a web page, his or her browser scans the resulting HTML for resources to download. Normally these resources are supplied by a single domain - the domain providing the web page, or a domain created specifically for resources. With domain sharding, the user's browser connects to two or more different domains to simultaneously download the resources needed to render the web page. This allows the web site/application to be delivered faster to users, as they do not have to wait for one set of requests to end before beginning the next set.
|
Average redirection time
|
Indicates the time spent in redirection before fetching the page.
|
ms
|
This is the elapsed time between the redirectStart and redirectEnd events.
URL redirection, also known as URL forwarding, is a technique to give a page, a form, or a whole web site/application more than one URL address. Usually, web site administrators use URL redirection to:
- Redirect users to the mobile version of the site
- Redirect users to secured pages
- Redirect users to the latest version of the resource/content
- Redirect users to pages specific to their geo location
- Redirect users to the canonical version of a URL
Though redirects are useful, they have to be kept to a minimum, as each redirect on a page adds latency to the overall page load time. This is because, when a user enters a domain into the browser and hits enter, the DNS resolution process is triggered and the domain is resolved to its corresponding IP address in a few milliseconds. If the landing page has another redirect, then the browser repeats the entire DNS resolution process once again to guide the user to the correct web page. The multiple redirect requests are taxing on the browser resources and slow down the page load.
Web page load time is also affected by internal redirects; for example, if the page tries to load content from a URL that has been redirected to newer or updated content, then the browser must create additional requests to fetch the valid content. These redirects result in additional round trips between the browser and the web server which pushes the load time higher; the perceived performance is degraded every time the browser encounters a redundant redirect.
Web site/application performance is also impacted if redirects are not implemented correctly. Some of the common redirect errors are:
- Multiple redirects: The higher the number of redirects on a page, the higher is its page load time.
- Invalid redirects: There are often instances where the web site administrator sets up bulk redirects without verifying the validity of the redirects. The site may also have old redirects that were never cleaned up. This can cause several issues on the site like broken links and 404s.
- Redirect loop: When there are several redirects on the page that are linked to each other, it creates a chain of redirects which may loop back to the same URL that initiated the redirect. This results in a redirect loop error and the user will not be able to access the site.
Therefore, if you find that the value of the Average page load time measure is abnormally high owing to an unusually high value for the Average redirection time measure, then make sure you follow the best practices outlined below while implementing redirects, so you can significantly reduce page load time and improve user experience:
- Avoid redundant redirects: It’s recommended to avoid redirects where possible and to use this method only when absolutely needed. This will cut down unnecessary overhead and improve the perceived performance of the page.
- Avoid chained redirects: When a URL is redirected to another URL that is itself redirected, this creates a chained redirect. Each URL added to the chain adds latency to the page. Chained redirects have a negative impact not only on page speed, but also on SEO.
- Clean up redirects: You may have hundreds of redirects on your website, and these could be one of the main factors affecting page speed. Old redirects may conflict with new URLs, and stale backlinks can cause odd errors on the page. It is recommended to verify all the redirects you have set up and to remove the ones that are no longer needed. Retain the old links that have major referral traffic, while those that are rarely accessed can be removed. This exercise will help improve page speed significantly.
|
Average AppCache time
|
Indicates the time taken to check whether/not the requested URL is available in the AppCache.
|
ms
|
HTML5 provides an application caching mechanism that lets web-based applications run offline. Developers can use the Application Cache (AppCache) interface to specify resources that the browser should cache and make available to offline users. Applications that are cached load and work correctly even if users click the refresh button when they are offline.
Using an application cache gives an application the following benefits:
- Offline browsing: users can navigate a site even when they are offline.
- Speed: cached resources are local, and therefore load faster.
- Reduced server load: the browser only downloads resources that have changed from the server.
To enable the application cache for an application, you must include the manifest attribute in the <html> element in your application's pages. The manifest attribute references a cache manifest file, which is a text file that lists the resources (files) that the browser should cache for your application. The browser does not cache pages that do not contain the manifest attribute, unless such pages are explicitly listed in the manifest file itself. You do not need to list all the pages you want cached in the manifest file; the browser implicitly adds every page that the user visits and that has the manifest attribute set to the application cache.
When the browser visits a document that includes the manifest attribute, if no application cache exists, the browser loads the document and then fetches all the entries listed in the manifest file, creating the first version of the application cache.
Subsequent visits to that document cause the browser to load the document and other assets specified in the manifest file from the application cache.
If the manifest file has changed, all the files listed in the manifest, as well as those already added to the cache, are fetched into a temporary cache.
Once all the files have been successfully retrieved, they are moved into the real offline cache automatically. Since the document has already been loaded into the browser from the cache, the updated document will not be rendered until the document is reloaded (either manually or programmatically).
A high value of this measure signifies that requests are spending too much time in the AppCache. This also introduces page loading latencies, which have an adverse effect on user-perceived performance of a web site/application. Common reasons for AppCaching issues and their practical solutions are detailed below:
- If the media type is not set, then AppCache will not work. To avoid this, make sure that the manifest file is always served under the correct media type of text/cache-manifest.
- If the manifest file is not served to the web browser from the same origin as the host page, the manifest file will fail to load. To avoid this, make sure that the manifest file is always served from the same origin as the host page. However, note that the manifest file can hold reference to resources to be cached from other domains.
- The relative URLs that you mention in the manifest are relative to the manifest file and not to the document where you reference the manifest file. If you make this error when the manifest and the reference are not in the same path, the resources will fail to load, and in turn the manifest file will not be loaded. This will stall AppCaching.
- Any change made to the manifest file will cause the entire set of files to be downloaded again. Moreover, if a manifest file is added to an HTML file, it forces all resources to be downloaded synchronously as soon as the manifest file is downloaded. As a result, resources that may not yet be required, such as JavaScript or an image below the fold, will be downloaded at the start of the page. This can increase page load time significantly. The solution to this is to load the Application Cache from a simple HTML file loaded in an iframe. This not only avoids caching dynamic HTML, but also allows the Application Cache to be downloaded asynchronously after the page load has completed.
|
Average DNS lookup time
|
Indicates the time taken by the browser to perform the domain lookup for connecting to this web site/web application.
|
ms
|
A high value for this measure will not only affect DNS lookup, but will also impact the Average network time and Average page load time of the web site/web application. This naturally will have a disastrous effect on user experience.
|
Average TCP connection time
|
Indicates the time taken by the browser to establish a TCP connection with the server.
|
ms
|
A bad network connection between the browser client and the server can delay TCP connections to the server. As a result, the Average network time too will increase, thus impacting page load time and overall user experience with the web site/web application.
|
Average SSL handshake time
|
Indicates the time taken to complete the SSL handshake.
|
ms
|
An SSL handshake happens when a browser requests content over a secure, encrypted HTTPS connection. The user's browser and the server negotiate encryption keys and certificates to establish a secure connection between each other. Because this SSL negotiation requires exchanges between the browser and your server, it increases the time spent by the request on the network (i.e., it adds to the value of the Average network time measure). This in turn increases page load time.
In fact, the SSL handshake, along with the DNS lookup and TCP handshake, adds three round trips to the page load time.
A quick fix to this is to use HTTP/2. HTTP/2 can use caching to reduce SSL setup to only one round trip.
|