Users By GPU - AVD Test
GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. GPU-accelerated computing enhances application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU.
If a GPU-enabled session host / AVD is not sized with adequate GPU resources, any user accessing graphic applications on that host is bound to complain of slowness. Likewise, if even a single user of an AVD is accessing graphic applications that consume GPU memory and processing resources excessively, it could cause GPU contention on the AVD and adversely impact the graphic experience of other AVD users. To ensure a superior graphic experience for all AVD users, administrators must track how much of the GPU resources each user consumes, and proactively identify users who may potentially cause a GPU crunch on a chosen session host / AVD. This can be achieved using the Users by GPU - AVD test. This test auto-discovers the users who are accessing graphic applications on a chosen session host / AVD, and reports the memory and processing resources used by each user. In the process, the test points you to users who are accessing resource-hungry graphic applications on the AVD.
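The kind of per-user tracking described above can be illustrated with a short sketch. The sample data, the user names, and the 80% threshold below are hypothetical assumptions for illustration only, not part of eG Enterprise:

```python
# Illustrative sketch: flag users whose GPU compute usage suggests contention.
# The usage figures and the 80% threshold are made-up assumptions.

def find_gpu_hogs(usage_by_user, threshold=80.0):
    """Return (user, percent) pairs for users whose GPU compute usage meets or
    exceeds the threshold, heaviest consumer first."""
    hogs = [(user, pct) for user, pct in usage_by_user.items() if pct >= threshold]
    return sorted(hogs, key=lambda pair: pair[1], reverse=True)

# Per-user GPU compute usage (percent), keyed by domainname\username,
# as the test reports it when Report By Domain Name is set to Yes.
usage = {
    "CORP\\alice": 92.5,
    "CORP\\bob": 34.0,
    "CORP\\carol": 81.2,
}

print(find_gpu_hogs(usage))  # alice first, then carol; bob is below the threshold
```

Ranking the flagged users by usage mirrors how an administrator would read the test's results: start with the heaviest consumer when investigating a GPU crunch.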
Target of the test : An Azure Virtual Desktop
Agent deploying the test : An internal agent
Outputs of the test : One set of results for each user who is accessing graphic applications on the chosen Session Host / Azure Virtual Desktop that is using the GPU
| Parameters | Description |
|---|---|
| Test Period | How often should the test be executed. |
| Host | The host for which the test is to be configured. |
| Port | The default port is NULL. |
| NVIDIA Home | This test uses NVIDIA WMI (NVWMI) to pull metrics on GPU usage by applications. For this test to run, therefore, you need to install NVWMI on the target Azure Virtual Desktop. If NVWMI is installed in its default directory, this test will automatically discover its location and use it to pull the desired metrics. This is why the NVIDIA Home parameter is set to none by default. However, if you have installed NVWMI in a different directory, you have to explicitly configure the path to that installation directory against the NVIDIA Home parameter. |
| Report By Domain Name | By default, this flag is set to Yes. This implies that, by default, the detailed diagnosis of this test will display the domainname\username of each user who accessed an application on the AVD. This way, administrators can quickly determine which user logged into the AVD from which domain. If you want the detailed diagnosis to display only the user name of these users, set this flag to No. |
| DD Frequency | Refers to the frequency with which detailed diagnosis measures are to be generated for this test. The default is 1:1. This indicates that, by default, detailed measures will be generated every time this test runs, and also every time the test detects a problem. You can modify this frequency, if you so desire. If you intend to disable the detailed diagnosis capability for this test, you can do so by specifying none against DD Frequency. |
| Detailed Diagnosis | To make diagnosis more efficient and accurate, eG Enterprise embeds an optional detailed diagnostic capability. With this capability, the eG agents can be configured to run detailed, more elaborate tests as and when specific problems are detected. To enable the detailed diagnosis capability of this test for a particular server, choose the On option. To disable the capability, choose the Off option. The option to selectively enable/disable the detailed diagnosis capability will be available only if the following conditions are fulfilled: |
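One way to read an x:y DD Frequency value such as 1:1 is as two intervals: generate detailed measures once every x runs under normal conditions, and once every y runs when a problem is detected. The helper below is an illustrative interpretation of that scheme, not eG Enterprise code:

```python
# Illustrative interpretation of the DD Frequency parameter ("x:y" or "none").
# The function name and 1-based run counter are assumptions for this sketch.

def should_collect_dd(run_count, dd_frequency, problem_detected):
    """Decide whether a given test run should generate detailed diagnosis data.

    dd_frequency is a string such as "1:1" (normal:problem) or "none";
    run_count is a 1-based counter of test executions.
    """
    if dd_frequency.lower() == "none":
        return False  # detailed diagnosis disabled outright
    normal_every, problem_every = (int(part) for part in dd_frequency.split(":"))
    interval = problem_every if problem_detected else normal_every
    return run_count % interval == 0

# With the default 1:1, every run collects detailed measures.
print(should_collect_dd(1, "1:1", problem_detected=False))  # True
print(should_collect_dd(3, "5:1", problem_detected=False))  # False: only every 5th run
print(should_collect_dd(3, "5:1", problem_detected=True))   # True: every run during a problem
```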
| Measurement | Description | Measurement Unit | Interpretation |
|---|---|---|---|
| User sessions | Indicates the number of open sessions for this user. | Number | |
| GPU processes running in user's sessions | Indicates the number of instances of graphic applications that are currently running across all sessions of this user. | Number | |
| GPU compute usage for user's processes | Indicates the percentage of time for which the GPU was utilized by this user's sessions. | Percent | A value close to 100% indicates that a particular user is hogging GPU resources. To zoom into the exact session where the maximum GPU was consumed, and the precise graphic application that was accessed during that session, use the detailed diagnosis of this measure. |
| Encoder usage for user's processes | Indicates the percentage of GPU encoder capacity utilized by the graphic applications accessed by this user. | Percent | A value close to 100 is a cause for concern. By closely analyzing these measures, administrators can easily be alerted to situations where graphics processing is a bottleneck. |
| Decoder usage for user's processes | Indicates the percentage of GPU decoder capacity utilized by the graphic applications accessed by this user. | Percent | |
| Memory compute usage for user's processes | Indicates the percentage of time during which memory on the GPU was read from/written to by the graphic applications accessed by this user. | Percent | A value close to 100% is a cause for concern, as it indicates that a user is almost always using the graphics memory on the GPU. |
| Memory used for user's processes | Indicates the amount of graphics memory used by the graphic applications accessed by this user. | MiB | Compare the value of this measure across users to know which user is hogging GPU memory. |
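The cross-user comparison suggested for the last measure can be sketched as follows; the memory figures and user names are made up for illustration:

```python
# Illustrative sketch: find the user whose graphic applications hold the
# most GPU memory (in MiB). Sample figures below are hypothetical.

def heaviest_memory_user(memory_by_user):
    """Return the (user, MiB) pair with the largest graphics-memory footprint."""
    return max(memory_by_user.items(), key=lambda pair: pair[1])

memory_mib = {"CORP\\alice": 1536.0, "CORP\\bob": 512.0, "CORP\\carol": 2048.0}
print(heaviest_memory_user(memory_mib))  # carol holds the most graphics memory
```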