Correct calculation for VDI (part 1)

This is the first of two posts in which I will try to describe the development of a fairly standard VDI solution for a medium-sized enterprise. The first part covers preparation for implementation and planning; the second, real practical examples.

It often happens that a potential customer's infrastructure is already established and serious hardware changes are out of the question. So many new projects include the task of getting the most out of the existing equipment.

For example, one of our customers, a large domestic software company, has a rather large fleet of servers and storage systems, including several Generation 6 and 7 HP ProLiant servers and an HP EVA storage array that were sitting in reserve. The solution had to be built on this hardware.
The stated requirements for the VDI solution were:

  • a floating desktop pool (with changes preserved after the session ends);
  • an initial capacity of 700 users, expandable to 1,000.

I had to calculate how many servers and storage arrays would end up moving from reserve into the solution.
VMware was chosen as the virtualization platform. The working scheme came out roughly as follows:
One of the servers acts as the connection broker, and clients connect to it. The connection broker picks the physical server in the pool on which to start the virtual machine that will serve the session.

The rest of the servers are ESX hypervisors that run the virtual machines.
The ESX hosts connect to a storage system that holds the virtual machine images.



Quite powerful servers with six-core Intel Xeon processors were allocated for the ESX hosts. At first glance, the "weak link" is the storage system, because for VDI the hidden killer is IOPS. But of course there are many other things to consider when designing a VDI solution. Here are a few of them:

  1. What you need to remember: software licenses will account for a significant part of the solution's cost. It is often more profitable to consider offers from hardware vendors, since OEM licenses for virtualization software cost less.
  2. Second, it is worth considering the option of installing graphics accelerator cards if many users work with multimedia or in graphics editors.
  3. An interesting option from HP is the HP ProLiant WS460c Gen8 Workstation Blade Server. Its distinctive feature is the ability to install graphics cards directly into the blade without giving up the space for 2 hard drives, 2 processors and 16 memory slots. The supported graphics accelerators offer up to 240 CUDA cores and 2.0 GB of GDDR5 memory.
  4. Third, you need to calculate the total cost of ownership (TCO) in advance. Buying equipment is, of course, a major expense, but you can and should show the savings from implementing the solution, the costs of upgrades and repairs, and the cost of renewing software licenses; see the toy calculation right after this list.
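To make the TCO point concrete, here is a minimal sketch of such a comparison. Every figure in it is a made-up placeholder (not a number from this project), to be replaced with real quotes, support rates, and license prices.

```python
# Toy TCO comparison over a planning horizon. All cost figures are
# placeholders; substitute real quotes and rates for your project.
YEARS = 5

def tco(capex: float, opex_per_year: float, licenses_per_year: float) -> float:
    """Capital cost plus recurring support and license renewals."""
    return capex + YEARS * (opex_per_year + licenses_per_year)

# Hypothetical numbers: keeping the current PC fleet vs. deploying VDI.
status_quo = tco(capex=0, opex_per_year=120_000, licenses_per_year=40_000)
vdi = tco(capex=250_000, opex_per_year=50_000, licenses_per_year=50_000)

print(f"status quo: {status_quo:,.0f}")   # 800,000
print(f"VDI:        {vdi:,.0f}")          # 750,000
print(f"savings:    {status_quo - vdi:,.0f} over {YEARS} years")
```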

Finally, let's move on to I/O and its main problem areas.

Windows running on a local PC with a hard drive generates approximately 40-50 IOPS. When a whole set of services is loaded on such a PC along with the base OS (prefetching, indexing, hardware services, and so on), much of it is functionality the user does not need, but on a local disk he does not lose much performance because of it.

But when a VDI client is used, almost all of these extra services are counterproductive: they issue a large number of I/O requests in an attempt to optimize speed and load time, and achieve the opposite.

Windows also tries to lay out data blocks so that access to them is mostly sequential, because sequential reads and writes require less head movement on a local hard drive. For VDI this deserves special attention; see the end of the post.

The number of IOPS a client requires depends mostly on the services it runs. On average the figure is 10-20 IOPS (the value needed in each specific case can be measured with tools such as those from Liquidware Labs). Most of these IOPS are writes: on average, the read/write ratio in a virtual infrastructure can reach 20/80.
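These two numbers are already enough for a first sizing pass. Below is a minimal sketch of that arithmetic; the per-user average of 15 IOPS and the 20/80 split are assumptions taken from the paragraph above, not measurements from this project.

```python
# First-pass, steady-state IOPS sizing for the desktop pool.
USERS = 700                 # initial capacity from the requirements
IOPS_PER_USER = 15          # middle of the 10-20 IOPS range above
READ_SHARE, WRITE_SHARE = 0.2, 0.8

total_iops = USERS * IOPS_PER_USER
print(f"total:  {total_iops:,} IOPS")                    # 10,500
print(f"reads:  {total_iops * READ_SHARE:,.0f} IOPS")    # 2,100
print(f"writes: {total_iops * WRITE_SHARE:,.0f} IOPS")   # 8,400
```

These are frontend (host-visible) figures; what the disks actually have to serve depends on the RAID level, which section 2 below returns to.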

What all this means in detail:
1. The boot/logon storm problem - cache and policies, policies and cache.
The moment a user logs in to his virtual machine, a heavy load lands on the disk subsystem. Our task is to make that load predictable, that is, to turn most of it into read operations and then serve the typical read data efficiently from a dedicated cache.

To achieve this, you need to optimize not only the client virtual machine image but also the user profiles. When this is configured correctly, the IOPS load becomes predictable: in a well-functioning virtual infrastructure, the read/write ratio during boot will be 90/10 or even 95/5.

But if a large number of users start work at the same time, the storage system has to be sized generously, otherwise the login process for some users may take several hours. The only way out is to calculate the capacity of the system correctly, knowing the maximum number of simultaneous connections.

For example, if the image boots in 30 seconds and the number of simultaneous user connections at peak time is 10% of the total, this creates a twofold write load and a tenfold read load, which raises the storage load by about 36% over normal. If simultaneous connections are 3%, the load on the storage system grows by only 11% compared to regular operation. So here is our advice to the customer: encourage coming in late to work! (Just kidding.)
But one should not forget that the read/write proportions change drastically once the boot phase is over: reads drop to about 5 IOPS per session, while the number of write IOPS does not decrease at all. Forget this and you are waving hello to serious problems.
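A rough model of that login storm is sketched below. The boot-time multipliers (tenfold reads, twofold writes) and the steady-state profile are the assumptions quoted above; the exact percentages depend on the session profile you plug in, so the output lands near, not exactly on, the 36% and 11% figures.

```python
# Back-of-the-envelope login-storm model using the figures quoted
# in the text as assumptions.
STEADY_IOPS = 15                     # average per active session
READ_SHARE, WRITE_SHARE = 0.2, 0.8   # steady-state read/write split
BOOT_READ_FACTOR, BOOT_WRITE_FACTOR = 10, 2

def load_increase(concurrent_share: float) -> float:
    """Extra storage load (as a fraction of normal) while a share
    of the sessions is booting simultaneously."""
    extra_read = STEADY_IOPS * READ_SHARE * (BOOT_READ_FACTOR - 1)
    extra_write = STEADY_IOPS * WRITE_SHARE * (BOOT_WRITE_FACTOR - 1)
    return (extra_read + extra_write) * concurrent_share / STEADY_IOPS

for share in (0.10, 0.03):
    print(f"{share:.0%} concurrent logins -> +{load_increase(share):.0%} load")
# 10% -> +26%, 3% -> +8%: the same order as the 36% / 11% above
```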

2. Storage system IOPS - choose the right RAID

When user requests hit a shared storage system (SAN, iSCSI, SAS), all I/O operations are, from the storage system's point of view, 100% random. A drive with a rotation speed of 15,000 RPM delivers 150-180 IOPS, while the disks of a RAID group in a SAS/SATA array (which have to wait for one another to synchronize) deliver about 30% fewer IOPS than a single SAS/SATA drive. The proportions are as follows:

  • in RAID5: 30-45 write IOPS per disk, 160 read IOPS;
  • in RAID1: 70-80 write IOPS per disk, 160 read IOPS;
  • in RAID0: 140-150 write IOPS per disk, 160 read IOPS.

Therefore, for virtualization it is recommended to use RAID levels with better write performance (RAID1, RAID0). The sketch below shows how much the write penalty changes the raw disk IOPS the array must provide.
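A minimal sketch, assuming the classic write-penalty values (RAID0: 1, RAID1: 2, RAID5: 4) that the per-disk figures above reflect:

```python
# Translate frontend (host-visible) IOPS into raw disk IOPS,
# accounting for RAID write amplification.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID5": 4}

def backend_iops(front_iops: float, write_share: float, level: str) -> float:
    reads = front_iops * (1 - write_share)
    writes = front_iops * write_share
    return reads + writes * WRITE_PENALTY[level]

# 700 users at ~15 IOPS each with an 80% write share (see above):
for level in ("RAID0", "RAID1", "RAID5"):
    print(level, f"{backend_iops(10_500, 0.8, level):,.0f} disk IOPS")
# RAID0 10,500; RAID1 18,900; RAID5 35,700 - RAID5 needs ~3.4x
# the raw disk IOPS of RAID0 on this write-heavy profile
```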

3. On-disk layout - alignment matters most

Since we want to minimize the I/O hitting the storage system, our main task is to make each operation as efficient as possible, and on-disk layout is one of the main factors here. No byte requested from the storage system is read in isolation: depending on the vendor, data in the array is divided into blocks of 32 KB, 64 KB or 128 KB. If the file system sitting on top of these blocks is not "aligned" with them, then 1 IOPS requested by the file system turns into 2 IOPS requested from the storage system. And if a guest file system sits on a virtual disk that itself lives on an unaligned file system, then 1 IOPS requested by the guest operating system can turn into 3 IOPS requested from the storage. This is why alignment at every level is of the utmost importance.


Unfortunately, Windows XP and Windows Server 2003 write a signature at the start of the disk during OS installation and begin the partition in the last sectors of the first block, which shifts the entire OS file system relative to the storage system's blocks. To fix this, create the partitions presented to the host or virtual machine with the diskpart or fdisk utilities, setting the partition to start at sector 128: at 512 bytes per sector, that places the start of the data exactly on the 64 KB boundary. Once the partition is aligned, each file system request costs exactly 1 IOPS at the storage system.
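The arithmetic behind that is easy to check. A minimal sketch (the sector-63 start is the classic XP/2003 default mentioned above; sector 2048 is the 1 MB default of Vista and later):

```python
# Check a partition's starting offset against common array block sizes.
SECTOR = 512  # bytes

def is_aligned(start_sector: int, block_kb: int) -> bool:
    return (start_sector * SECTOR) % (block_kb * 1024) == 0

for start in (63, 128, 2048):
    marks = {kb: is_aligned(start, kb) for kb in (32, 64, 128)}
    print(f"start sector {start}: {marks}")
# sector 63 (offset 32,256 B) misses every boundary; sector 128
# (64 KB) aligns to 32 and 64 KB blocks; sector 2048 (1 MB) to all three
```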



The same goes for VMFS: a partition created through the ESX Service Console will not be aligned with the storage system by default. In that case you must use fdisk, or create the partition through VMware vCenter, which performs the alignment automatically. Windows Vista, Windows 7, Windows Server 2008 and later products align partitions to 1 MB by default, but it is still worth checking the alignment yourself.

The performance gain from alignment can be around 5% for large files and 30-50% for small files and random IOPS. And since random IOPS dominate the VDI load, alignment is of great importance.

4. Defragmentation and prefetching should be disabled

The NTFS file system operates in 4 KB clusters, and Windows tries to arrange them so that access is as sequential as possible. But when a user runs applications, requests are more often writes than reads, while the defragmentation process tries to guess how the data will be read. In this situation defragmentation generates I/O load without any significant positive effect, so it is recommended to disable it in VDI solutions.

The same goes for prefetching. Prefetching is the process that copies the most frequently accessed files into a special cache directory in Windows so that reading them becomes sequential, minimizing IOPS. But since requests from a large number of users are completely random from the storage system's point of view, prefetching gives no advantage and only wastes I/O traffic. The way out is to disable the prefetching function completely; one way to script that in a golden image is sketched below.
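A minimal sketch of that tweak, assuming the well-known EnablePrefetcher registry value (0 disables the Prefetcher); it has to be run with administrator rights inside the Windows image:

```python
# Disable the Windows Prefetcher via the registry (run as Administrator).
import winreg

KEY_PATH = (r"SYSTEM\CurrentControlSet\Control\Session Manager"
            r"\Memory Management\PrefetchParameters")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 0 = disabled, 1 = application launch only, 2 = boot only, 3 = both
    winreg.SetValueEx(key, "EnablePrefetcher", 0, winreg.REG_DWORD, 0)

print("Prefetcher disabled; the change takes effect after a reboot")
```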

If the storage system uses deduplication, that is one more argument for disabling prefetching and defragmentation: by moving files from one place on disk to another, these processes seriously hurt the efficiency of deduplication, which relies on a table of rarely changing disk blocks.