
Designing a Virtual Infrastructure that Scales: Part 4, Design Thinking

January 10th, 2011

This post is the fourth in the series: Designing a Virtual Infrastructure that Scales.

In Part 3, Scaling for VDI, I went through some of the considerations specific to VDI and talked about how big your environment can get if you decide to go down that route.

In this post it’s time to talk about actual infrastructure design and to start planning for how to handle scale.

In Part 1, Taking Stock, I talked about how the virtual hosting environment you built a few years ago may be starting to get a little unwieldy to manage. I suggested: “Now is the time to pause if you can and take stock. Have a good look at your current environment and then zoom out and look at the big picture to plan the next stage of your virtual infrastructure because if you don’t you may find it running away from you.”

Hopefully by now you have an idea of what you actually need to virtualise. You’ve identified the number of servers and workstations, what resources they will require, who is going to be accessing them, from where and at what times. You should know which VMs require business recovery and to where. You have done some calculations on how many hosts you will need to host your VMs and planned failover capacity for HA and DRS. You have an idea of VM network requirements, and of the storage space and IOPS required.

Now is the time to put all that information gathering to use and see how you can build an infrastructure to run it all.

The key to designing an infrastructure to cope with scale is to think as big as you can and then break down the components into repeatable building blocks.

When you are putting your design thinking cap on, the most important thing to consider throughout all your planning is simplicity. If you start with something complicated, that complexity will multiply as it grows and scales; it will be harder to manage, support and add to, and will most often end up being more expensive. Keep it simple from the beginning and it will be so much easier to create a model where you can add another building block to add capacity. This is what prepackaged solutions like FlexPod and vBlock offer. They don’t suit everyone, but you can use the idea in your own environment by creating the building blocks yourself.

Virtualisation is all about sharing resources to utilise them better. This thinking needs to spread out to the rest of your infrastructure. Sharing storage resources, and especially network resources, combined with scalable server compute resources surely has to be the way to go.

Networking
You may have been dedicating as many as eight Gigabit NICs to each ESX host: two for VM traffic, two for storage, two for VMotion and two for the Service Console. OK, you may have done a little bit of consolidation and shared VMotion and Service Console, or Service Console and VM traffic, but you still had a lot of cables coming out of the back of your servers.

Ask a network person what traffic should be directed over which NICs and you may get a response like “Traffic is Traffic”, which means it doesn’t really matter to a network person what traffic goes over what, as long as there’s enough bandwidth and latency is low enough for the traffic to get there.

The different types of virtualisation traffic are often only split out to be extra careful. Look at your switch port utilisation and you will generally see very little actual traffic. OK, you may need to protect your VM or storage traffic from being saturated by VMotion, but surely you can consolidate some of that networking traffic to be more efficient.

Network consolidation has been given a major boost by 10GbE arriving on the scene at just the right time. With such a big network pipe you can have multiple types of virtualisation traffic running over the same cables without worrying about saturating your bandwidth. As 10GbE is still Ethernet, there are also no new skills to learn. In fact, I expect the adoption of 10GbE will further drive the use of NFS as a storage protocol, as one reason people have used Fibre Channel was that 1Gb Ethernet bandwidth was seen as limited.

In designing your network infrastructure you should be planning to use as little cabling as you can, sharing as much of it as possible while providing as much bandwidth as possible over the cables you keep.

Get traffic flowing on all your cables; don’t use an active/passive model where half your bandwidth is sitting idle. You can consolidate your network traffic and still protect it using ESX Network Teaming policies, by directing storage, VM and Service Console traffic over different NICs.
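To make that concrete, here is a minimal sketch in plain Python of what such a teaming plan might look like, with a sanity check that no NIC is left standby-only. The port group and vmnic names are hypothetical examples, not pulled from any real host:

```python
# A hypothetical teaming plan: each traffic type lists its active and
# standby NICs. The vmnic names follow ESX convention but are examples only.
teaming_plan = {
    "VM Network":      {"active": ["vmnic0", "vmnic1"], "standby": ["vmnic2", "vmnic3"]},
    "Storage":         {"active": ["vmnic2", "vmnic3"], "standby": ["vmnic0", "vmnic1"]},
    "VMotion":         {"active": ["vmnic1"],           "standby": ["vmnic0"]},
    "Service Console": {"active": ["vmnic0"],           "standby": ["vmnic1"]},
}

def idle_nics(plan):
    """Return NICs that appear in the plan but are never active anywhere."""
    all_nics = {n for pg in plan.values() for role in pg.values() for n in role}
    active = {n for pg in plan.values() for n in pg["active"]}
    return sorted(all_nics - active)

unused = idle_nics(teaming_plan)
if unused:
    print(f"Warning: bandwidth sitting idle on {unused}")
else:
    print("Every NIC carries traffic for at least one port group.")
```

Each NIC is active for some traffic type and standby for another, so a cable or switch failure is covered without any link sitting permanently idle.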

Storage
Storage is often your most expensive budget consideration with virtualisation. Everyone is trying as hard as they can to reduce the actual raw storage required to host VMs, which has a direct cost implication. VMware now has thin virtual disks, which mean you no longer need 20GB reserved on your storage for a VM disk file when you are only using 5GB of it, wasting 15GB of expensive storage space. This has been especially useful in iSCSI and FC implementations with LUNs; NFS has always been thin by default. For VDI, VMware has linked clones and Citrix has Provisioning Services, so you only need to store one copy of your VM disk file, which can be shared by many VMs, drastically reducing storage.
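To put rough numbers on that saving, here is a quick back-of-the-envelope sketch in Python; the VM sizes are made up for illustration:

```python
# Hypothetical VM disk allocations (GB) and actual usage (GB).
vms = [
    {"allocated": 20, "used": 5},
    {"allocated": 40, "used": 12},
    {"allocated": 60, "used": 25},
]

thick = sum(vm["allocated"] for vm in vms)   # thick: full allocation reserved
thin = sum(vm["used"] for vm in vms)         # thin: only consumed blocks stored

print(f"Thick provisioned: {thick} GB")
print(f"Thin provisioned:  {thin} GB  (saving {thick - thin} GB, "
      f"{100 * (thick - thin) / thick:.0f}%)")
```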

Storage vendors are also trying to help you reduce storage, which may seem strange, but if storage is too expensive then you won’t be buying their products, so they need to bring storage costs down enough for it to remain an option. Vendors are using deduplication and even compression to reduce disk space. They are also building vCenter plugins to make storage easier for VM admins to manage, and to create space-efficient clones of VMs at the storage layer without having to copy blocks and deduplicate them afterwards, which makes it all more efficient. VMware is further building out its APIs with VAAI to offload storage-related tasks to the vendors’ arrays for efficient use of host resources and disk space.

Disk space is only one part of the storage equation; IOPS is the other, often overlooked, part of storage design. You may have the disk space, but do you have enough spindles to be able to read and write all that data quickly enough? Storage vendors are using tiered storage, solid state disks and caching to reduce the IO load that reaches the underlying disks.
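As a rough illustration of the spindle maths, here is a sketch using rule-of-thumb figures; the workload numbers, RAID 5 write penalty and per-disk IOPS are assumptions for illustration, not vendor specifications:

```python
import math

# Assumed workload: total front-end IOPS and read/write split.
total_iops = 5000
read_ratio = 0.7

# Typical rule-of-thumb figures (assumptions, not vendor specs).
raid5_write_penalty = 4   # each write costs roughly 4 back-end IOs on RAID 5
disk_iops = 180           # a single 15k RPM FC/SAS spindle

reads = total_iops * read_ratio
writes = total_iops * (1 - read_ratio)
backend_iops = reads + writes * raid5_write_penalty

spindles = math.ceil(backend_iops / disk_iops)
print(f"Back-end IOPS: {backend_iops:.0f} -> ~{spindles} spindles on RAID 5")
```

Note how the write penalty nearly doubles the back-end IO here; a workload that looks comfortable on paper can demand far more spindles than the raw capacity would.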

Getting your storage wrong is probably the biggest issue you will face, as adding capacity is very expensive. Running out of disk space and/or IO for your VMs is far harder to solve than running out of CPU or memory.

Ensure you have your storage team engaged from the beginning and plan disk space and IO requirements together. You need to be thinking of normal IO, boot storms for VDI, and what happens when AV/inventory/compliance agents run. You need to be planning where your VMs will be stored and how they will be split across multiple storage controllers. Will you back up and restore VMs at the storage layer? What storage is mirrored where for business recovery? What maximum capacity will your storage support? How can you add to it? Have you looked at vCenter storage plugins? How can you reduce storage costs?
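The boot storm point is worth quantifying early. A rough sketch of the sums, with all figures assumed for illustration (measure your own images for real numbers):

```python
# Rough boot-storm estimate for a VDI pool (all figures are assumptions).
desktops = 500
steady_iops = 10       # per desktop, normal operation
boot_iops = 50         # per desktop, while booting
boot_fraction = 0.2    # worst case: 20% of desktops booting at once

steady = desktops * steady_iops
storm = (desktops * (1 - boot_fraction) * steady_iops
         + desktops * boot_fraction * boot_iops)

print(f"Steady state: {steady} IOPS; boot storm peak: {storm:.0f} IOPS "
      f"({storm / steady:.1f}x)")
```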

Compute
Compute resources are the CPU and memory your hosts need to run your virtual machines. By now you should have a good idea of your VMs’ CPU and memory requirements, so you can plan how many hosts you need.
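As an example of turning those requirements into a host count, here is a simple sizing sketch in Python; all the VM and host figures are assumptions for illustration, and failover capacity (covered below) still needs to be added on top:

```python
import math

# Assumed aggregate VM requirements (example numbers only).
vm_count = 200
avg_cpu_ghz = 1.0    # average CPU demand per VM, GHz
avg_mem_gb = 4       # average active memory per VM, GB

# Assumed host specification.
host_ghz = 2 * 8 * 2.6   # 2 sockets x 8 cores x 2.6 GHz
host_mem_gb = 192
target_util = 0.75       # leave headroom rather than running at 100%

hosts_for_cpu = math.ceil(vm_count * avg_cpu_ghz / (host_ghz * target_util))
hosts_for_mem = math.ceil(vm_count * avg_mem_gb / (host_mem_gb * target_util))
hosts = max(hosts_for_cpu, hosts_for_mem)

print(f"CPU needs {hosts_for_cpu} hosts, memory needs {hosts_for_mem}; "
      f"plan for {hosts} before failover capacity.")
```

Size for whichever resource runs out first; in practice memory is often the constraint, which is why it is worth calculating both.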

What hosts will you use? You may already be using IBM or Dell or HP servers. Ask yourself whether you can use your current server hardware in the same way to scale your virtual environment. If you are using rack-mounted servers, is it practical to keep adding more and more physical hosts? Can your network support these connections? Could you consider going 10GbE? Is this the time to change server vendor? Should you look at blades?

Have you also thought about failover capacity? Running all your hosts at 90% all the time is a very efficient use of resources, but what if one fails, you have a resource spike, or you need to do maintenance on a host to replace a faulty memory module? Many people talk about an N+1 model where each cluster has an additional server for this purpose, but that doesn’t really work in all scenarios. If you had a cluster of 2 hosts or a cluster of 31 hosts, would you add only 1 extra host for failover in both cases? There are many pros-and-cons discussions about how big your clusters should or shouldn’t be, but I am in favour of bigger clusters as I don’t believe in segregating your environment if you don’t need to; that adds more administrative boundaries and may not provide any performance benefit. If you do have larger clusters, though, you need to factor in more failover capacity than N+1. If you used to have 8 hosts in a cluster, which was actually 7 hosts + 1 failover, just extend the cluster using the same formula: if you have 24 hosts, plan on 21 hosts + 3 failover.
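A small sketch of that formula, keeping the same one-in-eight failover proportion as the 7 + 1 example above:

```python
import math

def failover_hosts(total_hosts, ratio=1 / 8):
    """Split a cluster into working + failover hosts, keeping the same
    failover proportion as a 7 + 1 cluster (assumed ratio, per the text)."""
    reserve = max(1, math.ceil(total_hosts * ratio))
    return total_hosts - reserve, reserve

for size in (8, 16, 24, 32):
    working, spare = failover_hosts(size)
    print(f"{size}-host cluster: {working} working + {spare} failover")
```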

When thinking about failover, you shouldn’t just be thinking of extra capacity within your clusters. What about a blade chassis failing, or a switch failing, or a storage head failing, or a whole rack failing? As your infrastructure grows you need to think about spreading your VMs across it so that no single component failure brings down a whole site, department or application. As you get bigger you need to spread wider, so that if a rack does go down you don’t lose all your business-critical functions. Spread your hosts across chassis. Spread your VMs across datastores. Spread your critical servers across sites if you can. Keep thinking about what could go wrong and what the effect could be. Consolidation is great, but if it means that one component failing brings everything down, you haven’t designed your infrastructure correctly. Make sure your management understands the components and what you are and are not protected against.

Blades
Network consolidation has allowed the various server hardware vendors to get very serious about blades, and has even turned Cisco into a server vendor with its UCS.

Blade servers have grown up fast and offer a great way to consolidate power and networking, and they provide an excellent way to move towards a building block approach. You can use a chassis as a building block, or a rack with multiple chassis for larger environments. The idea of a building block is to have a modular design that is repeatable.

If you use a rack as your base building block, think of all the components that go together and use that as your first block. Plan power, cooling and networking to support an entire rack. Build all the blade hosts to the same configuration to reduce the differences within your building block, and ensure you have enough networking and storage capacity for a rack full of VMs.

Design it in a way that lets you repeat your exact rack design to add capacity. Maybe you won’t need another rack full of actual server hosts, but your next building block can still be modular: add the chassis and networking components for a whole rack, and then when you need further capacity you just insert and build blades into a preconfigured rack of power and networking to add the compute resources, with some plan for adding storage too. This makes it very quick and easy to add capacity when the physical parts of the commissioning are mostly done.
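One way to keep the building block honest is to describe it as data. Here is a minimal sketch in Python with hypothetical example figures; the point is that adding capacity becomes repeating an identical, well-understood unit:

```python
from dataclasses import dataclass

@dataclass
class RackBlock:
    """A hypothetical repeatable rack building block (example figures)."""
    chassis: int = 4              # blade chassis per rack
    blades_per_chassis: int = 8
    ghz_per_blade: float = 41.6   # 2 sockets x 8 cores x 2.6 GHz
    mem_gb_per_blade: int = 192

    @property
    def blades(self) -> int:
        return self.chassis * self.blades_per_chassis

    @property
    def capacity(self) -> tuple:
        """Aggregate (GHz, GB RAM) for a fully populated rack."""
        return (self.blades * self.ghz_per_blade,
                self.blades * self.mem_gb_per_blade)

block = RackBlock()
ghz, mem = block.capacity
print(f"One rack block: {block.blades} blades, {ghz:.0f} GHz, {mem} GB RAM")
# Need more capacity? Repeat the identical block: two racks, three racks...
```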

As your environment scales, having the same modular building block for every rack hosting VMs, in every datacenter wherever it may be in the world, gives you a standard approach that reduces differences, making it easier to support, troubleshoot and scale.

Keep it simple, small and repeatable everywhere and you will be giving your infrastructure the building blocks to allow you to scale your virtual environment easily and predictably.
