Posts Tagged ‘availability’

What’s in PernixData FVP’s secret sauce

July 31st, 2014 No comments

Anyone who manages or architects a virtualisation environment battles against storage performance at some stage or another. If you run into compute resource constraints, it is very easy and fairly cheap to add more memory or perhaps another host to your cluster.

Being able to add compute incrementally makes it very simple and cost effective to scale. Networking is similar: it is very easy to patch in another 1GbE port, and with 10GbE becoming far more common, network bandwidth constraints seem to be the least of your worries. It's not the same with storage. This is mainly down to cost and the fact that spinning hard drives haven't got any faster. You can't just swap out a slow drive for a faster one in a drive array, and a new array shelf is a large incremental cost.

Sure, flash is revolutionising array storage, but it's going to take time to replace spinning rust with flash, and again it often comes down to cost. Purchasing an all-flash array, or even just a shelf of flash for your existing array, is expensive and a large incremental jump when perhaps you just need some more oomph during your month-end job runs.

VDI environments have often borne the brunt of storage performance issues, simply due to the number of VMs involved, poor client software that was never written to be careful with storage IO and latency, and operational procedures used for mass updates of AV/patching etc. that simply kill any storage. VDI was often incorrectly justified with cost reduction as part of the benefit, which meant you never had any money to spend on storage for what ultimately grew into a massive environment with annoyed users battling poor performance.

Large, performance-critical VMs are also affected by storage. Any IO that has to travel along a remote path to a storage array is going to be that little bit slower. Your big databases would benefit enormously from reducing this round-trip time.



Along came PernixData at just the right time with a beautifully simple solution called FVP. Install some flash (SSD or PCIe) into your ESXi hosts, cluster it as a pooled resource, and then use software to offload IO from the storage array to the ESXi host. Even better, writes can be cached as well and protected within the flash cluster. The best IO in the world is the IO you don't have to do, and you can give your storage array a little more breathing room. The benefit is that you can keep your existing array with its long update cycles and squeeze a little more life out of it without an expensive upgrade or even moving VM storage. The name FVP doesn't stand for anything, by the way; it isn't short for Flash Virtualisation Platform, if you were wondering, which would be incorrect anyway as FVP accelerates more than flash.
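To make the write-caching idea above concrete, here is a toy sketch of write-back acceleration. This is purely illustrative and not PernixData's implementation: writes are acknowledged once they land in host-local "flash" and on a peer host for protection, and are destaged to the array later.

```python
# Toy model of FVP-style write-back caching (illustrative only):
# dicts stand in for local flash, a peer host's flash, and the array.

class WriteBackCache:
    def __init__(self, peer):
        self.local = {}        # stands in for host-local flash
        self.peer = peer       # stands in for a replica on another host
        self.dirty = set()     # blocks not yet destaged to the array

    def write(self, block, data):
        # Acknowledge once the data is in local flash and on a peer,
        # without waiting for the round trip to the storage array.
        self.local[block] = data
        self.peer[block] = data
        self.dirty.add(block)

    def read(self, block, array):
        # Serve hot reads from local flash; fall back to the array.
        return self.local.get(block, array.get(block))

    def destage(self, array):
        # Flush dirty blocks to the backing array in the background.
        for block in self.dirty:
            array[block] = self.local[block]
        self.dirty.clear()
```

The point of the sketch is the acknowledgement path: the slow hop to the array is taken out of the IO's critical path, while the peer copy keeps an acknowledged write safe if the host's flash fails before destaging.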

Read more…

What VMware’s EOL of vCenter Server Heartbeat means for availability

June 6th, 2014 2 comments

VMware has very surprisingly and suddenly stopped selling vCenter Server Heartbeat from 2nd June 2014. If you have already purchased vCenter Server Heartbeat you will still get support until 2018, so no panic that the whole carpet has been pulled from under your feet, but it does raise the question: what do you do going forward to make your vCenter installation more highly available if you need it?

In the EOL announcement, VMware suggests first of all running your vCenter as a VM to be able to take advantage of HA for high availability. If for some reason you cannot run vCenter as a VM (and you really need to ask yourself why) and it needs to stay physical, then the only option is a backup solution that can restore vCenter if it fails.

Read more…

What’s New in vCloud Suite 5.5: vSphere App HA

August 26th, 2013 No comments

VMware has announced version 5.5, the latest update to its virtualisation powerhouse, vCloud Suite.

To read the updates for all the suite components, see my post: What’s New in vCloud Suite 5.5: Introduction

vSphere App HA is another new product from VMware in 5.5, providing application-level HA in addition to what is available with vSphere HA. vSphere HA can only recover VMs when an ESXi host dies or restart a VM if the guest OS hangs. It is not application aware and can't detect and remediate software failures.

vSphere App HA provides application protection by detecting application availability issues and automatically remediating them.

Applications and their availability status are auto-discovered and a remediation policy can be created with just 3 clicks.

The policy can be configured to restart the application service and attempt a safe VM restart using the HA API if the application restart fails.
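The escalation logic of such a policy can be sketched as follows. The function and its parameters are hypothetical, not the App HA API: the idea is simply to retry the cheap fix (service restart) before escalating to the disruptive one (VM reset).

```python
# Illustrative sketch of a restart-then-escalate remediation policy.
# restart_service / restart_vm stand in for whatever the HA layer exposes;
# each returns True on success.

def remediate(restart_service, restart_vm, attempts=3):
    """Return how the failure was remediated: 'service', 'vm' or 'failed'."""
    for _ in range(attempts):
        if restart_service():
            return "service"
    # Service restarts exhausted: escalate to a safe VM restart.
    if restart_vm():
        return "vm"
    return "failed"
```

Ordering it this way keeps remediation proportionate: most application hangs are cleared by a service restart, and the VM is only bounced when that genuinely fails.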

App HA is integrated with VC alarms to provide visibility to application downtime.

It is deployed as a virtual appliance and is a plug-in to the vSphere Web Client.

App HA currently supports the following services and can run up to 400 agents:

  • MSSQL 2005, 2008, 2008R2, 2012
  • Tomcat 6.0, 7.0
  • TC Server Runtime 6.0, 7.0
  • IIS 6.0, 7.0, 8.0
  • Apache HTTP Server 1.3, 2.0, 2.2

Only one vFabric Hyperic server can be installed per vCenter Server, with one vSphere App HA plug-in.

It will be interesting to see how this product develops; support for more services must be on the roadmap. Perhaps this will also take over what vCenter Heartbeat currently does, although I hope vCenter in future works in a more active and federated way that doesn't require active/passive nodes.

Investigating the health of a vCenter database server

March 9th, 2011 No comments

VMware has released a new KB article all about investigating the health of a vCenter database.

I’ve blogged before on the major issue of vCenter being a massive single point of failure, and also on some steps to work out excessive growth in the database, which are now included in this article.

This new KB article does provide good advice and plenty of additional troubleshooting steps for working out where your issues are. But the fact remains that the current design for vCenter is far too monolithic, relying on a database that vCenter itself can corrupt, especially when VDI may require constant availability and more and more management products "bolt on" to vCenter.
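The growth check mentioned above boils down to comparing table-size snapshots over time to see which tables are ballooning. A minimal sketch, assuming you have pulled sizes (in MB) from the database at two points in time; the table names below are illustrative:

```python
# Rank vCenter database tables by growth between two size snapshots (MB),
# to spot runaway tables (typically event and task history).

def fastest_growing(before_mb, after_mb, top=3):
    """Return the top N tables by size growth between two snapshots."""
    growth = {t: after_mb.get(t, 0) - before_mb.get(t, 0) for t in after_mb}
    return sorted(growth.items(), key=lambda kv: kv[1], reverse=True)[:top]
```

Whatever tops the list tells you where to look next, for example tightening event and task retention settings rather than letting history accumulate indefinitely.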

Also, alarmingly, the final troubleshooting step is:

Reinitializing the vCenter database
A reinitialisation of the vCenter database resets it to the default configuration, as if vCenter Server had been newly installed. The following are a few situations which could warrant resetting the database:

  • Rebuild of vCenter is required
  • Data corruption is suspected
  • At the request of VMware Support


Categories: vCenter, VMware Tags: , ,

Designing a Virtual Infrastructure that Scales: Part 5, So, after all that…

January 13th, 2011 No comments

This post is the last in the series: Designing a Virtual Infrastructure that Scales.

I hope I've managed to give you some information on what you need to be considering when scaling your virtual environment. This series isn't actually about giving you all the answers, though, but rather about helping you think about the questions you need to be asking, to make sure you get the answers specific to your environment.

So, in summary:

  • Keep it simple!
  • Engage everyone
  • Do your research
  • Do thorough planning
  • Think big
  • Think ahead
  • Start small
  • Create modular building blocks
  • Make it the same everywhere

This all started as a presentation at the London VMware User Group meeting, #LonVMUG where I talked about some of the things you should be thinking about when scaling your virtual infrastructure.

Here are the links to all the posts:
Part 1, Taking Stock
Part 2, Speak to the People
Part 3, Scaling for VDI
Part 4, Design Thinking
Part 5, So, after all that…

Categories: Scale Tags: ,

Designing a Virtual Infrastructure that Scales: Part 4, Design Thinking

January 10th, 2011 No comments

This post is the fourth in the series: Designing a Virtual Infrastructure that Scales.

In Part 3, Scaling for VDI, I went through some of the considerations specific to VDI and talked about how big your environment can get if you decide to go VDI.

In this post it’s time to talk about actual infrastructure design and start thinking and planning for how to handle scale.

In Part 1, Taking Stock, I talked about how the virtual hosting environment you built a few years ago may be starting to get a little unwieldy to manage. I suggested: “Now is the time to pause if you can and take stock. Have a good look at your current environment and then zoom out and look at the big picture to plan the next stage of your virtual infrastructure because if you don’t you may find it running away from you.”

Hopefully by now you have an idea of what you actually need to virtualise. You've identified the number of servers and workstations, what resources they will require, who is going to be accessing them, from where and at what times. You should know which VMs require business recovery and to where. You have done some calculations on how many hosts you will need to host your VMs and planned failover capacity for HA and DRS. You have an idea of VM network requirements, storage space and IOPS required.

Now is the time to use all that information gathering and see how you can build an infrastructure to run it all.
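The host-count calculation mentioned above can be sketched as a back-of-the-envelope function. The figures and parameter names are illustrative, not a sizing formula from VMware: size for the tighter of the CPU and memory constraints, then add spare hosts for HA failover capacity.

```python
import math

def hosts_needed(total_vm_ghz, total_vm_gb, host_ghz, host_gb,
                 failover_hosts=1):
    """Estimate host count for a cluster, including HA failover headroom."""
    by_cpu = math.ceil(total_vm_ghz / host_ghz)   # hosts needed for CPU demand
    by_ram = math.ceil(total_vm_gb / host_gb)     # hosts needed for memory demand
    # Size for whichever resource is the tighter constraint,
    # then add N spare hosts so HA can restart VMs after a host failure.
    return max(by_cpu, by_ram) + failover_hosts
```

In practice memory is usually the binding constraint, which is why the sketch takes the maximum of the two counts rather than averaging them; a real design would also factor in overcommit ratios and DRS headroom.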

Read more…

Categories: Scale Tags: ,

Designing a Virtual Infrastructure that Scales: Part 3, Scaling for VDI

December 15th, 2010 2 comments

This post is the third in the series: Designing a Virtual Infrastructure that Scales.

In Part 2, Speak to the People, I went through some communication ideas to involve more people in the information gathering stage, to ultimately be able to put together a better infrastructure.

In this post I’m tackling the things you need to be thinking about when you consider VDI.

So…VDI, the promised solution to all your IT needs. No PCs on desks, thin clients, zero clients, happy clients, automated provisioning, stateless VMs, thin apps, streaming.

VDI is certainly a very different way of delivering IT to clients. It can be seen as terminal services on steroids, using some of the benefits of shared resources but allowing separation when you need it.

Why is VDI any different from just having VM workstations which people connect to?

Read more…

Categories: Scale Tags: ,

Designing a Virtual Infrastructure that Scales: Part 2, Speak to the People

December 9th, 2010 1 comment

This post is the second in the series: Designing a Virtual Infrastructure that Scales.

In Part 1, Taking Stock, I talked about looking at your current environment and seeing where you need to get to by doing a bit of crystal ball future capacity planning combined with understanding your current infrastructure limitations.

In this post I’m going to talk about something that isn’t done enough in infrastructure projects and that’s actually talking to real life people.

Many enterprise IT departments are pretty big places, often spread across the globe, working in different timezones in multiple separate vertical silos. You may well not have met everyone in your own team, let alone know what everybody else does in your office.

If you're going to think about an infrastructure change to support a much bigger virtual environment, isn't it worth looking at the really super-duper-bigger picture? Unless you know anything and everything (and there are some of you that may do!), you really should be getting other people involved from the very beginning to see if they would like to jump on the bandwagon and make changes to their environment for the greater good.

Read more…

Categories: Scale Tags: ,

Designing a Virtual Infrastructure that Scales: Part 1, Taking Stock

December 7th, 2010 No comments

I recently had the privilege of presenting at the London VMware User Group meeting, #LonVMUG where I talked about some of the things you should be thinking about when scaling your virtual infrastructure.

I’ve turned part of the presentation into a series of posts, going through some of the aspects you should be considering when your virtual environment demands bigger, better, faster, more!

Part 1, Taking Stock
Part 2, Speak to the People
Part 3, Scaling for VDI
Part 4, Design Thinking
Part 5, So, after all that…


Read more…

Categories: Scale Tags: ,

Planning for and testing against failure, big and small

November 30th, 2010 No comments

After my previous post about vCenter availability I thought I should expand on some other factors related to availability and what you should be thinking about to protect your business against failure.

Too often IT solutions are put in place without properly considering what could go wrong, and then people get surprised when things do. Sometimes the smallest things can make the biggest difference, and you can end up with your business not operating because some very small technical glitch that could have been avoided brings everything down.

Planning for failure should be a major part of any IT project, and the cliché is certainly true: "If you fail to plan, you plan to fail." Planning for failure includes big things like a full site disaster recovery plan, but also small things like ensuring all your infrastructure components are redundant and you don't have any single points of failure.

Read more…