Archive

Posts Tagged ‘storage’

What’s in PernixData FVP’s secret sauce

July 31st, 2014 No comments

Anyone who manages or architects a virtualisation environment battles against storage performance at some stage or another. If you run into compute resource constraints, it is very easy and fairly cheap to add more memory or perhaps another host to your cluster.

Being able to add compute incrementally makes it very simple and cost effective to scale. Networking is similar: it is very easy to patch in another 1Gb port, and with 10Gb becoming far more common, network bandwidth constraints seem to be the least of your worries. It’s not the same with storage. This is mainly down to cost and the fact that spinning hard drives haven’t got any faster. You can’t just swap out a slow drive for a faster one in a drive array, and a new array shelf is a large incremental cost.

Sure, flash is revolutionising array storage, but it’s going to take time to replace spinning rust with flash and again it often comes down to cost. Purchasing an all-flash array, or even just a shelf of flash for your existing array, is expensive and a large incremental jump when perhaps you just need some more oomph during your month-end job runs.

VDI environments have often borne the brunt of storage performance issues, simply due to the number of VMs involved, client software that was never written to be careful with storage IO and latency, and operational procedures for mass AV updates and patching that simply kill any storage. VDI was often incorrectly justified with cost reduction as part of the benefit, which meant you never had any money to spend on storage for what ultimately grew into a massive environment with annoyed users battling poor performance.

Large performance-critical VMs are also affected by storage. Any IO that has to travel along a remote path to a storage array is going to be that little bit slower. Your big databases would benefit enormously from reducing this round-trip time.

FVP

Along came PernixData at just the right time with what was such a simple solution called FVP. Install some flash (SSD or PCIe) into your ESXi hosts, cluster it as a pooled resource and then use software to offload IO from the storage array to the ESXi host. Even better, be able to cache writes as well and protect them in the flash cluster. The best IO in the world is the IO you don’t have to do, and you could give your storage array a little more breathing room. The benefit was you could use your existing array with its long update cycles and squeeze a little more life out of it without an expensive upgrade or even moving VM storage. The name FVP doesn’t stand for anything, by the way. It isn’t short for Flash Virtualisation Platform, if you were wondering, which would be incorrect anyway as FVP accelerates more than flash.

Read more…

HP Smart Array Controller can cause an ESXi PSOD – patch now available

July 7th, 2014 No comments

Time to check your HP Smart Array Controller driver versions.

HP has issued an advisory for ESXi 5.x with a number of Smart Array Controllers that can cause an out of memory condition which could lead to a PSOD if you are running the hpsa driver version 5.x.0.58-1. VMware also has a KB explaining the issue.

You can now avoid this without having to downgrade the driver by upgrading to the 5.x.0.60-1 version: that’s HP Smart Array Controller Driver (hpsa) version 5.0.0.60-1 for ESXi 5.0 and ESXi 5.1, or version 5.5.0.60-1 for ESXi 5.5.
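
If you want to see where your hosts stand first, a minimal PowerCLI sketch along these lines should list the installed hpsa driver version on each host (the vCenter name is hypothetical):

    # List the installed hpsa VIB on every host (a sketch; adjust the
    # vCenter name and filtering to suit your environment)
    Connect-VIServer -Server vcenter.example.com
    foreach ($vmhost in Get-VMHost) {
        $esxcli = Get-EsxCli -VMHost $vmhost
        $vib = $esxcli.software.vib.list() | Where-Object { $_.Name -match "hpsa" }
        "{0}: {1} {2}" -f $vmhost.Name, $vib.Name, $vib.Version
    }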

You can download the new driver in various formats and update your hosts using a VIB file, the HP Software Depot, or the latest offline bundle.

The latest HP-supplied ESXi images for June 2014 contain this patch, so it is probably easiest to upgrade using these if you are happy to update the whole bundle.

Categories: ESX, HP, VMware

HP’s new management appliance OneView updated to 1.1

July 4th, 2014 No comments

HP has updated its all-singing, all-dancing management appliance, OneView, to version 1.1.

This is now available for download after being announced at HP Discover last month.

HP OneView will be the ultimate replacement for HP Systems Insight Manager (HP SIM), HP Virtual Connect Enterprise Manager (VCEM), HP Insight Control and HP Intelligent Provisioning. It is delivered as a virtual appliance running on a hypervisor.

HP is putting a lot of effort into OneView and really trying to reimagine server management. I was never a fan of HP SIM as I felt it was unnecessarily cumbersome; HP has specifically said one of the goals of OneView is to make server management far easier and quicker, with a lighter touch. In fact, HP is not rushing to add functionality to OneView but taking a pragmatic approach, only adding what is absolutely needed. HP’s answer to Vblock is its Converged Systems, which are built, configured and managed by OneView, so HP has skin in the management game. Converged infrastructure is not just connecting hardware together but requires converged management, which OneView aims to deliver.

Moving over to OneView is going to be a long process, however, as OneView has been designed to manage only Gen8 and future servers, with just a little management available for G7 servers. More complicated still, there is no migration path from Virtual Connect to OneView: you need to delete your Virtual Connect domains and recreate them in OneView, which means shutting down every blade in your domain (up to 4 chassis) and starting from scratch. HP calls this a transition, not a migration. Not all current Virtual Connect functionality is available in OneView, so you may not even be able to configure your newly purchased chassis in OneView, depending on your required network config.

That said, OneView is going to be the future of server management, so you should be thinking in that direction for your future plans. One of the stumbling blocks may be licensing: you need to purchase or upgrade existing management software licences to use OneView.

What’s new with 1.1?

  • Now available as a Hyper-V appliance along with ESXi
  • You can now provision and manage 3PAR storage, integrating the configuration into server profiles.
  • Added support for the new 20/40 FlexFabric Modules
  • Virtual Connect support for untagged traffic and VLAN tunnelling (OneView was pretty hampered by this before).
  • BIOS settings as part of server profiles (nice one!)
  • Inventory views of Cisco Nexus 5000 switches and HP FEX modules, which will be very useful.
  • Server Profiles for Gen8 rack mount servers to update firmware and BIOS settings for DL360/DL380
  • HP Insight Control for VMware vCenter Server is now HP OneView for VMware vCenter
  • HP Insight Control for Microsoft System Center is now HP OneView for Microsoft System Center
  • HP Insight Control for Red Hat Enterprise Virtualization is now HP OneView for Red Hat Enterprise Virtualization

Here’s all the documentation and another link with some of the other guides (HP, it may be worth putting them all in one place).

Categories: HP

HP Discover here I come!

June 2nd, 2014 No comments

I’m very excited that HP has invited me to attend HP Discover in Las Vegas next week as a blogger. It’s going to be an intense few days: 15 hours of travel each way for 2.5 days of HP Discover!

I’m particularly looking forward to speaking to HP product managers and executives and chatting to other bloggers and attendees to get a sense of what’s new with the “new” HP.

Management

Product-wise, I’m first of all keen to delve deeper into HP OneView, HP’s converged infrastructure manager, which aims to finally bring together HP’s disparate management tools, ultimately replacing HP SIM (which I really don’t like) and incorporating Virtual Connect Enterprise Manager. HP OneView runs as a virtual appliance and you pull in your servers, iLOs, chassis, Virtual Connects etc. where they can be managed and reported on from one place. It has an API so you can finally script against it with PowerCLI and other tools, which cannot come soon enough.
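
To give a flavour of what scripting against it might look like, here is a minimal PowerShell sketch against the OneView REST API. The appliance address and credentials are made up and the X-API-Version header value varies between OneView releases, so treat this as illustrative rather than definitive:

    # Log in to the OneView appliance and list server hardware (a sketch;
    # appliance name, credentials and API version are assumptions)
    $appliance = "oneview.example.com"
    $body = @{ userName = "administrator"; password = "P@ssw0rd" } | ConvertTo-Json
    $headers = @{ "X-API-Version" = "101" }
    $session = Invoke-RestMethod -Uri "https://$appliance/rest/login-sessions" `
        -Method Post -Body $body -ContentType "application/json" -Headers $headers
    # The returned session ID authenticates subsequent calls
    $headers["Auth"] = $session.sessionID
    $servers = Invoke-RestMethod -Uri "https://$appliance/rest/server-hardware" -Headers $headers
    $servers.members | Select-Object name, powerState, model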

Read more…

What’s New in vCloud Suite 5.5: vSphere Replication and vCenter Site Recovery Manager

August 26th, 2013 No comments

VMware has announced its latest update to version 5.5 of its global virtualisation powerhouse, vCloud Suite.

To read the updates for all the suite components, see my post: What’s New in vCloud Suite 5.5: Introduction

vSphere Replication

What’s New:

  • The user interface within the Web Client has been beefed up. The VM and vCenter management panes have been enhanced to configure and monitor replication.
  • You can now deploy new vSphere Replication appliances to allow for replication between clusters and non-shared storage deployments and also to meet load balancing requirements.
  • There are now multiple point-in-time snapshots, so if you have a VM with an OS corruption that has already been replicated you can select an earlier snapshot to recover from, before the corruption occurred. This isn’t the same as replicating VMs with existing snapshots, which isn’t supported. Point-in-time snapshots are created at the recovery site after replication.
  • There is now Storage DRS Interoperability so replicated VMs can be Storage vMotioned across datastores without interrupting ongoing replication.
  • VSAN support has been added to protect and recover VMs running on the new VSAN datastores.

vCenter Site Recovery Manager

What’s New:

  • Storage DRS and Storage vMotion are now supported when VMs are migrated within a consistency group.
  • VMs running on Virtual SAN (VSAN) datastores can be protected using vSphere Replication. You can use VSAN datastores on both the protected and recovery sites. There are a few considerations when using VSAN and SRM, so read the documentation.
  • You can now recover and preserve multiple point-in-time snapshots of VMs that were protected with vSphere Replication.
  • VMs that reside on Virtual Flash (vFlash) can be protected. vFlash is disabled on VMs after recovery.
  • IBM DB2 is no longer supported as an SRM database

What’s New in vCloud Suite 5.5: VMware Virtual Flash (vFlash)

August 26th, 2013 No comments

VMware has announced its latest update to version 5.5 of its global virtualisation powerhouse, vCloud Suite.

To read the updates for all the suite components, see my post: What’s New in vCloud Suite 5.5: Introduction

VMware Virtual Flash (vFlash), or to use its official name, “vSphere Flash Read Cache”, is one of the standout new features of vCloud Suite 5.5.

vFlash allows you to take multiple flash devices in hosts in a cluster and virtualise them to be managed as a single pool. In the same way CPU and memory are seen as a single virtualised resource across a cluster, vFlash creates a cluster-wide flash resource.

VMs can be configured to use this vFlash resource to accelerate read performance. vFlash works in write-through cache mode, so it doesn’t cache writes in this release; it just passes them to the back-end storage. You don’t need in-guest agents or changes to the guest OS or application to take advantage of vFlash. You can have up to 2TB of flash per host, and NFS and VMFS datastores as well as RDMs are supported. Hosts can also use this resource for the Host Swap Cache, which is used when the host needs to page memory to disk.

A VMDK can be configured with a set amount of vFlash cache, giving you control over exactly which VM disks get the performance boost, so you can pick your app database drive without having to boost your VM OS disk as well. You can configure DRS-based vFlash reservations; there aren’t any shares settings, but these may come in a future release. vMotion is also supported: you can choose whether to vMotion the cache along with the VM or to recreate it on the destination host. vSphere HA is also supported, but when the VM starts the cache will need to be recreated on the recovery host.
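
As a rough illustration of that per-VMDK control, here is a PowerCLI sketch that sets a read cache reservation on a single disk through the vSphere API. The VM and disk names are hypothetical and the property names follow the 5.5 API documentation, so verify before use:

    # Reserve 10GB of vFlash read cache for one VMDK (a sketch; VM and
    # disk names are made up, API property names should be verified)
    $vm = Get-VM -Name "app-db-01"
    $disk = Get-HardDisk -VM $vm -Name "Hard disk 2"

    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $change = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $change.Operation = "edit"
    $change.Device = $disk.ExtensionData
    $change.Device.VFlashCacheConfigInfo = New-Object VMware.Vim.VirtualDiskVFlashCacheConfigInfo
    $change.Device.VFlashCacheConfigInfo.ReservationInMB = 10240
    $spec.DeviceChange = @($change)
    $vm.ExtensionData.ReconfigVM($spec)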

Read more…

What’s New in vCloud Suite 5.5: Virtual SAN (VSAN)

August 26th, 2013 No comments

VMware has announced its latest update to version 5.5 of its global virtualisation powerhouse, vCloud Suite.

To read the updates for all the suite components, see my post: What’s New in vCloud Suite 5.5: Introduction

Virtual SAN (VSAN) is one of the highlights of the new vCloud Suite 5.5 and is a really strong push further into VMware’s vision of the Software Defined Data Center (SDDC). VSAN was previewed at VMworld 2012, when it was called VMware Distributed Storage. VSAN is in public beta and won’t be available with the initial release of vSphere 5.5, but with the first update, which is scheduled for next year.

VSAN is a VMware developed software-based storage solution built into the ESXi hypervisor that uses the host’s local disk drives (SSD & HDD) and then aggregates them so they appear as a cluster wide pool of storage shared across all hosts.

It is a highly available, scale-out clustered storage solution to host VMs. It brings CPU, memory and storage closer together, which is certainly the idea that Nutanix has been successfully running with.

In a simplistic way this is just another Virtual Storage Appliance (VSA), but embedded within the hypervisor rather than running as an appliance. However, a VSA by itself isn’t, in my opinion, a true part of the SDDC, in the same way that a firewall that happens to be running as a VM isn’t true software-defined networking.

VMware makes VSAN more software-defined than a standard VSA by implementing automated storage management with per-VM policies and per-VM QoS enforcement. That’s a lot to digest, but I’ll get back to what it actually means.

VSAN is fully integrated with vCenter, managed through the vSphere Web Client and works seamlessly with HA, DRS and vMotion. It is very easy to setup, configure and manage and yet provides enterprise features and performance scaling from terabytes to petabytes.

Thin provisioning is available along with support for VM snapshots, cloning, backup and replication using vSphere Replication and Site Recovery Manager.

VMware is listing the VSAN use cases as VDI, test/dev, Big Data and DR. I think they are covering themselves by not including production workloads, as this is a version 1 release and they want to see how it works in the real world at scale before committing, and they will also want to mature the functionality. I’d love to meet anyone who considers a real VDI deployment less important than production though!

If you’re wondering about the difference between VSAN and VMware’s own vSphere Storage Appliance (VSA): VSAN is implemented in the hypervisor while VSA is a virtual appliance presenting an NFS datastore; VSAN can use flash as a read cache and write buffer, which VSA can’t; and VSAN has the whole policy-based per-VM management, which VSA does not.

Requirements

You need a minimum of 3 vSphere 5.5 ESXi hosts with local storage to create a VSAN. You can scale out to 8 hosts providing storage to the VSAN, with a maximum cluster size of 32 hosts able to consume that storage.

You obviously need vCenter to manage VSAN.

You need at least 1 x HDD and 1 x SSD in each contributing host; SSDs are used as a read cache and write buffer and the HDDs are used as the persistent store. Not all hosts in the cluster have to have local storage, some can just be compute nodes, but you need at least 3 with local storage to create a VSAN. Hosts don’t have to have the same drive sizes as long as each host contributing storage has at least 1 x SSD and 1 x HDD. Hosts with no storage can still use the VSAN.
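
Once the hardware is in place, enabling VSAN on a cluster is a one-liner in later PowerCLI releases (5.5 R2 onwards added the VSAN parameters); a minimal sketch, assuming a hypothetical cluster name:

    # Enable VSAN with automatic disk claiming (requires PowerCLI 5.5 R2+;
    # the cluster name is an assumption)
    Set-Cluster -Cluster (Get-Cluster -Name "Prod") -VsanEnabled $true `
        -VsanDiskClaimMode Automatic -Confirm:$false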

You cannot have ESXi and VSAN using the same disk, and in this release VSAN won’t work with Auto Deploy. This means you will either need another disk or disks for the ESXi boot partition, boot from SAN, or use an SD card or USB stick for the ESXi installation.

You need a SAS/SATA RAID controller that works in pass-through or HBA mode, as the disks need to be presented as SCSI devices to VSAN. Some PCIe flash devices are presented as a block device and so won’t work with VSAN. You can use the same RAID controller for the SSD and HDD disks, but if your RAID controller is going to be an IO bottleneck you would then need to think about having separate controllers or spreading out the IO.

VSAN uses a VMkernel port to connect the hosts’ local storage together. You can use either 1Gb or 10Gb networking, but 10Gb is obviously preferred. You tag a VMkernel port with the Virtual SAN traffic service, just like you do with vMotion and FT.
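
Later PowerCLI releases expose this tagging directly; a minimal sketch, assuming hypothetical host and adapter names:

    # Enable the Virtual SAN traffic service on an existing VMkernel port
    # (requires PowerCLI 5.5 R2+; host and vmk names are assumptions)
    Get-VMHost -Name "esxi01.example.com" |
        Get-VMHostNetworkAdapter -VMKernel -Name "vmk1" |
        Set-VMHostNetworkAdapter -VsanTrafficEnabled $true -Confirm:$false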

Policies and QoS

VM Storage Policies are what set VSAN apart from a standard VSA and really make it software-defined, but this can take a little time to get your head around, so bear with me.

To understand how this works you need to separate the underlying VSAN datastore from the VM and put a layer of policy between the two.

When you create a VSAN cluster, you select the local disks to use across your hosts and a vsanDatastore is created automatically. You don’t just deploy a VM directly to this vsanDatastore; that wouldn’t be very software-defined, would it!

What happens is the vsanDatastore advertises to vCenter, using the standard VMware vStorage APIs for Storage Awareness (VASA), a number of capabilities it can offer VMs, such as how many host or disk failures to tolerate, a way to define how many IOPS a VM requires, and how much of a VM’s reads should always be kept in SSD cache.

What you then do is create a VM Storage Policy to define how you want your VMs to use these capabilities. The standard explanation for these kinds of things is Gold, Silver and Bronze policies but that isn’t particularly meaningful. You can in fact use different policies for individual VM disk files (called storage objects).

You could create a simple VM Storage Policy called “High Performance VMs” which would say that VM disks based on this policy are stored with at least 6 replicas for higher performance. You could then create another simple policy called “Critical Availability VMs” which would ensure that VM disks based on this policy are stored on at least 3 hosts so even with two host failures your VMs will continue to function. You can also create policies which specify multiple capabilities such as “High Performance Critical Availability VMs” which would ensure there are 6 data replicas for higher performance spread across at least 3 hosts (remember, you can have multiple disks in a host).

For VSAN, you can create policies based on 5 capabilities (a scripted example follows these descriptions):

Stripe Width

The number of physical disks across which each replica of a storage object is distributed, up to a maximum of 12. Striping across more disks can give you better performance (throughput and bandwidth) but also results in higher system resource use, as data is written to more places.

Component Failures To Tolerate

Defines the number of host, disk or network failures a storage object can tolerate, up to a maximum of 3. For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts are required; tolerating one failure, for example, means two copies of the data plus a witness spread across three hosts.

Proportional Capacity %

Percentage of the logical size of the storage object that should be reserved (thick provisioned) up to 100%. The rest of the storage object is thin provisioned.

Cache Reservation

Flash capacity reserved as read cache for the storage object which is specified as a percentage of the logical size of the object up to 100%. This is only used for addressing read performance issues. Reserved flash capacity cannot be used by other objects and unreserved Flash is shared fairly between all objects.

Force Provisioning

If this option is enabled, the object will be provisioned even if the policy specified in the storage service level can’t be satisfied with the resources currently available in the cluster. VSAN will try to bring the object into compliance if and when resources become available. This is disabled by default.
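
As mentioned above, policy creation can also be scripted. The SPBM cmdlets below arrived in later PowerCLI releases (6.0 onwards), and the capability names are my assumptions about the VSAN provider’s identifiers, so treat this as a sketch only:

    # Build a "High Performance Critical Availability VMs" style policy
    # (requires PowerCLI 6.0+; capability names are assumptions)
    $ruleSet = New-SpbmRuleSet -AllOfRules @(
        (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 2),
        (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 6)
    )
    New-SpbmStoragePolicy -Name "High Performance Critical Availability VMs" -AnyOfRuleSets $ruleSet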

When you deploy a VM you don’t actually select a datastore on which to provision the VM disks, but rather assign one of the policies you have created, possibly with separate policies for each VM disk.

When the VM is deployed, the VM Storage Policy is sent down to the VSAN which then lays out the VMDK across the cluster to satisfy the policy settings. Any VM deployed with your “High Performance Critical Availability VMs” policy is therefore stored based on the policy rules.

With VSAN this means you can create a single cluster-wide datastore and enforce different QoS policy levels for each VM or virtual disk. That is pretty powerful stuff. Also, if you then decide to change the policy, all VMs or disks based on that policy will have their storage layout amended to comply.

This policy based system isn’t just for VMware VSAN. EMC, NetApp, Dell etc. will have their own set of capabilities their storage arrays can provide which will be sent up through VASA to be used within Storage Policies.

VSAN is pretty exciting as it brings shared storage to everyone without requiring a traditional SAN. What is even more interesting is we can see the power of policy based VM storage provisioning. I can already think of ways this can be extended by having capabilities available for replication based on various RPOs and RTOs.

It will be interesting to see how the recently released PernixData FVP plays in this area, as FVP is a transparent storage performance tier that also runs as part of the hypervisor, leveraging SSD for high performance and HDD for capacity, but has deduplication built in. Can you use FVP on top of VSAN? Interesting times.

Categories: vCenter, VMware, VMworld

What’s New in vCloud Suite 5.5: Introduction

August 26th, 2013 1 comment

VMware has announced its latest update to version 5.5 of its global virtualisation powerhouse, vCloud Suite.

I would say that this is an evolutionary rather than revolutionary update, being the third major release in the vSphere 5 family (5.0, 5.1, 5.5).

There are however some significant storage additions such as Virtual SAN (VSAN) and VMware Virtual Flash (vFlash) as well as a new vSphere App HA to provide application software high availability which is in addition to vSphere HA.

VMware has also responded to the customer frustration over Single Sign-On (SSO), which is an authentication proxy for vCenter, and made some changes to SSO to hopefully make it easier to deploy. Every component of the suite has been updated in some way, which is an impressive undertaking to get everything in sync.

Here are all the details:

  1. What’s New in vCloud Suite 5.5: Introduction
  2. What’s New in vCloud Suite 5.5: vCenter Server and ESXi
  3. What’s New in vCloud Suite 5.5: vCenter Server SSO fixes
  4. What’s New in vCloud Suite 5.5: Virtual SAN (VSAN)
  5. What’s New in vCloud Suite 5.5: VMware Virtual Flash (vFlash)
  6. What’s New in vCloud Suite 5.5: vCloud Director
  7. What’s New in vCloud Suite 5.5: vCenter Orchestrator
  8. What’s New in vCloud Suite 5.5: vCloud Networking & Security
  9. What’s New in vCloud Suite 5.5: vSphere App HA
  10. What’s New in vCloud Suite 5.5: vSphere Replication and vCenter Site Recovery Manager

VMware is certainly evolving its strategy of the software-defined data center; this release puts software-defined storage (SDS) on the map, at least from a VMware perspective, and it is a multi-year project. VMware vVolumes hasn’t made it into this release, which shows what a major undertaking it is; we will have to wait for vSphere 6!

SDS is going to have a huge push this year from VMware and of course all the other storage vendors, so expect some exciting innovation.

Software-defined networking is the next traditional IT infrastructure piece to “Defy convention” and is arguably by far the hardest one to change. Another multi-year project is just beginning.

NetApp PowerShell Toolkit, DataONTAP 3 released with new Performance Monitoring and full ONTAP 8.2 API Support

August 9th, 2013 No comments

NetApp has updated its PowerShell Toolkit, DataONTAP, to version 3.

Two major features have been added. The first is a new cmdlet, Invoke-NcSysstat, which is like Invoke-NaSysstat and allows you to monitor cluster system performance stats for: System, FCP, NFSv3, NFSv4, CIFS, iSCSI, Volume, Ifnet, LUN and Disk.

Invoke-NcSysstat works in both the cluster and Vserver context for Data ONTAP 8.2 and up. For Data ONTAP versions prior to 8.2, Invoke-NcSysstat must be run in the cluster context. Ifnet and Disk performance stats aren’t available when running against the Vserver context.

Invoke-NcSysstat can also aggregate performance stats for selected objects.
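
As a quick sketch of what this might look like in practice (the cluster name is made up and the parameter names are assumptions on my part, so check Get-Help Invoke-NcSysstat for the real signature):

    # Sample volume performance stats from a cluster a few times
    # (cluster name and parameter names are assumptions)
    Import-Module DataONTAP
    Connect-NcController -Name cluster1.example.com
    Invoke-NcSysstat -ObjectType Volume -Interval 5 -Count 10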

The second major feature is that ONTAP 8.2 API support is now complete, with 67 new cmdlets in the clustered ONTAP set and 27 cmdlets with new parameters for Data ONTAP 8.2, for a total of 1,738 cmdlets.

Read more…

NetApp releases its Virtual Storage Console (VSC) PowerShell Toolkit 1.0

December 5th, 2012 No comments

NetApp has extended its PowerShell management to its vCenter plug-in, the Virtual Storage Console (VSC). The toolkit can be downloaded from here.

NetApp currently has a PowerShell toolkit called DataONTAP for managing its controllers but this new toolkit has been developed to manage the VSC directly.

As this is a 1.0 release, not all VSC functionality is available. Currently it can perform the Provisioning & Cloning operations of VSC, such as creating and deleting datastores and starting a VSC rapid clone and redeploy. Also included is the new ability in VSC 4.1 to do a vCloud Director vApp clone.

Going forward, NetApp is working on exposing more of the VSC functionality through an API that will be available to the toolkit, so expect plenty more to come.

This is a great addition from NetApp as it allows you to include the cleverness of the VSC along with your PowerCLI automation in the same scripts.

Here is a list of the available cmdlets (a usage sketch follows the list):

  • Connect-vsc: Connect to the web service hosting the VSC Provisioning and Cloning APIs.
  • Get-vscManagedObjectRef: Get the managed object reference (aka MORef) string for a vCenter object.
  • Get-vscVirtualMachine: Get vmSpec objects for all virtual machines that were created based on the virtual machine specified.
  • Get-vscVmFileSpec: Get vmFileSpec objects that define a virtual machine clone source.
  • Get-VCloudCredentialStatus: Verifies vCloud Director connection status.
  • Get-VCloudVCenterCredentialsStatus: Verifies vCenter(s) connection status information.
  • New-vscControllerSpec: Convenience cmdlet to create a controllerSpec object with the ability to prompt for credentials.
  • New-vscDatastore: Create a new VMware datastore.
  • New-VAppClone: Perform vApp cloning or provisioning operation.
  • Remove-vscDatastore: Delete a VMware datastore.
  • Remove-VCloudCredentials: Remove vCloud Director credentials.
  • Remove-VCloudVCenterCredentials: Remove vCenter credentials.
  • Set-vscDatastoreSize: Resize a VMware datastore.
  • Set-VCloudCredentials: Set vCloud Director credentials.
  • Set-VCloudVCenterCredentials: Set vCenter credentials.
  • Start-vscClone: Start a vsc rapid cloning operation.
  • Start-vscRedeploy: Start a vsc virtual machine redeploy operation.
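
To show the idea of mixing VSC operations and PowerCLI in one script, here is a very rough sketch. The connection details and datastore parameters are pure assumptions on my part (check Get-Help Connect-vsc and friends for the real signatures):

    # A hedged sketch combining the VSC toolkit with PowerCLI; server
    # names, credentials and parameter names below are assumptions
    Connect-VIServer -Server vcenter.example.com
    Connect-vsc -Server vcenter.example.com -User admin -Password P@ssw0rd

    # Resolve a vCenter object to the MORef string the VSC APIs expect
    $clusterMoRef = Get-vscManagedObjectRef -Entity (Get-Cluster -Name "Prod")

    # Carve out a new datastore via VSC, then consume it with plain PowerCLI
    New-vscDatastore -TargetMoRef $clusterMoRef -Name "nfs_ds01" -SizeGB 500
    New-VM -Name "test-vm01" -Datastore (Get-Datastore -Name "nfs_ds01") -ResourcePool (Get-Cluster -Name "Prod")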