Archive

Archive for the ‘ESX’ Category

What’s New in vSphere 6.0: Virtual Volumes

August 26th, 2014 1 comment

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

Virtual Volumes (VVols) is one of the big new additions to vSphere 6.0. VMware has been talking about it publicly since VMworld 2011 (I called VVols “VMware’s game changer for storage”) and it is a very significant update. VVols completely change the way storage is presented, managed and consumed, and certainly for the better. Most storage vendors are on board, as their software needs to be able to support VVols, and they’ve been champing at the bit for VVols to be released. Word was it was technically ready for vSphere 5.5 but VMware decided to hold it back, perhaps to let VSAN have its year in the sun and to give 6.0 something big.

VVols is all about changing the way storage is deployed, managed and consumed, making the storage system VM-centric. VMware likes to use the term “making the VMDK a first-class citizen in the storage world”.

 


Virtual Volumes is part of VMware’s Software Defined Storage story, which is split between the control plane, Virtual Data Services, which is entirely policy driven, and the data plane, the Virtual Data Plane, which is where the data is actually stored.

 


Read more…

What’s New in vSphere 6.0: Multi-CPU Fault Tolerance

August 26th, 2014 1 comment

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

It’s been many, many years in the making, but at last Fault Tolerance for multi-processor VMs has seen the light of day and was announced during the VMworld keynote today.

FT will now support VMs with up to 4 vCPUs and 64GB RAM. SMP-FT, as it’s called, works differently from FT for single-vCPU VMs. There is a new fast check-pointing mechanism to keep the primary and secondary in sync. Previously a “Record-Replay” sync mechanism was used, but the new fast check-pointing has allowed FT to expand beyond a single vCPU. Record-Replay kept a secondary VM in “virtual lockstep” with the primary. With fast check-pointing, the primary and secondary VM execute the same instruction stream simultaneously, making it much faster. If the FT network latency is too high for the VMs to stay in sync, the primary will be slowed down to the point that the secondary can keep up. You can also now hot-configure FT.
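
Below is a minimal pyVmomi sketch of turning FT on for a VM programmatically, assuming the long-standing CreateSecondaryVM_Task call is still the way to enable FT in 6.0; the vCenter address, credentials and VM name are placeholders, so treat it as illustrative rather than a tested recipe.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification; use proper certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vc.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app01")  # placeholder VM name
    view.DestroyView()

    # vSphere 6.0 SMP-FT supports up to 4 vCPUs and 64GB RAM per the announcement.
    if vm.config.hardware.numCPU > 4:
        raise RuntimeError("More than 4 vCPUs; not supported by SMP-FT")

    # Enabling FT creates the secondary VM; passing host=None and letting
    # vCenter/DRS choose where the secondary runs is an assumption in this sketch.
    task = vm.CreateSecondaryVM_Task(host=None)
    print("FT enable task started:", task.info.key)
finally:
    Disconnect(si)
```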


Read more…

What’s New in vSphere 6.0: Virtual Data Center (removed from release)

August 26th, 2014 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

UPDATE 02/02/2015: Virtual Data Center and the Policy Based Management component, which were both talked about at VMworld, have been pulled from the final release. It seems VMware needs more time to work out which policy and automation functionality goes into vRealize Automation Center, vCloud Director and vCenter itself. It’s a shame really, as these components were real enablers for the SDDC; being able to control placement of VMs by policy will have to wait until another day.

Briefly shown in the VMworld Day 2 keynote demos was deploying a VM to a Virtual Datacenter, which is in fact planned as a new addition to vSphere. Well, I say new addition, which is true, but it’s an old name brought back to life. The whole message of vSphere 4.0 was about creating a “Virtual Datacenter”. You could move physical machines into your virtual datacenter! Now we’ve come full circle and “Virtual Data Centers” are back!

In vSphere 6.0, a Virtual Datacenter aggregates compute clusters, storage clusters, networks and policies. In this first release, a virtual datacenter can aggregate resources across multiple clusters within a single vCenter Server into a single large pool of capacity. This will benefit large deployments such as VDI where you have multiple clusters with similar network and storage connections, as you can now group them together.

Within this single pool of capacity, the Virtual Data Center will automate VM initial placement by deciding in which cluster the VM should be placed based on capacity and capability.

You can then create VM placement and storage policies and associate these policies with specific clusters or hosts as well as the datastores they are connected to. Such a policy might, for example, keep SQL VMs on a subset of hosts within a particular cluster for licensing reasons. You can then monitor adherence to these policies and automatically remediate any issues. When you deploy a VM, you select from the various policies and the Virtual Datacenter, based on those policies, decides where the VM is placed. This again is to try to reduce the opex admin decisions of where VMs are placed.

Virtual Data Centers require clusters with DRS enabled to handle the initial placement; individual hosts cannot be added. You can remove a host from a cluster within a Virtual Data Center by putting it in maintenance mode; all VMs will stay within the VDC, moving to other hosts in the cluster. If you need to remove a cluster or turn off DRS for any reason and can’t use Partially Automated Mode, you would remove the cluster from the Virtual Data Center. The VMs would stay in the cluster but would no longer have VM placement policy monitoring checks done until the cluster rejoins a Virtual Data Center. You could manually vMotion VMs to other clusters within the VDC before removing a cluster.

Read more…

What’s New in vSphere 6.0: Cross vCenter and Long Distance vMotion

August 26th, 2014 3 comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vMotion is one of the most basic yet coolest features of vSphere. People generally consider the time they saw vMotion work for the first time as their “wow” moment showing the power of virtualisation. In vSphere 5.5, vMotion is possible within a single cluster and across clusters within the same datacenter and vCenter. With vSphere 6.0, vMotion is being expanded to include vMotion across vCenters, across virtual switches, across long distances and over routed vMotion networks, aligning vMotion capabilities with larger data center environments.

vMotion across vCenters will simultaneously change compute, storage, networks, and management. This leverages vMotion with unshared storage and will support local, metro and cross-continental distances.

You will need the same SSO domain for both vCenters if you use the GUI to initiate the vMotion, as the VM UUID can be maintained across vCenter Server instances, but with the API it is possible to use a different SSO domain. VM historical data such as Events, Alarms and Task History is preserved. Performance data will be preserved once the VM is moved but is not aggregated in the vCenter UI; the information can still be accessed using 3rd-party tools or the API using the VM instance ID, which remains the same across vCenters.
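
As a rough sketch of what the API route might look like: the vSphere 6.0 relocate API adds a service locator pointing at the destination vCenter. The pyVmomi snippet below is illustrative only, and every name, credential and thumbprint in it is a placeholder rather than something from this post.

```python
from pyVmomi import vim

def cross_vcenter_relocate(vm, dest_host, dest_pool, dest_folder, dest_datastore,
                           dest_vc_url, dest_vc_uuid, dest_thumbprint,
                           dest_user, dest_pwd):
    """Start a cross-vCenter vMotion of 'vm' to resources owned by another vCenter."""
    # Credentials for the destination vCenter (placeholders).
    cred = vim.ServiceLocator.NamePassword(username=dest_user, password=dest_pwd)
    # The service locator tells the source vCenter where the destination vCenter lives.
    service = vim.ServiceLocator(url=dest_vc_url,            # e.g. "https://vc2.example.com"
                                 instanceUuid=dest_vc_uuid,  # destination vCenter instance UUID
                                 sslThumbprint=dest_thumbprint,
                                 credential=cred)
    spec = vim.vm.RelocateSpec(host=dest_host,
                               pool=dest_pool,
                               folder=dest_folder,
                               datastore=dest_datastore,
                               service=service)
    # Returns a task on the source vCenter; the VM keeps its instance UUID,
    # which is what lets historical data be correlated across vCenters.
    return vm.RelocateVM_Task(spec=spec)
```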

Read more…

EVO: Rail – Integrated Hardware and Software

August 25th, 2014 No comments


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners, who will ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

Each of the four compute nodes within the 2U appliance has a very specific minimum set of specifications. Some hardware vendors may go above and beyond this by adding, say, GPU cards for VDI or more RAM per host, but VMware wants a standard approach. These kinds of servers don’t currently exist on the market, other than what other hyper-converged companies whitebox from the likes of SuperMicro, so we’re talking about new hardware from partners.

Each of the four EVO: RAIL nodes within a single appliance will have at a minimum the following:

  • Two Intel E5-2620v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD for the ESXi boot device
  • Three SAS 10K RPM 1.2TB HDD for the VSAN datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One VSAN certified pass-through disk controller
  • Two 10GbE NIC ports (either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for out-of-band management

Each appliance is fully redundant with dual power supplies. As there are four ESXi hosts per appliance, you are covered for hardware failures or maintenance. The ESXi boot device and all HDDs and SSDs are enterprise-grade. VSAN itself is resilient. EVO: RAIL version 1.0 can scale out to four appliances, giving you a total of 16 ESXi hosts backed by a single vCenter and a single VSAN datastore. There is also some new intelligence that automatically scans the local network for new EVO:RAIL appliances when they have been connected and easily adds them to the EVO: RAIL cluster.

Read more…

EVO: Rail – Management Re-imagined

August 25th, 2014 2 comments


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners, who will ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

The EVO: RAIL management software has been built to dramatically simplify the deployment of the appliances as well as provisioning VMs. The user guide is only 29 pages, so you can get an idea of how VMware is driving simplicity. Marvin actually exists as a character icon within the management interface, with an embedded “V” and “M”.

VMware recognises that vCenter has had a rather large feature-bloat problem over the years. They have introduced new components like SSO which do provide needed functionality but add to the complexity of deploying vSphere. VMware has also tried to bring all these components together in the vCenter Server Appliance (VCSA).

This is great, but some functionality is missing compared to the Windows version, like Linked Mode, and some customers worry about managing the embedded database for large deployments. As EVO:RAIL is aimed at smaller deployments and isn’t concerned with linking vCenters together, the VCSA is a good option, and the EVO:RAIL software is in fact a package which runs as part of the VCSA. There is no additional database required; it is all built into the appliance. It uses the same public APIs to communicate with vCenter but acts as a layer providing a simpler user experience, hiding some of the complexity of vCenter. vCenter is still there, so you can always connect directly with the Web Client and manage VMs as you normally do, and any changes made in either environment are common, so there are no conflicts.

EVO:RAIL is also written purely in HTML5, even for the VM console: no yucky Flash like the vSphere Web Client, and it works in any browser, even on an iPad. Interestingly, it has a look which is a little similar to the Microsoft Azure Pack. Who would ever have thought VMware would write a VM management interface built for simplicity that is similar to an existing Microsoft one!

Read more…

VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance

August 25th, 2014 1 comment

VMware will announce shortly today at VMworld US that it is entering the hyper-converged appliance market with a solution called EVO:Rail. This has been rumoured for a while, since an eagle-eyed visitor to the VMware campus spotted a sign for Marvin in their briefing center. Marvin was the engineering name and has still stuck around in parts of the product, but its grown-up name is EVO:Rail.

EVO(lution) is eventually going to be a suite of products/solutions; Rail, the first announcement, is named for the smallest part of a data center rack, the rail, so you can infer that VMware intends to build this portfolio out to an EVO:RACK and beyond.

EVO:Rail combines compute, storage and networking resources into a hyper-converged infrastructure appliance with the intention of dramatically simplifying infrastructure deployment. Hardware-wise this is pretty much what Nutanix and SimpliVity, as two examples, do today. Spot the acronym, HCIA, to hunt for newly added VMworld sessions.

VMware is not, however, entering the hardware business itself; that would kill the billions of marketing budget spent on the Software Defined Data Center message of software ruling the world. Partner hardware vendors will be building the appliance to strict specifications, with VMware’s EVO:RAIL software bundle pre-installed and the appliance delivered as a single SKU. Some may see this as a technicality. VMware has always said that if you need specific hardware you are not software defined. Does EVO:RAIL count as specific hardware?

Support will be with the hardware vendor for both hardware and software with VMware providing software support to the hardware vendor at the back-end.


Read more…

How Policy will drive the Software Defined Data Center

July 25th, 2014 3 comments


Many companies trying to take advantage of cloud computing are embracing the moniker of the “Software Defined Data Center” as one way to understand and communicate the benefits of moving towards an infrastructure resource utility model. VMware has taken the term SDDC to mean doing everything in your data center with software, not requiring any custom hardware. Other companies sell “software-defined” products which do require particular hardware for various reasons, but the functionality can be programmatically controlled and requested all in software. Whether your definition of “software-defined” mandates hardware or not, the general premise (nothing to do with premises!) is being able to deliver and scale IT resources programmatically.

This is great but I think SDDC is just a stepping stone to what we are really trying to achieve which is the “Policy Defined Data Center”.

Once you can deliver IT resources in software, the next step is ensuring those IT resources follow your business rules and processes, what you would probably call business policy enforcement. These are the things that your business asks of IT, partly for regulatory reasons like data retention and storing credit cards securely, but they also encompass a huge amount of what you do in IT.

Here are a few examples of the kinds of policies you may have (a small illustrative sketch of expressing such policies as data follows the list):

  • Users need to change their passwords every 30 days.
  • Local admin access to servers is strictly controlled by AD groups.
  • Developers cannot have access to production systems.
  • You can only RDP to servers over a management connection.
  • Critical services need to be replicated to a DR site, some synchronously, others not.
  • Production servers need to get priority over test and development servers.
  • Web server connections need to be secured with SSL.
  • SQL Server storage needs to have higher priority than, say, print servers.
  • Oracle VMs need to run on particular hosts for licensing considerations.
  • Load balanced web servers need to sit in different blade chassis in different racks.
  • Your trading application needs to have maximum x latency and minimum y IOPS.
  • Your widget application needs to be recoverable within an hour and be no more than 2 hours out of date.
  • Your credit card database storage needs to be encrypted.
  • All production servers need to be backed up, some need to be kept for 7 years.
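
To make the idea concrete, here is a purely hypothetical sketch of what expressing a few of those policies as declarative data might look like. None of the class or field names below come from a real VMware (or any vendor) API; they simply illustrate policies becoming something tooling can evaluate rather than a document humans read.

```python
# Purely illustrative: hypothetical policies expressed as declarative data.
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    applies_to: str          # tag or group the policy targets, e.g. "oracle-vms"
    rules: dict = field(default_factory=dict)

policies = [
    Policy(name="oracle-host-affinity",
           applies_to="oracle-vms",
           rules={"run_on_hosts": ["esx01", "esx02"],   # licensing boundary
                  "enforcement": "mandatory"}),
    Policy(name="trading-app-performance",
           applies_to="trading-app",
           rules={"max_latency_ms": 5, "min_iops": 10000}),
    Policy(name="widget-app-recovery",
           applies_to="widget-app",
           rules={"rto_minutes": 60, "rpo_minutes": 120}),
]

def check(vm_tags, placement, policy):
    """Toy compliance check: does a VM's placement satisfy a host-affinity rule?"""
    if policy.applies_to not in vm_tags:
        return True  # policy doesn't apply to this VM
    allowed = policy.rules.get("run_on_hosts")
    return allowed is None or placement in allowed

print(check({"oracle-vms"}, "esx03", policies[0]))  # False: wrong host for an Oracle VM
```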

Read more…

HP Smart Array Controller can cause an ESXi PSOD – patch now available

July 7th, 2014 No comments

Time to check your HP Smart Array Controller driver versions.

HP has issued an advisory for ESXi 5.x with a number of Smart Array Controllers that can cause an out-of-memory condition, which could lead to a PSOD if you are running hpsa driver version 5.x.0.58-1. VMware also has a KB explaining the issue.

You can now avoid this without having to downgrade the driver by upgrading to the 5.x.0.60-1 version: that’s HP Smart Array Controller Driver (hpsa) version 5.0.0.60-1 (ESXi 5.0 and ESXi 5.1) or version 5.5.0.60-1 (ESXi 5.5).

You can download the new driver in various formats and update your hosts using a VIB file, the HP Software Depot, or the latest offline bundle.
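
If you want to check which hpsa version your hosts are currently running before patching, one quick option is to query each host for its installed VIBs. The Python sketch below assumes SSH is enabled on the hosts and the paramiko library is available; the host names and credentials are placeholders.

```python
import paramiko

hosts = ["esx01.example.com", "esx02.example.com"]  # placeholder host names

for host in hosts:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="root", password="password")
    try:
        # 'esxcli software vib list' shows every installed VIB; filter for hpsa
        stdin, stdout, stderr = client.exec_command(
            "esxcli software vib list | grep -i hpsa")
        print(host, stdout.read().decode().strip())
    finally:
        client.close()
```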

The latest HP-supplied ESXi images for June 2014 do contain this patch, so it is probably easiest to upgrade using these if you are happy to update the whole bundle.

Categories: ESX, HP, VMware

HP updates its customised images for VMware ESXi 5.5/5.1

October 25th, 2013 No comments

HP has updated its ESXi customised images to reflect the recent release of ESXi 5.5 as well as its September 2013 Service Pack for ProLiant.

HP’s customised images are fully integrated sets of specific drivers and software that are tested to work together. You can see the list of Driver Versions in HP supplied VMware ESX/ESXi images.

I have done an extensive update of my HP Virtual Connect Flex-10 & VMware ESX(i) pre-requisites post which includes these new customised images.

HP Custom Image for VMware ESXi 5.5.0 GA – September 2013:

HP Custom Image for VMware ESXi 5.1 Update 1 – September 2013:

The new and updated features in the HP vSphere 5.5/5.1 customised images for September 2013 include:

  • Provider Features
    • Report Smart array driver name and version.
    • Report SAS driver name and version.
    • Report SCSI driver name and version.
    • Report Firmware version of ‘System Programmable Logic Device’.
    • Report SPS/ME firmware.
    • Added SCSI HBA Provider.
    • Report IdentityInfoType and IdentityInfoValue for PowerControllerFirmware class.
    • IPv6 support for OA and iLO.
    • Report Memory DIMM part number for HP Smart Memory.
    • Added new ‘Test SNMP Trap’.
    • Updated reporting of memory configuration to align with iLO and health Driver.
  • AMS features
    • Report running SW processes to HP Insight Remote Support.
    • Report vSphere 5.5 SNMP agent management IP and enable the VMware vSphere 5.5 SNMP agent to report the iLO 4 management IP.
    • IML logging for NIC and SAS traps.
    • Limit AMS log file size and support log redirection as defined by the ESXi host parameter ScratchConfig.ConfiguredScratchLocation.
  • Utilities features
    • HPTESTEVENT – New utility to generate test WBEM indication and test SNMP trap.
    • HPSSACLI – New utility to replace hpacucli.
    • HPONCFG – The HPONCFG utility displays the Server Serial Number along with the Server Name when using the hponcfg -g switch to extract the Host System Information.
Categories: ESX, Flex-10, HP, VMware