Archive

Posts Tagged ‘esx’

What’s New in vSphere 6.0: Enhanced Linked Mode

February 2nd, 2015 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vCenter Linked Mode provides a single management overview of multiple vCenter instances.

Linked Mode also provides a single login for multiple vCenter Servers and then shows you a common inventory view and allows you to search for objects across vCenters. Licenses, roles and permissions are replicated between vCenter instances.

Linked Mode has only ever been available for Windows vCenter Servers (ADAM is used as the replication engine), so you couldn't share licenses, roles and permissions with any vCenter appliances you had.

With the release of the new Platform Services Controller in vSphere 6.0, some of the Linked Mode functionality is changing and it's been given a new name.

vSphere 6.0 will now include Enhanced Linked Mode, which requires and works in conjunction with the Platform Services Controller.


This will not rely on ADAM but on its own multi-master replication technology, VMDir (the VMware Directory Service, based on OpenLDAP), which means replication now works across Windows vCenter Servers as well as vCenter appliances.

Replication will be expanded to include Policies and Tags along with Roles and Permissions. In fact the replication engine will allow VMware to sync any kind of information between Platform Services Controllers which can then be used by vCenters and other management products. Bye bye ADAM, you won’t be missed.

What’s New in vSphere 6.0: Networking

February 2nd, 2015 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vSphere networking hasn't had any huge additions in this release. This is partly to be expected as VMware's networking messaging mainly revolves around NSX for now.

Network I/O Control (NIOC) has, however, had a very useful addition: you can now set per-VM and Distributed Switch bandwidth reservations. You can therefore guarantee network as well as compute resources for your critical VMs.
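
If you'd rather script this than click through the Web Client, here's a minimal pyVmomi sketch of setting a per-VM bandwidth reservation on a vNIC. The vCenter address, credentials and VM name are placeholders, error handling and certificate verification are skipped, and as I understand it the vNIC needs to be on a Distributed Switch with NIOC version 3 for the reservation to actually be enforced.

```python
# Minimal sketch: give one VM's vNIC a 500 Mbit/s bandwidth reservation (vSphere 6.0+).
# All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Look up the VM by name.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")
view.Destroy()

# Take the VM's first network adapter and set its resource allocation.
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.resourceAllocation = vim.vm.device.VirtualEthernetCard.ResourceAllocation(
    reservation=500,   # Mbit/s guaranteed to this vNIC
    limit=-1,          # no upper cap
    share=vim.SharesInfo(level="normal", shares=50))

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation="edit", device=nic)])
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```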

IPv6 has also been beefed up, but this is mainly for greenfield deployments. It's not easy to transition from IPv4 to IPv6, so I think VMware sees IPv6 as being for new deployments only. You will be able to manage ESXi purely with IPv6, and iSCSI and NFS will also be supported. In the future, VMware is looking to move to IPv6-only vSphere management, but that's a few years out; dual-stack IPv4 and IPv6 will be around for a while.

Here's what the install for vCenter would look like with IPv6.

What’s New in vSphere 6.0: NFS Client

February 2nd, 2015 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What's New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

NFS has been available as a storage protocol since 2006 with ESX 3.0 and vSphere has been using NFS version 3 for all this time. There’s been no update to how NFS works.

I've been a massive fan of NFS since it was released. No LUNs, much bigger datastores and far simpler management. Being able to move around, back up and restore VM disk files natively from the storage array is extremely powerful. NFS datastores are by default thin-provisioned, which allows your VM admin and storage admin to agree on actual storage space utilisation.

However, good old NFSv3 has a number of limitations: there is no multi-pathing support, security is limited, and performance is constrained by the single server head.

vSphere 6.0 introduces NFS version 4.1 to address many of these limitations.

NFS 4.1 introduces multi-pathing by supporting session trunking, using multiple remote IPs for a single session. Not all vendors will support this, so it's best to check. You get increased performance from load-balanced and parallel access, and with it comes better availability from path failover.


There is improved security using Kerberos authentication. You need to add your ESXi hosts to AD and specify a Kerberos user before creating any NFSv4.1 datastores with Kerberos enabled. This Kerberos username and password are then used to authenticate against the NFS mount, and all files stored in Kerberos-enabled datastores are accessed using this single user's credentials. You should always use the same user on all hosts, otherwise vMotion and other features might fail if two hosts use different Kerberos users. NTP is also a requirement, as usual with Kerberos. This configuration can be automated with Host Profiles.
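
As a rough illustration of both points above (session trunking with multiple remote IPs, and the Kerberos option), here's a hedged pyVmomi sketch of mounting an NFSv4.1 datastore on a single host. The array addresses, share path and datastore name are made up, and the SEC_KRB5 security type assumes the host has already been joined to AD and had its NFS Kerberos credentials set as described above.

```python
# Minimal sketch: mount an NFS 4.1 datastore with two remote IPs (session trunking).
# Addresses, share path and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")
view.Destroy()

spec = vim.host.NasVolume.Specification(
    type="NFS41",                          # use the NFS v4.1 client
    remoteHost="192.168.10.11",
    remoteHostNames=["192.168.10.11",      # multiple array IPs for
                     "192.168.10.12"],     # session trunking / multi-pathing
    remotePath="/vol/datastore01",
    localPath="nfs41-ds01",                # datastore name as seen in vSphere
    accessMode="readWrite",
    securityType="AUTH_SYS")               # or "SEC_KRB5" once Kerberos is configured

host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```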

 

NFSv4.1 now allows you to use a non-root user to access files. RPC header authentication has also been added to boost security; it only supports DES-CBC-MD5, which is universally available, rather than the stronger AES-HMAC, which is not supported by all vendors. Locking has been improved with in-band mandatory locks, using share reservations as the locking mechanism, and there is also better error recovery.

There are some caveats, however, with using NFSv4.1. It is not compatible with SDRS, SIOC, SRM and VVols, but you can continue to use NFSv3 datastores for these.

NFSv3 locking is not compatible with NFSv4.1. You must not mount an NFS share as NFSv3 on one ESXi host and the same share as NFSv4.1 on another host; it's best to configure your array to use one NFS protocol, either v3 or v4.1, but not both.
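
A quick way to sanity-check this across an environment is to list every NAS mount per host together with its client type. Here's a small pyVmomi sketch along those lines (it reuses a connection like the one in the earlier example); if the same server and share show up as both "NFS" and "NFS41", you have the unsupported mix.

```python
# Minimal sketch: report which NFS client version each host uses for each NAS datastore.
from pyVmomi import vim

def report_nfs_mounts(content):
    """Print host, NFS server, share and client type ("NFS" = v3, "NFS41" = v4.1)."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for mount in host.config.fileSystemVolume.mountInfo:
            vol = mount.volume
            if isinstance(vol, vim.host.NasVolume):
                print(host.name, vol.remoteHost, vol.remotePath, vol.type)
    view.Destroy()
```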

The protocol has also been made more efficient and less chatty by compounding operations and removing the file lock heartbeat and session lease.

All Paths Down (APD) handling is now different with multi-pathing support. The clock skew issue that caused an APD condition in vSphere 5.1 and 5.5 has been fixed in vSphere 6.0 for both NFSv3 and NFSv4.1. With multi-pathing, I/O can fail over to other paths if one path goes down, so there is no longer a single point of failure.

No support for pNFS will be available in ESXi 6.0. This has caused some confusion; have a look at Hans de Leenheer's post: VSPHERE 6 NFS4.1 DOES NOT INCLUDE PARALLEL STRIPING!

I'm very happy to see NFSv4.1 see the light of day in vSphere, if only for the multi-pathing, as its absence pushed many people down the block protocol route with the added complexity of LUNs. However, it's a pity NFSv4.1 is not supported with VVols. I'm sure VMware must be working on this.

What’s New in vSphere 6.0: Finally Announced (at last!)

February 2nd, 2015 No comments

Series:

  1. What’s New in vSphere 6.0: Finally Announced (at last!)
  2. What’s New in vSphere 6.0: vCenter and ESXi
  3. What’s New in vSphere 6.0: Enhanced Linked Mode
  4. What’s New in vSphere 6.0: Virtual Volumes
  5. What’s New in vSphere 6.0: Content Library
  6. What’s New in vSphere 6.0: Virtual Datacenter (removed from release)
  7. What’s New in vSphere 6.0: Fault Tolerance
  8. What’s New in vSphere 6.0: Cross vCenter and Long Distance vMotion
  9. What’s New in vSphere 6.0: Networking
  10. What’s New in vSphere 6.0: NFS Client
  11. What’s New in vSphere 6.0: Certificate Management

Finally, the time has come for VMware to publicly announce its latest update to version 6.0 of its ever growing virtualisation platform, vSphere.

It’s been a rather strange and somewhat convoluted journey to get to the actual announcement.

For the first time ever for VMware (kudos!), there was a very large public Beta (more than 10,000 people) but participants had to sign an NDA to join which meant they couldn't talk about it. VMware itself then outed many of the features during keynotes and sessions at VMworld San Francisco 2014 (to the consternation and surprise of some product managers!) but still had to call the beta a Tech Preview. Pat Gelsinger himself called out the name during his keynote despite everyone else at VMware trying to keep quiet on the official name. All this left many people unsure what they could and couldn't talk about.

The apparent legal reason for not being able to officially announce vSphere 6.0 is all to do with financials. VMware didn't want to announce a future product in 2014 as they would then be obliged to account for future earnings. So, the whole song and dance is nothing to do with technology and all to do with financial reporting, isn't life fun!

Personally, I don't think this was handled in the best way: fantastic to have a public beta, but there's no point trying to strictly control the messaging with an NDA when so many people are involved. Even Microsoft and Apple have more open public betas nowadays.

As of today, that's now officially water under the bridge (although I hope they learn some things for next time). The covers have finally been lifted and VMware has officially announced vSphere 6.0.

VMware says there are three focus areas for this vSphere release:

  1. Continue to be the best and most widely used virtualisation platform
  2. Be able to virtualise all x86 workloads: run all of today's traditional datacenter apps, however big they are, such as Oracle, SAP, Microsoft Dynamics and Java, and build on that foundation to run the next generation of cloud applications as part of a Software-Defined Datacenter, such as NodeJS, Rails, Spring, Pivotal and Hadoop
  3. Create operational efficiency at scale by reducing manual steps with more automation

Although numbered 6.0, I would say that, as with vSphere 5.5, this is another evolutionary rather than revolutionary update and, were it not for VMware's recent cadence of a major update every two years, it could have been part of the vSphere 5 family. VSAN and NSX were the major new product announcements at VMworld 2013, and VMware decided to leave the big infrastructure announcements for VMworld 2014 to EVO:RAIL and its vCloud Air and vRealize rebranding.

As for vSphere 6.0, VMware has called this release the foundation for the Software Defined Datacenter.


The major new highlight, as everyone knows, is Virtual Volumes (VVols), which VMware has been talking about publicly since VMworld 2011 (I called VVols VMware's revolutionary approach to storage) and which is a very significant update. VVols completely change the way storage is presented, managed and consumed, and certainly for the better. Most storage vendors are on board, as their software needs to be able to support VVols, and they've been champing at the bit for VVols to be released. Talk was that it was technically ready for vSphere 5.5, but VMware decided to keep it back, perhaps to let VSAN have its year in the sun and to give vSphere 6.0 something big.


VVols may be the headliner but there’s plenty else VMware has been working on:

  • Hosts up to 480 pCPUs, 12TB RAM, 64TB datastores and 1000 VMs
  • VMs up to 128 vCPUs and 4TB RAM
  • 64 nodes in a cluster and up to 6000 VMs
  • Per VM Storage I/O Control
  • VVols
  • NFS 4.1 with Kerberos
  • vMotion across vCenter Servers, virtual switches, and long distance
  • Fault Tolerance for Multi-Processor VMs
  • vSphere Web Client enhancements
  • Certificate Lifecycle Management via a command line interface
  • New abilities to replicate and backup to the vCHS (vCloud Air) cloud
  • Improved vSphere Replication RPOs, down to 5 minutes
  • Network IO Control VM and distributed switch bandwidth reservations
  • Multi-Site replicated content library to store VM templates, vApps, ISO Images and scripts
  • AppHA expanded support for more applications

 

What’s New in vSphere 6.0: vCenter

August 27th, 2014 6 comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

VMware continues to build out vCenter, its core hypervisor management application, with more functionality. There are no dramatic architectural changes, but VMware is slowly pulling vCenter apart into its component parts to be able to run more vCenters at scale and is creating a central services function.

Platform Services Controller (PSC)

VMware is introducing a new component called the VMware Platform Services Controller (which had a previous beta name of Infrastructure Controller).

SSO was the first component to be spun out into what is now being built up as the PSC. SSO was first released in vSphere 5.1, had major issues, and was rebuilt as SSO 2.0 for vSphere 5.5.

vCenter, vCOps, vCloud Director and vCloud Automation Center can use functionality within the PSC as a shared component.

vCenter is actually being split in two. One part is now called the Management Node and the other is the Platform Services Controller.


The Management Node contains all the vCenter Server components with all the security related things stripped out.


 

The PSC now contains the following functionality:

  • SSO
  • Licensing
  • Certificate Authority
  • Certificate Store
  • Miscellaneous Services


The Certificate Authority and Certificate Store are new components to at last tame the wild and woefully inadequate management of vSphere certificates. The new VMware Certificate Authority (VMCA) can act as a root certificate authority either managing its own certificates or handling certificates from an external Certificate Authority. VMCA provisions each ESXi host with a signed certificate when it is added to vCenter as part of installation or upgrade. You can view and manage these certificates from the vSphere Web Client and manage the full certificate lifecycle workflow.
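
A simple way to spot-check which certificates your hosts are actually presenting, whether VMCA-signed or from a third-party CA, is to pull each host's SSL certificate and look at the issuer. Here's a small Python sketch (host names are placeholders; it uses the standard library plus the cryptography package):

```python
# Minimal sketch: print the issuer of each ESXi host's SSL certificate.
import ssl
from cryptography import x509

hosts = ["esxi01.lab.local", "esxi02.lab.local"]   # placeholder host names

for name in hosts:
    pem = ssl.get_server_certificate((name, 443))        # fetch the host's certificate
    cert = x509.load_pem_x509_certificate(pem.encode())
    print(name, "->", cert.issuer.rfc4514_string())      # who signed it?
```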

Other services will be added to the PSC in future releases.

The PSC is built into vCenter and runs as a vPostgres database, so there's no additional database to worry about, and it runs in both the Windows and appliance versions. PSCs self-replicate and, importantly, don't use ADAM, so they can replicate between Windows and appliance vCenters.

You can either have the PSC embedded within vCenter Server or run it as an external component to vCenter Server.

Read more…

What’s New in vSphere 6.0: Content Library

August 27th, 2014 2 comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

Content Library is a planned new addition to vSphere 6.0 which was talked about for the first time in a session at VMworld. Content Library is a way to centrally store VM templates, vApps, ISO images and scripts.

This content can be synchronised across sites and vCenters. Synchronised content allows you to more easily deploy consistent workloads at scale. Consistent content is easier to automate against, easier to keep in compliance, and makes an admin's life more efficient.

Content Library provides basic versioning of files in this release and has a publish and subscribe mechanism to replicate content between local and remote vCenters, which by default is synchronised every night. Changes to descriptions, tags and other metadata will not trigger a version change. There is no de-dupe at the Content Library level, but storage arrays may do that behind the scenes.

Content library can also sync between vCenter and vCloud Director.

The content itself is stored either in vSphere datastores or, preferably, on a local vCenter file system, since the contents are then stored in a compressed format. A local file system is presented directly to the vCenter Server: for a Windows vCenter it can be another drive or folder, while for the vCenter Appliance the preferred approach is to mount an NFS share directly on the appliance. This may mean you need to amend your storage networking, as many installations have segregated storage networks which are directly accessible by hosts to store VMs but not by vCenter.

Read more…

What’s New in vSphere 6.0: Virtual Volumes

August 26th, 2014 1 comment

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

Virtual Volumes (VVols) is one of the big new additions to vSphere 6.0. VMware has been talking about it publicly since VMworld 2011 (I called VVols “VMware’s game changer for storage”) and it is a very significant update. VVols completely change the way storage is presented, managed and consumed, and certainly for the better. Most storage vendors are on board, as their software needs to be able to support VVols, and they've been champing at the bit for VVols to be released. Talk was that it was technically ready for vSphere 5.5, but VMware decided to keep it back, perhaps to let VSAN have its year in the sun and to give 6.0 something big.

VVols is all about changing the way storage is deployed, managed and consumed, making the storage system VM-centric; VMware likes to use the phrase “making the VMDK a first class citizen in the storage world”.

 


Virtual Volumes is part of VMware's Software-Defined Storage story, which is split between the control plane, Virtual Data Services, which is all policy driven, and the data plane, the Virtual Data Plane, which is where the data is actually stored.

 


Read more…

What’s New in vSphere 6.0: Multi-CPU Fault Tolerance

August 26th, 2014 1 comment

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

It's been many, many years in the making, but at last Fault Tolerance for multi-processor VMs has seen the light of day and was announced during the VMworld keynote today.

FT will now support VMs with up to 4 vCPUs and 64GB RAM. SMP-FT, as it's called, works differently from FT for single-vCPU VMs. There is a new fast checkpointing mechanism to keep the primary and secondary in sync. Previously a “Record-Replay” sync mechanism was used, but the new fast checkpointing has allowed FT to expand beyond a single vCPU. Record-Replay kept a secondary VM in “virtual lockstep” with the primary; with fast checkpointing the primary and secondary VMs execute the same instruction stream simultaneously, making it much faster. If the FT network latency is too high for the VMs to stay in sync, the primary will be slowed down to the point where the secondary can keep up. You can also now hot-configure FT.


Read more…

What’s New in vSphere 6.0: Virtual Data Center (removed from release)

August 26th, 2014 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

UPDATE 02/02/2015: Virtual Data Center and a Policy Based Management component, which were both talked about at VMworld, have been pulled from the final release. It seems VMware needs more time to work out which policy and automation functionality goes into vRealize Automation Center, vCloud Director and vCenter itself. It's a shame really, as these components were real enablers for the SDDC; being able to control placement of VMs by policy will have to wait for another day.

Briefly shown in the VMworld Day 2 keynote demos was deploying a VM to a Virtual Datacenter, which is in fact planned as a new addition to vSphere. Well, I say new addition, which is true, but it's an old name brought back to life. The whole message of vSphere 4.0 was about creating a “Virtual Datacenter”. You could move physical machines into your virtual datacenter! Now we've come full circle and “Virtual Data Centers” are back!

In vSphere 6.0, a Virtual Datacenter aggregates compute clusters, storage clusters, network and policies. In this first release, a virtual datacenter can aggregate resources across multiple clusters within a single vCenter Server into a single large pool of capacity. This will benefit large deployments such as VDI where you have multiple clusters with similar network and storage connections and now you can group them together.

Within this single pool of capacity, the Virtual Data Center will automate VM initial placement by deciding in which cluster the VM should be placed based on capacity and capability.

You can then create VM placement and storage policies and associate them with specific clusters or hosts, as well as the datastores they are connected to. Such a policy might, for example, keep SQL VMs on a subset of hosts within a particular cluster for licensing reasons. You can then monitor adherence to these policies and automatically remediate any issues. When you deploy a VM, you would select from various policies and the Virtual Datacenter, based on those policies, would decide where the VM should be placed. This again is to try to reduce the operational burden of deciding where VMs are placed.

Virtual Data Centers require clusters with DRS enabled to handle the initial placement; individual hosts cannot be added. You can remove a host from a cluster within a Virtual Data Center by putting it in maintenance mode; all VMs will stay within the VDC, moving to other hosts in the cluster. If you need to remove a cluster or turn off DRS for any reason and can't use Partially Automated Mode, you would remove the cluster from the Virtual Data Center. The VMs would stay in the cluster but would no longer have VM placement policy monitoring checks done until the cluster rejoins a Virtual Data Center. You could manually vMotion VMs to other clusters within the VDC before removing the cluster.

Read more…

What’s New in vSphere 6.0: Cross vCenter and Long Distance vMotion

August 26th, 2014 3 comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vMotion is one of the most basic yet coolest features of vSphere. People generally consider the first time they saw vMotion work as their “wow” moment showing the power of virtualisation. In vSphere 5.5, vMotion is possible within a single cluster and across clusters within the same datacenter and vCenter. With vSphere 6.0, vMotion is being expanded to include vMotion across vCenters, across virtual switches, across long distances and over routed vMotion networks, aligning vMotion capabilities with larger datacenter environments.

vMotion across vCenters will simultaneously change compute, storage, networks, and management. This leverages vMotion with unshared storage and will support local, metro and cross-continental distances.

You will need the same SSO domain for both vCenters if you use the GUI to initiate the vMotion, so that the VM UUID can be maintained across vCenter Server instances, but with the API it is possible to use a different SSO domain. VM historical data such as Events, Alarms and Task History is preserved. Performance data will be preserved once the VM is moved but is not aggregated in the vCenter UI; the information can still be accessed using third-party tools or the API using the VM instance ID, which will remain the same across vCenters.
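
For the API route, the relocate call gains a ServiceLocator that points at the destination vCenter. Here's a hedged pyVmomi sketch; every name, credential and the SSL thumbprint are placeholders, the destination folder and resource pool lookups are deliberately simplified, and it assumes the target networks and datastores have already been identified on the destination side.

```python
# Minimal sketch: cross-vCenter vMotion via RelocateVM_Task with a ServiceLocator.
# All names, credentials and the thumbprint are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
src = SmartConnect(host="vcenter-a.lab.local", user="administrator@vsphere.local",
                   pwd="password", sslContext=ctx)
dst = SmartConnect(host="vcenter-b.lab.local", user="administrator@vsphere.local",
                   pwd="password", sslContext=ctx)

def find(content, vimtype, name):
    """Look up a managed object by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

src_content = src.RetrieveContent()
dst_content = dst.RetrieveContent()

vm = find(src_content, vim.VirtualMachine, "web01")
dest_host = find(dst_content, vim.HostSystem, "esxi10.lab.local")
dest_ds = find(dst_content, vim.Datastore, "ds-gold01")
dest_dc = dst_content.rootFolder.childEntity[0]          # assume the first datacenter

spec = vim.vm.RelocateSpec(
    host=dest_host,
    pool=dest_host.parent.resourcePool,                   # target cluster's root resource pool
    folder=dest_dc.vmFolder,                              # destination VM folder
    datastore=dest_ds,
    service=vim.ServiceLocator(
        instanceUuid=dst_content.about.instanceUuid,
        url="https://vcenter-b.lab.local",
        sslThumbprint="AA:BB:...",                        # destination vCenter SHA1 thumbprint
        credential=vim.ServiceLocator.NamePassword(
            username="administrator@vsphere.local",
            password="password")))

vm.RelocateVM_Task(spec=spec)

Disconnect(src)
Disconnect(dst)
```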

Read more…