Archive

Posts Tagged ‘networking’

What’s New in vSphere 6.0: Content Library

August 27th, 2014

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

Content Library is a planned new addition to vSphere 6.0 which was talked about for the first time in a session at VMworld. Content Library is a way to centrally store VM templates, vApps, ISO images and scripts.

This content can be synchronised across sites and vCenters. Synchronised content lets you deploy consistent workloads at scale more easily. Consistent content is easier to automate against, simpler to keep in compliance, and makes an admin’s life more efficient.

Content Library provides basic versioning of files in this release and has a publish-and-subscribe mechanism to replicate content between local and remote vCenters, synchronised every night by default. Changes to descriptions, tags and other metadata do not trigger a version change. There is no de-duplication at the Content Library level, but storage arrays may do that behind the scenes.
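For a flavour of how the publish-and-subscribe side can be driven programmatically, here is a minimal sketch in Python using requests against the vSphere Automation REST endpoints. The paths and payload shape follow the REST interface documented for later vSphere releases, and the hostnames, datastore ID and subscription URL are placeholders, so treat all of them as assumptions:

    import requests

    VC = "https://vcenter.example.com"  # hypothetical vCenter address
    S = requests.Session()
    S.verify = False  # lab only; use proper CA certificates in production

    # Authenticate and obtain a vAPI session (path per the later REST interface)
    S.post(f"{VC}/rest/com/vmware/cis/session",
           auth=("administrator@vsphere.local", "password"))

    # Create a library that subscribes to a published library's JSON manifest,
    # synchronising automatically (nightly by default); payload shape assumed
    spec = {
        "create_spec": {
            "name": "synced-templates",
            "type": "SUBSCRIBED",
            "storage_backings": [{"type": "DATASTORE",
                                  "datastore_id": "datastore-42"}],
            "subscription_info": {
                "subscription_url": "https://publisher-vc.example.com/cls/vcsp/lib/abc123/lib.json",
                "authentication_method": "NONE",
                "automatic_sync_enabled": True,  # scheduled sync
                "on_demand": False,              # pull all content up front
            },
        }
    }
    r = S.post(f"{VC}/rest/com/vmware/content/subscribed-library", json=spec)
    print("Subscribed library id:", r.json().get("value"))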

Content Library can also sync between vCenter and vCloud Director.

The content itself is stored either on vSphere datastores or, preferably, on a local vCenter file system, since content there is stored in a compressed format. A local file system is presented directly to the vCenter Server: for a Windows vCenter it can be another drive or folder, while for the vCenter Appliance the preferred approach is to mount an NFS share directly on the appliance. This may mean amending your storage networking, as many installations have segregated storage networks that are accessible to hosts for storing VMs but not to vCenter.

Read more…

What’s New in vSphere 6.0: Virtual Volumes

August 26th, 2014

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

Virtual Volumes (VVols) is one of the big new additions to vSphere 6.0. VMware has been talking about it publicly since VMworld 2011 (I called VVols “VMware’s game changer for storage”) and it is a very significant update. VVols completely changes the way storage is presented, managed and consumed, and certainly for the better. Most storage vendors are on board, as their software needs to support VVols, and they have been champing at the bit for its release. Talk was that it was technically ready for vSphere 5.5, but VMware decided to hold it back, perhaps to let VSAN have its year in the sun and to give 6.0 something big.

VVols is all about changing the way storage is deployed, managed and consumed, making the storage system VM-centric; VMware likes to use the phrase “making the VMDK a first-class citizen in the storage world”.

 


Virtual Volumes is part of VMware’s Software Defined Storage story, which splits into a control plane, Virtual Data Services, which is entirely policy-driven, and a data plane, the Virtual Data Plane, where the data is actually stored.

 


Read more…

What’s New in vSphere 6.0: Multi-CPU Fault Tolerance

August 26th, 2014

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

It’s been many, many years in the making, but at last Fault Tolerance for multi-processor VMs has seen the light of day and was announced during the VMworld keynote today.

FT will now support VMs with up to 4 vCPUs and 64GB RAM. SMP-FT, as it’s called, works differently from single-vCPU FT. There is a new fast checkpointing mechanism to keep the primary and secondary in sync. Previously a “Record-Replay” mechanism kept the secondary VM in “virtual lockstep” with the primary; the new fast checkpointing is what has allowed FT to expand beyond one vCPU. With fast checkpointing, the primary and secondary VMs execute the same instruction stream simultaneously, making it much faster. If the FT network latency is too high for the VMs to stay in sync, the primary will be slowed down to the point that the secondary can keep up. You can also now hot-configure FT.
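To make this concrete, here is a minimal pyVmomi sketch of turning FT on for a VM through the vSphere API. The vCenter address, credentials and VM name are hypothetical, and the 6.0 SMP-FT path is reported to go through a newer CreateSecondaryVMEx_Task variant, so treat the exact call below as an assumption to verify against the 6.0 API reference:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Connect to vCenter (hypothetical address and credentials; lab-grade SSL)
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM by name (hypothetical name)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sql01")
    view.Destroy()

    # Turn on FT: vCenter spawns the secondary VM and keeps it in sync via
    # checkpointing; host=None lets DRS choose secondary placement.
    WaitForTask(vm.CreateSecondaryVM_Task(host=None))
    Disconnect(si)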


Read more…

What’s New in vSphere 6.0: Virtual Data Center (removed from release)

August 26th, 2014

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

UPDATE 02/02/2015: Virtual Data Center and a Policy-Based Management component, which were both talked about at VMworld, have been pulled from the final release. It seems VMware needs more time to work out which policy and automation functionality goes into vRealize Automation Center, vCloud Director and vCenter itself. It’s a shame really, as these components were real enablers for the SDDC; being able to control placement of VMs by policy will have to wait for another day.

Briefly shown in the VMworld Day 2 keynote demos was deploying a VM to a Virtual Datacenter, which is in fact planned as a new addition to vSphere. Well, I say new addition, which is true, but it’s an old name brought back to life. The whole message of vSphere 4.0 was about creating a “Virtual Datacenter”: you could move physical machines into your virtual datacenter! Now we’ve come full circle and “Virtual Data Centers” are back!

In vSphere 6.0, a Virtual Datacenter aggregates compute clusters, storage clusters, network and policies. In this first release, a Virtual Datacenter can aggregate resources across multiple clusters within a single vCenter Server into a single large pool of capacity. This will benefit large deployments such as VDI, where you have multiple clusters with similar network and storage connections and can now group them together.

Within this single pool of capacity, the Virtual Data Center will automate VM initial placement by deciding in which cluster the VM should be placed based on capacity and capability.

You can then create VM placement and storage policies and associate these policies with specific clusters or hosts, as well as the datastores they are connected to. Such a policy might, for example, keep SQL VMs on a subset of hosts within a particular cluster for licensing reasons. You can then monitor adherence to these policies and automatically remediate any issues. When you deploy a VM, you select from the various policies, and the Virtual Datacenter decides, based on those policies, where the VM should be placed. This, again, is to reduce the opex of admin decisions about where VMs are placed.

Virtual Data Centers require clusters with DRS enabled to handle the initial placement; individual hosts cannot be added. You can remove a host from a cluster within a Virtual Data Center by putting it in maintenance mode; all its VMs will stay within the VDC, moving to other hosts in the cluster. If you need to remove a cluster, or turn off DRS for any reason and can’t use partially automated mode, you would remove the cluster from the Virtual Data Center. The VMs would stay in the cluster but would no longer have VM placement policy monitoring checks done until the cluster rejoins a Virtual Data Center. You could manually vMotion VMs to other clusters within the VDC before removing a cluster.
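Although the Virtual Data Center wrapper was ultimately pulled, the maintenance-mode step it relies on is plain vSphere. A minimal pyVmomi sketch of evacuating a host, assuming a host object obtained from an inventory search as in the earlier example; in a DRS fully-automated cluster the running VMs vMotion off automatically:

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    def evacuate_host(host: vim.HostSystem, timeout_s: int = 0) -> None:
        # Enter maintenance mode; the task completes once no powered-on VMs
        # remain on the host (DRS migrates them off when fully automated).
        WaitForTask(host.EnterMaintenanceMode_Task(timeout_s))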

Read more…

What’s New in vSphere 6.0: Cross vCenter and Long Distance vMotion

August 26th, 2014

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vMotion is one of the most basic yet coolest features of vSphere; people generally consider the first time they saw vMotion work as their “wow” moment showing the power of virtualisation. In vSphere 5.5, vMotion is possible within a single cluster and across clusters within the same datacenter and vCenter. With vSphere 6.0, vMotion is being expanded to include vMotion across vCenters, across virtual switches, across long distances and over routed vMotion networks, aligning vMotion capabilities with larger data center environments.

vMotion across vCenters will simultaneously change compute, storage, networks, and management. This leverages vMotion with unshared storage and will support local, metro and cross-continental distances.

You will need the same SSO domain for both vCenters if you use the GUI to initiate the vMotion, as the VM UUID can be maintained across vCenter Server instances, but with the API it is possible to vMotion between different SSO domains. VM historical data such as Events, Alarms and Task History is preserved. Performance data is preserved once the VM is moved but is not aggregated in the vCenter UI; the information can still be accessed using third-party tools or the API, using the VM instance ID, which remains the same across vCenters.
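Under the covers this is the existing RelocateVM_Task call, with the 6.0 API adding a ServiceLocator in the RelocateSpec to describe the destination vCenter. A minimal pyVmomi sketch, assuming you already hold the VM object from the source vCenter and the destination host, resource pool and datastore objects from the target one; the URL, UUID and thumbprint values are placeholders:

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    def xvc_vmotion(vm, dst_host, dst_pool, dst_datastore,
                    dst_vc_uuid, dst_vc_url, dst_thumbprint, user, pwd):
        """Cross-vCenter vMotion: RelocateVM_Task with a ServiceLocator
        describing the destination vCenter (new in the vSphere 6.0 API)."""
        service = vim.ServiceLocator(
            instanceUuid=dst_vc_uuid,      # destination VC instance UUID
            url=dst_vc_url,                # e.g. https://vc2.example.com
            sslThumbprint=dst_thumbprint,  # destination VC SSL thumbprint
            credential=vim.ServiceLocatorNamePassword(username=user,
                                                      password=pwd),
        )
        spec = vim.vm.RelocateSpec(host=dst_host, pool=dst_pool,
                                   datastore=dst_datastore, service=service)
        WaitForTask(vm.RelocateVM_Task(
            spec, vim.VirtualMachine.MovePriority.defaultPriority))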

Read more…

EVO: Rail – Integrated Hardware and Software

August 25th, 2014


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners for them to ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

Each of the four compute nodes within the 2U appliance has a very specific minimum set of specifications. Some hardware vendors may go above and beyond this, adding say GPU cards for VDI or more RAM per host, but VMware wants a standard approach. These kinds of servers don’t currently exist on the market, other than what other hyper-converged companies whitebox from the likes of SuperMicro, so we’re talking about new hardware from partners.

Each of the four EVO: RAIL nodes within a single appliance will have at a minimum the following:

  • Two Intel E5-2620v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD for the ESXi boot device
  • Three SAS 10K RPM 1.2TB HDDs for the VSAN datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One VSAN certified pass-through disk controller
  • Two 10GbE NIC ports (either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for out-of-band management

Each appliance is fully redundant with dual power supplies. As there are four ESXi hosts per appliance, you are covered for hardware failures and maintenance. The ESXi boot device, HDDs and SSDs are all enterprise-grade, and VSAN itself is resilient. EVO: RAIL version 1.0 can scale out to four appliances, giving you a total of 16 ESXi hosts backed by a single vCenter and a single VSAN datastore. There is some new intelligence which automatically scans the local network for newly connected EVO:RAIL appliances and easily adds them to the EVO: RAIL cluster.

Read more…

EVO: Rail – Management Re-imagined

August 25th, 2014


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners for them to ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

The EVO: RAIL management software has been built to dramatically simplify deployment of the appliances as well as provisioning VMs. The user guide is only 29 pages, which gives you an idea of how hard VMware is driving simplicity. Marvin actually lives on as a character icon within the management interface, with an embedded “V” and “M”.

VMware recognises that vCenter has had a rather large feature bloat problem over the years. They have introduced new components like SSO which do provide needed functionality but add to the complexity of deploying vSphere. VMware has also tried to bring all these components together in the vCenter Server Appliance (VCSA).

This is great, but the VCSA has some functionality missing compared to the Windows version, such as Linked Mode, and some customers worry about managing the embedded database for large deployments. As EVO:RAIL is aimed at smaller deployments and isn’t concerned with linking vCenters together, the VCSA is a good option, and the EVO:RAIL software is in fact a package that runs as part of the VCSA. No additional database is required; it is all built into the appliance. It uses the same public APIs to communicate with vCenter but acts as a layer providing a simpler user experience, hiding some of vCenter’s complexity. vCenter is still there, so you can always connect directly with the Web Client and manage VMs as you normally do; changes made in either environment are common to both, so there are no conflicts.

EVO:RAIL is also written purely in HTML5, even down to the VM console: no yucky Flash like the vSphere Web Client, and it works in any browser, even on an iPad. Interestingly, it has a look a little similar to the Microsoft Azure Pack. Who would ever have thought VMware would write a VM management interface built for simplicity that resembles an existing Microsoft one!

Read more…

VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance

August 25th, 2014

VMware will announce today at VMworld US that it is entering the hyper-converged appliance market with a solution called EVO:Rail. This has been rumoured for a while, since an eagle-eyed visitor to the VMware campus spotted a sign for Marvin in the briefing center. Marvin was the engineering name and still lingers in parts of the product, but its grown-up name is EVO:Rail.

EVO(lution) is eventually going to be a suite of products/solutions. Rail, the first announcement, is named for the smallest part of a data center rack, the rail, so you can infer that VMware intends to build this portfolio out to an EVO:RACK and beyond.

EVO:Rail combines compute, storage and networking resources into a hyper-converged infrastructure appliance, with the intention of dramatically simplifying infrastructure deployment. Hardware-wise this is pretty much what Nutanix and SimpliVity, to name two examples, do today. Spot the acronym, HCIA, to hunt for newly added VMworld sessions.

VMware is not, however, entering the hardware business itself; that would undermine the billions in marketing spent on the Software Defined Data Center message of software ruling the world. Partner hardware vendors will build the appliance to strict specifications, with VMware’s EVO:RAIL software bundle pre-installed and the appliance delivered as a single SKU. Some may see this as a technicality: VMware has always said that if you need specific hardware you are not software defined. Does EVO:RAIL count as specific hardware?

Support will be with the hardware vendor for both hardware and software, with VMware providing back-end software support to the hardware vendor.


Read more…

HP’s new management appliance OneView updated to 1.1

July 4th, 2014

HP has updated its new all-singing, all-dancing management appliance, OneView, to 1.1.

This is now available for download after being announced at HP Discover last month.


HP OneView will be the ultimate replacement for HP Systems Insight Manager (HP SIM), HP Virtual Connect Enterprise Manager (VCEM), HP Insight Control and HP Intelligent Provisioning. It is delivered as a virtual appliance running on a hypervisor.

HP is putting a lot of effort into OneView and really trying to reimagine server management. I was never a fan of HP SIM, as I felt it was unnecessarily cumbersome; HP has specifically said one of the goals of OneView is to make server management far easier and quicker, with a lighter touch. In fact, they are not rushing to add functionality to OneView but taking a pragmatic approach, adding only what is absolutely needed. HP’s answer to Vblock is its Converged Systems, which are built, configured and managed by OneView, so HP has skin in the management game. Converged infrastructure is not just connecting hardware together; it requires converged management, which OneView aims to deliver.

Moving over to OneView is going to be a long process, however, as OneView has been designed to manage only Gen8 and later servers, with just a little management available for G7 servers. Far more complicated, though, is that there is no migration path from Virtual Connect to OneView: you need to delete your Virtual Connect domains and recreate them in OneView, which means shutting down every blade in your domain (up to four chassis) and starting from scratch. HP calls this a transition, not a migration. Not all current Virtual Connect functionality is available in OneView, so you may not even be able to configure your newly purchased chassis in OneView, depending on your required network config.

That said, OneView is going to be the future of HP server management, so you should be thinking in that direction for your future plans. One of the stumbling blocks may be licensing: you need to purchase, or upgrade existing, management software licenses to use OneView.

What’s new with 1.1?

  • Now available as a Hyper-V appliance as well as ESXi
  • You can now provision and manage 3PAR storage, integrating the configuration into server profiles (see the API sketch after this list)
  • Added support for the new 20/40 FlexFabric modules
  • Virtual Connect support for untagged traffic and VLAN tunnelling (OneView was pretty hampered without this before)
  • BIOS settings as part of server profiles (nice one!)
  • Inventory views of Cisco Nexus 5000 switches and the HP FEX module, which will be very useful
  • Server profiles for Gen8 rack-mount servers to update firmware and BIOS settings on the DL360/DL380
  • HP Insight Control for VMware vCenter Server is now HP OneView for VMware vCenter
  • HP Insight Control for Microsoft System Center is now HP OneView for Microsoft System Center
  • HP Insight Control for Red Hat Enterprise Virtualization is now HP OneView for Red Hat Enterprise Virtualization
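As a rough illustration of the REST API that sits behind OneView (and behind these integrations), here is a minimal Python sketch that authenticates to the appliance and lists server profiles, the construct 1.1 extends with BIOS settings and 3PAR volume attachments. The appliance address and credentials are placeholders, and the X-API-Version value is an assumption to check against your appliance:

    import requests

    OV = "https://oneview.example.com"  # hypothetical appliance address
    HDRS = {"X-API-Version": "101"}     # assumed value for this OneView era

    # Authenticate: OneView issues a session token from /rest/login-sessions
    r = requests.post(f"{OV}/rest/login-sessions", headers=HDRS, verify=False,
                      json={"userName": "administrator", "password": "password"})
    HDRS["auth"] = r.json()["sessionID"]

    # List server profiles and the hardware each one is applied to
    profiles = requests.get(f"{OV}/rest/server-profiles",
                            headers=HDRS, verify=False)
    for p in profiles.json().get("members", []):
        print(p["name"], p.get("serverHardwareUri"))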

Here’s all the documentation and another link with some of the other guides (HP, it may be worth putting them all in one place).

Categories: HP

HP Discover US 2014. The Day 1 Buzz

June 11th, 2014



This week I’ve been lucky enough to be invited by HP to their annual HP Discover conference in Las Vegas. As I’ve been working with HP technology and its predecessors for so long, I really couldn’t miss an opportunity to dive deeper into the world of HP and find out more about the myriad technologies HP is involved in.

Yesterday I travelled for 15 hours from London to Las Vegas via San Francisco, and after all that travel headed out to Vegas’s answer to the London Eye, the High Roller, the world’s largest Ferris wheel, where I met up with the other invited bloggers and social-media folk for spectacular views of the Strip.


Read more…

Categories: HP Discover