Archive

Posts Tagged ‘networking’

Tech Field Day 11 Preview: CloudPhysics

June 9th, 2016 1 comment

Tech Field Day 11 is happening in Boston, from 22-24 June and I’m super happy to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: I’m heading to Tech Field Day 11 and DockerCon!

CloudPhysics

CloudPhysics is somewhat of a darling of the virtualisation ecosystem, founded by a number of ex-VMware brains. CloudPhysics previously presented at Virtualisation Field Day 3, two years ago.

It has a SaaS product for analysing on-premises VMware installations. This is hugely valuable: vSphere is powerful and can deliver fantastic performance, but because it touches compute, storage and networking it can be difficult to see where performance or configuration issues lie.
CloudPhysics sucks up all your vSphere configuration and performance data via a small virtual appliance, sends it to the cloud and crunches it to give you visibility across your entire infrastructure, so you can view reports, see config changes and track cluster performance. You can also look ahead using the product’s trending and predictive analysis. You can get going in 15 minutes and spend no money with the Free edition, or upgrade to the Premium edition, a yearly subscription, for more features.

The user interface is all based on cards, each one a mash-up of systems data and analytics. In the Free edition you can see things like inventory information, VM reservations and limits, snapshots and host resource commitment. If you start paying you get many more cards, including datastore space, cluster health, unused VMs, orphaned VM files, I/O contention, a helpful knowledge base advisor to match KB articles to your infrastructure, and some cost comparison calculators for vCloud Air and Azure. As it’s a SaaS platform, the cards are continually being updated and new ones appear fairly regularly. You can also create your own.

Being able to spot bad configurations and unauthorised changes is so useful, and if you can correlate a performance change with a configuration change, that can save hours of needless investigation.

It’s strange to say, but you really shouldn’t need any of this. I wish vCenter was able to give you all this information in an easily digestible format, but it doesn’t, so CloudPhysics is great. Who knows, if VMware ever gets to vCenter as a Service, whether analytics like this will be part of the future roadmap?

CloudPhysics has always had the VM analytics but has recently been fleshing out its host and cluster exploration capabilities so it can better show the relationships between VMs, noisy neighbours for example. It will be interesting to hear what’s new.

Partner Edition

Read more…

Categories: TFD11

ZeroStack’s full stack from Infrastructure to Application

January 13th, 2016 No comments

ZeroStack is a recently-out-of-stealth company providing a cloud-managed hyper-converged appliance running OpenStack. It is targeting private cloud customers who want to stand up their own OpenStack instances but don’t want the hassle of getting it all working themselves. What ZeroStack also does, uniquely, is combine this infrastructure part with application deployment, which for me is the exciting bit.

It is early days for the company but it has seasoned financial backers, advisers and founders and after just a year has an impressive amount of functionality in its product.

Private Cloud

The use case is companies wanting to replicate the ease of public cloud, but as a private cloud. Amazon’s AWS and Microsoft’s Azure make spinning up VMs or even direct application instances easy and allow you to pay per use. It’s all about lowering the admin of deployment and moving to an IT consumption model.

This is all great, but companies currently need to replicate this functionality in-house and may want to build out a private cloud. They may need data kept on premises due to perceived security concerns, or may even be legally required to hold data in a particular location. There may be more practical concerns, like the amount of data to be stored or analysed making it impractical to move externally. Cost may also be a factor, with scare stories of AWS bills racking up quickly, although I do find companies are very poor at working out their own internal data center costs, so comparisons are not necessarily accurate.

The point where deployment happens is also shifting away from infrastructure support teams to application support teams and further along to applications themselves managing their own infrastructure resources via API calls to a cloud to spin up new VMs with automated deployment and scaling of applications.
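To make that last point concrete, here’s a rough Python sketch of the kind of request an application might build to ask a cloud API for new VMs. The shape is OpenStack-flavoured (ZeroStack runs OpenStack), but the endpoint fields and values are my own invention for illustration, not ZeroStack’s actual API.

```python
# Illustrative only: field names follow the general OpenStack "create
# server" style, but this is a sketch, not a documented API contract.
import json

def build_vm_request(name, flavor, image, count=1):
    """Build the JSON body an application might POST to a cloud API
    to spin up VMs with automated scaling."""
    return {
        "server": {
            "name": name,
            "flavorRef": flavor,   # instance size: CPU/RAM profile
            "imageRef": image,     # OS template to boot from
            "min_count": count,    # scale out by asking for more instances
        }
    }

body = build_vm_request("web-01", "m1.small", "ubuntu-14.04", count=3)
print(json.dumps(body, indent=2))
```

An application scaling itself would vary `count` based on load and POST this body to the cloud endpoint, which is exactly the shift away from infrastructure teams described above.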

Suffice it to say, companies want to replicate public cloud functionality internally to give applications the resources they require. Current software options are generally VMware, which is feature rich with excellent infrastructure resiliency and a cost model to match, or OpenStack, which is open source and not as feature rich, with deliberately less infrastructure resiliency, but has no vendor license costs.

ZeroStack uses the tagline “Public Cloud Experience, Private Cloud Control” and as I see it is attempting to give its customers four key things:

1. Hardware: Hyper-Converged Appliance

Read more…

VMworld EU 2015 Buzz: Meeting OpenNebula

October 28th, 2015 1 comment

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

OpenNebula

I spent quite a bit of time in the Solutions Exchange and made a point of going through the New Innovators section. I found OpenNebula, which seems a very simple private cloud enabler and which piqued my interest after worrying about the complexity of vRealize Automation and vCloud Director for private cloud. It seems a very simple solution: download an .OVA, suck in your templates and then provide a portal so clients can deploy from those templates, cloud-style. Very simple, just what many companies need. I believe it is even open source with an Apache license.

OpenNebula says it is an enterprise-ready turnkey solution for deploying private clouds. You can use KVM, Xen or ESXi as your hypervisor, and you can also layer it over vCenter to provide a multi-tenant private cloud.

As a consumer you can use the AWS EC2 and EBS APIs. It has a marketplace of appliances ready to be run in OpenNebula, plus chargeback/accounting, auditing, RBAC, quotas, etc.: a pretty comprehensive list of features.

Community support is available or you can pay for commercial support straight from the developers.

I will certainly be downloading their software and having a look.

 

HP Service Pack for ProLiant 2015.04.0 released, includes vSphere 6.0 support

April 17th, 2015 No comments

HP server users may be glad to know that HP has released the latest update to its Service Pack for ProLiant, which will be supported until April 30, 2016.

vSphere 6.0 support has been added, so super-keen upgraders now have HP drivers and firmware to match.

http://h20564.www2.hp.com/hpsc/swd/public/detail?sp4ts.oid=4091567&swItemId=MTX_8f34f27973a04a71b211d728ab#tab1

This latest SPP has added support for:

  • New HP ProLiant servers:
    • HP ML10 v2
    • HP XL730f Gen9
    • HP XL740f Gen9
    • HP XL750f Gen9
    • HP ML110 Gen9
    • HP XL170r Gen9
    • HP XL190r Gen9
    • HP WS460c Gen9 Graphics Server Blade
  • New HP ProLiant options
  • Red Hat Enterprise Linux 6.6, SUSE Linux Enterprise Server 12, VMware vSphere 5.5 U2 and VMware vSphere 6.0
  • HP USB Key Utility for Windows v2.0.0.0 for downloads greater than 4GB
  • Select Linux firmware components available in rpm format
  • HP Smart Update Manager v7.2.0.

Release Notes are here:

ftp://ftp.hp.com/pub/softlib2/software1/doc/p1205445419/v108284/2015.04.0-SPPReleaseNotes.pdf

The contents list is here:

ftp://ftp.hp.com/pub/softlib2/software1/doc/p1205445419/v108284/2015.04.0SPPContentsReport.pdf

HP ESXi image for vSphere 6.0 available here:

https://my.vmware.com/web/vmware/details?downloadGroup=OEM-ESXI60GA-HP&productId=491

Happy patching…

 

Categories: Flex-10, HP

What’s New in vSphere 6.0: Enhanced Linked Mode

February 2nd, 2015 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vCenter Linked Mode provides a single management overview of multiple vCenter instances.

Linked Mode also provides a single login for multiple vCenter Servers and then shows you a common inventory view and allows you to search for objects across vCenters. Licenses, roles and permissions are replicated between vCenter instances.

Linked Mode has only ever been available for Windows vCenters (ADAM is used as the replication engine), so you couldn’t share licenses, roles and permissions with any vCenter appliances you had.

With the release of the new Platform Services Controller in vSphere 6.0, some of the Linked Mode functionality is changing and it’s been given a new name.

vSphere will also now include an Enhanced Linked Mode which will require and work in conjunction with the Platform Services Controller.

image

This will not rely on ADAM but on its own multi-master replication technology called VMDir, based on OpenLDAP, which means replication now works across Windows vCenters as well as vCenter appliances.

Replication will be expanded to include Policies and Tags along with Roles and Permissions. In fact, the replication engine will allow VMware to sync any kind of information between Platform Services Controllers, which can then be used by vCenters and other management products. Bye bye ADAM, you won’t be missed.
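As a toy illustration of the multi-master idea (emphatically not VMware’s actual VMDir protocol), here’s a Python sketch where each node accepts writes locally and peers pull whatever they’re missing, so a Windows PSC and an appliance PSC converge on the same roles and tags:

```python
# Toy multi-master replication: every node is writable, and replication
# runs in both directions with no designated primary. Per-key versions
# are a simplification; real directory replication is far more involved.

class PscNode:
    def __init__(self, name):
        self.name = name
        self.store = {}   # key -> (version, value): roles, tags, policies

    def write(self, key, value):
        version = self.store.get(key, (0, None))[0] + 1
        self.store[key] = (version, value)

    def replicate_from(self, other):
        # Pull any entry the peer holds at a higher version than ours.
        for key, (version, value) in other.store.items():
            if self.store.get(key, (0, None))[0] < version:
                self.store[key] = (version, value)

windows_vc = PscNode("windows-psc")
appliance_vc = PscNode("appliance-psc")

windows_vc.write("role:backup-admin", {"privileges": ["VM.Backup"]})
appliance_vc.write("tag:tier", "gold")

# Sync both ways: both PSCs end up with the role and the tag.
appliance_vc.replicate_from(windows_vc)
windows_vc.replicate_from(appliance_vc)
```

The point of the sketch is the shape of the feature: because replication is peer-to-peer rather than tied to a Windows-only store like ADAM, the platform type of each vCenter stops mattering.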

What’s New in vSphere 6.0: Networking

February 2nd, 2015 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

vSphere networking hasn’t had any huge additions in this release. This is partly to be expected, as VMware’s networking messaging mainly revolves around NSX for now.

Network I/O Control (NIOC) has however had a very useful addition: you can now have per-VM and Distributed Switch bandwidth reservations. You can therefore guarantee network as well as compute resources for your critical VMs.
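The idea behind a bandwidth reservation is admission control: don’t place a VM whose guarantee would oversubscribe the pool. Here’s a hedged sketch of that logic; the numbers and the reservable fraction are made up for illustration, not NIOC’s actual defaults.

```python
# Sketch of reservation-based admission control for network bandwidth.
# A slice of the uplink is held back for system traffic and unreserved
# VMs; reservations must fit in the remainder.

def can_admit(existing_reservations_mbps, new_reservation_mbps,
              uplink_capacity_mbps, reservable_fraction=0.75):
    """Return True if the new VM's reservation still fits in the
    reservable share of the uplink."""
    pool = uplink_capacity_mbps * reservable_fraction
    return sum(existing_reservations_mbps) + new_reservation_mbps <= pool

# 10GbE uplink with 75% reservable gives a 7500 Mbps guaranteed pool.
print(can_admit([2000, 3000], 2000, 10_000))  # True: 7000 <= 7500
print(can_admit([2000, 3000], 3000, 10_000))  # False: 8000 > 7500
```

The useful property is that a reservation, unlike a share, is a hard floor: a critical VM keeps its bandwidth even when the uplink is congested.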

IPv6 has also been beefed up, but this is mainly for new greenfield deployments. It’s not easy to transition from IPv4 to IPv6, so I think VMware sees IPv6 as being only for new deployments. You will be able to manage ESXi purely with IPv6, and iSCSI and NFS will also be supported. In the future, VMware is looking to move to IPv6-only for vSphere management, but that’s a few years out; dual-stack IPv4 and IPv6 will be around for a while.

Here’s what the Install for vCenter would look like with IPv6

image

What’s New in vSphere 6.0: NFS Client

February 2nd, 2015 No comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

NFS has been available as a storage protocol since 2006 with ESX 3.0 and vSphere has been using NFS version 3 for all this time. There’s been no update to how NFS works.

I’ve been a massive fan of NFS since it was released. No LUNs, much bigger datastores and far simpler management. Being able to move around, back up and restore VM disk files natively from the storage array is extremely powerful. NFS datastores are thin-provisioned by default, which allows your VM admin and storage admin to agree on actual storage space utilisation.

However, good old NFSv3 has a number of limitations: there is no multi-pathing support, security is limited, and performance is constrained by the single server head.

vSphere 6.0 introduces NFS version 4.1 to solve many of these limitations.

NFS 4.1 introduces multi-pathing by supporting session trunking, using multiple remote IPs for a single session. Not all vendors will support this, so it’s best to check. You now get increased performance from load-balanced and parallel access, and with it comes better availability from path failover.
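A little simulation shows what session trunking buys you; this illustrates the behaviour, not the NFS 4.1 wire protocol. I/O is spread round-robin over several remote IPs, and a dead path is skipped rather than failing the I/O:

```python
# Simulation of trunked-session path selection: round-robin over
# healthy paths, automatic failover when a path goes down.
from itertools import cycle

class TrunkedSession:
    def __init__(self, remote_ips):
        self.paths = {ip: True for ip in remote_ips}  # ip -> path is up?
        self._rr = cycle(remote_ips)

    def mark_down(self, ip):
        self.paths[ip] = False

    def next_path(self):
        # Skip dead paths; only fail if every path is down.
        for _ in range(len(self.paths)):
            ip = next(self._rr)
            if self.paths[ip]:
                return ip
        raise IOError("all paths down")

session = TrunkedSession(["10.0.0.1", "10.0.0.2"])
print(session.next_path())      # 10.0.0.1
print(session.next_path())      # 10.0.0.2
session.mark_down("10.0.0.1")
print(session.next_path())      # 10.0.0.2 again: failover, no error
```

Compare that with NFSv3, where losing the single server IP means losing the datastore; the "no longer any single point of failure" point later in this post is exactly this behaviour.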

image

There is improved security using Kerberos authentication. You need to add your ESXi hosts to AD and specify a Kerberos user before creating any NFSv4.1 datastores with Kerberos enabled. You then use this Kerberos username and password to authenticate against the NFS mount. All files stored in a Kerberos-enabled datastore will be accessed using this single user’s credentials. You should always use the same user on all hosts, otherwise vMotion and other features might fail if two hosts use different Kerberos users. NTP is also a requirement, as usual when using Kerberos. This configuration can be automated with Host Profiles.
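The "same Kerberos user on every host" rule lends itself to a simple pre-flight check before enabling Kerberos datastores. The host names and user accounts below are invented for illustration:

```python
# Pre-flight check: flag clusters where hosts disagree on the
# configured Kerberos NFS user, since that can break vMotion against
# Kerberos-enabled NFSv4.1 datastores.

def check_kerberos_users(host_users):
    """host_users: dict of host name -> configured Kerberos user.
    Returns the set of distinct users; more than one is a problem."""
    return set(host_users.values())

cluster = {
    "esxi-01.lab.local": "nfs-svc@LAB.LOCAL",
    "esxi-02.lab.local": "nfs-svc@LAB.LOCAL",
    "esxi-03.lab.local": "nfs-other@LAB.LOCAL",  # misconfigured host
}

users = check_kerberos_users(cluster)
if len(users) > 1:
    print("WARNING: hosts disagree on Kerberos user:", sorted(users))
```

Since the configuration can be pushed via Host Profiles anyway, a check like this is mainly useful for catching hosts that were configured by hand.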

 

NFSv4.1 now allows you to use a non-root user to access files. RPC header authentication has also been added to boost security; it only supports DES-CBC-MD5, which is universally supported, rather than the stronger AES-HMAC, which not all vendors support. Locking has been improved with in-band mandatory locks using share reservations as a locking mechanism. There is also better error recovery.

There are some caveats however with using NFS v4.1. NFSv4.1 is not compatible with SDRS, SIOC, SRM and VVols but you can continue to use NFSv3 datastores for these.

NFSv3 locking is not compatible with NFSv4.1. You must not mount a share as NFSv3 on one ESXi host and the same share as NFSv4.1 on another; it’s best to configure your array to use one NFS protocol, either v3 or v4.1, but not both.
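That mistake is easy to catch with a quick script over your mount inventory; the hosts and share paths below are made up:

```python
# Find shares mounted with more than one NFS version across hosts,
# which risks the v3/v4.1 locking clash described above.

def find_protocol_conflicts(mounts):
    """mounts: list of (host, share, version) tuples.
    Returns the set of shares mounted with mixed NFS versions."""
    versions = {}
    for host, share, version in mounts:
        versions.setdefault(share, set()).add(version)
    return {share for share, v in versions.items() if len(v) > 1}

mounts = [
    ("esxi-01", "nas01:/vols/ds1", "3"),
    ("esxi-02", "nas01:/vols/ds1", "4.1"),  # conflict: locking will clash
    ("esxi-01", "nas01:/vols/ds2", "4.1"),
    ("esxi-02", "nas01:/vols/ds2", "4.1"),  # fine: consistent version
]
print(find_protocol_conflicts(mounts))  # {'nas01:/vols/ds1'}
```

Feeding this from your actual host mount data is left as an exercise; the point is that the check is mechanical, so there’s no excuse for hitting the locking clash in production.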

The protocol has also been made more efficient by being less chatty by compounding operations, removing the file lock heartbeat and session lease.

All paths down handling is now different with multi-pathing support. The clock skew issue that caused an all path down issue in vSphere 5.1 and 5.5 has been fixed in vSphere 6.0 for both NFSv3 and NFSv4.1. With multi-pathing, IO can failover to other paths if one path goes down, there is no longer any single point of failure.

No support for pNFS will be available for ESXi 6.0. This has caused some confusion, best to have a look at Hans de Leenheer’s post: VSPHERE 6 NFS4.1 DOES NOT INCLUDE PARALLEL STRIPING!

Very happy to see NFSv4.1 see the light of day with vSphere, for the multi-pathing at least, as its absence caused many people to go down the block protocol route with the added complexity of LUNs. However, it’s a pity NFSv4.1 is not supported with VVols; I’m sure VMware must be working on this.

What’s New in vSphere 6.0: Finally Announced (at last!)

February 2nd, 2015 No comments

Series:

  1. What’s New in vSphere 6.0: Finally Announced (at last!)
  2. What’s New in vSphere 6.0: vCenter and ESXi
  3. What’s New in vSphere 6.0: Enhanced Linked Mode
  4. What’s New in vSphere 6.0: Virtual Volumes
  5. What’s New in vSphere 6.0: Content Library
  6. What’s New in vSphere 6.0: Virtual Datacenter (removed from release)
  7. What’s New in vSphere 6.0: Fault Tolerance
  8. What’s New in vSphere 6.0: Cross vCenter and Long Distance vMotion
  9. What’s New in vSphere 6.0: Networking
  10. What’s New in vSphere 6.0: NFS Client
  11. What’s New in vSphere 6.0: Certificate Management

Finally, the time has come for VMware to publicly announce its latest update to version 6.0 of its ever growing virtualisation platform, vSphere.

It’s been a rather strange and somewhat convoluted journey to get to the actual announcement.

For the first time ever for VMware (kudos!), there was a very large public Beta (more than 10,000 people) but participants had to sign an NDA to join which meant they couldn’t talk about it. VMware itself then outed many of the features during keynotes and sessions at VMworld San Francisco 2014 (to the consternation and surprise of some product managers!) but still had to call the beta a Tech Preview. Pat Gelsinger himself called out the name during his keynote despite everyone else at VMware trying to keep quiet on the official name. All this left many people unsure what they could and couldn’t talk about. The apparent legal reason for not being able to officially announce vSphere 6.0 is all to do with financials. VMware didn’t want to announce a future product in 2014 as they would then be obliged to account for future earnings. So, the whole song and dance is nothing to do with technology and all to do with financial reporting, isn’t life fun!

Personally, I don’t think this was handled in the best way, fantastic to have a public beta but no point trying to strictly control the messaging with an NDA with so many people involved. Even Microsoft and Apple have more open public betas nowadays.

As of today, that’s now officially water under the bridge (although I hope they learn some things for next time). The covers have finally been lifted and VMware has officially announced vSphere 6.0

VMware says there are three focus areas for this vSphere release:

  1. Continue to be the best and most widely used virtualisation platform
  2. Be able to virtualise all x86 workloads. Run all today’s traditional datacenter apps however big they are such as Oracle, SAP, Microsoft Dynamics and Java and build on that foundation to run the next generation of cloud applications as part of a Software Defined Datacenter such as NodeJS, Rails, Spring, Pivotal and Hadoop
  3. Create operational efficiency at scale by reducing manual steps with more automation

Although numbered 6.0, I would say that, as with vSphere 5.5, this is another evolutionary rather than revolutionary update and, were it not for VMware’s recent cadence of a major update every two years, could have been part of the vSphere 5 family. VSAN and NSX were the major new product announcements at VMworld 2013, and VMware decided to leave the big infrastructure announcements for VMworld 2014 to EVO:RAIL and its vCloud Air and vRealize rebranding.

As for vSphere 6.0, VMware has called this release the foundation for the Software Defined Datacenter.

image

The major new highlight as everyone knows is Virtual Volumes (VVols), which VMware has been talking about publicly since VMworld 2011 (I called vVols VMware’s revolutionary approach to storage) and which is a very significant update. VVols completely change the way storage is presented, managed and consumed, and certainly for the better. Most storage vendors are on board, as their software needs to be able to support VVols, and they’ve been champing at the bit for VVols to be released. Talk was that it was technically ready for vSphere 5.5 but VMware decided to keep it back, perhaps to let VSAN have its year in the sun and to give vSphere 6.0 something big.


VVols may be the headliner but there’s plenty else VMware has been working on:

  • Hosts up to 480 pCPUs, 12TB RAM, 64TB datastores and 1000 VMs
  • VMs up to 128 vCPUs and 4TB RAM
  • 64 nodes in a cluster and up to 6000 VMs
  • Per VM Storage I/O Control
  • VVols
  • NFS 4.1 with Kerberos
  • vMotion across vCenter Servers, virtual switches, and long distance
  • Fault Tolerance for Multi-Processor VMs
  • vSphere Web Client enhancements
  • Certificate Lifecycle Management via a command line interface
  • New abilities to replicate and backup to the vCHS (vCloud Air) cloud
  • Better vSphere Replication RPOs, down to 5 minutes
  • Network IO Control VM and distributed switch bandwidth reservations
  • Multi-Site replicated content library to store VM templates, vApps, ISO Images and scripts
  • AppHA expanded support for more applications

 

What’s New in vSphere 6.0: vCenter

August 27th, 2014 6 comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

VMware continues to build out its hypervisor core management application vCenter with more functionality. There are no dramatic architectural changes but VMware is moving slowly to pull apart vCenter into its component parts to be able to run more vCenters at scale and is creating a central services function.

Platform Services Controller (PSC)

VMware is introducing a new component called the VMware Platform Services Controller (which had a previous beta name of Infrastructure Controller).

SSO was the first component to be spun out into what is now being built up as the PSC. SSO was first released in 5.1, had major issues, and was rebuilt as SSO 2.0 for vSphere 5.5.

vCenter, vCOps, vCloud Director and vCloud Automation Center can use functionality within the PSC as a shared component.

vCenter is actually being split in two. One part is now called the Management Node and the other is the Platform Services Controller.

psc1

The Management Node contains all the vCenter Server components with all the security related things stripped out.

psc2

 

The PSC now contains the following functionality:

  • SSO
  • Licensing
  • Certificate Authority
  • Certificate Store
  • Miscellaneous Services

psc3

The Certificate Authority and Certificate Store are new components to at last tame the wild and woefully inadequate management of vSphere certificates. The new VMware Certificate Authority (VMCA) can act as a root certificate authority either managing its own certificates or handling certificates from an external Certificate Authority. VMCA provisions each ESXi host with a signed certificate when it is added to vCenter as part of installation or upgrade. You can view and manage these certificates from the vSphere Web Client and manage the full certificate lifecycle workflow.
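As a toy model of that lifecycle (illustrative fields only, no real cryptography, and not VMware’s actual VMCA code): the CA issues a host a signed certificate when it joins vCenter, and the inventory can then be queried for certificates nearing expiry.

```python
# Toy model of a provisioning CA: issue a certificate per host on
# join, then support lifecycle queries such as "what expires soon?".
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Certificate:
    subject: str
    issuer: str
    not_after: datetime

class ProvisioningCa:
    def __init__(self, name="lab root CA"):
        self.name = name
        self.issued = []

    def provision_host(self, host_name, days=365):
        """Issue a cert when a host is added to vCenter."""
        cert = Certificate(subject=host_name, issuer=self.name,
                           not_after=datetime.now() + timedelta(days=days))
        self.issued.append(cert)
        return cert

    def expiring_within(self, days):
        cutoff = datetime.now() + timedelta(days=days)
        return [c for c in self.issued if c.not_after <= cutoff]

ca = ProvisioningCa()
ca.provision_host("esxi-01.lab.local")            # default one-year cert
ca.provision_host("esxi-02.lab.local", days=30)   # short-lived cert
print([c.subject for c in ca.expiring_within(60)])
```

The expiry query is the part that matters operationally: the old pain was exactly that nobody could see vSphere certificate state in one place, let alone manage renewal as a workflow.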

Other services will be added to the PSC in future releases.

The PSC is built into vCenter and runs as a vPostgres database so there’s no additional database to worry about, and it runs in both the Windows and appliance versions. The PSCs self-replicate and, importantly, don’t use ADAM, so replication works between Windows and appliance vCenters.

You can either have the PSC embedded within vCenter Server or run it as an external component to vCenter Server.

Read more…

What’s New in vSphere 6.0: Content Library

August 27th, 2014 2 comments

VMware has finally officially announced what is to be included in vSphere 6.0 after lifting the lid on parts of the update during VMworld 2014 keynotes and sessions. 

See my introductory post: What’s New in vSphere 6.0: Finally Announced (about time!) for details of all the components.

Content Library is a planned new addition to vSphere 6.0 which was talked about for the first time in a session at VMworld. Content Library is a way to centrally store VM templates, vApps, ISO images and scripts.

This content can be synchronised across sites and vCenters. Synchronised content allows you to more easily deploy consistent workloads at scale. Having consistent content is easier to automate against, makes it easier to keep things in compliance and makes an admin’s life more efficient.

Content Library provides basic versioning of files in this release and has a publish and subscribe mechanism to replicate content between local and remote VCs, which by default is synchronised every night. Changes to descriptions, tags and other metadata will not trigger a version change. There is no de-dupe at the content library level, but storage arrays may do that behind the scenes.
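The versioning rule (content changes bump the version, metadata edits don’t) can be sketched like this; a simplified model of my own, not the actual Content Library implementation:

```python
# Sketch of content-hash-based versioning: only a change to the file
# bytes bumps the version, while description/tag edits leave it alone.
import hashlib

class LibraryItem:
    def __init__(self, name, content: bytes):
        self.name = name
        self.version = 1
        self._digest = hashlib.sha256(content).hexdigest()
        self.metadata = {}

    def update_content(self, content: bytes):
        digest = hashlib.sha256(content).hexdigest()
        if digest != self._digest:        # only real changes bump version
            self._digest = digest
            self.version += 1

    def update_metadata(self, **fields):  # never triggers a version bump
        self.metadata.update(fields)

item = LibraryItem("ubuntu-template", b"vmdk-bytes-v1")
item.update_metadata(description="Base Ubuntu template", tag="linux")
print(item.version)                       # still 1
item.update_content(b"vmdk-bytes-v2")
print(item.version)                       # 2
```

Keeping versions tied to content rather than metadata is what makes the nightly publish/subscribe sync cheap: subscribers only pull items whose version actually moved.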

Content library can also sync between vCenter and vCloud Director.

The content itself is stored either in vSphere datastores or, actually preferably, on a local vCenter file system, since the contents are then stored in a compressed format. A local file system is presented directly to the vCenter Servers: for a Windows VC it can be another drive or folder, but for the vCenter Appliance the preferred approach is to mount an NFS share directly to your vCenter appliance. This may mean you need to amend your storage networking, as many installations have segregated storage networks which are directly accessible by hosts to store VMs but not by vCenter.

Read more…