Archive

Posts Tagged ‘VSAN’

VMworld EU 2015 Buzz: The Future of Software-Defined Storage – What does it look like in 3 years time? – CTO6453

October 28th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

Richard McDougall, a Principal Engineer at VMware, led this presentation peeking into the future.


This session was about the futures and trends for storage hardware and next-generation distributed services: shared NVMe/PCIe at rack scale, flash densities, and whether magnetic storage still has a place.

Richard gave an interesting talk explaining the needs of Big Data/NoSQL applications and their storage requirements, building up a graph with two axes: horizontal for size, from tens of TBs to tens of PBs, and vertical for IOPS, from 1,000 to 1,000,000.

He built up the picture showing where various memory and storage applications sit, and then added which hardware and software platforms are used to service these applications; it was a great visual aid.

He spent time going through how cloud-native applications and containers still have storage requirements, with options ranging from copying the whole root tree to Docker's approach of cloning via a union file system (aufs), much like redo logs for VMDKs.

Containers still need files, not blocks, and they need snapshots and clones. You need a non-persistent boot environment as well as somewhere to put persistent data. Shared volumes may be needed, as well as an object store for retention/archive.
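
To make that split concrete, here is a minimal sketch using the Docker SDK for Python. It is purely illustrative and not tied to any VMware product: it assumes the "docker" Python package and a local Docker daemon, and the image name and paths are just examples. The container's root file system stays disposable while a named volume carries the persistent data.

```python
# Minimal sketch: ephemeral container rootfs + a named volume for persistent data.
# Assumes the "docker" Python SDK (pip install docker) and a local Docker daemon;
# image name and mount path are illustrative only.
import docker

client = docker.from_env()

# A named volume survives container deletion, unlike the union-filesystem root.
client.volumes.create(name="appdata")

container = client.containers.run(
    "mysql:5.7",
    detach=True,
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    # Persistent data lands on the volume; everything else in the container is disposable.
    volumes={"appdata": {"bind": "/var/lib/mysql", "mode": "rw"}},
)
print(container.short_id)
```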

Richard went on to talk about hardware and the massive increase in performance for NVDIMMs, getting closer to DRAM. Have a look at the comparison chart, which scales the latencies relative to the travel time from California to Australia.


He then went through some of the device interconnects and posited that NVMe will take over most current interconnect methods; he was very positive about NVMe!


He mentioned how hard it is to actually build truly scale-out, performant storage.

He mentioned a great use case for caching companies like PernixData: in the future they could be used to front-end things like S3 storage, so you could have massive buckets in the cloud yet get very fast, locally cached access. Interesting.

The dream is a single common software-defined storage platform, with a single HCL, serving block storage, Ceph, MySQL, MongoDB, Hadoop and so on. I think that’s what VMware is trying to make VSAN do.

This is very difficult to achieve, but I can certainly see a future VSAN, not too far away, with native SMB and NFS access as well as persistent storage for containers running on the Photon Platform. This would give you the best of both worlds: stateless containers running natively, and stateful containers whose data is stored locally within the container and replicated to other nodes in the VSAN cluster, as they are VMs. Other services could access SMB and NFS file data natively on VSAN, which would also be replicated across the cluster and across sites for DR.

UKVMUG: The unofficial lowdown on everything announced at VMworld

November 18th, 2014

I have had the pleasure today of presenting at the 4th annual UK VMware User Group conference at the National Motorcycle Museum in Solihull near Birmingham.

I did a whirlwind tour of everything that was announced at VMworld and believe me, there was a huge amount. OK, so there was no major release, which is the norm (but plenty of teasers), yet there was enough else going on in the VMware space to fill more than a UKVMUG! I know, I’ve done the research! Even though I was at VMworld US, so much was going on that I didn’t appreciate all the new shiny things being announced, and once you start getting down to the nitty gritty of everything, you will be amazed at how much is going on.

I really didn’t have time to go through everything in detail, so the presentation acts as an independently curated jumping-off point for you to find out more about the announcements that matter to you. You may not care particularly about hyper-converged or OpenStack, so you can flick through the slides and then head off to continue your explorations.

Thanks for having me UKVMUG!

Here’s the presentation:

EVO: Rail – Integrated Hardware and Software

August 25th, 2014


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners, who will ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

Each of the four compute nodes within the 2U appliance has a very specific minimum set of specifications. Some hardware vendors may go above and beyond this by adding, say, GPU cards for VDI or more RAM per host, but VMware wants a standard approach. These kinds of servers don’t currently exist on the market, other than what other hyper-converged companies whitebox from, say, SuperMicro, so we’re talking about new hardware from partners.

Each of the four EVO: RAIL nodes within a single appliance will have at a minimum the following:

  • Two Intel E5-2620v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD for the ESXi boot device
  • Three SAS 10K RPM 1.2TB HDD for the VSAN datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One VSAN certified pass-through disk controller
  • Two 10GbE NIC ports (either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for out-of-band management

Each appliance is fully redundant with dual power supplies. As there are four ESXi hosts per appliance, you are covered for hardware failures or maintenance. The ESXi boot device and all HDDs and SSDs are enterprise-grade, and VSAN itself is resilient. EVO: RAIL version 1.0 can scale out to four appliances, giving you a total of 16 ESXi hosts backed by a single vCenter and a single VSAN datastore. There is some new intelligence which automatically scans the local network for new EVO:RAIL appliances when they are connected and easily adds them to the EVO:RAIL cluster.
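
As a rough back-of-the-envelope illustration of those numbers (a sketch only: it assumes the per-node spec above, hybrid VSAN where the SSD acts as cache rather than capacity, and the default FTT=1 mirroring policy, and it ignores metadata overhead and slack space):

```python
# Back-of-the-envelope VSAN capacity for EVO:RAIL 1.0, based on the spec above.
# Assumptions: SSD is cache only, FTT=1 mirroring, no overhead or slack counted.
HDD_SIZE_TB = 1.2
HDDS_PER_NODE = 3
NODES_PER_APPLIANCE = 4
MAX_APPLIANCES = 4            # 16 ESXi hosts total in version 1.0

raw_per_appliance = HDD_SIZE_TB * HDDS_PER_NODE * NODES_PER_APPLIANCE   # 14.4 TB
raw_max_cluster = raw_per_appliance * MAX_APPLIANCES                    # 57.6 TB
usable_ftt1 = raw_max_cluster / 2    # FTT=1 mirrors every object, halving capacity

print(f"Raw capacity per appliance: {raw_per_appliance:.1f} TB")
print(f"Raw capacity at four appliances: {raw_max_cluster:.1f} TB")
print(f"Approx. usable with FTT=1 mirroring: {usable_ftt1:.1f} TB")
```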


EVO: Rail – Management Re-imagined

August 25th, 2014


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners, who will ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

The EVO: RAIL management software has been built to dramatically simplify the deployment of the appliances as well as the provisioning of VMs. The user guide is only 29 pages, so you can get an idea of how VMware is driving simplicity. Marvin actually exists as a character icon within the management interface, with an embedded “V” and “M”.

VMware recognises that vCenter has had a rather large feature-bloat problem over the years. They have introduced new components like SSO which do provide needed functionality, but which add to the complexity of deploying vSphere. VMware has also tried to bring all these components together in the vCenter Server Appliance (VCSA).

This is great, but it has some functionality missing compared to the Windows version, like Linked Mode, and some customers worry about managing the embedded database for large deployments. As EVO:RAIL is aimed at smaller deployments and isn’t concerned with linking vCenters together, the VCSA is a good option, and the EVO:RAIL software is in fact a package that runs as part of the VCSA. There is no additional database required: it is all built into the appliance and uses the same public APIs to communicate with vCenter, acting as a layer that provides a simpler user experience and hides some of the complexity of vCenter. vCenter is still there, so you can always connect directly with the Web Client and manage VMs as you normally do, and any changes made in either environment are common, so there are no conflicts.
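
Because EVO:RAIL drives vCenter through the same public APIs, nothing it does is off-limits to you. As a hedged illustration, assuming pyVmomi is installed and using placeholder hostname and credentials, you could list the same VMs yourself against that vCenter:

```python
# Minimal sketch of talking to the same vCenter through its public API.
# Assumes pyVmomi is installed; hostname, user and password are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcsa.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)

# Walk the inventory and print every VM with its power state.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

Disconnect(si)
```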

EVO:RAIL is also written purely in HTML5, even for the VM console: no yucky Flash like the vSphere Web Client, and it works on any browser, even an iPad. Interestingly, it has a look that is a little similar to Microsoft Azure Pack. Who would ever have thought VMware would write a VM management interface built for simplicity that is similar to an existing Microsoft one!


VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance

August 25th, 2014

VMware will announce shortly today at VMworld US that it is entering the hyper-converged appliance market with a solution called EVO:Rail. This has been rumoured for a while, since an eagle-eyed visitor to the VMware campus spotted a sign for Marvin in their briefing center. Marvin was the engineering name and has still stuck around in parts of the product, but its grown-up name is EVO:Rail.

EVO(lution) is eventually going to be a suite of products/solutions; Rail, the first announcement, is named for the smallest part of a data center rack, the rail, so you can infer that VMware intends to build this portfolio out to an EVO:RACK and beyond.

EVO:Rail combines compute, storage and networking resources into a hyper-converged infrastructure appliance with the intention of dramatically simplifying infrastructure deployment. Hardware-wise, this is pretty much what Nutanix and SimpliVity, as two examples, do today. Spot the acronym, HCIA, to hunt for newly added VMworld sessions.

VMware is not, however, entering the hardware business itself; that would kill the billions of marketing budget spent on the Software-Defined Data Center message of software ruling the world. Partner hardware vendors will be building the appliance to strict specifications, with VMware’s EVO:RAIL software bundle pre-installed and the appliance delivered as a single SKU. Some may see this as a technicality. VMware has always said that if you need specific hardware you are not software-defined. Does EVO:RAIL count as specific hardware?

Support will be with the hardware vendor for both hardware and software, with VMware providing software support to the hardware vendor at the back end.

