Archive for the ‘Storage’ Category

Restoring DevOps to Infrastructure with Actifio

August 23rd, 2016

As enterprises integrate DevOps into more of their development lifecycles, they start to bump up against some of the practicalities of managing data. A major tenet of DevOps is being able to ship code more quickly to give you an edge over your competitors. Writing code can be fast, and a continuous integration and continuous deployment (CI/CD) pipeline lets you take that new code, test it and push it out to production in an automated and repeatable fashion.

DevOps and Data

Data, however, is often one of the speed bumps that causes all this fancy CI/CD to slow to a crawl. If your developers need to test their small change against a large chunk of data, you need to get them access to that data somehow. Creating copies of databases or files is usually slow and inefficient, a time-consuming process that negates most of the speedy DevOps cleverness you've done for your code writing.

I've worked on numerous projects where a robocopy/rsync job ran weekly over the weekend to refresh hundreds of gigabytes of UAT and DEV environments from production data, in effect keeping three copies of production. It could only run at the weekend because of the size of the transfer and the impact on the underlying storage and network. One solution had to have the database down during the copy, which meant production couldn't even be used for a few hours over the weekend while the copy happened. Put that in your DevOps pipeline and smoke it!
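
To see why that kind of refresh could never be an everyday job, here is a back-of-the-envelope sketch; the dataset size, link speed and efficiency are my own illustrative assumptions, not figures from those projects:

```python
# Back-of-the-envelope: why a full-copy refresh only fits in a weekend window.
# All numbers here are illustrative assumptions, not figures from the post.

def transfer_hours(size_gb: float, link_gbps: float, efficiency: float = 0.5) -> float:
    """Hours needed to copy size_gb over a link_gbps link at a given efficiency."""
    size_bits = size_gb * 8 * 1e9
    effective_bps = link_gbps * 1e9 * efficiency
    return size_bits / effective_bps / 3600

# Refreshing UAT *and* DEV from an 800 GB production dataset = 1.6 TB moved,
# over a shared 1 Gbps link running at 50% effective throughput.
hours = transfer_hours(size_gb=1600, link_gbps=1, efficiency=0.5)
print(f"{hours:.1f} hours")  # about 7.1 hours, before any database quiesce time
```

And that is before adding the hours the database spends offline while the copy runs.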

Some storage vendors can work around part of the speed problem by mounting snapshots, but Actifio has a very interesting, slick and more comprehensive solution, which they presented at the recent Tech Field Day 11 event.

The DevOps capabilities of Actifio are part of a far bigger solution which they call Copy Data Virtualisation. I previewed the solution in my pre-event post: Tech Field Day 11 Preview: Actifio

Basically you can create multiple copies of data very quickly without creating as many physical copies of the data. These copies can be used for many things: backups, analytics, compliance, forensics, DR, migrations etc., as well as DevOps.
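
The trick of many cheap copies over one physical copy is essentially copy-on-write. A minimal sketch of the general idea (my illustration, not Actifio's actual implementation):

```python
# Copy-on-write "virtual copies": many usable copies, one set of physical blocks.
# My illustration of the general technique, not Actifio's actual implementation.

class GoldenCopy:
    """The single physical copy of the data, stored once."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block_id -> data

class VirtualCopy:
    """A near-instant 'copy': reads fall through to the golden copy,
    writes land in a private overlay, so the golden data is never touched."""
    def __init__(self, golden):
        self.golden = golden
        self.overlay = {}

    def read(self, block_id):
        return self.overlay.get(block_id, self.golden.blocks[block_id])

    def write(self, block_id, data):
        self.overlay[block_id] = data

prod = GoldenCopy({0: "orders", 1: "customers"})
dev = VirtualCopy(prod)   # instant: no bulk data movement
dev.write(1, "test-data")
print(dev.read(1), prod.blocks[1])  # test-data customers
```

Each extra dev, test or analytics copy costs only the blocks it changes, which is what makes the refresh near-instant.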

Categories: Storage, Tech Field Day, TFD11

VMworld EU 2015 Buzz: Office of the CTO Stand: vRDMA & Unifying Virtual and Physical Desktops with Synthetic Block Devices

October 30th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

vRDMA

I made a point, as I always do at VMworld, of heading to the VMware Office of the CTO booth to get a glimpse into the future. I spoke to Jorgen Hansen about vRDMA.

This is an interesting research project within VMware to enable a new class of distributed application in a virtual environment, providing very fast transport by bypassing much of the VMkernel and accessing memory on another host directly. Applications will be able to reserve VM memory via the hypervisor yet remain extremely scalable and fast; think HPC and financial trading.

Unifying Virtual and Physical Desktops with Synthetic Block Devices

Later on they also had a new research project to talk about: Unifying Virtual and Physical Desktops with Synthetic Block Devices. Rami Stern talked me through it. The idea is single-instance storage across the physical and virtual worlds: a single store for Mirage data as well as VMDKs. Users would be able to move OS data from physical to virtual with very little data transfer, very much linking the different technologies VMware has acquired. VSAN is again being looked at to do this: image-deduped storage for OS, file, VM, Mirage and Cloud Volumes data. Very interesting.

VMworld EU 2015 Buzz: The Future of Software-Defined Storage – What does it look like in 3 years time? – CTO6453

October 28th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

Richard McDougall, a Principal Engineer at VMware led this presentation peeking into the future.

This session was about futures and trends for storage hardware and next-generation distributed services: shared NVMe/PCIe at rack scale, flash densities, and whether magnetic storage still has a place.

Richard gave an interesting talk explaining the needs of Big Data/NoSQL etc. applications and their storage requirements, building up a graph with two axes: horizontal for size, from 10s of TBs to 10s of PBs, and vertical for IOPS, from 1,000 to 1,000,000.

He built up the picture showing where various memory and storage applications sit, then added the hardware and software platforms used to service those applications; it was a great visual aid.

He spent time going through how cloud-native applications and containers still have storage requirements. Some options copy the whole root tree, while Docker's approach is to clone using a union file system (aufs), rather like redo logs for VMDKs.

Containers still need files, not blocks, and need snapshots and clones. You need a non-persistent boot environment as well as somewhere to put persistent data. Shared volumes may be needed, as well as an object store for retention/archive.
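
The union file system approach can be sketched in a few lines: read-only image layers stacked under a writable per-container layer, with the topmost copy of a path winning. This is my simplified illustration of the aufs idea, not Docker's actual code:

```python
from collections import ChainMap

# A union file system in miniature: read-only image layers stacked under a
# writable per-container layer; lookups return the topmost copy of each path.
# My simplified illustration of the aufs idea, not Docker's actual code.

base_image = {"/bin/sh": "v1", "/etc/conf": "defaults"}  # read-only layer
app_layer  = {"/app/run": "app-code"}                    # read-only layer
container  = {}                                          # writable top layer

fs = ChainMap(container, app_layer, base_image)
print(fs["/etc/conf"])            # defaults (found in the base layer)

container["/etc/conf"] = "tuned"  # writes only ever touch the top layer
print(fs["/etc/conf"])            # tuned (top layer now shadows the base)
```

Booting a container is cheap because the shared layers are never copied; only the thin writable layer is per-container.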

Richard went on to talk about hardware and the massive increase in performance for NVDIMMs, getting closer to DRAM. Have a look at the comparison chart, which scales the latencies relative to travel time from California to Australia.
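
In the same spirit as that chart, here is a quick scaling exercise; the nanosecond figures are rough, commonly quoted orders of magnitude, not Richard's exact numbers:

```python
# Scaling storage latencies up to human travel times, in the spirit of the chart.
# The latency figures are rough orders of magnitude, not Richard's exact numbers.

latencies_ns = {
    "DRAM": 100,
    "NVDIMM": 300,
    "NVMe flash": 100_000,
    "Spinning disk": 10_000_000,
}

TRIP_HOURS = 15  # roughly California -> Australia by plane

# If a DRAM access took one full trip, how long would the others take?
for name, ns in latencies_ns.items():
    scaled = TRIP_HOURS * ns / latencies_ns["DRAM"]
    print(f"{name}: {scaled:,.0f} hours")
```

On this scale a spinning disk access is about 1.5 million hours, which is why NVDIMMs closing the gap to DRAM matters so much.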

He then went through some of the device interconnects and posited that NVMe will take over from most current interconnect methods; he was very positive about NVMe!

He mentioned how hard it is to actually build true scale out performant storage.

He mentioned a great use case for caching companies like PernixData: in the future they could be used to front-end things like S3 storage, so you have massive buckets in the cloud yet get very fast locally cached access. Interesting.

The dream is a single common storage platform with a single HCL: one software-defined storage platform for block, Ceph, MySQL, MongoDB, Hadoop etc. I think that's what VMware is trying to make VSAN do.

This is very difficult to achieve, but I can certainly see a future VSAN not too far away with native SMB and NFS access as well as persistent storage for containers running on the Photon Platform. This would give you the best of both worlds: stateless containers running natively, and stateful containers with their data stored locally within the container and, as they are VMs, replicated to other nodes in the VSAN cluster. Other services could access SMB and NFS file data natively on VSAN, which would also be replicated across the cluster and across sites for DR.

VMworld EU 2015 Buzz: Should I be Transitioning my Legacy Applications into CNA? – CNA6813-QT

October 27th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

The session was led by Chris Crafford, a Practice Manager at VMware.

This again was a high-level overview of the technologies available. It went through what microservices are, the 12-factor apps I mentioned in the lab I did, and why they are better for cloud environments. Microservices only manage the data they care about, are accessed only via the service, and there are no shared libraries.

Chris mentioned an interesting thing I hadn't thought of for the definition: microservices need to be automatically deployed to make them true microservices; it's not good enough to just have services that are micro.

Chris went through one of the major tenets of microservices, which is all about failure management: assume failure and have an architecture that mitigates the impact of faults, errors and failures at runtime.

Then Chris went on to talk about migrating legacy applications, which must be done as an evolutionary approach. Choose the most business-urgent piece to break out first, use containers for this new bit, and leverage best practices for CI/CD, automating all the steps. Learn and improve, then repeat for the next service that has been prioritised.

Another thing Chris mentioned was that some deployments use one microservice per container, but this makes management more challenging, so consider mapping a business role to a container instead.

The short session ended with a vCloud Air commercial; VMware, funnily enough, says it is the ideal target for migration of legacy applications, particularly with the recent announcements of layer 2 networking between your data center and vCloud Air, and container security with NSX.

The future of vCloud Air and how it will integrate with EMC's recent acquisition of Virtustream now becomes very interesting, as vCloud Air is being moved out of VMware's direct management and folded directly into Virtustream. Who knows what the future holds.

Virtualisation Field Day 4 Preview: CommVault

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas from 14th-16th January and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending, have a look at my introductory post: Virtualisation Field Day 4 Preview.

CommVault

Commvault is a data company (what backup companies also now call themselves) and has previously presented at Virtualization Field Day 3 and Tech Field Day 9.

Calling Commvault just a backup company is a little disparaging, as their software aims to do a lot more; they rather like to think of themselves as providing information management. Sure, backing up and restoring data is important, but there are many more reasons why you need to keep a copy of your data. You may need to keep an email archive for compliance reasons, journal instant messages from your traders for legal reasons so your lawyers have evidence to sift through, or securely store x-rays for a long period of time. Archives, journaling, backups, reporting and legal discovery, all rolled into one. It can suck in a whole bunch of stuff from endpoint laptops to mobile devices across physical, virtual, cloud, database, file, email, Unix, Mac and Windows. It has broad reach without the dreary and clunky legacy of TSM and NetBackup and, although not as sexy, simple or targeted as Veeam, can do a lot more.

Their product is called Simpana, and their trick is a single code base integrating backup and information management, so you only need to store one deduplicated copy to be able to do a whole lot with it. This data repository is called the Content Store. Obviously backups need multiple copies spread around for protection, and you can do that.
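
Single-instance, deduplicated storage of this kind can be sketched as chunking plus content addressing. This is my toy illustration of the concept, not Simpana's actual Content Store format:

```python
import hashlib

# Deduplicated single-instance storage in miniature: every ingest stream lands
# in one chunk pool keyed by content hash, so backup, archive and e-discovery
# share one physical copy. My illustration, not Simpana's on-disk format.

class ContentStore:
    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk data, stored once
        self.objects = {}  # object name -> ordered list of chunk digests

    def ingest(self, name, data, chunk_size=4):
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedupe: keep one copy
            refs.append(digest)
        self.objects[name] = refs

    def restore(self, name):
        return b"".join(self.chunks[d] for d in self.objects[name])

store = ContentStore()
store.ingest("backup-mon", b"ABCDABCD")
store.ingest("archive",    b"ABCD1234")
print(len(store.chunks))  # 2 unique chunks stored for 16 bytes ingested
```

The shared chunk pool is why one copy can serve backup, archive and discovery workloads at once.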

Virtualisation Field Day 4 Preview: StorMagic

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas from 14th-16th January and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending, have a look at my introductory post: Virtualisation Field Day 4 Preview.

 

StorMagic has an interesting product called SvSAN which is a SAN specifically designed for remote offices which require local IT infrastructure that can’t be delivered remotely. StorMagic has previously presented at Storage Field Day 6.

Many companies need to run critical applications at what StorMagic calls edge sites, yet still require high availability. Think retail with PoS everywhere, manufacturing with numerous distributed sites, oil rigs, ships; in fact any company with a distributed geographic footprint. SvSAN can be managed centrally at scale, typically with 10 to 10,000 edge sites.

Their software runs as a VSA on vSphere or Hyper-V using local disks and can be clustered with synchronous mirroring using as little as two hosts to provide shared storage to VMs giving them HA/vMotion. You can also use it with stretched clusters. It presents an iSCSI LUN to the hypervisor and can use SSD for cache and target it to particular workloads.

Centralised management is the cornerstone of StorMagic, which you would need at the scale they support. You can deploy SvSAN across multiple sites fairly easily and quickly, and the nodes can then continue to be managed centrally, so you don't need any local IT staff.

StorMagic doesn't look like it's going to take over the world, but it has a solid use case along with a market opportunity and is price competitive. I think it needs some sort of snapshotting and could benefit from a way to replicate data back to head office for backup with some clever deduping. I'm interested to hear what they have to say.

Gestalt IT is paying for travel, accommodation and things to eat to attend Virtualisation Field Day but aren’t paying a penny for me to write anything good or bad about anyone.

Virtualisation Field Day 4 Preview: VMTurbo

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas from 14th-16th January and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending, have a look at my introductory post: Virtualisation Field Day 4 Preview.

VMTurbo presented at the last Virtualisation Field Day 3, with an update at the VMworld SFO compact edition, so the Tech Field Day community knows what they are about.

VMTurbo has an application called Operations Manager (a bland name IMO). VM management is a very crowded market, even harder to penetrate when the platform vendors have their own offerings (VMware with vRealize Operations, previously vCOps, and Microsoft with SCOM).

VMTurbo differentiates itself with an interesting take: modelling your data center as an economic market. VMs need resources and can be thought of as buyers of what they need, be it CPU, RAM, IO, latency etc. Your infrastructure is the seller, offering up goods to satisfy the buyers. This means everything can be associated with a price, and the economic laws of supply and demand can set prices. As resources become more utilised and scarce, their price goes up, so VMs shop around for a better price where there is more supply capacity and therefore lower prices. This economic model allows VMTurbo to solve the problem of where to run VMs. It also translates directly into reporting on costs and benefits and an opportunity-cost framework that seems very interesting.
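
The market model can be sketched in a few lines; the pricing function and utilisation numbers below are my own toy assumptions, not VMTurbo's actual algorithm:

```python
# A toy version of the "data center as economic market" idea; my sketch,
# not VMTurbo's actual algorithm. Resource prices rise with utilisation,
# and a VM (the buyer) shops for the host (the seller) quoting the lowest price.

def price(utilisation: float) -> float:
    """Price rises steeply as a resource nears scarcity."""
    return 1.0 / max(1.0 - utilisation, 0.01)

# Hypothetical hosts with their current CPU/RAM utilisation.
hosts = {
    "host-a": {"cpu": 0.90, "ram": 0.85},  # hot host: scarce supply, high prices
    "host-b": {"cpu": 0.40, "ram": 0.50},  # plenty of capacity, cheap
}

def quote(resources: dict) -> float:
    """Total price a host charges for the resources a VM would consume."""
    return sum(price(u) for u in resources.values())

best = min(hosts, key=lambda h: quote(hosts[h]))
print(best)  # host-b: the buyer moves to where supply is plentiful
```

Placement then falls out of price comparison rather than hand-tuned thresholds, which is the appeal of the approach.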

Now, economics is incredibly complex; just ask the financial wizards who, despite thinking they knew everything, let the market crash beneath them.

Virtualisation Field Day 4 Preview: Scale Computing

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas from 14th-16th January and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending, have a look at my introductory post: Virtualisation Field Day 4 Preview.

 

Scale Computing has presented previously at Storage Field Day 5 and this is their first Virtualisation Field Day, but they must be keen to get their message out, as they've already signed up for Virtualisation Field Day 5.

Scale Computing is another member of the hyper-converged space, along with Nutanix, SimpliVity and VMware's EVO:RAIL. They have been shipping hyper-converged for as long as SimpliVity but are less well known. Their management's provenance is Avamar (now EMC), Double-Take, Seagate, Veritas and Corvigo.

The writing is on the wall that converged and hyper-converged will be the only way you purchase infrastructure in the future. Why waste time rolling your own? There is therefore plenty of opportunity for a massive market. Scale Computing started life as a scale-out storage platform and then added compute.

Scale Computing has a hyper-converged appliance called HC3 running on KVM, offering an alternative to the behemoths that are VMware and Microsoft. The HC3 name comes from Hyper-Converged 3 (1: servers, 2: storage, 3: virtualisation). Their marketing is all about reducing cost and simplifying virtualisation complexity; it is ideal for those who haven't adopted virtualisation due to cost and complexity, or who are looking for a new, lower-cost alternative. They generally target SMB-size workloads, but these can still grow fairly large.

VMworld 2014 US: VSAN Architecture Deep Dive #STO1279

August 27th, 2014

Quick notes while attending the VSAN deep dive by Christos Karamanolis, the architect of VSAN, and Christian Dickmann, one of the lead developers for VSAN.

They went into the technical details of some of the functional components and how VSAN decides to distribute data across the cluster to meet availability and performance requirements, and they showed some performance numbers.

VSAN key benefits: radically simple, high performance, lower TCO.

VMware increasingly sees vSphere admins also managing storage and VSAN is targeted at them.

VSAN delivers its performance with very low host CPU overhead

2M IOPS possible for 100% reads with 4 PB of disk; 640K IOPS with 70% read / 30% write

VSAN clusters are aligned to vSphere clusters for ease of management rather than due to a technical limitation

Storage policies VSAN can present:

  • object space reservation
  • number of failures to tolerate
  • number of disk stripes per object
  • flash read cache reservation
  • force provisioning
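
As a sketch of how two of those policies interact: failures to tolerate (FTT) drives the number of full replicas (FTT + 1 with mirroring), and stripes multiply the data components. The numbers below are my own illustration; real VSAN also places witness components, which I ignore here:

```python
# Sketch of how two VSAN policies shape an object's layout and footprint.
# replicas = FTT + 1 is standard mirroring behaviour; witness components and
# exact component counts in real VSAN vary, so treat this as illustrative only.

def object_layout(size_gb: float, failures_to_tolerate: int, stripes: int):
    replicas = failures_to_tolerate + 1  # full data copies across hosts
    components = replicas * stripes      # data components (witnesses excluded)
    raw_gb = size_gb * replicas          # capacity consumed across the cluster
    return replicas, components, raw_gb

replicas, components, raw = object_layout(size_gb=100, failures_to_tolerate=1, stripes=2)
print(replicas, components, raw)  # 2 4 200.0
```

So a 100 GB VMDK with FTT=1 and two stripes consumes 200 GB of raw capacity spread over four data components.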

Went through disk layouts and use of flash devices

VSAN asynchronously retires data from flash write buffer to HDD sequentially for performance

With a VSAN license you get the vSphere Distributed Switch even if you don't have Enterprise Plus licensing

VSAN is an object store, not a file store

The VM home directory object is formatted with VMFS to allow a VM's configuration files to be stored on it, mounted under the VSAN root directory; this is similar to VVols

Advantages of objects:

    • storage platform designed for SPBM
      • per VM per VMDK level of service
      • application gets exactly what it needs
    • high availability
      • per object quorum
    • better scalability
      • per VM locking, no issues as number of VMs grows
      • no global namespace translations

VSAN writes stay in the write buffer for as long as possible, as data often changes after the initial write, so it is kept in cache

Hosts load balance VSAN reads across replicas but always read a given block from the same replica to keep a single cache copy

VSAN remote cache read latency is negligible, as local SSD latency increases with more data anyway

VSAN supports in-memory local cache for very low latency, used with View Accelerator (CBRC)

VSAN has a scheduler that throttles replication traffic in the cluster but will always leave a little room so replication can at least continue

HA has been heavily modified to work with hyper-converged infrastructure and VSAN

VSAN gives users three options for maintenance mode:

  • ensure accessibility
  • full data migration
  • no data migration

VSAN monitoring and troubleshooting with:

  • vSphere UI
  • command line tools
  • Ruby vSphere Console
  • VSAN Observer

Categories: Storage, VMware

VMworld US 2014: The Day 2 Buzz

August 27th, 2014

Another VMworld run with an even bigger group and plenty to talk about.

An American-style breakfast; hey, there was fruit though!

General Session

The second general session, which is usually the more technical show-and-tell of the mass presentations, was led by VMware's CTO Ben Fathi, making his first VMworld keynote appearance. Wearing jeans and talking to the engineers in the audience, his job was to show some of the technology announced. He went through the story of businesses stuck in silos, battling the change from traditional apps to cloud-native apps. VMware wants to make it much easier to deploy all kinds of workloads, from your private data center using vCloud Suite to the public cloud with vCloud Air, with a common management framework and toolset covering both. Quite a bit of time was spent talking about the power of "and": you can use multiple things (hybrid cloud) rather than having to make a decision and being stuck with "or".

Categories: Cloud, EUC, Storage, VDI, VMware, VMworld