Archive for the ‘Storage’ Category

Cloud Field Day 2 Preview: HPE Nimble Storage

July 21st, 2017

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

Nimble Storage is a storage company with the lofty goal of “giving users the fastest, most reliable access to data – on-premise and in the cloud”. “On-premise” is their wording, certainly not mine; it really should be “premises”, Nimble!

Nimble has an interesting Tech Field Day history: it announced its original product, the CS200 hybrid array, at Tech Field Day 3 in Seattle in 2010. Fast forward to March 2017, when Nimble was acquired by HPE for just over $1 billion.

HPE Land


Categories: CFD2, Cloud, Storage, Tech Field Day

Cloud Field Day 2 Preview: Rubrik

July 21st, 2017

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

What a journey Rubrik has had so far: a two-year-old company with ambitious plans to redefine that stodgiest of datacenter technologies, backup. Rubrik recently received a further $180 million in a Series D round at a $1.3 billion valuation. Yes, that’s a more-than-billion-dollar valuation for a company that does backup, wow! Rubrik says it has hardly dipped into its $61 million Series C round but is going for hyper growth, and it currently has several hundred enterprises as customers. Interestingly, in the Series D funding announcement Rubrik mentioned investing heavily in R&D with this money. They’ve already had eight product releases, the latest including a number of cloud features, so I would think sales and marketing is where the money will need to be spent to increase customers. A hyper-growth phase is normally less about R&D and more about knocking on the doors of prospective customers, so it will be interesting to hear the latest company plans.

All the Data


Cloud Field Day 2 Preview: Scality

July 20th, 2017

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

Scality has been a previous Tech Field Day presenter.

Scality is one of the new storage companies leveraging the AWS S3 storage API standard to create new enterprise storage options beyond your typical block and file store. S3 is object storage, which is all about scale: built to store billions of objects or massive petabyte-sized files or stores.

Scality already provides an open source implementation of the AWS S3 API called Scality S3 Server. Interestingly, it is packaged as a Docker container, so it can leverage the benefits of Docker, such as using the same deployment mechanism from a developer’s laptop through to production, and scaling out via Docker Swarm.

Scality RING is the enterprise-friendly version of S3 Server for more critical workloads, with the usual enterprise feature requirements of security, support, availability, etc.

AWS S3 is all well and good, but some enterprises aren’t willing to store everything in a public cloud. There may be (often unfounded) security concerns or more valid concerns about bandwidth usage, data gravity and cost. If you have PBs of on-prem storage for your media files, x-rays, satellite images, etc., you would love the ease of use of the S3 API, but accessed locally. Scality can provide this S3 API on-prem, as well as the replicated, highly available storage infrastructure running on standard x86 underneath. Having S3 locally also allows your developers to test functionality locally for things that may eventually access AWS S3.
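
To make the local testing idea concrete, here’s a minimal sketch using boto3; the endpoint URL and credentials are illustrative placeholders, not necessarily what your S3 Server deployment would use:

    # Sketch: point standard AWS tooling at a local S3-compatible endpoint.
    # Endpoint and credentials below are assumed placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:8000",   # assumed local S3 Server port
        aws_access_key_id="accessKey1",         # placeholder credentials
        aws_secret_access_key="verySecretKey1",
    )

    s3.create_bucket(Bucket="xray-archive")
    s3.put_object(Bucket="xray-archive", Key="scan-001.dcm", Body=b"...")
    print(s3.list_objects_v2(Bucket="xray-archive")["KeyCount"])

The nice part is that the code is identical whether it talks to real AWS S3 or the local store; only the endpoint changes.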

Zenko

Scality has now announced Zenko, an open source multi-cloud controller, and this is what I expect we’ll hear more about at Cloud Field Day.


Categories: AWS, CFD2, Cloud, Storage, Tech Field Day

Restoring DevOps to Infrastructure with Actifio

August 23rd, 2016

As enterprises integrate DevOps into more of their development lifecycles, they start to bump up against some of the practicalities of managing data. A major tenet of DevOps is being able to ship code quicker to give you that edge against your competitors. It may be fast to write code, and a continuous integration pipeline plus a continuous deployment capability allows you to take that new code, test it and push it out to production in an automated and repeatable fashion.

DevOps and Data

Data, however, is often one of the speed bumps that causes all this fancy CI/CD to slow to a crawl. If your developers need to test their small change against a large chunk of data, you need to somehow have access to this data. Creating copies of databases or files is usually slow and inefficient, a time-consuming process that negates most of the speedy DevOps cleverness you’ve done for your code writing.

I’ve worked on numerous projects where a robocopy/rsync was run weekly over the weekend to refresh hundreds of GBs of UAT and DEV environments from production data, in effect keeping three copies of production. This could only run at the weekend due to the size of the transfer and the impact on the underlying storage and network. One solution had to have the database down during the copy, which meant the production one couldn’t even be used for a few hours over the weekend while the copy happened. Put that in your DevOps pipeline and smoke it!
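
For illustration, that weekend job was essentially this blunt full copy (the paths here are hypothetical), which is exactly what saturates the storage and network for hours:

    # Sketch of a naive weekend refresh: full rsync of production data
    # into UAT and DEV. Paths are hypothetical.
    import subprocess

    SOURCE = "/mnt/prod/dbdata/"
    TARGETS = ["/mnt/uat/dbdata/", "/mnt/dev/dbdata/"]

    for target in TARGETS:
        # --delete keeps each copy exact; on hundreds of GBs this hammers
        # the array and the network, hence the weekend-only window.
        subprocess.run(["rsync", "-a", "--delete", SOURCE, target], check=True)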

Some storage vendors are able to work around some of the speed problems by mounting snapshots, but Actifio has a very interesting, slick and more comprehensive solution. Actifio presented at the recent Tech Field Day 11 event.

The DevOps capabilities of Actifio are part of a far bigger solution which they call Copy Data Virtualisation. I previewed the solution in my pre-event post: Tech Field Day 11 Preview: Actifio

Basically, you can create multiple copies of data very quickly without creating as many physical copies of the data. These copies can be used for multiple things: backups, analytics, compliance, forensics, DR, migrations, etc., as well as DevOps.


Categories: Storage, Tech Field Day, TFD11

VMworld EU 2015 Buzz: Office of the CTO Stand: vRDMA & Unifying Virtual and Physical Desktops with Synthetic Block Devices

October 30th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

vRDMA

I made a point, as I always do at VMworld, to head to the VMware Office of the CTO booth to have a glimpse into the future. I spoke to Jorgen Hansen about vRDMA.

This is an interesting research project within VMware to create a new class of distributed application in a virtual environment, enabling very fast transport by bypassing much of the VMware kernel and accessing memory on another host. This will allow applications to reserve VM memory via the hypervisor yet be extremely scalable and fast; think HPC and financial trading.

Unifying Virtual and Physical Desktops with Synthetic Block Devices

Later on they also had a new research project to talk about: Unifying Virtual and Physical Desktops with Synthetic Block Devices. Rami Stern talked me through it; it is all about having single-instance storage across the physical and virtual worlds, so a single store for Mirage data as well as VMDKs. Users would be able to move OS data from physical to virtual with very little data transfer, very much linking the different technologies VMware has acquired. Again VSAN is being looked at to do this: image-deduped storage for OS + File + VM + Mirage + Cloud Volumes data. Very interesting.


VMworld EU 2015 Buzz: The Future of Software-Defined Storage – What does it look like in 3 years time? – CTO6453

October 28th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

Richard McDougall, a Principal Engineer at VMware, led this presentation peeking into the future.

This session was about futures and trends for storage hardware and next-gen distributed services: shared NVMe/PCIe rack scale, flash densities, and whether magnetic storage still has a place.

Richard gave an interesting talk explaining the needs of Big Data/NoSQL etc. applications and their storage requirements, building up a graph with two axes: horizontal for size, from 10s of TBs to 10s of PBs, and vertical for IOPS, from 1,000 to 1,000,000.

He built up the picture showing where various memory and storage applications sit, and then added which hardware/software platforms are used to service these applications; it was a great visual aid.

He spent time going through how cloud native applications and containers still have a storage requirement, with some options copying the whole root tree and Docker’s approach of cloning via a union file system (aufs), rather like redo logs for VMDKs.

Containers still need files, not blocks, and they need snapshots and clones. You need a non-persistent boot environment as well as somewhere to put persistent data. Shared volumes may be needed, as well as an object store for retention/archive.
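
As a rough sketch of that split, here’s what it looks like with the docker-py SDK: a disposable container for the boot environment plus a named volume for the persistent data. The image, mount path and password are just examples:

    # Sketch: disposable container filesystem + persistent named volume.
    import docker

    client = docker.from_env()
    client.volumes.create(name="appdata")  # the persistent bit

    client.containers.run(
        "postgres:13",                     # illustrative image
        detach=True,
        auto_remove=True,                  # the container itself is disposable
        environment={"POSTGRES_PASSWORD": "example"},  # required by this image
        volumes={"appdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )

Kill and recreate the container as often as you like; the data in the appdata volume survives.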

Richard went on to talk about hardware and the massive increase in performance for NVDIMMs, getting closer to DRAM. He showed a comparison chart scaling device latencies to travel time from California to Australia.
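
You can redo the back-of-the-envelope maths yourself; the latency figures below are rough orders of magnitude from memory, not the exact numbers on the slide:

    # Scale latencies so one DRAM access = 1 "minute" of travel and see
    # how far away each tier feels. Figures are rough illustrations.
    LATENCY_NS = {
        "DRAM": 100,                 # ~100 ns
        "NVDIMM": 500,               # closing in on DRAM
        "NVMe flash": 100_000,       # ~100 us
        "Spinning disk": 10_000_000, # ~10 ms
    }

    for device, ns in LATENCY_NS.items():
        minutes = ns / 100           # one DRAM access scaled to one minute
        print(f"{device:>13}: {minutes:>9,.0f} 'minutes' away")

On that scale flash is a long-haul flight and spinning disk is more than two months away, which is exactly the point the chart was making.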

He then went through some of the device interconnects and posited that NVMe will take over from most current interconnect methods; he was very positive about NVMe!

He mentioned how hard it is to actually build true scale-out, performant storage.

He mentioned a great use case for caching companies like PernixData: in the future they could be used to front-end things like S3 storage, so you have massive buckets in the cloud yet get very fast locally cached access. Interesting.

The dream is a single common storage platform that can be used with a single HCL and a common software-defined storage platform for block, Ceph, MySQL, MongoDB, Hadoop, etc. I think that’s what VMware is trying to make VSAN do.

This is very difficult to achieve, but I certainly see a future VSAN not too far away with native SMB and NFS access as well as persistent storage for containers running on the Photon Platform. This would give you the best of both worlds: stateless containers running natively, as well as stateful containers with their data stored locally within the container and, as they are VMs, replicated to other nodes in the VSAN cluster. Other services could access SMB and NFS for file data natively on VSAN, which would also be replicated across the cluster and across sites for DR.

VMworld EU 2015 Buzz: Should I be Transitioning my Legacy Applications into CNA? – CNA6813-QT

October 27th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

The session was led by Chris Crafford, a Practice Manager at VMware.

This again was a high-level overview of the technologies available and went through what microservices are, the 12-factor apps I mentioned in the lab I did, and why they are better for cloud environments. Microservices only manage the data they care about, are accessed only via the service, and there are no shared libraries.

Chris mentioned an interesting thing I hadn’t thought of for the definition: microservices need to be automatically deployed to make them true microservices; it’s not good enough to just have services that are micro.

Chris went through one of the major tenets of microservices, which is all about failure management: assume failure and have an architecture that mitigates the impact of faults, errors and failures at runtime.
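
In code, the simplest version of that tenet is a bounded retry with a timeout and a fallback, so a failing dependency degrades the caller rather than breaking it. A minimal sketch, with a hypothetical service endpoint:

    # Sketch: assume the downstream service will fail; fail fast, retry a
    # bounded number of times, then degrade gracefully.
    import time
    import requests

    def get_recommendations(user_id, retries=3):
        for attempt in range(retries):
            try:
                r = requests.get(
                    f"http://recs-svc/users/{user_id}",  # hypothetical endpoint
                    timeout=0.5,                         # fail fast
                )
                r.raise_for_status()
                return r.json()
            except requests.RequestException:
                time.sleep(0.1 * 2 ** attempt)           # exponential backoff
        return []  # degraded but still functional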

Then Chris went on to talk about migrating legacy applications, which must be done as an evolutionary process. Choose the most business-urgent service to break out first. Use containers for this new piece and leverage best practices for CI/CD, automating all the steps. Learn and improve, then repeat for the next service that has been prioritised.

Another thing Chris mentioned: some deployments use one microservice per container, but this makes management more challenging, so consider mapping a business role to a container instead.

The short session ended with a vCloud Air commercial; funnily enough, VMware says it is the ideal target for the migration of legacy applications, particularly with the recent announcements of layer 2 networking between your data center and vCloud Air, and container security with NSX.

The future of vCloud Air and how it will integrate with EMC’s recent acquisition of Virtustream now becomes very interesting, as vCloud Air is being moved out of VMware’s direct management and folded directly into Virtustream. Who knows what the future holds.

Virtualisation Field Day 4 Preview: CommVault

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas, from 14th-16th January, and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: Virtualisation Field Day 4 Preview.

CommVault

Commvault is a data company (what backup companies also now call themselves) and has previously presented at Virtualization Field Day 3 and Tech Field Day 9.

Calling Commvault just a backup company is a little disparaging, as their software aims to do a lot more, and they rather like to think of themselves as providing information management. Sure, backing up and restoring data is important, but there are a lot more reasons why you need to keep a copy of your data. You may need to keep an email archive for compliance reasons, journal instant messages from your traders for legal reasons so your lawyers have evidence to sift through, or securely store x-rays for a long period of time. Archives, journaling, backups, reporting and legal discovery, all rolled into one. It can suck in a whole bunch of stuff, from endpoint laptops to mobile devices, across physical, virtual, cloud, database, file, email, Unix, Mac and Windows. It has broad reach without the dreary and clunky legacy of TSM and NetBackup, and although not as sexy, simple or targeted as Veeam, it can do a lot more.

Their product is called Simpana, and their trick is a single code base integrating backup and information management, so you only need to store one deduplicated copy to be able to do a whole lot with it. This data repository is called the Content Store. Obviously backups need multiple copies spread around for protection, and you can do that.
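
A toy illustration of why one deduplicated copy is so reusable (this is my sketch of the general technique, not Commvault’s actual implementation): chunks are stored once, keyed by content hash, no matter how many backups reference them.

    # Toy content-hash dedup: identical chunks are stored exactly once.
    import hashlib

    store = {}    # chunk hash -> chunk bytes
    catalog = {}  # backup name -> ordered list of chunk hashes

    def ingest(name, data, chunk_size=4096):
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # stored once, however often seen
            refs.append(digest)
        catalog[name] = refs

    ingest("monday", b"A" * 8192)
    ingest("tuesday", b"A" * 8192)  # same data again: no new chunks stored
    print(len(store))               # -> 1 unique chunk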


Virtualisation Field Day 4 Preview: StorMagic

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas, from 14th-16th January, and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: Virtualisation Field Day 4 Preview.

StorMagic has an interesting product called SvSAN, a SAN specifically designed for remote offices which require local IT infrastructure that can’t be delivered remotely. StorMagic has previously presented at Storage Field Day 6.

Many companies need to run critical applications at what StorMagic calls edge sites yet still require high availability. Think retail with PoS everywhere, manufacturing with numerous distributed sites, oil rigs, ships; in fact, any company with a distributed geographic footprint. SvSAN can be managed centrally at scale, typically with 10 to 10,000 edge sites.

Their software runs as a VSA on vSphere or Hyper-V using local disks and can be clustered with synchronous mirroring, using as little as two hosts to provide shared storage to VMs, giving them HA/vMotion. You can also use it with stretched clusters. It presents an iSCSI LUN to the hypervisor and can use SSD as a cache targeted at particular workloads.
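
The ordering guarantee behind synchronous mirroring is simple to show in a toy sketch (SvSAN does this at the iSCSI block layer; this just illustrates the principle): a write is only acknowledged once both nodes have it, so either host can die without losing acknowledged data.

    # Toy synchronous mirror: ack only after both replicas are written.
    class MirroredVolume:
        def __init__(self, node_a, node_b):
            self.nodes = [node_a, node_b]

        def write(self, block_id, data):
            for node in self.nodes:   # both copies land before the ack
                node[block_id] = data
            return "ack"              # caller sees success only now

    primary, secondary = {}, {}
    vol = MirroredVolume(primary, secondary)
    vol.write(42, b"payload")
    assert primary[42] == secondary[42]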

Centralised management is the cornerstone of StorMagic, which you would need at the scale they support. You can deploy SvSAN across multiple sites fairly easily and quickly, and the nodes can then continue to be managed centrally, so you don’t need any local IT staff.

StorMagic doesn’t look like it’s going to take over the world, but it has a solid use case along with a market opportunity, and it is price competitive. I think it needs some sort of snapshotting and could benefit from a way to replicate data back to head office for backup with some clever deduping. I’m interested to hear what they have to say.

Gestalt IT is paying for travel, accommodation and things to eat to attend Virtualisation Field Day but aren’t paying a penny for me to write anything good or bad about anyone.

Virtualisation Field Day 4 Preview: VMTurbo

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas, from 14th-16th January, and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: Virtualisation Field Day 4 Preview.

VMTurbo presented at the last Virtualisation Field Day 3, with an update at the VMworld SFO compact edition, so the Tech Field Day community knows what they are about.

VMTurbo has an application called Operations Manager (a bland name, IMO). VM management is a very crowded market, even harder to penetrate when vendors have their own offerings (VMware with vRealize Ops, previously vCOps, and Microsoft with SCOM).

VMTurbo differentiates itself with an interesting take, modelling your data center as an economic market. VMs need resources and can be thought of as buyers of what they need, be it CPU, RAM, IO, latency, etc. Your infrastructure is the seller, offering up goods to satisfy the buyers. This means everything can be associated with a price, and VMTurbo can use the economic laws of supply and demand to set prices. As resources become more utilised and scarce, their price goes up for the VMs, so they should shop around for a better price where there is more supply capacity and therefore lower prices. This economic model allows VMTurbo to solve the problem of where to run VMs. It also translates directly into reporting on costs/benefits and an opportunity-cost framework that seems very interesting.
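
A toy version of the market model makes the idea obvious (the pricing formula and numbers are my illustration, not VMTurbo’s actual algorithm): each host prices CPU by scarcity, and a VM shops for the cheapest host that can fit it.

    # Toy supply-and-demand placement: scarcer resources cost more.
    def price(used, capacity):
        utilisation = used / capacity
        return 1.0 / max(1.0 - utilisation, 0.01)  # scarcer -> pricier

    hosts = {
        "host-a": {"cpu_used": 28, "cpu_cap": 32},  # nearly full, expensive
        "host-b": {"cpu_used": 8,  "cpu_cap": 32},  # plenty of supply, cheap
    }

    def cheapest_host(cpu_demand):
        quotes = {
            name: price(h["cpu_used"] + cpu_demand, h["cpu_cap"])
            for name, h in hosts.items()
            if h["cpu_used"] + cpu_demand <= h["cpu_cap"]
        }
        return min(quotes, key=quotes.get)

    print(cheapest_host(cpu_demand=2))  # -> host-b, where supply is plentiful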

Now, economics is incredibly complex; just ask the financial wizards who, despite thinking they knew everything, let the market crash beneath them.
