
Cloud Field Day 2 Preview: NetApp

July 21st, 2017

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I'm super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

NetApp is one of the granddaddies of the storage industry. I first encountered NetApp 16 years ago and was struck by how relatively easy it was to use and set up. Having unified management across all its devices made it simpler to learn than many competitors, and the knowledge stayed relevant as, in time, you upgraded to the bigger, better, faster, more option. I loved that NetApp championed the use of NFS for VMware early on, even though you often had to purchase an additional NFS license. I worked for a company that was one of the very first at-scale VMware-on-NFS-on-NetApp customers. LUNs are yuck (still), and NFS provided so many advantages over the clunky block-based alternatives of FC and iSCSI. NetApp was at the forefront of virtualisation and I was happy to see it soar.

In the decade since, it seems NetApp spent way too many cycles trying to integrate other things and ended up spinning its wheels while others caught up on ease of use and, in many cases, surpassed it in performance and flexibility. NetApp had snapshots, SnapMirror, and secondary storage before many others, and being able to move data around easily was very attractive; however, everyone else caught up. Arch-competitor EMC combined with Dell, while HPE split up and then bought Nimble and SimpliVity.

SolidFire

NetApp purchased SolidFire in 2016, a startup whose high-performance flash storage system, with strong API support, was making good inroads, particularly into service providers. NetApp wanted SolidFire to burn a new culture into a company that had stood still. Doubters wondered whether NetApp would extinguish SolidFire's passion rather than let it take over from within. SolidFire marketing has, from what I hear, pretty much taken over NetApp marketing. NetApp has reached out more to the developer community with its DevOps-focused, API-loving messaging and has also announced a SolidFire-backed hyper-converged appliance, so integration deeper than marketing is happening.

NetApp, like other storage vendors, is being assaulted from all sides. Virtualisation shops are moving to hyper-converged, which until now NetApp wasn't doing. Public cloud is eating their lunch. It doesn't matter how easy it is to provision NetApp storage: Amazon S3 or Azure Blob storage is an unlimited pool of storage only an API call away, paid for by use. Sure, it's currently disconnected from your on-prem workloads, but as more workloads move to the public cloud, the center of your data gravity starts to shift to public cloud piece by piece.
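To make "only an API call away" concrete, here is a minimal sketch: every S3 object is addressable by a plain HTTPS URL, and an upload is a single authenticated PUT. The bucket and key names are hypothetical; a real request also needs AWS Signature v4 headers, which an SDK such as boto3 adds for you.

```python
def s3_object_url(bucket: str, key: str) -> str:
    """Virtual-hosted-style S3 URL for a single object.
    A signed PUT to this URL uploads the object; a signed GET fetches it."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# With boto3 the equivalent upload is a one-liner (credentials assumed configured):
#   boto3.client("s3").put_object(Bucket="my-bucket", Key="backups/app.tar", Body=data)
print(s3_object_url("my-bucket", "backups/app.tar"))
```

That's the whole provisioning story from the consumer's side: no array, no aggregates, no LUNs, just a name and a request, which is the bar on-prem storage is now measured against.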

NetApp has a number of recent announcements which it hopes will ensure its relevance in a cloud-first world. It even has a new tagline: "the data authority for hybrid cloud".

Hyper-Converged, well maybe…

As mentioned, NetApp has unveiled its SolidFire-backed hyper-converged infrastructure (HCI) appliance, which it says is the "industry's first enterprise-scale hyper converged solution that delivers guaranteed performance, independent scaling, and Data Fabric integration". Putting three things into a single "industry first" sentence means technically only one has to be true, and of course no other storage product has Data Fabric, which is a NetApp technology. Many other HCI players will certainly argue that their products deliver guaranteed performance, and some do have independent scaling. It is due to be available in Q4 2017.

NetApp says its HCI differentiator is taking the SolidFire QoS tech that service providers like for multi-tenancy and applying it to HCI so you can guarantee performance. The usual automation, simpler management, replication, and data protection are included. NetApp also allows you to independently scale compute and storage resources, and we're going to need to get into definitions shortly. On one hand this can be seen as more flexibility, and it is where earlier HCI entrants have also evolved, letting you add disk without adding compute. However, NetApp HCI is markedly different, and there is an argument as to whether NetApp HCI is actually HCI. Currently there are four servers in 2U: two are compute servers with no local disks other than cards to boot ESXi, while the other two run SolidFire Element OS and don't do compute for VMs. So this very much looks like a standard server connected to a standard SAN, albeit within a single chassis and with integrated management. NetApp says HCI is actually all about ease of management, not necessarily squashing compute and storage together; many would disagree.
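The QoS tech being borrowed here is set per volume as minimum, maximum, and burst IOPS through SolidFire's Element JSON-RPC API; the minimum is the guaranteed floor that underpins the "guaranteed performance" claim. A minimal sketch of building such a request follows. The method and field names reflect my reading of the Element API, so treat them as an assumption and verify against your Element OS version.

```python
import json

def modify_volume_qos(volume_id: int, min_iops: int, max_iops: int, burst_iops: int) -> str:
    """Build a SolidFire Element JSON-RPC payload pinning a volume's QoS.
    minIOPS is the guaranteed floor per tenant volume; maxIOPS is the steady-state
    ceiling; burstIOPS allows short spikes above it. Field names are assumptions
    based on the Element API docs, not verified against a live cluster."""
    if not min_iops <= max_iops <= burst_iops:
        raise ValueError("expected minIOPS <= maxIOPS <= burstIOPS")
    return json.dumps({
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops, "burstIOPS": burst_iops},
        },
        "id": 1,
    })
```

Because every volume carries its own floor and ceiling, a noisy neighbour can't starve another tenant's volume, which is exactly the property NetApp wants to carry over from service-provider clouds into HCI.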

NetApp has partnered with Commvault and Veeam for backup and received a marketing blessing from Intel, MongoDB, and VMware, as it's currently a VMware-only hypervisor solution.

Cloudy Consumption

To offer more cloudy consumption models, NetApp is expanding its ONTAP Select software-defined storage options, which is NetApp software running on your own infrastructure. There's a new ROBO option for a 2-node HA configuration. You can also deploy ONTAP Select in IBM Bluemix, so you can have the same storage and SnapMirror between your on-prem NetApp arrays and your hosted IBM IaaS cloud. You can also rent NetApp kit on-prem or in the cloud and pay per use with NetApp OnDemand.

Another interesting use case NetApp has is the FabricPool feature, which allows you to tier data to the cloud. Keep your frequently accessed data on-prem on your fast flash devices and transparently move the stale data out to the public cloud, where the capacity is more flexible and could be cheaper, yet it will still look like part of the on-prem storage.

NetApp was one of the first companies to actually provide its storage tech in a cloud environment when it struck a deal with Amazon to run ONTAP on AWS, so you could seamlessly SnapMirror data across clouds to your heart's content. This has since been extended to Azure, so you have ONTAP Cloud, which is a NetApp appliance running in public clouds but managed, and SnapMirrored to and from, just as you do on-prem. NetApp's Data Fabric vision is to have NetApp storage wherever its customers have data, all managed in the same way.

Microsoft and NetApp Getting Cloudy

However, there may be something more NetApp is up to with Microsoft. Stephen Foskett, Tech Field Day chief shepherd, unpacked a recent announcement and had some more inside info.

It looks like Microsoft may be partnering with NetApp a little more closely than just allowing ONTAP VMs to run on Azure. Microsoft would want a better NFS server to offer Azure clients since it is all super friendly with the Linux types now. Rather than improving its own NFS server for Windows Server, Microsoft could be asking NetApp for help by in effect using ONTAP itself to provide NFS storage, although as a customer you won't likely realise it's NetApp tech behind the scenes. I wonder what's stopping Microsoft just writing this itself, though?

Microsoft may not just want a better NFS server but would like Azure beefed up with more of the enterprise features NetApp has been building for decades, such as replication and snapshots. This is an interesting take as, at scale, public cloud storage has generally been built for object storage, where you don't replicate and snapshot at the block or file level but at the object level. VM backing storage may still be block or file level, so it would be interesting to see what block- and file-level smarts from the likes of NetApp could be applied to Azure storage.

Azure Stack

Taking this a step further, Azure Stack starts to get interesting. Azure Stack is a very interesting strategy. Other vendors, such as VMware with vCloud Air, IBM SoftLayer (now Bluemix), or even the future VMware on AWS, have attempted to create a public cloud from private cloud technology. Microsoft is going the other direction: it is taking public cloud technology and making a private cloud version. This is an important distinction, as public clouds have done things very differently from private clouds. Management at scale is the obvious one, but public clouds are also very much service-based rather than IaaS-based. It's more about SQL as a Service or SharePoint as a Service, or for AWS, DynamoDB and Kinesis, providing a service platform rather than VMs in which you install stuff. With Azure SQL you don't manage your own version of SQL or have any idea what VMs are running; it's just a database as a service. Microsoft's approach with Azure Stack is bringing this service approach to private clouds, which will be huge.

However, private clouds have a far greater mix of current applications, which offers an interesting opportunity that Microsoft likely needs to tap into. Microsoft can't create a fully integrated on-prem version of public Azure without connecting to a lot of traditional data center IT. The opportunity is to partner with on-prem companies like NetApp to offer ONTAP as part of Azure Stack. You get the benefit of a private cloud that looks like a public cloud but integrates well with your traditional on-prem infrastructure. Microsoft gets a much easier way into your datacenter with Azure Stack if it can connect to what you already have. I would expect there would be many possible on-prem integrations for Microsoft to pursue: storage, networking, security, etc.

NetApp also has a new cloud boss, Joe CaraDonna, who's been in the job for six months. Dell+EMC and HPE are going through major changes as they try to reinvent themselves for cloud. This is Cloud Field Day, not Storage Field Day, so it will be interesting to hear whether NetApp, the old storage dog, has any new cloud tricks.

Gestalt IT is paying for travel, accommodation and things to eat to attend Cloud Field Day but isn’t paying a penny for me to write anything good or bad about anyone.
