Archive

Posts Tagged ‘storage’

Cloud Field Day 2 Preview: NetApp

July 21st, 2017 No comments

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

NetApp is one of the granddaddies of the storage industry. I first encountered NetApp 16 years ago and was struck by how relatively easy it was to use and set up. Having unified management across all its devices made it simpler to learn than many competitors, and the knowledge stayed relevant as, in time, you upgraded to the bigger, better, faster, more option. I loved that NetApp championed the use of NFS for VMware early on, even though you often had to purchase an additional NFS license. I worked for a company that was one of the very first VMware-on-NFS-on-NetApp-at-scale customers. LUNs are yuck (still), and NFS provided so many advantages over the clunky block-based alternatives of FC and iSCSI. NetApp was at the forefront of virtualisation and I was happy to see it soar.

In the decade since, it seems NetApp spent way too many cycles trying to integrate other things and ended up spinning its wheels while others caught up to its ease of use and in many cases surpassed it in performance and flexibility. NetApp had Snapshots, SnapMirror and secondary storage before many others, and being able to move data around easily was very attractive; however, everyone else caught up. Arch-competitor EMC combined with Dell, while HPE split from HP and then bought Nimble and SimpliVity.

SolidFire

Read more…

Categories: CFD2, Tech Field Day

Cloud Field Day 2 Preview: HPE Nimble Storage

July 21st, 2017 No comments

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

Nimble Storage is a storage company with lofty goals of “giving users the fastest, most reliable access to data – on-premise and in the cloud”. The “on-premise” is their wording, certainly not mine; it really should be “on-premises”, Nimble!

Nimble has an interesting Tech Field Day history, as it announced its original product, the CS200 hybrid array, at Tech Field Day 3 in Seattle in 2010. Fast forward to March 2017, when Nimble was purchased by HPE for just over $1 billion.

HPE Land

Read more…

Categories: CFD2, Cloud, Storage, Tech Field Day

Cloud Field Day 2 Preview: Rubrik

July 21st, 2017 No comments

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

What a journey Rubrik has had so far: a two-year-old company with ambitious plans to redefine that stodge of datacenter technologies, backup. Rubrik recently received a further $180 million in a Series D round at a $1.3 billion valuation. Yes, that’s a more-than-a-billion-dollar valuation for a company that does backup, wow! Rubrik says it has hardly dipped into its $61 million Series C round but is going for hyper-growth. It currently has several hundred enterprises as customers. Interestingly, in the Series D funding announcement Rubrik mentioned investing heavily in R&D with this money. It has already had eight product releases, the latest including a number of cloud features, so I would think sales and marketing is where the money will need to be spent to increase the customer count. A hyper-growth phase is normally less about R&D and more about knocking on the doors of prospective customers, so it will be interesting to hear the latest company plans.

All the Data

Read more…

Restoring DevOps to Infrastructure with Actifio

August 23rd, 2016 No comments

As enterprises integrate DevOps into more of their development lifecycles, they start to bump up against some of the practicalities of managing data. A major tenet of DevOps is being able to ship code quicker to give you an edge against your competitors. It may be fast to write code, and a continuous integration pipeline with continuous deployment capability lets you take that new code, test it and push it out to production in an automated and repeatable fashion.

DevOps and Data

Data, however, is often one of the speed bumps that cause all this fancy CI/CD to slow to a crawl. If your developers need to test their small change against a large chunk of data, you need to somehow have access to that data. Creating copies of databases or files is usually slow and inefficient, a time-consuming process that negates most of the speedy DevOps cleverness you’ve applied to your code writing.

I’ve worked on numerous projects where a robocopy/rsync job was run over the weekend to refresh hundreds of GBs of UAT and DEV environments from production data, in effect keeping three copies of production. This could only run at the weekend due to the size of the transfer and the impact on the underlying storage and network. One solution had to have the database down during the copy, which meant the production database couldn’t even be used for a few hours over the weekend while the copy happened. Put that in your DevOps pipeline and smoke it!
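For flavour, here is a minimal sketch of what one of those weekend refresh jobs amounted to. The paths, host and service names are hypothetical stand-ins; the real jobs were robocopy/rsync scripts, and this is merely the shape of them:

```python
# A minimal sketch of the weekend refresh described above. Paths, host
# and service names are hypothetical; the real jobs were robocopy/rsync.
import subprocess

SOURCE = "dbhost:/data/production/"      # production data export
TARGETS = ["/data/uat/", "/data/dev/"]   # two more full copies of prod

def run(cmd: list[str]) -> None:
    """Run a command and fail loudly so a broken refresh is noticed."""
    subprocess.run(cmd, check=True)

def weekend_refresh() -> None:
    # The database had to be stopped for a consistent copy, taking
    # production offline for the duration of the transfer.
    run(["ssh", "dbhost", "systemctl", "stop", "proddb"])
    try:
        for target in TARGETS:
            # --delete keeps the copy exact; hundreds of GB each pass.
            run(["rsync", "-a", "--delete", SOURCE, target])
    finally:
        run(["ssh", "dbhost", "systemctl", "start", "proddb"])

if __name__ == "__main__":
    weekend_refresh()
```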

Some storage vendors can work around some of the speed problem by mounting snapshots, but Actifio has a very interesting, slick and more comprehensive solution. Actifio presented at the recent Tech Field Day 11 event.

The DevOps capabilities of Actifio are part of a far bigger solution which they call Copy Data Virtualisation. I previewed the solution in my pre-event post: Tech Field Day 11 Preview: Actifio

Basically, you can create multiple copies of data very quickly without creating as many physical copies of the data. These copies can be used for many things: backups, analytics, compliance, forensics, DR, migrations etc. as well as DevOps.
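Actifio’s actual implementation is proprietary, but the underlying idea is close to copy-on-write: hand out many instantly available virtual copies that share one physical copy and record only their own changes. A toy concept sketch (mine, not Actifio’s code):

```python
# Toy illustration of many virtual copies sharing one physical copy
# (copy-on-write). A concept sketch only, not Actifio's implementation.

class GoldenImage:
    """The single physical copy of the data, stored once."""
    def __init__(self, blocks: dict[int, bytes]):
        self.blocks = blocks

class VirtualCopy:
    """A writable 'copy' that stores only the blocks it changes."""
    def __init__(self, base: GoldenImage):
        self.base = base
        self.overlay: dict[int, bytes] = {}  # changed blocks only

    def read(self, block_id: int) -> bytes:
        # Unchanged blocks fall through to the shared physical copy.
        return self.overlay.get(block_id, self.base.blocks[block_id])

    def write(self, block_id: int, data: bytes) -> None:
        self.overlay[block_id] = data  # no full physical copy needed

prod = GoldenImage({0: b"orders", 1: b"customers"})
dev, uat = VirtualCopy(prod), VirtualCopy(prod)   # instant "copies"
dev.write(1, b"masked-customers")
assert uat.read(1) == b"customers"  # other copies are unaffected
```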

Read more…

Categories: Storage, Tech Field Day, TFD11

Crowdsourcing Community Knowledge with CloudPhysics

August 22nd, 2016 No comments

CloudPhysics is a SaaS-based solution for sucking up all your on-premises vSphere metadata into its own data lake and performing any amount of analytics crunching on it.

The CloudPhysics offering is built upon a system of cards where you can correlate configuration and/or performance information to show you, for example, datastore utilisation or iSCSI LUNs.

One of the interesting aspects of CloudPhysics is how it can actively monitor the blogosphere to crowd-source knowledge to help its customers. There is a whole bunch of built-in cards which customers can use to report on their environments, but something I didn’t realise was that CloudPhysics can also monitor blogs for issues plaguing vSphere environments. If the investigation involves gathering data from your vSphere deployment, CloudPhysics likely has that data already.

At its recent Tech Field Day 11 presentation, CloudPhysics showed how information from fellow delegate Andreas Lesslhumer’s blog, about tracking down whether a vSphere Changed Block Tracking (CBT) bug that breaks backups affected you, was put to use. CloudPhysics was able to code the information Andreas wrote about into a new card which customers could then use to report on their own infrastructure, so much easier than writing the code to gather the information yourself.

This could be even more important if you are not even aware of the bug. CloudPhysics, or indeed any user, can scan the VMware Knowledge Base as well as many other blogs and write a card to tell you, for example, whether an issue affects the exact version of vSphere you are running on some or all of your hosts. Of course this wouldn’t apply to you if you were continually scanning all the official and community sites for every reported bug and were able to report on them all! Thought not? Well, CloudPhysics may have your back.
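In spirit, a card like this boils down to matching the inventory data CloudPhysics has already collected against a published known-affected list. A sketch of that matching logic, with made-up build numbers rather than the real CBT bug matrix:

```python
# A card like this effectively matches collected inventory against a
# published known-affected list. Build numbers here are made up and are
# not the real CBT bug matrix.

AFFECTED_BUILDS = {"6.0.0-2494585", "5.5.0-2068190"}  # hypothetical

hosts = [  # stand-in for metadata the CloudPhysics appliance collects
    {"name": "esx01", "build": "6.0.0-2494585"},
    {"name": "esx02", "build": "6.0.0-3620759"},
]

def affected_hosts(hosts: list[dict], affected: set[str]) -> list[str]:
    """Return the hosts whose build appears on the known-affected list."""
    return [h["name"] for h in hosts if h["build"] in affected]

print(affected_hosts(hosts, AFFECTED_BUILDS))  # ['esx01']
```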

I would have loved to have had this a few years ago when I spent ages correlating vSphere versions with HP/Broadcom/Emulex NIC drivers and firmware to track down the all-too-many issues that plagued HP Virtual Connect blade chassis networking at the time. I wrote a PowerCLI script which invoked PuTTY over SSH to connect to each ESXi host and gather the firmware version so I could check the support matrix; it was time-consuming and cumbersome. CloudPhysics would have made this so much easier. I could have used the Developer Edition to create my own cards far more quickly, and these could then have been made available to others by publishing them to the Card Store.
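For comparison, here is a rough Python equivalent of my PowerCLI-plus-PuTTY hack, SSHing to each host and scraping the NIC firmware version. Host names and credentials are placeholders, and the exact esxcli output line is an assumption:

```python
# A rough Python equivalent of the PowerCLI-plus-PuTTY hack: SSH to
# each host and scrape the NIC firmware version. Host names, credentials
# and the exact esxcli output line are assumptions.
import paramiko

HOSTS = ["esx01.example.com", "esx02.example.com"]  # placeholders

def nic_firmware(host: str, user: str, password: str) -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        # esxcli prints a "Firmware Version" line for the given NIC.
        _, stdout, _ = client.exec_command(
            "esxcli network nic get -n vmnic0")
        for line in stdout:
            if "Firmware Version" in line:
                return line.split(":", 1)[1].strip()
        return "unknown"
    finally:
        client.close()

for host in HOSTS:
    # Collect versions to check against the vendor support matrix.
    print(host, nic_firmware(host, "root", "example-password"))
```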

Read more…

Tech Field Day 11 Preview: CloudPhysics

June 9th, 2016 1 comment

Tech Field Day 11 is happening in Boston from 22-24 June, and I’m super happy to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: I’m heading to Tech Field Day 11 and DockerCon!

CloudPhysics

CloudPhysics is somewhat of a darling of the virtualisation ecosystem, founded by a number of ex-VMware brains. CloudPhysics previously presented at Virtualisation Field Day 3, two years ago.

It has a SaaS product for analysing on-premises VMware installations. This is hugely valuable: vSphere is powerful and can deliver fantastic performance, but because it touches compute, storage and networking, it can be difficult to see where performance or configuration issues lie.

CloudPhysics sucks up all your vSphere config and performance data via a small virtual appliance, sends the data to the cloud and crunches it to give you visibility across your entire infrastructure, so you can view reports, see config changes and check cluster performance. You can also look ahead using the product’s trending and predictive analysis. You can get going in 15 minutes and spend no money with the Free edition, or upgrade to the Premium edition, a yearly subscription, for more features.

The user interface is all based on cards, each one a mash-up of systems data and analytics. In the Free edition you can see things like inventory information, VM reservations and limits, snapshots and host resource commitment. If you start paying you get many more cards, including datastore space, cluster health, unused VMs, orphaned VM files, I/O contention, a helpful knowledge-base advisor to match KB articles to your infrastructure, and some cost comparison calculators for vCloud Air and Azure. As it’s a SaaS platform, the cards are continually being updated and new ones appear fairly regularly. You can also create your own.

Being able to spot bad configurations and unauthorised changes is so useful and if you can correlate a performance change to a configuration change that can save hours of needless investigation.

It’s strange to say, but you really shouldn’t need any of this. I wish vCenter were able to give you all this information in an easily digestible format, but it doesn’t, so CloudPhysics is great. Who knows, if VMware ever gets to vCenter as a Service, whether analytics like this will be part of the future roadmap?

CloudPhysics has always had the VM analytics but has recently been fleshing out its host and cluster exploration capabilities, so it can better show the relationships between VMs, noisy neighbours for example. It will be interesting to hear what’s new.

Partner Edition

Read more…

Categories: TFD11

ZeroStack’s full stack from Infrastructure to Application

January 13th, 2016 No comments

ZeroStack is a recently-out-of-stealth company providing a cloud-managed hyper-converged appliance running OpenStack. It is targeting private cloud customers who want to stand up their own OpenStack instances but don’t want the hassle of getting it all working themselves. What ZeroStack also does, uniquely, is combine this infrastructure part with application deployment, which for me is the exciting bit.

It is early days for the company, but it has seasoned financial backers, advisers and founders, and after just a year it has an impressive amount of functionality in its product.

Private Cloud

The use case is companies wanting to replicate the ease of public cloud but as a private cloud. Amazon’s AWS and Microsoft’s Azure make spinning up VMs or even direct application instances easy and allow you to pay per use. It’s all about lowering the admin overhead of deployment and moving to an IT consumption model.

This is all great, but companies at the moment need to replicate this functionality in-house if they would like to build out a private cloud. They may need data kept on-premises due to perceived security concerns or even a legal requirement for data to be held in a particular location. There may be more practical concerns, like the amount of data to be stored or analysed making it impractical to move externally. Cost can also be an issue, with scare stories of AWS bills racking up quickly, although I do find companies are very poor at working out their own internal data center costs, so comparisons are not necessarily accurate.
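A back-of-the-envelope comparison shows how easily these sums go wrong: staff and facilities costs are routinely forgotten on the internal side. Every number below is illustrative only, not real pricing:

```python
# Back-of-the-envelope cost comparison of the kind companies rarely do
# well. Every number below is illustrative only, not real pricing.

servers = 20
server_cost = 8_000            # purchase price, 3-year depreciation
power_cooling_per_year = 15_000
admin_cost_per_year = 90_000   # the staff cost that is often forgotten

on_prem_per_year = (
    servers * server_cost / 3
    + power_cooling_per_year
    + admin_cost_per_year
)

vm_hours = servers * 24 * 365     # equivalent always-on VM usage
cloud_per_year = vm_hours * 0.50  # assumed rate per VM-hour

print(f"on-prem ~ ${on_prem_per_year:,.0f}/yr vs cloud ~ ${cloud_per_year:,.0f}/yr")
```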

The point where deployment happens is also shifting away from infrastructure support teams to application support teams, and further along to applications themselves managing their own infrastructure resources via API calls to a cloud, spinning up new VMs with automated deployment and scaling of applications.
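Since ZeroStack builds on OpenStack, this is the sort of call an application could make to spin up its own VM via the Compute (Nova) API. The endpoint, token and IDs below are placeholders, not real values:

```python
# The sort of API call an application could make to provision its own
# VM. Shown against OpenStack's Compute (Nova) API since ZeroStack
# builds on OpenStack; endpoint, token and IDs are placeholders.
import requests

NOVA = "https://cloud.example.com:8774/v2.1"  # hypothetical endpoint
TOKEN = "placeholder-keystone-token"          # from Keystone auth

payload = {
    "server": {
        "name": "app-worker-01",
        "imageRef": "IMAGE-UUID-PLACEHOLDER",
        "flavorRef": "FLAVOR-UUID-PLACEHOLDER",
    }
}

resp = requests.post(
    f"{NOVA}/servers",
    json=payload,
    headers={"X-Auth-Token": TOKEN},
)
resp.raise_for_status()
print("building server:", resp.json()["server"]["id"])
```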

Suffice to say, companies want to replicate public cloud functionality internally to give applications the resources they require. Current software options are generally VMware, which is feature-rich with excellent infrastructure resiliency and a cost model to match, or OpenStack, which is open source and deliberately less resilient at the infrastructure layer but doesn’t carry vendor license costs.

ZeroStack uses the tagline “Public Cloud Experience, Private Cloud Control” and, as I see it, is attempting to give its customers four key things:

1. Hardware: Hyper-Converged Appliance

Read more…

FalconStor’s rebirth with FreeStor

December 7th, 2015 No comments

In my preview post before attending, Virtualisation Field Day 6 Preview: FalconStor, I raised my concern that FalconStor was “yet another storage company”. I thought it would be useful to detail what I learned during its Virtualisation Field Day presentation as well as from speaking to other delegates.

Rebirth

FalconStor as a company seems to have had the rebirth it sorely needed after legal issues and the tragic loss of a CEO four years ago started to sink the ship. FalconStor then bled cash for a while and lost another CEO before current boss Gary Quinn took the helm. Current management, as expected, takes pains to distance itself from the dark times and is passionate about the company’s future, believing it has what it takes to succeed.

I’ve also learned FalconStor previously didn’t have the best reputation for code quality, leading to products with less-than-stellar stability. Apparently this has been rectified with a new team, which managed to ink a lucrative partnership with Violin Memory to provide data services software for the lacking Violin arrays. Violin is in the business of high-performing storage, so this must have been a winning partnership for FalconStor, as it could learn all about high-performing flash as part of the deal. Unfortunately it seems this buddying-up dissolved a year or so ago, and there doesn’t seem to be much information on why. I get the impression FalconStor wanted to continue but Violin didn’t, so hopefully FalconStor received enough of what it needed to improve, speed up and modernise its codebase. Violin is going through its own issues, including a tanking stock price, yet FalconStor hasn’t been dragged down with it, so the market seems to view Violin as overvalued while retaining some faith in FalconStor. More recent OEM deals are being done with X-IO Technologies, Kaminario and Huawei, so FalconStor software seems to be in demand.

FreeStor

Read more…

Virtualisation Field Day 6 Preview: FalconStor

November 10th, 2015 No comments

Updated on 11/11/2015 with some changes based on additional information.

Virtualisation Field Day 6 is happening in Silicon Valley, California from 18-20 November, and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: I’m heading to Virtualisation Field Day 6.

FalconStor is a company I’ve heard of over the years (it’s been going for 15 years) but haven’t had any direct experience with its products previously. It seems to have had a chequered history, with fines for paying bribes and then covering them up in its books, but that was a few years ago, so I’m sure FalconStor is putting that behind them!

FreeStor

FalconStor has recently released a brand new product called FreeStor. Don’t get too carried away: it’s not a free product in terms of price (more on that later) but rather free as in freedom. FreeStor is a product to build a distributed storage resource pool across almost any type of underlying storage. It’s basically virtualised storage using FalconStor’s “Intelligent Abstraction” core, so you can easily move, protect and dedupe data on or off cloud without being reliant on any particular hardware, network or protocol. This means you can freely choose the right storage at the right price and have FreeStor manage and protect it all.

This virtualised platform then allows you to seamlessly move workloads across different underlying storage. There is WAN-optimised, space-efficient replication, and everything is globally deduplicated.
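Global dedupe generally means storing each unique chunk once, keyed by its content hash, and shipping only references over the WAN. A toy concept sketch of the idea (mine, not FreeStor code):

```python
# Toy illustration of global deduplication: store each unique chunk
# once, keyed by its content hash. A concept sketch, not FreeStor code.
import hashlib

class DedupeStore:
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}  # hash -> single stored copy

    def write(self, data: bytes) -> list[str]:
        """Split data into chunks, storing only chunks never seen before."""
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedupe happens here
            refs.append(digest)
        return refs  # ship references over the WAN, not the raw data

    def read(self, refs: list[str]) -> bytes:
        return b"".join(self.chunks[r] for r in refs)

store = DedupeStore()
first = store.write(b"same payload " * 1000)
second = store.write(b"same payload " * 1000)  # adds no new chunks
assert store.read(first) == store.read(second)
```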

Read more…

VMworld EU 2015 Buzz: Office of the CTO Stand: vRDMA & Unifying Virtual and Physical Desktops with Synthetic Block Devices

October 30th, 2015 No comments

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

vRDMA

I made a point, as I always do at VMworld, of heading to the VMware Office of the CTO booth to get a glimpse into the future. I spoke to Jorgen Hansen about vRDMA.

This is an interesting research project within VMware to create a new class of distributed application in a virtual environment by allowing very fast transport, bypassing much of the VMware kernel and accessing memory on another host. This will allow applications to reserve VM memory via the hypervisor yet be extremely scalable and fast; think HPC and financial trading.

Unifying Virtual and Physical Desktops with Synthetic Block Devices

Later on, they also had a new research project to talk about: Unifying Virtual and Physical Desktops with Synthetic Block Devices. Rami Stern talked me through it. It is all about having single-instance storage across the physical and virtual worlds: a single store for Mirage data as well as VMDKs. Users would be able to move OS data from physical to virtual with very little data transfer, very much linking the different technologies VMware has acquired. Again, VSAN is being looked at to do this: deduplicated image storage for OS + file + VM + Mirage + Cloud Volumes data. Again, very interesting.
