Archive

Posts Tagged ‘storage’

Restoring DevOps to Infrastructure with Actifio

August 23rd, 2016

As enterprises integrate DevOps into more of their development lifecycles they start to bump up against some of the practicalities of managing data. A major tenet of DevOps is being able to ship code more quickly, giving you that edge against your competitors. Writing code may be fast, and a continuous integration pipeline with a continuous deployment capability allows you to take that new code, test it and push it out to production in an automated and repeatable fashion.

DevOps and Data

Data, however, is often one of the speed bumps that causes all this fancy CI/CD to slow to a crawl. If your developers need to test a small change against a large chunk of data, you need to somehow give them access to that data. Creating copies of databases or files is usually slow and inefficient, a time-consuming process that negates most of the speedy DevOps cleverness you’ve applied to your code writing.

I’ve worked on numerous projects where a robocopy/rsync job was run over the weekend to refresh hundreds of GBs of UAT and DEV environment data from production, creating in effect three copies of production. This could only run at the weekend due to the size of the transfer and the impact on the underlying storage and network. One solution had to take the database down during the copy, which meant production couldn’t even be used for a few hours over the weekend while the copy happened. Put that in your DevOps pipeline and smoke it!
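For illustration, here is a minimal sketch of the kind of weekend refresh job I’m describing; the hostnames and paths are hypothetical, and it assumes rsync over SSH, scheduled from cron:

```python
#!/usr/bin/env python3
"""Minimal sketch of a weekend UAT/DEV refresh from production.

Hostnames and paths are hypothetical; assumes rsync is installed and
the production host is reachable over SSH. Run from cron on Saturdays.
"""
import subprocess
import sys

SOURCE = "prod-db01:/data/appdb/"                    # hypothetical production host/path
TARGETS = ["/data/uat/appdb/", "/data/dev/appdb/"]   # in effect, three copies of production

for target in TARGETS:
    # -a preserves permissions/timestamps; --delete drops files removed
    # from production so each copy stays an exact mirror
    result = subprocess.run(["rsync", "-a", "--delete", SOURCE, target])
    if result.returncode != 0:
        sys.exit(f"refresh of {target} failed with rc={result.returncode}")
```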

Some storage vendors are able to work around part of the speed problem by mounting snapshots, but Actifio has a very interesting, slick and more comprehensive solution, which it presented at the recent Tech Field Day 11 event.

The DevOps capabilities of Actifio are part of a far bigger solution which they call Copy Data Virtualisation. I previewed the solution in my pre-event post: Tech Field Day 11 Preview: Actifio.

Basically, you can create multiple copies of data very quickly without creating as many physical copies of that data. These copies can be used for many things: backups, analytics, compliance, forensics, DR, migrations and so on, as well as DevOps.

Read more…

Categories: Storage, Tech Field Day, TFD11

Crowdsourcing Community Knowledge with CloudPhysics

August 22nd, 2016

CloudPhysics is a SaaS-based solution for sucking up all your on-premises vSphere metadata into its own data lake and performing any amount of analytics crunching on it.

The CloudPhysics offering is built upon a system of cards where you can correlate configuration and/or performance information to show you, for example, datastore utilisation or iSCSI LUNs.

One of the interesting aspects of CloudPhysics is how it can actively monitor the blogosphere to crowd-source knowledge to help its customers. There are a whole bunch of built-in cards which customers can use to report on their environments, but something I didn’t realise was that CloudPhysics can also monitor blogs for issues plaguing vSphere environments. If the investigation involves gathering data from your vSphere deployment, CloudPhysics likely has that data already.

At its recent Tech Field Day 11 presentation, CloudPhysics showed how it took information from fellow delegate Andreas Lesslhumer’s blog, about tracking down whether a vSphere Changed Block Tracking (CBT) bug which breaks backups affected you, and coded it into a new card which customers could then use to report on their own infrastructure; so much easier than writing the code to gather the information yourself.

This could be even more important if you are not even aware of the bug. CloudPhysics, or indeed any user, can scan the VMware Knowledge Base as well as many other blogs and write a card to tell you, for example, whether an issue affects the exact version of vSphere you are running on some or all of your hosts. Of course this wouldn’t apply if you were already continually scanning all the official and community sites for every reported bug and able to report on them yourself! Thought not? Well, CloudPhysics may have your back.

I would have loved to have had this a few years ago when I spent ages correlating vSphere versions with HP/Broadcom/Emulex NIC drivers and firmware to track down the numerous issues that plagued HP Virtual Connect blade chassis networking at the time. I wrote a PowerCLI script which invoked PuTTY over SSH to connect to each ESXi host and gather the firmware version so I could check the support matrix; it was time-consuming and cumbersome. CloudPhysics would have made this so much easier: I could have used the Developer Edition to create my own cards far more quickly and then made them available to others by publishing to the Card Store.
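For a flavour of what that chore looked like, here is a rough sketch in Python using paramiko in place of PowerCLI driving PuTTY; the hostnames and credentials are placeholders and this is my reconstruction, not the original script:

```python
#!/usr/bin/env python3
"""Rough sketch of gathering NIC driver/firmware versions from ESXi
hosts over SSH. Hostnames, credentials and the esxcli invocation are
assumptions, not the original PowerCLI script."""
import paramiko

HOSTS = ["esxi01.example.com", "esxi02.example.com"]  # hypothetical hosts

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="root", password="changeme")  # lab only!
    # esxcli reports the driver and firmware version of a given NIC
    _, stdout, _ = client.exec_command("esxcli network nic get -n vmnic0")
    print(host, stdout.read().decode().strip(), sep="\n")
    client.close()
```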

Read more…

Tech Field Day 11 Preview: CloudPhysics

June 9th, 2016

Tech Field Day 11 is happening in Boston from 22nd-24th June, and I’m super happy to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: I’m heading to Tech Field Day 11 and DockerCon!

CloudPhysics

CloudPhysics is somewhat of a darling of the virtualisation ecosystem, founded by a number of ex-VMware brains. CloudPhysics previously presented at Virtualisation Field Day 3, two years ago.

It has a SaaS product for analysing on-premises VMware installations. This is hugely valuable: vSphere is powerful and can deliver fantastic performance, but because it touches compute, storage and networking it can be difficult to see where performance or configuration issues lie.

CloudPhysics sucks up all your vSphere config and performance data via a small virtual appliance, sends the data to the cloud and crunches it to give you visibility across your entire infrastructure, so you can view reports, see config changes and monitor cluster performance. You can also look ahead using the product’s trending and predictive analysis. You can get going in 15 minutes and spend no money with the Free edition, or upgrade to the Premium edition, a yearly subscription, for more features.
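To give a flavour of the kind of metadata such an appliance can gather, here is a minimal sketch using pyVmomi, the vSphere Python SDK; the vCenter address, credentials and chosen properties are my own placeholders, not anything CloudPhysics has published:

```python
#!/usr/bin/env python3
"""Illustrative only: the sort of vSphere inventory metadata a collector
appliance can pull. Host, credentials and properties are placeholders."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab vCenter with a self-signed cert
si = SmartConnect(host="vcenter.example.com", user="readonly",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Walk every VM in the inventory and record a little config metadata
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    cfg = vm.summary.config
    print(cfg.name, cfg.guestFullName, cfg.numCpu, cfg.memorySizeMB)

view.Destroy()
Disconnect(si)
```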

The user interface is all based on cards, each one a mash of systems data and analytics. In the Free edition you can see things like inventory information, VM reservations and limits, snapshots and host resource commitment. If you start paying you get many more cards, including datastore space, cluster health, unused VMs, orphaned VM files, I/O contention, a helpful knowledge base advisor to match KB articles to your infrastructure, and some cost comparison calculators for vCloud Air and Azure. As it’s a SaaS platform, the cards are continually being updated and new ones appear fairly regularly. You can also create your own.

Being able to spot bad configurations and unauthorised changes is so useful, and if you can correlate a performance change with a configuration change, that can save hours of needless investigation.

It’s strange to say, but you really shouldn’t need any of this; I wish vCenter was able to give you all this information in an easily digestible format, but it doesn’t, so CloudPhysics is great. Who knows, if VMware ever gets to vCenter as a Service, whether analytics like this will be part of the future roadmap?

CloudPhysics has always had the VM analytics but has recently been fleshing out its host and cluster exploration capabilities so it can better see the relationships between VMs, noisy neighbours for example; it will be interesting to hear what’s new.

Partner Edition

Read more…

Categories: TFD11

ZeroStack’s full stack from Infrastructure to Application

January 13th, 2016

ZeroStack is a recently-out-of-stealth company providing a cloud-managed hyper-converged appliance running OpenStack. It is targeting private cloud customers who want to stand up their own OpenStack instances but don’t want the hassle of getting it all working themselves. What ZeroStack also does, which is unique, is combine this infrastructure part with application deployment, and for me that is the exciting bit.

It is early days for the company, but it has seasoned financial backers, advisers and founders, and after just a year it has an impressive amount of functionality in its product.

Private Cloud

The use case is companies wanting to replicate the ease of public cloud but as a private cloud. Amazon’s AWS and Microsoft’s Azure make spinning up VMs or even direct application instances easy and allow you to pay per use. It’s all about lowering the admin of deployment and moving to an IT consumption model.

This is all great, but companies wanting to replicate this functionality in-house may like to build out a private cloud. They may need data kept on premises due to perceived security concerns, or legal requirements for data to be held in a particular location. There may be more practical concerns, like the amount of data to be stored/analysed making it impractical to move externally. Cost may also be an issue, with scare stories of AWS bills racking up quickly, although I do find companies are very poor at working out their own internal data center costs, so comparisons are not necessarily accurate.

The point where deployment happens is also shifting away from infrastructure support teams to application support teams, and further along to applications themselves managing their own infrastructure resources via API calls to a cloud, spinning up new VMs with automated deployment and scaling of applications.
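As a sketch of what that looks like in practice, here is a hypothetical application spinning up its own VM against an OpenStack cloud, the kind of API a ZeroStack private cloud exposes, using the openstacksdk Python library; the cloud entry, image, flavor and network names are all assumptions:

```python
#!/usr/bin/env python3
"""Illustrative sketch of an application provisioning its own VM via
OpenStack APIs. Cloud name, image, flavor and network are assumptions."""
import openstack

# Credentials and region come from clouds.yaml or OS_* environment variables
conn = openstack.connect(cloud="zerostack")  # hypothetical clouds.yaml entry

server = conn.create_server(
    name="app-worker-01",
    image="ubuntu-14.04",   # hypothetical image name
    flavor="m1.small",      # hypothetical flavor
    network="app-net",      # hypothetical tenant network
    wait=True,              # block until the VM is ACTIVE
)
print(server.status, server.id)
```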

Suffice it to say, companies want to replicate public cloud functionality internally to give applications the resources they require. Current software options are generally VMware, which is feature rich with excellent infrastructure resiliency and a cost model to match the functionality, or OpenStack, which is open source and not as feature rich, with deliberately less infrastructure resiliency, but has no vendor licence costs.

ZeroStack uses the tagline “Public Cloud Experience, Private Cloud Control” and, as I see it, is attempting to give its customers four key things:

1. Hardware: Hyper-Converged Appliance

Read more…

FalconStor’s rebirth with FreeStor

December 7th, 2015

In my preview post before attending, Virtualisation Field Day 6 Preview: FalconStor, I raised my concerns about whether FalconStor was “yet another storage company”. I thought it would be useful to detail what I learned during its Virtualisation Field Day presentation as well as from speaking to other delegates.

Rebirth

FalconStor as a company seems to have had a much-needed rebirth after legal issues and the tragic loss of its CEO four years ago started to sink the ship. FalconStor then bled cash for a while and lost another CEO before current boss Gary Quinn took the helm. Current management, as expected, takes pains to distance itself from the dark times, is passionate about the company’s future and believes it has what it takes to succeed.

I’ve also learned FalconStor previously didn’t have the best reputation for code quality, leading to products with less than stellar stability. Apparently this has been rectified with a new team, who managed to ink a lucrative partnership with Violin Memory to provide data services software to the lacking Violin arrays. Violin is in the business of high-performing storage, so this must have been a winning partnership for FalconStor as it could learn all about high-performing flash as part of the deal. Unfortunately it seems this buddying up dissolved a year or so ago and there doesn’t seem to be much information on why. I get the impression FalconStor wanted to continue but Violin didn’t, so hopefully FalconStor received enough of what it needed to improve, speed up and modernise its codebase. Violin is going through its own issues, including a tanking stock price, yet FalconStor hasn’t been dragged down as well, so the market sees Violin as overvalued but still has some faith in FalconStor. More recent OEM deals are being done with X-IO Technologies and Kaminario as well as Huawei, so FalconStor software seems in high demand.

FreeStor

Read more…

Virtualisation Field Day 6 Preview: FalconStor

November 10th, 2015

Updated on 11/11/2015 with some changes based on additional information.

Virtualisation Field Day 6 is happening in Silicon Valley, California from 18th-20th November and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: I’m heading to Virtualisation Field Day 6.

FalconStor is a company I’ve heard of over the years (it’s been going for 15 years) but I haven’t had any direct experience with its products previously. It seems to have had a chequered history, with fines for paying bribes and then covering them up in its books, but that was a few years ago so I’m sure FalconStor is putting it behind them!

FreeStor

FalconStor has recently released a brand new product called FreeStor. Don’t get too carried away: it’s not a free product in terms of price (more on that later) but rather free as in freedom. FreeStor is a product to build a distributed storage resource pool across almost any type of underlying storage. It’s basically virtualised storage using FalconStor’s “Intelligent Abstraction” core, so you can easily move, protect and dedupe data on or off cloud without being reliant on any particular hardware, network or protocol. This means you can freely choose the right storage at the right price and have FreeStor manage and protect it all.

This virtualised platform then allows you to seamlessly move workloads across different underlying storage. There is WAN-optimised, space-efficient replication, and everything is globally deduped.

Read more…

VMworld EU 2015 Buzz: Office of the CTO Stand: vRDMA & Unifying Virtual and Physical Desktops with Synthetic Block Devices

October 30th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

vRDMA

I made a point, as I always do at VMworld, to head to the VMware Office of the CTO booth for a glimpse into the future. I spoke to Jorgen Hansen about vRDMA.

This is an interesting research project within VMware to create a new class of distributed application in a virtual environment, allowing very fast transport by bypassing much of the VMware kernel and accessing memory on another host. This will allow applications to reserve VM memory via the hypervisor yet be extremely scalable and fast; think HPC and financial trading.

Unifying Virtual and Physical Desktops with Synthetic Block Devices

Later on they also had a new research project to talk about: Unifying Virtual and Physical Desktops with Synthetic Block Devices. Rami Stern talked me through it; it is all about having single-instance storage across the physical and virtual worlds, so a single store for Mirage data as well as VMDKs. Users would be able to move OS data from physical to virtual with very little data transfer, very much linking the different technologies VMware has acquired. Again VSAN is being looked at to do this: image-deduped storage for OS + File + VM + Mirage + Cloud Volumes data. Again, very interesting.


VMworld EU 2015 Buzz: The Future of Software-Defined Storage – What does it look like in 3 years time? – CTO6453

October 28th, 2015

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

Richard McDougall, a Principal Engineer at VMware, led this presentation peeking into the future.


This session was about futures and trends for storage hardware and next-gen distributed services: shared NVMe/PCIe at rack scale, flash densities, and whether magnetic storage still has a place.

Richard gave an interesting talk explaining the needs of Big Data/NoSQL etc. applications and their storage requirements, building up a graph with two axes: horizontal for size, from tens of TBs to tens of PBs, and vertical for IOPS, from 1,000 to 1,000,000.

He built up the picture showing where various memory and storage applications sit and then added which hardware/software platforms are used to service these applications; it was a great visual aid.

He spent time going through how cloud-native applications and containers still have storage requirements, with some options copying the whole root tree and Docker’s approach of cloning using a union file system (aufs), much like redo logs for VMDKs.

Containers still need files, not blocks, plus snapshots and clones. You need a non-persistent boot environment as well as somewhere to put persistent data. Shared volumes may be needed, as well as an object store for retention/archive.
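As a concrete illustration of that split, here is a small sketch using the Docker Python SDK: the container’s root filesystem is the disposable boot environment, while a named volume holds the persistent data. The image and mount path are illustrative only:

```python
#!/usr/bin/env python3
"""Sketch of the persistent vs. non-persistent split for containers.
Image name and mount path are illustrative."""
import docker

client = docker.from_env()

# Named volume: the persistent half, survives container replacement
client.volumes.create(name="pgdata")

# The container itself is the non-persistent boot environment
container = client.containers.run(
    "postgres:9.5",  # period-appropriate, illustrative image
    detach=True,
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.short_id)
```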

Richard went on to talk about hardware and the massive increase in performance for NVDIMMs, getting closer to DRAM. Have a look at the comparison chart, which scales relative latencies up to travel time from California to Australia.


He then went through some of the device interconnects and posited that NVMe will take over from most current interconnect methods; he was very positive about NVMe!


He mentioned how hard it is to actually build truly scale-out, performant storage.

He mentioned a great use case for caching companies like PernixData and how in future they could be used to front-end things like S3 storage: massive buckets in the cloud yet very fast locally cached access. Interesting.

The dream is a single common storage platform with a single HCL: a common software-defined storage platform for block, Ceph, MySQL, MongoDB, Hadoop etc. I think that’s what VMware is trying to make VSAN do.

This is very difficult to achieve, but I can certainly see a future VSAN, not too far away, with native SMB and NFS access as well as persistent storage for containers running on the Photon Platform. This would give you the best of both worlds: stateless containers running natively, as well as stateful containers whose locally stored data is replicated to other nodes in the VSAN cluster, as they are VMs. Other services could access SMB and NFS file data natively on VSAN, which would also be replicated across the cluster and across sites for DR.

HP Discover Buzz: HP Storage

June 3rd, 2015

The HP Storage Coffee Talk session we attended was first of all to introduce the new SVP & GM of HP Storage, Manish Goel, who takes over from David Scott, who worked for HP, left to start 3PAR and then brought it back into HP.

David has been credited with revitalising HP’s storage portfolio with its now-flagship product range, based mainly on 3PAR. Manish therefore has big shoes to fill. He started at HP Storage in March, having been a seven-year veteran of NetApp, another storage titan facing difficulties at the moment. He left in 2013 and tried his hand at retirement and a startup, which apparently didn’t agree with him, and he’s now at HP.

HP has some announcements around the 3PAR storage platform which you can read more about:
A new StoreServ 20850

HP 3PAR Streaming Remote Copy Replication

I asked how, with the move away from central SAN storage towards server SAN and hyper-converged, HP Storage manages this transition. HP is in a unique position hardware-wise in that it sells both servers and SAN, but these are separate business units with separate product portfolios (3PAR vs. LeftHand VSA vs. VSAN etc. on HP DAS).

Read more…

Categories: HP, HP Discover

CommVault: We’re not just a backup company but we don’t like telling you

April 8th, 2015

I was very fortunate to attend Virtualisation Field Day earlier this year. One of the companies presenting was CommVault, who bill themselves as a “data” company.

They spent the majority of their time at Virtualisation Field Day going through all the details of how they do backups and restores, and to be honest it was rather dull. Backups are hugely critical to your infrastructure and, just like insurance, you don’t want to find out you are not protected when it is too late. The thing, though, is that backup nowadays is such a utility service. It would be unfair to say that backups haven’t evolved, because they have, particularly with virtualisation, but ultimately you are still taking a copy of your data and storing it remotely from your live data. The what hasn’t changed much even if the how has.

This makes talking about backup a difficult task because your audience almost certainly knows what backup does and generally how it works, even if your tool may have a few differences. Being able to back something up and restore it is a given; being able to mount backups of VMs and restore files within those backed-up VMs is now a given as well, however your backup vendor chooses to do it.

I feel CommVault did itself a disservice at Virtualisation Field Day, which is evident from the lack of post-game talk and analysis about its solution compared to some of the other presentations; proof that backups are not sexy.

However, I feel that CommVault has an interesting story to tell if it could just elevate itself above the backup bandwagon.

CommVault Simpana’s USP is not in the backup but in the use and analysis of the data that has been ingested. I use “ingested” deliberately to distinguish this from just being a backup used to recover something at some point in the future. Companies are being asked to do more and more with their data; some of it is in live databases or files, but a huge amount is actually archive data: old log files, old emails, old text messages, old voicemails, old x-rays, old files. Companies are often legally required to keep this old stuff around for a long time, and you know how it is stored: in completely separate copies from the backups. Emails are journalled by product X, text messages by product Y, voicemails by product Z. These products may even be from separate companies with completely separate data formats; there’s no way you could search across them.

Read more…