Archive for the ‘DevOps’ Category

Maybe spend time looking at how new tech CAN help you rather than CAN’T help you

September 12th, 2018 No comments

I was recently invited to do an internal presentation on serverless computing at an enterprise financial company, as part of a general what’s-happening-in-IT series. A wider range of people attended than I expected, a mix of business people and IT people.

The business lens

In the questions and feedback afterwards, interestingly, some of the business people could see the value more easily than the IT people did. The business people liked the coming together of business logic and IT, and could see the benefit of simply encoding what they need done in a serverless function without having to worry as much about all the IT infrastructure behind the scenes. Although the business people weren’t coders, someone likened the approach to using Excel macros. Some fairly sophisticated Excel functions have graced the trading desks of many an organisation. They didn’t need to think about infrastructure with Excel macros; Excel was just a platform you could code mathematical functions in. Sure, Excel macros had many issues, security, performance, availability etc., but they served the business need easily without having to get IT involved.

The IT lens

I then spoke to a development team leader afterwards. She’s very well versed in coding, a super smart algorithmic trading developer. She voiced valid concerns, though, that with serverless functions you can’t control the latency of the function, and so she couldn’t see any use for them in her team’s work. Part of the workflow they develop is low-latency trading, pricing and analytics, which of course is very latency and performance sensitive. Some of the workflows include many steps necessary for compliance and auditing. A price range traded may need to be put into a database to reference later. A trade that is priced needs to be logged somewhere, and a completed trade needs to go into another database, which kicks off a whole other bunch of workflows to be reconciled in the back office. She mentioned the low-latency algo work was going well, but they sometimes struggled with performance and speed on a very busy trading day. Some of the compliance and auditing code sits very close, compute-wise, to the low-latency code. This makes it simpler to code the end-to-end transactions, but it means the most expensive low-latency compute cycles on physical server hardware are also being “wasted” on compliance and auditing code, which may struggle to keep up on an extra-busy trading day. Improving this would generally require scaling up existing compute resources. The compliance and auditing data was also used by many other integrated systems, so care needed to be taken that the secondary databases could keep up with low-latency demand.

This made me think of two things. First of all, how this application would of course benefit from some splitting up. The app could be changed so the low-latency code pushes out the minimal amount of compliance and auditing information to another database, queue or even stream. A separate set of serverless functions could then very efficiently respond to an event, pick up these trades or prices, and do whatever needs to be done (BTW, it’s not just functions that can be serverless; databases, queues and storage can be too!). This could also be massively scalable in parallel: one trade at a time or a million, and none of it is latency-sensitive once the initial small record is created.
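As a rough illustration of the shape this split could take, here’s a minimal Python sketch of a queue-triggered function. This is my own illustration, not anything from the trading system described: the event shape is loosely modelled on what queue-triggered serverless functions typically receive, and all field names (trade_id, price, ts) are hypothetical.

```python
import json

def make_audit_record(trade_id, price, ts):
    """The minimal record the low-latency path would emit before
    handing off. Field names are illustrative only."""
    return json.dumps({"trade_id": trade_id, "price": price, "ts": ts})

def audit_handler(event, context=None):
    """Hypothetical queue-triggered serverless function: processes a
    batch of audit records without touching the low-latency path."""
    processed = []
    for msg in event.get("Records", []):
        record = json.loads(msg["body"])
        # A real function would write to an audit database or stream here;
        # this sketch just collects the trade ids it handled.
        processed.append(record["trade_id"])
    return {"processed": len(processed)}

# The low-latency code emits two tiny records; the function scales out
# over them with no impact on the trading path.
batch = {"Records": [
    {"body": make_audit_record("T1", 101.5, 1536750000)},
    {"body": make_audit_record("T2", 101.7, 1536750001)},
]}
print(audit_handler(batch))  # → {'processed': 2}
```

The key design point is that the trading path only pays the cost of writing one small record; everything downstream can run as slowly, or as massively in parallel, as it likes.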

CAN use or CAN’T use

Secondly, how the development team leader saw that serverless functions COULD NOT be used for latency-sensitive workloads but didn’t see how useful they COULD be for all the rest of the compliance and auditing code. The low-latency code was the most important, so naturally her focus was on that.

The splitting up of the app is an architectural discussion and may not in fact be suitable in the end, but the more important point is that sometimes we are a little myopic and only see what a technology CAN’T do rather than looking at the bigger picture and seeing what it CAN do. This can distance you from the business. Oh, and of course, Excel can do a LOT!

Categories: Cloud, DevOps, Scale, Serverless Tags:

Cloud Field Day 2 Preview: ServiceNow

July 21st, 2017 No comments

Cloud Field Day 2, part of the Tech Field Day family of events, is happening in San Francisco and Silicon Valley from 26-28 July, and I’m super excited to be invited as a delegate.

We are hearing from a number of companies about how they cloud!

ServiceNow has a SaaS suite of products and is trying to take traditional enterprise IT Service Management to the next level. It has had a number of leadership changes recently so seems to be shaking itself up for new things.

ITSM products have often been maligned, and I have been fairly vocal over the years in my scorn for some ITSM/ITIL products. They often insert an inordinate amount of unnecessary bureaucracy between getting things done and protecting your IT estate. This causes all sorts of problems. IT teams try as hard as they can to navigate around the horrible tools they are forced to use, hence some shadow IT. ITSM bureaucrats continually add more and more process to trap more and more potential issues; more process = more hassle. Processes get so complicated that change is avoided at any cost. It can take weeks to shepherd a change through the system and finally get it approved. People dump as much as they can into a single change to avoid having to repeat the process. Business users have to log IT issues in a system that makes them feel IT doesn’t care. IT SLAs are tracked through the ITSM tools, so solving tickets becomes whack-a-mole for IT staff.

ITSM Despair

Read more…

Categories: CFD2, Cloud, DevOps, Tech Field Day Tags:

The Danger of Denying Reality: DevOps Enterprise Summit London Review

June 16th, 2017 No comments

I was very fortunate to be able to attend the DevOps Enterprise Summit in London recently. The conference is organised by ITRevolution, which is led by DevOps luminary Gene Kim, who has written a number of books and papers about DevOps.

DevOps can touch every part of a business and help to create enormous benefit but there is a danger people can get caught up in the hype of things and possibly ignore reality.

I appreciate it takes huge courage to stand up at a conference and talk about what you’ve done, and you should rightly be proud of that courage and of your successes. I certainly don’t want to diminish the work some of the presenters have done, but I feel a reality check may be in order. One of the presentations at the conference was from the Ministry of Justice with one of its IT partners, titled “Hybrid-Clouds: How to Go Slow and Haemorrhage Money Doing It”. The premise was that public cloud is the future and wasting time and resources doing hybrid cloud is a fool’s errand.

Peeking into the future, this is something I actually agree with. Hybrid cloud should be today’s on-ramp to the public cloud rather than a destination in itself. Now, there are going to be use cases for “private cloud”, but this should actually be public cloud services managing on-premises IT where there is a need. The on-premises IT will more likely be devices that need to be physically located near other things to report on or number-crunch, where there are network limitations. It won’t be a mashing together of existing traditional on-premises IT linked up to a public cloud. It’s not a more agile public cloud UAT version of your database that refreshes from the production on-prem version.

So, when an organisation as well known and important as the Ministry of Justice says it is all in on public cloud, it’s time to sit up and take notice. However, the reality doesn’t match up with the advertising. The Ministry of Justice says it has 50 cloud native public cloud applications out of 950. The other 900 applications run on traditional on-premises infrastructure. I questioned this and was told the Ministry doesn’t have a timeline to move the 900 apps to public cloud and will even continue to deploy new workloads on-premises. They are only 5% public cloud yet seemingly ignoring the other 95%. That’s a bit of a reality check and not an all in on public cloud (yet). Have they learned enough from migrating or creating the net-new 5% of public cloud apps to proclaim hybrid cloud a waste of time, even if it’s temporary?

Unfortunately this kind of presentation reinforces the wrong message and, I believe, creates more of a bi-modal IT mess. The IT estate gets carved up into the old and the new without acknowledging and managing the important bridge between the two and the continual evolution of IT from old to new, wherever it may be hosted. Ring-fencing your current estate (legacy, heritage, traditional, whatever you want to call it) actually adds more technical debt if you don’t have a migration plan. There needs to be continuous evolution from old to new; the smaller and quicker the steps, the easier it is.

Nordea bank had a provocatively titled talk, “How Do You Fit a Core Banking System into a Few Containers?”, which went through a development effort to migrate from a monolithic Oracle and Java estate to something new and containerised. There were two things they did at least highlight which showed this isn’t the unicorns and rainbows one might expect from the title. First, this is currently only in test, not production. It certainly still has huge benefits: being able to test more quickly and have a more nimble and repeatable deployment model. Nordea says it is working with its compliance and regulatory teams to be able to move towards production. I hope for their sake that compliance/security/regulators were brought in at the very beginning and have been part of the journey rather than coming in only when they need to think about production. Secondly, they openly said their containers are still very heavy, which is an issue. They have large containers with Oracle and JVMs installed; the title did say a “few” containers. It seems they are using containers not for micro-services but more as a packaging and standardised deployment format, all fine and useful, but the real wins will come from pulling apart that massive Oracle and Java codebase into discrete micro-services.

I will be very interested to see how Nordea gets on with this in production. At least they are trying, playing and learning, and I look forward to a future talk: “How Do You Fit a Production Core Banking System into Lots of Containers?”

Enterprises certainly should have a goal of using more and more public cloud services, and even be bold and have a public-cloud-first policy. Refactoring monolithic applications into micro-services allows you to do so much more. However, you can’t ignore what you currently have, and if you’re going to talk about where you are going, be realistic about where you are coming from and what that might mean as you learn more.

Categories: DevOps, DOES17 Tags: ,

Do Enterprise People Not Care About DevOps? A DevOps Enterprise Summit London Review

June 15th, 2017 No comments

I was very fortunate to be able to attend the DevOps Enterprise Summit in London recently. The conference is organised by ITRevolution, which is led by DevOps luminary Gene Kim, who has written a number of books and papers about DevOps.

I was struck that there were apparently 650 attendees, which I found surprisingly low. If DevOps is the solution to many IT problems, I would have thought it would appeal to more than 650 people. It was hosted in easily accessible London and, from my unscientific, unofficial badge-watching and accent analysis, seemed to attract people from all over Europe.

I can think of two reasons for the seemingly low turnout. First of all, this is targeted squarely at enterprises, which are certainly laggards by nature of their size and complexity. Although DevOps is now fairly established practice, enterprises are still wrangling their own inertia and bureaucracy. Enterprises may not yet see the value in sending people to a conference about DevOps; they are happy to spend vast amounts of money wasting time in the office on meaningless work, yet a few days at a conference hearing from peers and experts is seen as time away from the office and therefore unproductive. This would be seen as ridiculous in smaller, IT-focused companies that have fully embraced “cloud native”, where DevOps is obvious and the norm. Enterprises can be very strange.

Secondly, DevOps is very difficult to define and articulate. It could be seen as a term possibly as loosely defined as “cloud”. How many people would see value and attend a “Cloud Enterprise Summit”? I think this is one of the challenges facing the DevOps “movement” for enterprises. It is tough to articulate DevOps and therefore tough to define its value. When you understand what DevOps can bring it seems obvious but when you don’t it can seem nebulous. I’ve heard anecdotally that the DevOpsDays are seen as somewhat insular. Is the DevOps movement itself a barrier to adoption?

One of the things that backed up my observation at the conference was more physical: books. The conference gave out Gene Kim et al.’s The Phoenix Project, The DevOps Handbook and other books for free. I had thought people would attend having already read the books and wanting to find out more, but a number of people I spoke to had never heard of them, or if they had heard of them, hadn’t read them. I found this amazing, as The Phoenix Project has been out for more than four years and was one of the books that sparked my interest in discovering what is holding back IT.

Attendance numbers aside it seemed the people who were there were very engaged and the presentations were diverse and interesting. Does this mean DevOps is still in its infancy in the enterprise and conference attendees are its early proponents or is DevOps still not getting the attention in enterprises it needs?

Categories: DevOps, DOES17 Tags: ,

DevOps Enterprise Summit London Review: Organisational Change is Mandatory

June 15th, 2017 No comments

I was very fortunate to be able to attend the DevOps Enterprise Summit in London recently. The conference is organised by ITRevolution, which is led by DevOps luminary Gene Kim, who has written a number of books and papers about DevOps.

One of the recurring themes throughout the conference was organisational change. Many enterprises are making some headway with DevOps practices, whether we define DevOps by one of its purest aims, “shipping higher quality code faster”, or in the broader sense of making IT more efficient and adaptive.

I listened to a number of the presentations, attended the great Ask the Speakers sessions, took part in a group discussion and spoke to a number of attendees. The number one barrier to improving IT efficiency and getting that better code shipped faster is organisational change rather than technical tools or capabilities. The repeated message was: the bigger the focus on organisational change, the better performing an IT organisation will be. By organisational change I mean team function as well as financing.

I was struck listening to the presentation from Barclays by Jonathan Smart, The Yin and Yang of Speed and Control, and followed up in the Ask the Speakers session. Barclays has spent three years completely changing the way its organisation is structured, aiming for the so-called two-pizza team: smaller, more nimble teams. What is also interesting is that they are extending this to their external partners by structuring those engagements using the same small, cell-based approach. Basically, one-size-fits-all, all-encompassing outsourcing deals are out the window, which is something IT professionals have been telling their bosses for years!

Read more…

Categories: DevOps, DOES17 Tags: ,

VMworld EU 2015 Buzz: VMware Video Game Container System

October 30th, 2015 No comments

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

VMware Video Game Container System

I then spent some time at the fascinating VMware Video Game Container System. This is basically a demo of running MS-DOS in a Docker container and then launching the awesome DOS Prince of Persia game within that container. Seriously impressive technology.

You can see the demo:

I spent time with Ben Corrie (the guy in the demo), who worked it all out and put the demo together. He developed Project Bonneville, which runs containers as VMs, something I’ve spent a fair amount of time exploring at VMworld. Ben gave me a great overview of Bonneville and explained how Docker, the container host and ESXi interact. I discovered that the stripped-down VM appliance at the base of the system is extremely lightweight, without even the Docker client installed. As it sits on ESXi rather than Linux, you can run any VM that ESXi can run, hence being able to run MS-DOS!

Why MS-DOS?

We chose MS-DOS 6.22 partly for nostalgia, and partly because it neatly encapsulates a simple legacy OS. In 48 hours, we were able to use a vanilla Docker client to pull a Lemmings image from a Docker repository and run it natively in a 32MB VM via a Docker run command. The image was built using a Dockerfile, layered on top of FAT16 scratch and DOS 6.22 OS images with TTY and graphics exported back to the client

Read more…

VMworld EU 2015 Buzz: Group Discussion: DevOps and Continuous Delivery – MGT6401-GD

October 29th, 2015 No comments

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

This was held by Thomas Corfmat from VMware who currently works with vRealize CodeStream although the discussion wasn’t product specific.

Group discussions by nature aren’t presentations but rely on everyone giving input.

The session started with introductions from everyone to get an idea of what kinds of companies were in the session; as the introductions started, quite a few people left immediately! Must be the pressure of a group discussion, or wanting to stay anonymous!

We went through what Continuous Integration / Continuous Delivery are.

A poll was taken of where people were on the DevOps journey: 6% said they can release code into production within the hour, 60% were split between having no DevOps and starting to look at it, and the rest were somewhere further along.

Some people shared stories about how they started, from changing from waterfall to agile approach, others being forced by the business.

The consensus was to start with continuous integration, establishing toolsets like TFS or Jenkins for example, and work on the people and process.

Read more…

Categories: DevOps, VMware, VMworld Tags: , ,

VMworld EU 2015 Buzz: Running Cloud Native Apps on your Existing Infrastructure – CNA5479

October 29th, 2015 No comments

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

This session was delivered by Martijn Baecke and Robbie Jerrom, both from VMware, who were on the earlier panel. The session started by talking about the transition from the client/server era into mobile/cloud. There have been huge changes in how software is being engineered; witness VMworld having a DevOps day today. There still needs to be a way to host both the apps of what they call yesterday (in reality, 99% of apps running anywhere!) and the new generation of cloud native apps. VMware wants to ensure it can provide the platform to support both these worlds. The term used was future-proofing, so the idea is to build an infrastructure for your current workloads that can also be extended to deliver the new cloud native apps. The work needs to be done to transform current infrastructure while keeping one management layer.

They looked at various strategies, such as starting with a new cloud and rebuilding apps, but this is a good choice for cloud native only, something not feasible for many companies. Greenfield can be done, starting again, but this is yet another environment to manage, with upfront investment. Unsurprisingly, they suggest using vSphere as the cornerstone of your infrastructure: software-defined storage with hyperconverged/VVols, and NSX for networking. Having a single platform makes it much easier to automate and orchestrate.

A great quote was given “Automation=Speed, Orchestration=Control”.

Read more…

VMworld EU 2015 Buzz: VMware CTO Panel – CTO6630

October 29th, 2015 No comments

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

Guido Appenzeller – Chief Technology Strategy Officer of Networking & Security, VMware, Inc.
Joe Baguley – CTO EMEA, VMware
Paul Strong – VP & CTO Global field, VMware

Ray O’Farrell was sick so couldn’t make it, hope he gets well!

My notes, hard to encapsulate as it was a pretty broad discussion!

Paul Strong led the panel and went through each CTO and asked about their role

Guido: NSX.

Joe: EVO:Rail, IoT and unikernels

Paul: Connecting R&D to partners and customers. integration back into R&D

Guido: Networking was previously vertically integrated (same stack), unlike servers. The networking sales model is changing to be like servers.

Local switching is easy, future is all about global connectivity with security built in.

Joe: talked about second-order effects. For example, having virtualised networking, what will that mean? Like cars being the enabler for Walmart, which couldn’t exist without people being able to drive to out-of-town Walmarts, yet the invention of the car couldn’t have predicted Walmart.

Read more…

VMworld EU 2015 Buzz: Panel: Enterprise Architecture for Cloud Native Applications – CNA5379

October 29th, 2015 No comments

Adding some more colour to the highlights from my VMworld Europe 2015 coverage:

EMEA CTO Joe Baguley led the discussions with Martijn Baecke, Aaron Sweemer, Chris Sexsmith and Robbie Jerrom, all from VMware, highlighting VMware’s vision for next-generation application development and hosting. The chat went through micro-services, 12-factor apps and how they could be deployed using PaaS and/or containers.

The claim was that HA & FT are no longer needed: no resilience is required in the infrastructure; instead it is pushed to the developers to create. This then becomes everyone’s responsibility to coordinate, which isn’t an easy thing to do. I see more passing of the buck!
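To make that idea concrete, here’s a minimal Python sketch (my own illustration, not anything the panel showed) of what pushing resilience into application code can look like: a simple retry wrapper standing in for the failover the infrastructure used to provide. Every name here is hypothetical.

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Retry a flaky call a few times before giving up: a toy stand-in
    for the resilience that infrastructure HA/FT used to provide."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # real code would catch specific errors
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# A hypothetical service call that fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("backend unavailable")
    return "ok"

print(with_retries(flaky_service))  # → ok
```

The point of the sketch is the coordination cost: once every team writes its own version of this (plus timeouts, circuit breakers, replays), resilience becomes an application-design problem that everyone has to agree on, rather than something the platform just does.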

There was some background on how agile makes things faster.

There was an interesting discussion for a typical use case for enterprise applications in that there’s no way you can make the entire application cloud native. There may be backend databases for example or even customer records on a mainframe which will never reach the cloud native world. Joe mentioned he is seeing a lot of traction for changing the middle of a 3 tier app so not messing with the back-end or the front end but breaking apart all the middleware into microservices to make them more efficient. Going cloud native for an application doesn’t have to mean going all in.

Read more…