
Archive for September, 2018

Generating vCenter Solution User Certificates With Custom Names

September 28th, 2018

Many enterprises require replacing all vCenter certificates with Enterprise CA trusted certificates.

vSphere 6.5 has made the certificate replacement process much easier than it was in the complicated vSphere 5.x days.

A basic vCenter deployment now has a single Machine SSL certificate as well as four Solution user certificates: machine (different from the Machine SSL certificate), vpxd, vpxd-extension and vsphere-webclient.
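If you want to see what is currently in place, you can query the VECS stores from the appliance shell. A minimal sketch, assuming the default VCSA paths:

# List the certificate stores; the four Solution user stores appear alongside MACHINE_SSL_CERT
/usr/lib/vmware-vmafd/bin/vecs-cli store list

# Show the subject and issuer of one Solution user certificate, e.g. vpxd
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store vpxd --text | grep -E 'Subject:|Issuer:'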

Although the Solution user certificates are only used for internal vCenter communication, many enterprise security standards require using enterprise CA-issued certificates for everything.

BTW, when you migrate from a Windows vCenter 5.5 to VCSA 6.5 using the excellent migration tool, only the Machine SSL certificate is taken across; the Solution user certificates remain self-signed and may need to be updated manually.

Each Solution user certificate needs to have a unique name.

Also remember that the SubjectAltName must contain DNS Name=machine_FQDN.
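Before installing a certificate returned by the CA, it is worth sanity-checking both requirements with openssl. A quick sketch, where vpxd.crt is just an example file name:

# Each Solution user certificate should have its own unique Subject
openssl x509 -in vpxd.crt -noout -subject

# The SubjectAltName should contain DNS Name=machine_FQDN
openssl x509 -in vpxd.crt -noout -text | grep -A1 'Subject Alternative Name'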

I used the great guide from Ian Sanderson as a base for updating the certs: https://www.snurf.co.uk/vmware/replace-ssl-certificates-on-vmware-psc-v6-5/

You can use the VMware-supplied vSphere Certificate Manager in the VCSA (sidebar: you should really be using the VCSA rather than Windows by now!) to generate the Solution user certificates.

/usr/lib/vmware-vmca/bin/certificate-manager

When you select Option 5 and then Option 1 to generate the certificate private keys and certificate signing requests to send off to your Enterprise CA, the tool has a particular format for the signing requests.
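As a rough sketch of that flow (the exact menu wording and output file names may vary between builds, and /tmp/solution_csrs is just an example directory):

/usr/lib/vmware-vmca/bin/certificate-manager
# Option 5 : Replace Solution user certificates
# Option 1 : Generate Certificate Signing Request(s) and Key(s) for Solution user certificates
# Supply an output directory when prompted, e.g. /tmp/solution_csrs

# The tool writes a CSR and private key per Solution user into that directory
ls /tmp/solution_csrs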


Categories: vCenter, VMware

Maybe spend time looking at how new tech CAN help you rather than CAN’T help you

September 12th, 2018

I was recently invited to give an internal presentation on serverless computing at an enterprise financial company, as part of a general what's-happening-in-IT series. A wider range of people attended than I expected: some business people and some IT people.

The business lens

Interestingly, in the questions and feedback afterwards, some of the business people could see the value more easily than the IT people did. The business people liked the coming together of business logic and IT, and could see the benefit of just encoding what they need done in a serverless function without having to worry as much about all the IT infrastructure behind the scenes. Although the business people weren't coders, someone likened the approach to using Excel macros. Some fairly sophisticated Excel functions have graced the trading desks of many an organisation. They didn't need to think about infrastructure with Excel macros; Excel was just a platform you could code mathematical functions in. Sure, Excel macros had many issues (security, performance, availability, etc.), but they served the business need easily without having to get IT involved.

The IT lens

I then spoke to a development team leader afterwards. She's very well versed in coding, a super smart algorithmic trading developer. She voiced valid concerns, though, that with serverless functions you can't control the latency of the function, and so she couldn't see any use for them in their work. Part of the workflow they develop is low-latency trading, pricing and analytics, which of course is very latency- and performance-sensitive. Some of the workflows include many steps necessary for compliance and auditing. A price range traded may need to be put into a database to reference later. A trade that is priced needs to be logged somewhere, and a completed trade needs to go into another database, which kicks off a whole other bunch of workflows to be reconciled in the back office. She mentioned the low-latency algo stuff was working well, but they sometimes struggled with performance and speed on a very busy trading day. Some of the compliance and auditing code sits very close, compute-wise, to the low-latency code. This makes it simpler to code the end-to-end transactions, but it means the most expensive low-latency compute cycles on physical server hardware are also being “wasted” on compliance and auditing code, which may struggle to keep up on an extra-busy trading day. To improve this would generally require scaling up existing compute resources. The compliance and auditing data was also used by many other integrated systems, so care needed to be taken that the secondary databases could keep up with the low-latency demand.

This made me think of two things. First, how this application would of course benefit from some splitting up. The app could be changed so the low-latency code pushes out the minimal amount of compliance and auditing information to another database, queue or even stream. A separate set of serverless functions could then very efficiently respond to an event, or pick up these trades or prices, and do whatever needs to be done (BTW, it's not just functions that can be serverless; databases, queues and storage can be too!). This could also be massively scalable in parallel: one trade at a time or a million, and it isn't latency-sensitive work once the initial small record has been created.

CAN use or CAN'T use

Second was how the development team leader was seeing how serverless functions COULD NOT be used for latency-sensitive workloads, but not seeing how useful they COULD be for all the rest of the compliance and auditing code. The low-latency code was the most important, so naturally her focus was on that.

Splitting up the app is an architectural discussion and may not in fact be suitable in the end, but the more important point is that sometimes we are a little myopic and only see what a technology CAN'T do, rather than looking at the bigger picture and seeing what it CAN do. This can distance you from the business. Oh, and of course, Excel can do a LOT!

Categories: Cloud, DevOps, Scale, Serverless