I’m super excited to be joining AWS as a Senior Developer Advocate for the Serverless Product Group.
I’ve been busily working as an IT practitioner on the customer side my entire career: 20ish years (eek!) mixing up IT infrastructure with radio stations, journalism and finance. Now seems the perfect time to abstract all that away and transition to the vendor side, starting in September.
I’ll be joining AWS to help drive the development and adoption of the AWS Serverless platform and serverless applications through community engagement. That’s the actual job spec…and it’s awesome! I’ll be working within the Serverless Product group, who look after Lambda, API Gateway, Step Functions, SAM, and a whole surrounding serverless ecosystem: https://aws.amazon.com/serverless. The team has been ramping up this year, so I’m joining some amazing colleagues under the watchful eye of the superb Chris Munns.
Here are some of my thoughts and questions before starting the role. I’m going to add a BIG caveat: this is all swirling through my head at the moment, I’m likely going to be very wrong on lots of things, and I have a HUGE amount to learn, so I’m really hoping my ideas will be bashed to pieces and reformed into something better. That’s the point: to learn and explore.
There’s always been a lot of talk about defining “Serverless”. That’s what you get for coming up with a term that describes what something isn’t rather than what something is.
There isn’t actually an authoritative definition of what “serverless” is, as it depends on what message you’re trying to convey, and that’s actually OK; we’ve somehow managed to eventually understand what “cloud” generally means.
Into Day 3 of the conference which is when more of the announcements start rolling in!
Andy Jassy Keynote
Even as I’ve settled into the Vegas time zone, this felt like an early wake-up to make it in time for the keynote, as there was going to be a queue. It would be far more sensible to stay in a hotel or other venue and watch remotely, but feeling the reactions in the room to the announcements seemed more interesting.
The keynote was streamed to some big spaces in the other conference venues, and a good change this year was also streaming it to many more of the breakout session rooms all across the venues, so you had more chance of seeing the keynote without queuing like crazy.
AWS has made so many announcements in the build-up to re:Invent one wonders whether they’re trying to hit a particular release number to flash onto the big screen! A quick way to see the list of announcements is to look at AWS What’s New 2018:
CEO Andy Jassy as usual was master of disclosure.
I had no intention of live blogging the keynote, far too much information and others who are quicker typists!
There was a DELUGE of announcements, some recaps from the past few weeks and many new ones…I needed to take stock a few times, pause and try to make sense of it all.
Well, we were super lucky. I was excited beforehand, as you can’t get more hackathon than a Robocar Rally, which is how this appeared in the session list. Then, during the Andy Jassy keynote this morning, DeepRacer, a machine learning car, was announced.
I’m the first to admit I haven’t done much machine learning, so this appealed to me as it was aimed at developers with no prior ML or robotics experience.
We joined up in pit crews. I teamed up with fellow UK VMUGer and Virtual Design Master winner Chris Porter (I’m not ashamed to grab onto the knowledge coattails when needed!).
The idea was to use machine learning to train the car on a virtual racetrack built in RoboMaker so it would learn to stay on track. Once the training model was done, the data could be downloaded to the actual car, which would then attempt to race a real track using the model learned from the training data.
We had to build a data-driven web app using React that lets users upload to shared, secure photo galleries. <sidebar>I’m struggling enough with photo management at home as it is, multiple people, too many photos and complicated ways to sync between devices, back them up and make them available to view! </sidebar>
I was recently invited to give an internal presentation on serverless computing at an enterprise financial company, as part of a general “what’s happening in IT” series. A wider range of people attended than I expected, some business people and some IT people.
The business lens
Interestingly, in the questions and feedback afterwards, some of the business people could see the value more easily than the IT people. Business people liked the coming together of business logic and IT and could see the benefit of just encoding what they need done in a serverless function without having to worry as much about all the IT infrastructure behind the scenes. Although the business people weren’t coders, someone likened the approach to using Excel macros. Some fairly sophisticated Excel functions have graced the trading desks of many an organisation. They didn’t need to think about infrastructure with Excel macros; Excel was just a platform you could code mathematical functions in. Sure, Excel macros had many issues, security, performance, availability etc., but they served the business need easily without having to get IT involved.
The IT lens
I then spoke to a development team leader afterwards. She’s very well versed in coding, a super smart algorithmic trading developer. She voiced valid concerns, though, that with serverless functions you couldn’t control the latency of the function, and so she couldn’t see any use for them in their work. Part of the workflow they develop is low-latency trading, pricing and analytics, which of course is very latency and performance sensitive. Some of the workflows include many steps necessary for compliance and auditing. A price range traded may need to be put into a database to reference later. A trade that is priced needs to be logged somewhere, and a completed trade needs to go into another database, which kicks off a whole other bunch of workflows to be reconciled in the back office. She mentioned the low-latency algo stuff was working well, but they sometimes struggled with performance and speed on a very busy trading day. Some of the compliance and auditing code sits very close, compute-wise, to the low-latency code. This makes it simpler to code the end-to-end transactions, but it means the most expensive low-latency compute cycles on physical server hardware are also being “wasted” on compliance and auditing code, which may struggle to keep up on an extra-busy trading day. Improving this would generally require scaling up existing compute resources. The compliance and auditing data was also used by many other integrated systems, so care needed to be taken that the secondary databases could keep up with low-latency demand.
This made me think of two things. First of all, this application would of course benefit from some splitting up. The app could be changed so the low-latency code pushes out the minimal amount of compliance and auditing information to another database, queue or even stream. A separate set of serverless functions could then very efficiently respond to an event or pick up these trades or prices and do whatever needs to be done (BTW, it’s not just functions that can be serverless; databases, queues and storage can be too!). This could also be massively scalable in parallel: one trade at a time or a million, and this wasn’t latency-sensitive stuff once the initial small record was created.
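As a hedged sketch of what that decoupling could look like (all names here are hypothetical, not from the conversation): the low-latency code drops a minimal trade record onto a queue, and a separate Lambda-style function handles the compliance write asynchronously.

```python
import json

def compliance_handler(event, context=None):
    """Hypothetical Lambda handler fed by an SQS-style event.

    Each record carries the minimal trade summary the low-latency code
    pushed out; the heavy compliance/audit work happens here, off the
    expensive trading hardware, and scales out per batch of records.
    """
    audited = []
    for record in event.get("Records", []):
        trade = json.loads(record["body"])
        # A real function would persist this to an audit database;
        # this sketch just collects the trade IDs it would write.
        audited.append(trade["trade_id"])
    return {"audited": audited}
```

Because each invocation only sees its own batch, a million trades simply means more parallel invocations rather than more load on the low-latency servers.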
CAN use or CAN’T use
Secondly, the development team leader was seeing how serverless functions COULD NOT be used for the latency-sensitive workloads, but not seeing how useful they COULD be for all the rest of the compliance and auditing code. The low-latency code was the most important, so naturally her focus is on that.
The splitting up of the app is an architectural discussion and may not in fact be suitable in the end, but the more important point is that sometimes we are a little myopic and only see what a technology CAN’T do rather than looking at the bigger picture and seeing what it CAN do. This can distance you from the business. Oh, and of course, Excel can do a LOT!
Tim Wagner, the AWS Serverless GM, and Jeet Kaul from FICO
This session was about new things in serverless.
Tim reiterated how amazing Lambda’s momentum is: it’s even inside a camera, which was announced in the keynote, and the maximum memory size has been doubled to 3 GB, which also doubles the CPU power.
Magic!
There was a mini magic show which was apparently a nod to something they did last year.
The idea is to show disappearing servers, as there are more and more serverless offerings, this means more and more disappearing servers.
Serverless Application Repository
It’s worth looking at the recently announced Serverless Application Repository; it’s a repository of serverless applications published by AWS and others.
Another session on managing serverless and the new architectural patterns required to make it a success. The idea is to create reusable serverless patterns with a continual eye on reducing costs.
The venue was spectacular: the Venetian Theatre is magnificent. It shows the importance AWS is placing on serverless, as most of the other sessions are in smaller rooms at the Aria.
Drew and Maitreya went through a number of patterns, giving operational and security best practices.
This was a fly-by-the-seat-of-your-pants session; so many AWS services were talked about, you needed an AWS dictionary to know what some of them are. If you are an infrastructure person who manages an OS, this was a very busy but insightful look at what is possible.
Serverless Foundations
For running your apps you can do it yourself with EC2 (and even Docker), use managed services like EMR, ES, RDS etc., and then there are services with no OS to manage, which is how they’re defining serverless: things like API Gateway, Kinesis Streams & Analytics, DynamoDB, S3, Step Functions, Config, X-Ray and Athena.
They reiterated the “never paying for idle” line, and it’s all built for HA and DR.
You need to be aware of cold starts: instantiate AWS clients and database clients outside the scope of the handler to take advantage of container re-use. Schedule invocations with CloudWatch Events for pre-warming. ENIs for VPC support are attached during cold start.
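A minimal illustration of the container re-use point (the `ExpensiveClient` here is a stand-in for a boto3 or database client, not real AWS code): anything created at module scope is built once per container, so warm invocations skip the setup cost.

```python
import time

class ExpensiveClient:
    """Stand-in for an AWS SDK or database client whose construction
    involves costly work such as TLS handshakes or connection setup."""
    def __init__(self):
        self.created_at = time.time()

# Module scope: runs once per container, i.e. only on a cold start.
CLIENT = ExpensiveClient()

def handler(event, context=None):
    # Warm invocations reuse the same CLIENT built during the cold start,
    # so they never pay the construction cost again.
    return {"client_created_at": CLIENT.created_at}
```

Had `CLIENT` been created inside `handler`, every invocation would pay the setup cost, not just the cold ones.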
Lambda Best Practices
Minimise package size to necessities
Separate Lambda handler from core logic.
Use environment variables to modify operational behaviours.
Self-contain dependencies in your function package.
Leverage “Max Memory Used” to right size your functions.
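To make those last points concrete, here is a hedged sketch (the function and variable names are mine, not from the session) of a handler kept separate from the core logic, with an environment variable modifying operational behaviour:

```python
import json
import os

def summarise_prices(prices):
    """Core business logic: plain Python, unit-testable without any
    Lambda event plumbing."""
    return {"min": min(prices), "max": max(prices), "count": len(prices)}

# Operational behaviour driven by environment variables, not code changes.
VERBOSE = os.environ.get("VERBOSE_LOGGING", "false") == "true"

def handler(event, context=None):
    # The handler only unpacks the event and delegates to core logic.
    prices = json.loads(event["body"])["prices"]
    summary = summarise_prices(prices)
    if VERBOSE:
        print("summary:", summary)
    return {"statusCode": 200, "body": json.dumps(summary)}
```

Keeping `summarise_prices` separate means you can test it directly, and flipping `VERBOSE_LOGGING` in the function configuration changes behaviour without a redeploy.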
Steven Challis & Derek Felska from AWS were the workshop leaders and it was very hands on, basically up to you and anyone else you wanted to team up with.
This is one of the reasons to actually attend a conference: you get to do things in person and interact with other people, rather than watching a recorded session or just following a step-by-step plan when you can’t confer.
Intro
Availability and fast performance are key to user experience. Building a global application from the start is traditionally extremely difficult. Think, before serverless, how you would have had to manage a global fleet of EC2 instances, load balancers, databases and storage. You would need to be a DNS guru, and keeping your compute generic yet regionalised was super tough. Enter serverless: the promise was there, but Lambda needed a whole lot of hacking to get functions to fire based on geographical access.
In the workshop we set up a fictional company called www.wildrydes.com (would you use a ride-sharing company called this?!). This wasn’t just a normal ride-sharing company though: the drivers were unicorns! They needed a customer support application which customers could use to report any issues, be it lost property or a grumpy unicorn! As the service was global, serverless was touted as the ideal platform to use as much as possible (of course, it’s re:Invent!). We needed to lash together Lambda, API Gateway, DynamoDB, Route 53, CloudFront and S3 for better availability. Cognito Federated Identities was also used for user authentication.
The workshop was also to highlight the new “API Gateway regional endpoints” feature which was recently released.