
AWS re:Invent 2018: Serverless Retail Technologies at Scale Workshop – RET302

November 26th, 2018

Mike Mackay, Toby Knight, Bastien Leblanc, Imran Dawood, Mike Morain, Andrew Kane, Samuel Waymouth and Lee Packham, all from Amazon, and Charles Wilkinson, Head of Architecture at River Island

This was a retail-focused workshop which, although not in my normal sphere of interest, piqued my interest as it's a great example of designing technology for massive, seasonal scale. There's a difference between designing a system for continuous scale and one for rapid-change scale. Black Friday has just happened, so the reasons are fresh in our minds. Cost is obviously a factor: you don't want to be paying for your peak system capacity when it isn't being used.


Part of the consideration is being able to satisfy all transactions going through the system when it's sale time. Parts of the system are being flooded with sale-related requests, yet the non-impacted parts of the system must not be squeezed. You can't let the rest of your IT fall over just because you're doing a sale.

The workshop used AWS Lambda, Amazon SNS, Amazon SQS, and Amazon API Gateway with existing non-microservices backend systems to divert traffic from the core critical infrastructure using Amazon CloudFront and AWS Lambda@Edge.

You can also play along at home with the instructions at https://docs.lee.fishing

Set up

We started with setting up some of the environment, which was going to take some time, and went through an explanation of the workshop. There's a pre-written shop front-end written in Django pointing to a PostgreSQL database on RDS, and a pre-written Java backend exposing a RESTful API.

Then the interesting part: we're going to replace this Java back-end with an AWS Lambda + API Gateway version, but reuse the Java code. This was highlighted as a Lambda myth, that you need to change your code. AWS provides containers for existing code in Java, NodeJS/JavaScript, .NET Core and Go. This means you can take full advantage of the scale Lambda has to offer while using your existing "legacy" code. We were also going to deploy a static mini site on S3 using CloudFormation and direct the traffic with Lambda@Edge traffic rules.

The why from a customer

Charles Wilkinson from well-known UK retailer River Island then came up while things were deploying in the background. He went through the history of River Island IT. They had a Core Merchandising System (CMS), a typically large monolith purchased and then customised, making it harder to upgrade. Lots of large Oracle databases.
The backend ended up being broken apart into smaller parts as it seemed easier, but the business logic ended up having its own gravitational pull and updating it became scary; there was baggage. There were lots of batch jobs and things were clunky, slow and not as scalable. Serverless of course to the rescue, and they managed to refactor a lot of their processes.

Cloud9

We started off setting up a Cloud9 IDE environment. Cloud9 is an IDE in the cloud, so you can manage your AWS environment without having to SSH into anything. Cloud9 deploys an EC2 instance behind the scenes. There's a super useful cost-saving setting – we set it to 4 hours for this workshop. This means the instance will stop itself automatically after the assigned 4 hours, so you don't get a bill surprise.

Then we looked into the CloudFormation stack that was deployed as part of the setup. It includes CloudFront as well as an Application Load Balancer that routes traffic to the fishing shop's frontend application, which consists of two blocks, both deployed using AWS Elastic Beanstalk.

Next, we deployed the basic web app, which was a Python frontend with Django and an orders backend in Java using Spring Boot. This was the "legacy" environment that we're going to bypass later.

We're obviously expecting vast amounts of traffic to our successful sale. EC2 Auto Scaling groups are typically used today to react to traffic growth, driven by CloudWatch metrics. Lambda, however, does all of this scaling automatically; you don't need to configure any scaling groups or concern yourself with CloudWatch for this.

Java code reuse for Lambda

The point of reusing your code for Lambda came up: you can use the AWS Serverless Java Container, available on GitHub. It supports Spring Boot, which means we can use it to run our service without changing our business logic. API Gateway will be used to connect to the backend Java Lambda function via HTTP.
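To make that concrete, here's a minimal sketch of what the Lambda entry point looks like with the container. The ShopApplication class name is an assumption, standing in for the workshop's existing Spring Boot application class, which stays untouched.

```java
import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Lambda handler that wraps the existing Spring Boot app without touching its business logic.
// API Gateway proxy events are translated into servlet requests by the container.
public class StreamLambdaHandler implements RequestStreamHandler {

    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // ShopApplication is the existing @SpringBootApplication class (name assumed here)
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(ShopApplication.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialise Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Proxy the API Gateway request into the Spring Boot container and stream the response back
        handler.proxyStream(input, output, context);
    }
}
```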

The build process differs when preparing the app for Lambda, but no code changes are required. We then needed to point our Elastic Beanstalk shop front-end at this new API.

Then came another interesting part. We wanted to ensure that when people were buying our most popular product, the "Muskie Casting Rod", it wouldn't stop other customers from buying other things, for example our "Tourney Casting Rod".

Split Origin CloudFront

Split origin CloudFront to the rescue, which can specify traffic routing behaviour based on the request URI. We created a CloudFront behaviour that checks the URI against "shop/rods/muskie-casting-rod/*". The rest of the customer traffic would go via the normal, older way of serving up the website through the CMS, and only this product would be happily serverless. Of course you can move more things over to Lambda, but the point being made was that you don't have to throw out everything to move to serverless: you can redirect orders for a single product via an API Gateway + Lambda serverless route and not touch anything else. That is powerful.
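As a rough illustration of that split in the AWS SDK for Java (the origin ID and other values here are my assumptions, not the workshop's actual configuration), the extra cache behaviour really only needs the path pattern and a target origin; anything that doesn't match falls through to the default behaviour pointing at the existing site.

```java
import com.amazonaws.services.cloudfront.model.CacheBehavior;
import com.amazonaws.services.cloudfront.model.CookiePreference;
import com.amazonaws.services.cloudfront.model.ForwardedValues;
import com.amazonaws.services.cloudfront.model.TrustedSigners;
import com.amazonaws.services.cloudfront.model.ViewerProtocolPolicy;

public class SplitOriginSketch {
    public static void main(String[] args) {
        // Extra behaviour: anything under the muskie casting rod path goes to the
        // serverless origin (API Gateway + Lambda); everything else stays on the
        // default behaviour serving the existing site.
        CacheBehavior muskieBehaviour = new CacheBehavior()
                .withPathPattern("shop/rods/muskie-casting-rod/*")
                .withTargetOriginId("serverless-api-gateway-origin") // origin ID assumed
                .withViewerProtocolPolicy(ViewerProtocolPolicy.RedirectToHttps)
                .withForwardedValues(new ForwardedValues()
                        .withQueryString(true)
                        .withCookies(new CookiePreference().withForward("all")))
                .withTrustedSigners(new TrustedSigners().withEnabled(false).withQuantity(0))
                .withMinTTL(0L);

        // In a real update this behaviour would be added to the distribution's
        // CacheBehaviors list via UpdateDistribution, or declared in CloudFormation.
        System.out.println(muskieBehaviour);
    }
}
```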

Tracing

We then looked at AWS X-Ray as a distributed tracing system. X-Ray builds a service map, and you can trace connections to a URI with a simple filter so you can see exactly the path of requests. So in our example you can see the call go through API Gateway and on to the Lambda function. If the Lambda function were then to call other services such as RDS or DynamoDB, you would see that too.
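The workshop mostly used the X-Ray console view, but for context this is roughly how a custom subsegment could be added from Java with the X-Ray SDK so extra detail shows up on that service map. The subsegment name and annotation below are my own illustration, not the workshop's code; downstream AWS SDK calls are traced automatically if the X-Ray SDK instrumentor is on the classpath and active tracing is enabled on the function.

```java
import com.amazonaws.xray.AWSXRay;
import com.amazonaws.xray.entities.Subsegment;

public class OrderTracing {

    // Wrap a piece of order-handling work in a custom X-Ray subsegment so it
    // shows up on the service map underneath the Lambda function's segment.
    public void saveOrder(String productSlug) {
        Subsegment subsegment = AWSXRay.beginSubsegment("save-order"); // name is illustrative
        try {
            subsegment.putAnnotation("product", productSlug); // searchable in the X-Ray console
            // ... call RDS / DynamoDB here; with the SDK instrumentor on the
            // classpath those downstream calls are traced automatically ...
        } catch (RuntimeException e) {
            subsegment.addException(e);
            throw e;
        } finally {
            AWSXRay.endSubsegment();
        }
    }
}
```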

The last part was to look at Simple Queue Service (SQS) to provide a buffer between Lambda functions and downstream services; these could be databases or even physical warehouse limitations. If you get hammered with orders for a product but want to ensure your fulfillment database doesn't fall over, SQS allows orders to queue up somewhere you can process them from when your back-end services have capacity.
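A minimal sketch of that buffering pattern with the AWS SDK for Java, assuming a queue name and message shape of my own invention rather than the workshop's: the order-taking side just drops the order onto the queue and returns, and a worker drains it at whatever rate the fulfillment database can handle.

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

import java.util.List;

public class OrderBuffer {

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    // Queue URL would normally come from configuration or CloudFormation outputs.
    private final String queueUrl = sqs.getQueueUrl("shop-orders").getQueueUrl(); // queue name assumed

    // Called by the order-taking Lambda: accept the order quickly and move on.
    public void enqueueOrder(String orderJson) {
        sqs.sendMessage(queueUrl, orderJson);
    }

    // Called by a worker with capacity: pull a small batch and write it to the
    // fulfillment database at a pace it can cope with.
    public void drainOnce() {
        ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
                .withMaxNumberOfMessages(10)
                .withWaitTimeSeconds(20); // long polling
        List<Message> messages = sqs.receiveMessage(request).getMessages();
        for (Message message : messages) {
            // ... persist the order to the fulfillment database here ...
            sqs.deleteMessage(queueUrl, message.getReceiptHandle());
        }
    }
}
```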

Being able to use the cool new crazy scale toys without rewriting existing investments makes architectural and cost sense.

This was a well-delivered workshop. I had some issues setting things up and fell behind, but the presenter, Lee Packham, went through the workshop on screen as well, so I could follow along.

Categories: AWS, re:Invent