
AWS re:Invent 2017: Serverless Architectural Patterns and Best Practices – ARC401

November 28th, 2017

Another session on managing serverless and the new architectural patterns required to make it a success. The idea is to create reusable serverless patterns with a continual eye on reducing costs.

The venue was spectacular: the magnificent Venetian Theatre. It shows the importance AWS is placing on serverless, as most of the other sessions are in smaller rooms at the Aria.

Drew and Maitreya went through a number of patterns, giving operational and security best practices.

This was a fly-by-the-seat-of-your-pants session; so many AWS services were talked about that you needed an AWS dictionary to know what some of them are. If you are an infrastructure person who manages an OS, this was a very busy but insightful look at what is possible.

Serverless Foundations

For running your apps you can do it yourself with EC2 or even Docker, use managed services like EMR, ES and RDS, or go for services with no OS at all, which is how they're defining serverless: API Gateway, Kinesis Streams & Analytics, DynamoDB, S3, Step Functions, Config, X-Ray and Athena.

They reiterated the “never paying for idle” line, and it’s all built for HA and DR.

You need to be aware of cold starts: instantiate AWS clients and database clients outside the scope of the handler to take advantage of container re-use, and schedule CloudWatch Events for pre-warming. ENIs for VPC support are attached during the cold start.
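To make that concrete, here is a minimal Python sketch (the table name is just an assumption) of creating clients outside the handler so warm invocations can reuse them:

```python
import boto3

# Created once during the cold start, then reused by warm invocations of the same container.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")  # hypothetical table name

def handler(event, context):
    # Only per-request work happens inside the handler.
    response = table.get_item(Key={"id": event["id"]})
    return response.get("Item", {})
```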

Lambda Best Practices

  • Minimise package size to necessities.
  • Separate the Lambda handler from core logic (see the sketch after this list).
  • Use environment variables to modify operational behaviours.
  • Self-contain dependencies in your function package.
  • Leverage “Max Memory Used” to right-size your functions.
  • Delete large unused functions.
  • Use X-Ray integration for great insights.
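Here is a rough sketch of the handler/core-logic split and environment variables in practice; the function and variable names are just illustrative:

```python
import json
import os

# Operational behaviour comes from environment variables, not code changes.
TABLE_NAME = os.environ.get("TABLE_NAME", "orders")  # assumed variable name
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def process_order(order):
    # Core logic kept separate from the Lambda plumbing so it can be unit tested directly.
    return {"order_id": order["id"], "status": "processed"}

def handler(event, context):
    # Thin handler: unwrap the event, delegate to core logic, wrap the response.
    order = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps(process_order(order))}
```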


The Serverless Application Model (SAM) is a great CloudFormation extension optimised for serverless, with new serverless resource types: functions, APIs and tables. It supports anything CloudFormation supports and is an open specification (Apache 2.0).

SAM Local is very interesting as it allows you to develop and test Lambda locally.

You can invoke functions locally with mock serverless events, there’s local template validation, and, very usefully, a local API Gateway with hot reloading.

https://github.com/awslabs/aws-sam-local

The best way to deploy CloudFormation is using CodePipeline with the various modules.

There is a new CodeDeploy capability for Lambda canary deployments, so you can direct a portion of traffic to a new version and slowly migrate over or test usability. You can monitor stability with CloudWatch and initiate a rollback if needed, and these can all be incorporated into your SAM templates.

Web Application Pattern

Standard web apps use CloudFront and S3 for static content, API Gateway for dynamic content, and data stored in DynamoDB. Cognito is also involved for authentication.

Security for web applications, however, needs extra attention. IAM is obviously the base for everything, but for CloudFront you have DDoS protection, Origin Access Identity (OAI) as well as geo-restriction. S3 has bucket policies and ACLs. For API Gateway you have throttling, caching, usage plans and certificate management.

Interestingly, you can use Cognito for IAM authentication, using a Lambda custom authorizer to return an IAM policy. So Cognito can basically grab the right IAM policy based on the user who is logged in. There is a lot of flexibility in Cognito to either use 3rd-party providers from Facebook to Google or maintain your own user base.
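As a rough idea of what the custom authorizer part looks like, here is a minimal Python sketch; the token check is a stand-in, and in reality you would validate the Cognito or 3rd-party token before building the policy:

```python
def handler(event, context):
    # API Gateway passes the caller's token and the ARN of the method being invoked.
    token = event.get("authorizationToken", "")

    # Placeholder for real token validation (e.g. verifying a Cognito JWT).
    effect = "Allow" if token == "allow-me" else "Deny"

    # Return an IAM policy scoped to the API method, based on who the user is.
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```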

They discussed the newly released multi-region support for API Gateway, which was exactly what the workshop I did yesterday covered, very powerful: Build a Multi-Region Serverless Application for Resilience and High Availability Workshop.

There are a few frameworks for serverless web apps: AWS Chalice for Python, AWS Serverless Express for Node.js and a few for Java.

Data Lake Pattern

I got lost down the rabbit hole a bit with this one, there was so much information.

There are some serverless data lake characteristics:

  • Collect/store/process/consume and analyse all organisational data
  • Support structured, semi-structured and unstructured data
  • AI/ML and BI/Analytics use cases
  • Fast automated ingestion
  • Schema on Read
  • Complementary to the Enterprise Data Warehouse (EDW)
  • Should decouple compute and storage

At the centre of the architecture is S3, which means no compute to manage for storage. A virtually unlimited number of objects and volume of data can be stored, and S3 can handle a huge amount of bandwidth. You can use S3 tagging to effectively categorise your data.
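For example, tagging an object as it lands is a one-liner with boto3; the bucket, key and tags below are made up:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object with tags so it can be categorised and filtered later.
s3.put_object(
    Bucket="example-data-lake",             # hypothetical bucket
    Key="raw/clickstream/2017-11-28.json",  # hypothetical key
    Body=b"{}",
    Tagging="classification=internal&source=clickstream",
)
```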

You are going to need to ingest lots of data with Kinesis Streams and Firehose, perhaps via Direct Connect.

You need to discover, catalogue and search with DynamoDB, AWS Glue (using crawlers to look inside files and infer schema) and Elasticsearch. This can be event-based, so Lambda functions run as soon as something arrives, perhaps updating metadata, not just handling the actual data.
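A minimal sketch of that event-based idea, assuming an S3-triggered Lambda and a DynamoDB table called data-catalogue (both names are assumptions):

```python
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")
catalogue = boto3.resource("dynamodb").Table("data-catalogue")  # assumed table name

def handler(event, context):
    # Runs as soon as an object lands in the bucket; records metadata rather than
    # touching the data itself.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        catalogue.put_item(Item={
            "object_key": key,
            "bucket": bucket,
            "size_bytes": head["ContentLength"],
            "last_modified": head["LastModified"].isoformat(),
        })
```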

For analytics and processing there are a lot of tools alongside Lambda. Athena can be used as a serverless interactive query service, in effect using SQL to query S3 at even 4 GB/sec. You can auto-partition your data for more parallel processing, but you need to think about optimising your file sizes. There is also QuickSight for, say, geospatial visualisations, plus Glue and Redshift.
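By way of illustration, kicking off an Athena query from Python looks something like this; the database, table, partition columns and results bucket are all assumptions:

```python
import boto3

athena = boto3.client("athena")

# Athena reads the data straight from S3; partition columns keep the scan small.
response = athena.start_query_execution(
    QueryString=(
        "SELECT page, COUNT(*) AS hits "
        "FROM clickstream WHERE year = '2017' AND month = '11' "
        "GROUP BY page ORDER BY hits DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```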

There’s always a security component to a data lake, which you can address with IAM, KMS, CloudTrail and Macie, a security service that uses machine learning to find and protect sensitive data.

For the user interface you have Cognito, API Gateway and of course IAM.

Stream Processing Pattern

This is all about taking in, for example, clickstream data from your website: vast amounts of data streaming in at a high ingest rate, needing near real-time processing, with lots of spiky traffic coming from a lot of devices. Messages need to be durable and ordered correctly to recreate the actions the customer took.

Raw records can be brought into Kinesis Firehose, which buffers them into micro-batches and then hands them off to other services, at which point they are no longer streams. These can head to Lambda for transformations, to, say, convert a number into a date format. They can then be sent back to Firehose for delivery to S3 as buffered files, Redshift for table loads, or Elasticsearch for domain loads (sorry, not sure what this even means!). All of it can of course be monitored with CloudWatch metrics taken from Firehose, to see if you are falling behind.
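The transformation Lambda itself is fairly simple; here is a sketch where the “transformation” is just upper-casing the payload, standing in for whatever conversion you actually need:

```python
import base64

def handler(event, context):
    # Firehose delivers a micro-batch; each record must come back with the same
    # recordId and a result of "Ok", "Dropped" or "ProcessingFailed".
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()  # stand-in for a real transformation
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```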

You may need to tune Firehose buffer size and interval to decide how frequently messages are delivered and how large the batches are.

For the Lambda functions:

  • Tune batch sizes – higher batch sizes = fewer Lambda invocations
  • Tune memory settings – higher memory = shorter execution time

Use the Kinesis Producer Library (KPL) to batch messages and saturate Kinesis Streams.

You should also enable compression to reduce storage costs. You can also enable source record backup for transformations, so anything that isn’t processed correctly can be captured and analysed separately.

There are some more best practices in Amazon Redshift Best Practices for Loading Data.

Another scenario mentioned is taking in IoT data for sensor data collections.

You can use IoT rules and actions that send data through Kinesis Streams for real-time analytics, or through Kinesis Firehose and on to S3 for buffered files.

Thomson Reuters is using a similar pattern for usage analysis tracking. It currently does 4,000 requests per second but they’re planning on ramping this up to 10,000 requests/second or 25 billion requests/month!

They are able to take in new events and send them on to user dashboards in less than 10 seconds with no data loss.

Operations Automation

Periodic Jobs

With AWS Ops Automator and a large fleet of AWS accounts, you can use Lambda functions in a main account to manage snapshots in your other accounts, for example. CloudWatch Events can be used as a scheduled task to fire off the functions.
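A rough sketch of the idea, using EBS snapshots as the example and with a made-up account ID, role name and tag, where CloudWatch Events invokes the function on a schedule:

```python
import boto3

def handler(event, context):
    # Assume a role in the member account (account ID and role name are placeholders).
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::111122223333:role/OpsAutomationRole",
        RoleSessionName="scheduled-snapshots",
    )["Credentials"]

    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Snapshot every EBS volume the member account has tagged for backup.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    for volume in volumes:
        ec2.create_snapshot(VolumeId=volume["VolumeId"],
                            Description="Scheduled cross-account snapshot")
```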

You can use workflows with Step Functions, for example, to run an image-processing transaction that in parallel extracts image metadata, creates a thumbnail and uses Rekognition to see what’s in the picture. Once the whole state machine has completed you can store the metadata back in DynamoDB, for example.

Step Functions state machines are very powerful.
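To give a feel for it, here is a sketch of such a parallel image-processing state machine defined and created from Python; all of the ARNs and names are placeholders:

```python
import json
import boto3

# Parallel branches for metadata extraction, thumbnailing and Rekognition labelling,
# followed by a step that stores the combined results.
definition = {
    "StartAt": "ProcessImage",
    "States": {
        "ProcessImage": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "ExtractMetadata",
                 "States": {"ExtractMetadata": {
                     "Type": "Task",
                     "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract-metadata",
                     "End": True}}},
                {"StartAt": "CreateThumbnail",
                 "States": {"CreateThumbnail": {
                     "Type": "Task",
                     "Resource": "arn:aws:lambda:us-east-1:111122223333:function:create-thumbnail",
                     "End": True}}},
                {"StartAt": "DetectLabels",
                 "States": {"DetectLabels": {
                     "Type": "Task",
                     "Resource": "arn:aws:lambda:us-east-1:111122223333:function:detect-labels",
                     "End": True}}},
            ],
            "Next": "StoreMetadata",
        },
        "StoreMetadata": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:store-metadata",
            "End": True,
        },
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="image-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsRole",  # placeholder role
)
```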

Enforcing Security Policies Pattern

A CloudWatch Events rule can be created to watch for new rules being added; these can be passed to Lambda to check against allowed and denied rules. You can then send an email and delete the offending rule, so your exposure is only a few seconds.
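A sketch of what that remediation function might look like, assuming the CloudWatch Events rule matches AuthorizeSecurityGroupIngress API calls; the allowed ports are made up, and an SNS topic stands in for the email notification:

```python
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

ALLOWED_PORTS = {80, 443}  # assumed policy: only web ports may be opened

def handler(event, context):
    # Invoked by a CloudWatch Events rule matching AuthorizeSecurityGroupIngress calls.
    params = event["detail"]["requestParameters"]
    group_id = params["groupId"]
    for perm in params["ipPermissions"]["items"]:
        if perm["fromPort"] in ALLOWED_PORTS:
            continue
        # Revoke the offending rule, then tell someone about it.
        ec2.revoke_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{
                "IpProtocol": perm["ipProtocol"],
                "FromPort": perm["fromPort"],
                "ToPort": perm["toPort"],
                "IpRanges": [{"CidrIp": r["cidrIp"]} for r in perm["ipRanges"]["items"]],
            }],
        )
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:111122223333:security-alerts",  # placeholder
            Message="Revoked disallowed ingress rule on {} (port {})".format(
                group_id, perm["fromPort"]),
        )
```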

There’s lots that can be done with security automation; DevSecOps is all about using automated tools to enforce security compliance. I love the idea of Lambda being used as “Operations as Code”, particularly with compliance: Lambda functions firing off to maintain your standards and stop the bad stuff happening. I was talking to a “traditional” network security engineer (his definition of what he does) last night who was fired up about “compliance as code” and using Lambda to manage firewall rule compliance.

A jam-packed session with lots of information; I needed to decompress afterwards! Very much worth looking at the video if it’s posted.
