ASP.NET Core 2.2/3.0 Serverless Web APIs in AWS Lambda with a Custom Runtime and Lambda Warmer

Michael Dimoudis
5 min read · Apr 27, 2019

July 2020 update:
This blog post is now outdated. I have a new updated blog post titled ASP.NET Core Serverless Web API running in AWS Lambda, using API Gateway HTTP API, with a Lambda Warmer.

Lambda support for .NET Core 3.1 Update, April 2020

On 31 March 2020 AWS announced Lambda support for .NET Core 3.1! The AWS blog post regarding the announcement is here: https://aws.amazon.com/blogs/compute/announcing-aws-lambda-supports-for-net-core-3-1/

.NET Core 3.0 Update, October 2019

Now that .NET Core 3.0 has been released, I thought I’d quickly revisit this blog post. ASP.NET Core 3.0 compatibility broke during the previews, as per the PR discussions; however, AWS has now released an update to Amazon.Lambda.AspNetCoreServer. I’ve added a .NET Core 3.0 PR to my sample GitHub repo. Note that .NET Core 3.1 LTS should arrive in November 2019, and AWS Lambda should hopefully support it natively not long after.

A note on some (non-scientific) cold start speed tests I observed running the sample ASP.NET Core Web API (a plain .NET Core Lambda function will behave differently):

  • Lambda native runtime 2.1 API request returns in ~2.85 seconds
  • Lambda custom runtime 2.2 API request returns in ~7 seconds
  • Lambda custom runtime 3.0 API request returns in ~4.25 seconds
  • Lambda custom runtime 3.0, with PublishReadyToRun flag to true, API request returns in ~2.85 seconds

So the custom runtime is a lot slower than the native runtime. However, .NET Core 3.0 brings improvements in app startup performance which, coupled with setting the PublishReadyToRun flag to true, bring cold start times down to roughly match the native runtime.

PublishReadyToRun improves the startup time of your application by compiling it in the ReadyToRun (R2R) format, reducing the amount of work the JIT compiler has to do, at the cost of a larger binary. To use the PublishReadyToRun=true flag, though, you must compile the app on Linux. I used CodeBuild with an Amazon Linux 2 image to build and deploy this PR; the buildspec.yml file is included in the PR.
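As a rough sketch, the relevant csproj settings might look like this (the linux-x64 RuntimeIdentifier is my assumption of a typical Lambda target; R2R output is platform-specific, which is part of why the Linux build is needed):

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <!-- Compile ahead-of-time to the ReadyToRun format to cut JIT work at startup -->
  <PublishReadyToRun>true</PublishReadyToRun>
  <!-- R2R output is platform-specific; Lambda runs on Amazon Linux -->
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
</PropertyGroup>
```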

It’s still preferable to use .NET Core 2.1; however, if you require newer features, it’s best to move straight to .NET Core 3.0 (skipping 2.2) with R2R turned on. A more scientific and detailed speed test has been performed by Zac Charles here.

Below is the original blog post…

AWS Lambda is versatile and cheap, and it supports full-blown ASP.NET Core Web APIs, making it cost effective to run an API entirely serverless. This is nothing new (it was announced back in January 2017); what is new is the ability to break out of the .NET Core versions AWS Lambda natively supports. As of this writing, the latest supported by Lambda is LTS .NET Core 2.1, at patch version 2.1.8.

Recently Amazon announced Amazon.Lambda.RuntimeSupport, which “enables you to easily create Lambda functions using .NET standard 2.0-compatible runtimes”. In other words, you create a .NET Core Lambda as a self-contained deployment so it won’t rely on the presence of shared components on the target system, as the .NET Core libraries and runtime are included with the application. This means we are now able to deploy .NET Core 2.2 and 3.0 Lambdas!

However, the above blog post and Lambda template only cover a single AWS Lambda function, not an ASP.NET Core serverless Web API. For the last few months I’ve been building an Aussie weather app, using an ASP.NET Core 2.1 Web API hosted as a serverless application in AWS Lambda as my API backend. I decided to see if I could convert this to ASP.NET Core 2.2 using Amazon.Lambda.RuntimeSupport.

And it was actually quite simple!

To play along at home, everything to get you started is in this repo. The master branch is an empty ASP.NET Core 2.1 Web API created with the AWS Toolkit for Visual Studio implementation of the AWS Serverless Application Model (AWS SAM). You can deploy this to your AWS account by running dotnet lambda deploy-serverless (make sure you change the S3 bucket name in aws-lambda-tools-defaults.json, as this is where the artifacts will be uploaded for deployment), or dotnet run locally. You can then hit the api/values controller and get ["value1","value2"] back.
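For reference, the commands from the paragraph above (deploy-serverless assumes the Amazon.Lambda.Tools global tool is installed):

```shell
# One-time: install the Amazon.Lambda.Tools CLI that provides deploy-serverless
dotnet tool install -g Amazon.Lambda.Tools

# Deploy the serverless template to your AWS account
dotnet lambda deploy-serverless

# Or run the Web API locally on Kestrel
dotnet run
```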

Now it’s time to convert this to ASP.NET Core 2.2 with a custom runtime!

PR #1 shows what you need to change. Most is documented here but I’ll quickly recap what needs to be done; however, it’s easier to view the PR directly on GitHub:

  • As per any upgrade, change the csproj file to target the netcoreapp2.2 framework, and update your nuget packages for ASP.NET Core 2.2 support, as well as setting the MVC compatibility version to 2.2
  • Add a bash script called bootstrap that the Lambda host calls to start the custom runtime. This needs to invoke your ASP.NET Core project’s executable (AWSServerless1 in my demo), so the bootstrap bash script will contain the line /var/task/AWSServerless1. Remember to update your csproj so this file is included when you deploy
  • In aws-lambda-tools-defaults.json update the framework to netcoreapp2.2 with MSBuild parameters of --self-contained true, as we want all libraries and runtimes included in the deployment package
  • In the serverless.template, set the Handler to anything; it won’t be used, but CloudFormation requires it. As per the AWS docs I chose not_required_for_custom_runtime. Also change the Runtime to provided
  • Now the most important part will be to update the static Main method in LocalEntryPoint. This will no longer only be for local development, but will now also be the entry point for the Lambda function too. I wrapped the existing call in an #if DEBUG C# preprocessor directive so we can still debug the Lambda locally. However, when built in release mode it will execute the magic Lambda bootstrap in Amazon.Lambda.RuntimeSupport
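Putting the last two bullet points together, here is a sketch of the two key files, assuming the demo project name AWSServerless1. PR #1 has the exact code; the C# below follows the pattern from the Amazon.Lambda.RuntimeSupport announcement, so treat it as indicative rather than a drop-in.

The bootstrap script:

```shell
#!/bin/bash
/var/task/AWSServerless1
```

And the updated LocalEntryPoint:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.Json;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class LocalEntryPoint
{
    public static void Main(string[] args)
    {
#if DEBUG
        // Local development: run the normal Kestrel host
        CreateWebHostBuilder(args).Build().Run();
#else
        // Release build inside Lambda: hand control to the RuntimeSupport
        // bootstrap, forwarding each API Gateway request into ASP.NET Core
        var lambdaEntry = new LambdaEntryPoint();
        var handler = (Func<APIGatewayProxyRequest, ILambdaContext, Task<APIGatewayProxyResponse>>)lambdaEntry.FunctionHandlerAsync;
        using (var wrapper = HandlerWrapper.GetHandlerWrapper(handler, new JsonSerializer()))
        using (var bootstrap = new LambdaBootstrap(wrapper))
        {
            bootstrap.RunAsync().Wait();
        }
#endif
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}
```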

This should be it, you can dotnet run locally, or dotnet lambda deploy-serverless to run and test this in your Amazon account.

Voila! Simple as that.

Bonus round, keeping the Lambda warm!

If you’re hosting a public facing API in a Lambda and it doesn’t get hit every 5 minutes or so, it will get cold. When the next consumer accesses the API they will be hit by a 5–8 second startup penalty. You can mitigate errors in the front end by retrying the call if the first one times out. But in a situation where you require responsive API calls all the time, especially for my situation in a weather app where I’d like fresh weather available as soon as someone opens the app, an 8 second penalty is not ideal.
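A client-side retry along those lines might look like this hypothetical helper (illustrative only, not from the app’s actual code): fail fast on the first attempt, then retry with a generous timeout if a cold start ate the first call.

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class ColdStartClient
{
    // Try once with a short timeout; if that times out (likely a cold Lambda),
    // retry and let the second call wait out the cold start.
    public static async Task<HttpResponseMessage> GetWithColdStartRetryAsync(
        HttpClient client, string url)
    {
        try
        {
            using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3)))
            {
                return await client.GetAsync(url, cts.Token);
            }
        }
        catch (OperationCanceledException)
        {
            // First attempt timed out; the Lambda should be warm now
            return await client.GetAsync(url);
        }
    }
}
```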

Robert Schiefer has written a great blog post detailing how to keep an ASP.NET Core Web API warm. I won’t go into the details, as that post has nearly everything covered, but you can look at PR #2 for the updates to keep your Lambda warm.

The main difference between that blog post and mine is using the AWS Serverless Application Model (SAM) template for the CloudWatch event, instead of writing the CloudFormation for it yourself. The AWS SAM template takes care of everything for you, including permissions. Nice!
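As a sketch, a SAM scheduled event looks something like this (shown in YAML; the demo’s serverless.template uses the JSON equivalent, and the event name here is my own; see PR #2 for the real change). SAM wires up the CloudWatch Events rule and the invoke permission for you:

```yaml
AspNetCoreFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ...existing function properties...
    Events:
      # Ping the function every 5 minutes so it stays warm
      WarmerSchedule:
        Type: Schedule
        Properties:
          Schedule: rate(5 minutes)
```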

Also be aware of the AWS Lambda concurrency model, detailed in a great blog post by Yan Cui, so you can optimise your concurrency settings when keeping the Lambda warm.

That’s it! All code can be found here, I hope this helps in your ASP.NET Core serverless journey, and you enjoyed this blog.


Software Developer, .NET & Xamarin Forms, self confessed cool geek 👨🏻‍💻 love ⚽️ @smfc & @arsenal, I ♥️ Apple, Microsoft, AWS and iOS! Creator @auweatherapp🌦