The Serverless Framework is great for building serverless applications on AWS, and over time our team identified a set of boilerplate good practices we wanted to include in all of our projects for observability and reporting reasons.
In the end, it is about delivering value fast without compromising quality in everything we do. The team wanted to be able to add and update our good practices across multiple Serverless projects as they evolved, and to do so quickly and efficiently. The plugin mechanism of the Serverless Framework provided the means, so we built a plugin to take care of this for us.
“In the end, it is about delivering value fast without compromising quality”
At first, we created the package in a private AWS CodeArtifact repository, but we soon realized it could be useful to other developers and teams out there, so it is our pleasure to introduce the AWS Good Practices Plugin for the Serverless Framework.
We have made the plugin available for easy installation via NPM:
$ npm install @labinhood/serverless-aws-good-practices --save
Check it out on NPM:
Once installed, the plugin can be enabled in your Serverless project with:
plugins:
- '@labinhood/serverless-aws-good-practices'
The plugin has sensible defaults, so you might not need to do much else, but it can also be customized to your needs. For more information about the available options and their behavior, see the plugin’s full documentation on GitHub:
Plugin’s Features
The node module has two complementary parts bundled into the same package: (1) the “Serverless Plugin”, and (2) “Lambda Utils”.
Part 1: Serverless Plugin Functionality
Once enabled, the plugin adds the following functionality and verifications to your Serverless project at deployment time:
1. Create Standard Resource Tags
The plugin adds a set of standard resource tags to the main AWS resources created by the Serverless Framework during deployment. We tried to provide a set of values appropriate for most classification and reporting needs, and to auto-populate them with runtime values by default.
AWS resource tags are of great help for reporting, especially in a multi-account and/or multi-application environment, given they can provide high-level views of cost across the organization by combining those tags in different ways, e.g. how much all AppEnv = ‘staging’ resources are costing us across departments, or only those with Department = ‘marketing’, etc.
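For reference, this is roughly what the equivalent manual setup looks like without the plugin: the Serverless Framework’s provider.stackTags property propagates tags to the stack’s resources. The tag names below are illustrative, not the plugin’s exact set:

```yaml
# serverless.yml (manual equivalent; the plugin applies tags like these
# automatically — tag names here are illustrative)
provider:
  name: aws
  stackTags:
    AppEnv: staging
    Department: marketing
    ServiceName: ${self:service}
```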
2. Create Standard Environment Variables
Similar to the automatic creation of resource tags, the plugin also injects a set of standard environment variables, so your Lambda functions have them available by default. Environment variables such as AGP_APP_ACCOUNT_ID, AGP_APP_REGION, AGP_APP_NAME, AGP_SERVICE_NAME, AGP_APP_ENV, and AGP_APP_ROLE are added, to name a few.
The environment variables added by the plugin can be used in a variety of ways from your code, but they are also complementary to the “Logger Instance” available in the “Lambda Utils” part of the package: if the environment variables are present, the Logger Instance extends every log entry sent to CloudWatch with additional fields such as awsAccountId, appName, serviceName, appEnv, and appVersion.
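As a minimal sketch of reading these variables from your own code, the helper below is hypothetical (it is not part of the package) and just shows the idea:

```javascript
// Hypothetical helper, not part of the package: builds a small context
// object from the environment variables the plugin injects.
function buildAppContext(env = process.env) {
  return {
    accountId: env.AGP_APP_ACCOUNT_ID,
    region: env.AGP_APP_REGION,
    appName: env.AGP_APP_NAME,
    serviceName: env.AGP_SERVICE_NAME,
    appEnv: env.AGP_APP_ENV,
    appRole: env.AGP_APP_ROLE,
  };
}
```

A handler could then branch on `buildAppContext().appEnv`, for example, to enable extra diagnostics outside of Production.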
3. New Custom Variables
The plugin also adds a couple of custom variables that help standardize resource naming: “${agp:sls-default-name}”, which returns the combination “[service_name]-[stage_name]”, and “${agp:sls-regional-name}”, which returns the combination “[service_name]-[stage_name]-[region]”.
The reason: we observed that concatenating these values manually when naming resources was error prone; for example, someone might start with the service name, while another team member might put the stage name first. We added the custom variables to make this process simpler and to ensure consistency across the team.
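For example, the variables can be used like this in serverless.yml (the SQS queue resource below is illustrative):

```yaml
# serverless.yml (the queue resource is illustrative)
resources:
  Resources:
    JobsQueue:
      Type: AWS::SQS::Queue
      Properties:
        # expands to [service_name]-[stage_name]-jobs
        QueueName: ${agp:sls-default-name}-jobs
```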
4. Deployment Bucket Recommended Configuration
This is a good one for us. By default, the Serverless Framework creates an S3 bucket in the target AWS account with a dynamically generated name to store the files related to each stack it deploys, like: myservicename-prod-serverlessdeploymentbucke-zshkvtwtq6s4
The problem with dynamically named deployment buckets is that every service-stage combination of your application creates a new bucket in the target account, leaving multiple S3 buckets behind unless there is a solid cleanup routine as part of your CI/CD workflow.
To prevent an S3 “junk yard” issue, the plugin checks for a specific “provider.deploymentBucket” configuration and enforces it. This configuration makes all Serverless projects and stages deployed to a given AWS account reuse the same S3 deployment bucket every time. The Serverless Framework stores all deployment files in a service/stage directory structure by default, so everything works naturally from there.
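A shared deployment bucket configuration looks along these lines; the bucket name below is an assumption for illustration, and the exact configuration the plugin enforces is described in its documentation:

```yaml
# serverless.yml (bucket name is illustrative; see the plugin docs for the
# exact configuration the plugin checks for)
provider:
  name: aws
  deploymentBucket:
    name: my-org-serverless-deployments-${aws:accountId}
```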
5. Log Level / Logger Configuration
There are a couple of configuration properties that define the “log level” and “debug sample rate” values for the complementary “Logger Instance” bundled with the package.
Capturing debug-level traces in Production can be costly; at the same time, detailed Production traces can provide valuable information for rapidly troubleshooting issues in live environments. This is why capturing “debug samples” at random is a good practice.
The loggerDebugSampleRate config property of the plugin enables the DEBUG logging level at random for the percentage of invocations it defines, so some debug messages are captured in Production, providing a valuable sample of detailed data without the cost that capturing everything would create.
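Conceptually, the sampling works like the sketch below. This is an illustration of the idea, not the plugin’s internal code; the random draw is injectable so the behavior can be verified deterministically:

```javascript
// Illustration of debug sampling, not the plugin's actual implementation.
// sampleRate is a fraction of invocations (e.g. 0.05 for 5%); rand is
// injectable for deterministic testing and defaults to Math.random.
function resolveLogLevel(defaultLevel, sampleRate, rand = Math.random) {
  return rand() < sampleRate ? 'DEBUG' : defaultLevel;
}
```

With a sample rate of 0.05, roughly 1 in 20 invocations would log at DEBUG while the rest keep the configured default level.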
Part 2: Lambda Utils
Lambda Utils are importable objects and middleware, complementary to the actions performed by the Serverless Plugin part of the module. They help initialize Lambda functions with Serverless/AWS good practices by providing a standard Logger, among other utilities.