Create a sample app with the sam init command:
sam init
or
sam init --runtime <favourite-runtime>
sam
requires a SAM template in order to know how to invoke your
function locally, and the same applies to spawning API Gateway locally.
If no template is specified, template.yaml
will be used instead.
Alternatively, you can find other sample SAM templates by visiting the official SAM repository.
- Invoke functions locally
- Run automated tests for your Lambda functions locally
- Generate sample event source payloads
- Run API Gateway locally
- Debug applications locally
- Fetch, tail, and filter Lambda function logs
- Validate SAM templates
- Package and Deploy to Lambda
You can invoke your function locally by passing its SAM logical
ID and an event file. Alternatively, sam local invoke
accepts an event on stdin
too.
Resources:
  Ratings:  # <-- Logical ID
    Type: 'AWS::Serverless::Function'
    ...
Syntax
# Invoking function with event file
$ sam local invoke "Ratings" -e event.json
# Invoking function with event via stdin
$ echo '{"message": "Hey, are you there?" }' | sam local invoke "Ratings"
# For more options
$ sam local invoke --help
You can use the sam local invoke
command to manually test your code by
running your Lambda function locally. With SAM CLI, you can easily author
automated integration tests by first running tests against local Lambda
functions before deploying to the cloud. The sam local start-lambda
command starts a local endpoint that emulates the AWS Lambda service's
invoke endpoint, and you can invoke it from your automated tests.
Because this endpoint emulates the Lambda service's invoke endpoint,
you can write tests once and run them (without any modifications)
against the local Lambda function or against a deployed Lambda function.
You can also run the same tests against a deployed SAM stack in your
CI/CD pipeline.
Here is how this works:
1. Start the Local Lambda Endpoint
Start the local Lambda endpoint by running the following command in the directory that contains your AWS SAM template:
sam local start-lambda
This command starts a local endpoint at http://127.0.0.1:3001 that emulates the AWS Lambda service, and you can run your automated tests against this local Lambda endpoint. When you send an invoke to this endpoint using the AWS CLI or SDK, it will locally execute the Lambda function specified in the request and return a response.
2. Run the integration test against the local Lambda endpoint
In your integration test, you can use the AWS SDK to invoke your Lambda function with test data, wait for the response, and assert that the response is what you expect. To run the integration test locally, configure the AWS SDK to send the Lambda Invoke API call to the local Lambda endpoint started in the previous step.
Here is a Python example (AWS SDKs for other languages have similar configurations):
import boto3
import botocore
import json

# Set "running_locally" flag if you are running the integration test locally
running_locally = True

if running_locally:
    # Create Lambda SDK client to connect to appropriate Lambda endpoint
    lambda_client = boto3.client('lambda',
                                 region_name="us-west-2",
                                 endpoint_url="http://127.0.0.1:3001",
                                 use_ssl=False,
                                 verify=False,
                                 config=botocore.client.Config(
                                     signature_version=botocore.UNSIGNED,
                                     read_timeout=0,
                                     retries={'max_attempts': 0},
                                 ))
else:
    lambda_client = boto3.client('lambda')

# Invoke your Lambda function as you normally do. The function will run
# locally if it is configured to do so
response = lambda_client.invoke(FunctionName="HelloWorldFunction")

# Verify the response. invoke() returns a dict; the function's return value
# is in the "Payload" stream (compare against whatever your function returns)
assert json.loads(response['Payload'].read()) == "Hello World"
This code can run without modifications against a deployed Lambda
function. To do so, set the running_locally
flag to False
. This
will set up the AWS SDK to connect to the AWS Lambda service in the cloud.
Both sam local invoke
and sam local start-api
support connecting the
created Lambda Docker containers to an existing Docker network.
To connect the containers to an existing Docker network, use the
--docker-network
command-line argument or the SAM_DOCKER_NETWORK
environment variable along with the name or ID of the Docker network you
wish to connect to.
# Invoke a function locally and connect to a docker network
$ sam local invoke --docker-network my-custom-network <function logical id>
# Start local API Gateway and connect all containers to a docker network
$ sam local start-api --docker-network b91847306671 -d 5858
To make local development and testing of Lambda functions easier, you can generate and customize event payloads for the following services:
- Amazon Alexa
- Amazon API Gateway
- AWS Batch
- AWS CloudFormation
- Amazon CloudFront
- AWS CodeCommit
- AWS CodePipeline
- Amazon Cognito
- AWS Config
- Amazon DynamoDB
- Amazon Kinesis
- Amazon Lex
- Amazon Rekognition
- Amazon S3
- Amazon SES
- Amazon SNS
- Amazon SQS
- AWS Step Functions
Syntax
$ sam local generate-event <service> <event>
You can generate multiple types of events from each service. For example, to generate the event from S3 when a new object is created, use:
$ sam local generate-event s3 put
To generate the event from S3 when an object is deleted, you can use:
$ sam local generate-event s3 delete
For more options, see sam local generate-event --help
.
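For instance, a handler consuming the generated S3 put event only needs the standard S3 notification fields. Here is a minimal sketch; the handler name and the sample bucket/key values are illustrative, not taken from the generated event:

```python
# Minimal sketch of a handler that reads fields from an S3 "put" event.
# The Records structure below matches the standard S3 notification format
# that `sam local generate-event s3 put` produces.
def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {"bucket": bucket, "key": key}

# Example usage with a trimmed-down sample event (hypothetical values):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "example-bucket"}, "object": {"key": "test/key"}}}
    ]
}
print(handler(sample_event, None))  # {'bucket': 'example-bucket', 'key': 'test/key'}
```

Piping the generated event into sam local invoke exercises this same code path end to end.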
sam local start-api
spawns a local API Gateway to test HTTP
request/response functionality. It features hot-reloading so you can
quickly develop and iterate over your functions.
Syntax
$ sam local start-api
sam
will automatically find any functions within your SAM template
that have Api
event sources defined, and mount them at the defined
HTTP paths.
In the example below, the Ratings
function would mount
ratings.py:handler()
at /ratings
for GET
requests.
Ratings:
  Type: AWS::Serverless::Function
  Properties:
    Handler: ratings.handler
    Runtime: python3.6
    Events:
      Api:
        Type: Api
        Properties:
          Path: /ratings
          Method: get
By default, SAM uses Proxy
Integration
and expects the response from your Lambda function to include one or
more of the following: statusCode
, headers
and/or body
.
For example:
// Example of a Proxy Integration response
exports.handler = (event, context, callback) => {
    callback(null, {
        statusCode: 200,
        headers: { "x-custom-header" : "my custom header value" },
        body: "hello world"
    });
}
For examples in other AWS Lambda languages, see this page.
If your function does not return a valid Proxy Integration response, you will get an HTTP 500 (Internal Server Error) when accessing your function. SAM CLI will also print the following error log message to help you diagnose the problem:
ERROR: Function ExampleFunction returned an invalid response (must include one of: body, headers or statusCode in the response object)
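For reference, a minimal valid Proxy Integration response in Python looks like the following sketch (the header and body values are illustrative):

```python
import json

# A handler returning a Proxy Integration response: a dict with
# statusCode, headers, and/or body. The body must be a string, so
# JSON payloads are serialized with json.dumps.
def handler(event, context):
    return {
        "statusCode": 200,
        "headers": {"x-custom-header": "my custom header value"},
        "body": json.dumps({"message": "hello world"}),
    }

print(handler({}, None)["statusCode"])  # 200
```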
Both sam local invoke
and sam local start-api
support local
debugging of your functions.
To run SAM Local with debugging support enabled, just specify
--debug-port
or -d
on the command line. SAM CLI will map that port to the
local Lambda container, which your IDE's debugger can then connect to.
# Invoke a function locally in debug mode on port 5858
$ sam local invoke -d 5858 <function logical id>
# Start local API Gateway in debug mode on port 5858
$ sam local start-api -d 5858
Note: If using sam local start-api
, the local API Gateway will expose
all of your Lambda functions but, since you can specify a single debug
port, you can only debug one function at a time. You will need to hit
your API before SAM CLI binds to the port, allowing the debugger to
connect.
Here is an example showing how to debug a NodeJS function with Microsoft Visual Studio Code:
In order to set up Visual Studio Code for debugging with AWS SAM CLI, use the following launch configuration after setting the directory that contains template.yaml as the workspace root in Visual Studio Code:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to SAM CLI",
      "type": "node",
      "request": "attach",
      "address": "localhost",
      "port": 5858,
      // From the sam init example, it would be "${workspaceRoot}/hello_world"
      "localRoot": "${workspaceRoot}/{directory of node app}",
      "remoteRoot": "/var/task",
      "protocol": "inspector",
      "stopOnEntry": false
    }
  ]
}
Note: localRoot is set based on what CodeUri points to in template.yaml; if there are nested directories within the CodeUri, that needs to be reflected in localRoot.
Note: Node.js versions below 7 (e.g. Node.js 4.3 and Node.js 6.10)
use the legacy
protocol, while Node.js versions 7 and above
(e.g. Node.js 8.10) use the inspector
protocol. Be sure to specify the
corresponding protocol in the protocol
entry of your launch
configuration. This was tested with VS Code versions 1.26, 1.27 and 1.28
for both the legacy
and inspector
protocols.
Python debugging requires you to enable remote debugging in your Lambda function code, therefore it's a 2-step process:
- Install ptvsd library and enable within your code
- Configure your IDE to connect to the debugger you configured for your function
As this may be your first time using SAM CLI, let's start with a boilerplate Python app and install both the app's dependencies and ptvsd:
sam init --runtime python3.6 --name python-debugging
cd python-debugging/
# Install dependencies of our boilerplate app
pip install -r requirements.txt -t hello_world/build/
# Install ptvsd library for step through debugging
pip install ptvsd -t hello_world/build/
cp hello_world/app.py hello_world/build/
Since we installed the ptvsd library in the previous step, we now need
to enable ptvsd within our code. Open up hello_world/build/app.py
and
add the following ptvsd specifics.
import ptvsd
# Enable ptvsd on 0.0.0.0 address and on port 5890 that we'll connect later with our IDE
ptvsd.enable_attach(address=('0.0.0.0', 5890), redirect_output=True)
ptvsd.wait_for_attach()
We use 0.0.0.0 instead of localhost to listen across all network interfaces; 5890 is the debugging port of your preference.
Now that we have the dependencies installed and ptvsd enabled within our code, let's configure Visual Studio Code (VS Code) for debugging. Assuming you're still in the application folder and have the code command in your path, open up VS Code:
code .
NOTE: If you don't have code in your PATH, please open up a new
instance of VS Code from the python-debugging/
folder we created earlier.
In order to set up VS Code for debugging with AWS SAM CLI, use the following launch configuration:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "SAM CLI Python Hello World",
      "type": "python",
      "request": "attach",
      "port": 5890,
      "host": "localhost",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/hello_world/build",
          "remoteRoot": "/var/task"
        }
      ]
    }
  ]
}
For VS Code, the localRoot property under the pathMappings key is very important, and there are two aspects you should know about why it is set up this way:
- localRoot: This path will be mounted in the Docker container and needs to have both the application and its dependencies at the root level
- workspaceFolder: This path is the absolute path where the VS Code instance was opened
If you opened VS Code in a location other than
python-debugging/
, you need to replace ${workspaceFolder} with the absolute path to
python-debugging/
.
Once complete with VS Code Debugger configuration, make sure to add a
breakpoint anywhere you like in hello_world/build/app.py
and then
proceed as follows:
- Run SAM CLI to invoke your function
- Hit the URL to invoke the function and initialize ptvsd code execution
- Start the debugger within VS Code
# Remember to hit the URL before starting the debugger in VS Code
sam local start-api -d 5890
# OR
# Change HelloWorldFunction to reflect the logical name found in template.yaml
sam local generate-event apigateway aws-proxy | sam local invoke HelloWorldFunction -d 5890
Golang function debugging is slightly different when compared to Node.js, Java, and Python. It requires delve as the debugger, and SAM CLI wraps your function with it at runtime. The debugger runs in headless mode, listening on the debug port.
When debugging, you must compile your function in debug mode:
GOARCH=amd64 GOOS=linux go build -gcflags='-N -l' -o <output path> <path to code directory>
You must compile delve to run in the container and provide its local path via the --debugger-path argument. Build delve locally as follows:
GOARCH=amd64 GOOS=linux go build -o <delve folder path>/dlv github.com/derekparker/delve/cmd/dlv
NOTE: The output path needs to end in /dlv. The Docker container expects the dlv binary to be in <delve folder path>; otherwise mounting will fail.
Then invoke sam similar to the following:
sam local start-api -d 5986 --debugger-path <delve folder path>
NOTE: The --debugger-path
is the path to the directory that contains
the dlv binary compiled above.
The following is an example launch configuration for Visual Studio Code to attach to a debug session.
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Connect to Lambda container",
      "type": "go",
      "request": "launch",
      "mode": "remote",
      "remotePath": "",
      "port": <debug port>,
      "host": "127.0.0.1",
      "program": "${workspaceRoot}",
      "env": {},
      "args": []
    }
  ]
}
.NET Core function debugging is similar to Golang function debugging
and requires you to have vsdbg
available on your machine to later
provide to SAM CLI. VS Code will launch the debugger inside the Lambda
container and talk to it using the pipeTransport
configuration.
When debugging, you must compile your function in debug mode:
Either locally using .NET SDK
dotnet publish -c Debug -o <output path>
Or via Docker
docker run --rm --mount type=bind,src=$PWD,dst=/var/task lambci/lambda:build-dotnetcore<target-runtime> dotnet publish -c Debug -o <output path relative to $PWD>
NOTE: Both of these commands should be run from the directory containing the .csproj file.
You must get vsdbg
built for the AWS Lambda runtime container on your host
machine and provide its local path via the --debugger-path
argument.
Get a compatible debugger version as follows:
# Create directory to store vsdbg
mkdir <vsdbg folder path>
# Get and install vsdbg on runtime container. Mounted folder will let you have it under <vsdbg folder path> on your machine too
docker run --rm --mount type=bind,src=<vsdbg folder path>,dst=/vsdbg --entrypoint bash lambci/lambda:dotnetcore2.0 -c "curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg"
Then invoke sam
similar to the following:
sam local start-api -d <debug port> --debugger-path <vsdbg folder path>
NOTE: The --debugger-path
is the path to the directory that contains
the vsdbg
binary installed from the above.
The following is an example launch configuration for Visual Studio Code to attach to a debug session.
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Docker Attach",
      "type": "coreclr",
      "request": "attach",
      "processId": "1",
      "pipeTransport": {
        "pipeProgram": "sh",
        "pipeArgs": [
          "-c",
          "docker exec -i $(docker ps -q -f publish=<debug port>) ${debuggerCommand}"
        ],
        "debuggerPath": "/tmp/lambci_debug_files/vsdbg",
        "pipeCwd": "${workspaceFolder}"
      },
      "windows": {
        "pipeTransport": {
          "pipeProgram": "powershell",
          "pipeArgs": [
            "-c",
            "docker exec -i $(docker ps -q -f publish=<debug port>) ${debuggerCommand}"
          ],
          "debuggerPath": "/tmp/lambci_debug_files/vsdbg",
          "pipeCwd": "${workspaceFolder}"
        }
      },
      "sourceFileMap": {
        "/var/task": "${workspaceFolder}"
      }
    }
  ]
}
To pass additional runtime arguments when debugging your function, use
the environment variable DEBUGGER_ARGS
. This will pass a string of
arguments directly into the run command SAM CLI uses to start your
function.
For example, if you want to load a debugger like iKPdb at runtime of
your Python function, you could pass the following as DEBUGGER_ARGS
:
-m ikpdb --ikpdb-port=5858 --ikpdb-working-directory=/var/task/ --ikpdb-client-working-directory=/myApp --ikpdb-address=0.0.0.0
This would load iKPdb at runtime with the other arguments you've
specified. In this case, your full SAM CLI command would be:
$ echo {} | DEBUGGER_ARGS="-m ikpdb --ikpdb-port=5858 --ikpdb-working-directory=/var/task/ --ikpdb-client-working-directory=/myApp --ikpdb-address=0.0.0.0" sam local invoke -d 5858 myFunction
You may pass debugger arguments to functions of all runtimes.
To simplify troubleshooting, we added a new command called sam logs
to
SAM CLI. sam logs
lets you fetch logs generated by your Lambda
function from the command line. In addition to printing the logs on the
terminal, this command has several nifty features to help you quickly
find the bug. Note: This command works for all AWS Lambda functions, not
just the ones you deploy using SAM.
Basic Usage: Using CloudFormation Stack
When your function is a part of a CloudFormation stack, you can fetch logs using the function's LogicalID:
sam logs -n HelloWorldFunction --stack-name mystack
Basic Usage: Using Lambda Function name
Or, you can fetch logs using the function's name:
sam logs -n mystack-HelloWorldFunction-1FJ8PD
Tail Logs
Add --tail
option to wait for new logs and see them as they arrive.
This is very handy during deployment or when troubleshooting a
production issue.
sam logs -n HelloWorldFunction --stack-name mystack --tail
View Logs for Specific Time Range
You can view logs for a specific time range using the -s
and -e
options:
sam logs -n HelloWorldFunction --stack-name mystack -s '10min ago' -e '2min ago'
Filter Logs
Use the --filter
option to quickly find logs that match terms, phrases
or values in your log events
sam logs -n HelloWorldFunction --stack-name mystack --filter "error"
In the output, SAM CLI will underline all occurrences of the word "error" so you can easily locate the filter keyword within the log output.
Error Highlighting
When your Lambda function crashes or times out, SAM CLI will highlight the timeout message in red. This will help you easily locate specific executions that are timing out within a giant stream of log output.
JSON pretty printing
If your log messages print JSON strings, SAM CLI will automatically pretty print the JSON to help you visually parse and understand the JSON.
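Conceptually, the pretty printing behaves like the following sketch: lines that parse as JSON are re-emitted with indentation, and everything else passes through unchanged. This is an illustration of the behavior, not SAM CLI's actual implementation:

```python
import json

def pretty_print_line(line):
    """Re-emit a log line with indented JSON if it parses; pass through otherwise."""
    try:
        return json.dumps(json.loads(line), indent=2)
    except ValueError:
        # Not JSON: leave the log line untouched
        return line

print(pretty_print_line('{"level": "error", "msg": "boom"}'))
print(pretty_print_line("START RequestId: 1234 Version: $LATEST"))
```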
Validate your templates with $ sam validate
. Currently this command
will validate that the template provided is valid JSON / YAML. As with
most SAM CLI commands, it will look for a template.[yaml|yml]
file in
your current working directory by default. You can specify a different
template file/location with the -t
or --template
option.
Syntax
$ sam validate
<path-to-file>/template.yml is a valid SAM Template
Note: The validate command requires AWS credentials to be configured. See IAM Credentials.
Once you have developed and tested your serverless application locally,
you can deploy to Lambda using the sam package
and sam deploy
commands.
The sam package
command will zip your code artifacts, upload them to S3, and
produce a SAM file that is ready to be deployed to Lambda using AWS
CloudFormation.
The sam deploy
command will deploy the packaged SAM template to
CloudFormation.
Both sam package
and sam deploy
are identical to their AWS CLI
equivalents, aws cloudformation
package
and aws cloudformation
deploy
respectively. Please consult the AWS CLI command documentation for
usage.
Example:
# Package SAM template
$ sam package --template-file sam.yaml --s3-bucket mybucket --output-template-file packaged.yaml
# Deploy packaged SAM template
$ sam deploy --template-file ./packaged.yaml --stack-name mystack --capabilities CAPABILITY_IAM