Based on the logmatic-lambda function from Logmatic.io: https://github.com/logmatic/logmatic-lambda
AWS Lambda function to ship ELB, S3, CloudTrail, VPC, CloudFront and CloudWatch logs to Logstash via the TCP input plugin
- Use AWS Lambda to re-route triggered S3 events to Logstash via TCP socket
- ELB, S3, CloudTrail, VPC and CloudFront logs can be forwarded
- SSL Security
- JSON events providing details about S3 documents forwarded
- Structured meta-information can be attached to the events
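The forwarding mechanism above can be sketched as follows. This is a Python 3 sketch, not the script itself (the actual `lambda_function.py` targets Python 2.7), and the function names and port are illustrative:

```python
import json
import socket
import ssl

def format_events(events, metadata):
    """Merge the shared metadata into each event and serialize as
    newline-delimited JSON, the framing a Logstash tcp input with a
    json_lines codec expects."""
    lines = []
    for event in events:
        merged = dict(metadata)
        merged.update(event)
        lines.append(json.dumps(merged))
    return ("\n".join(lines) + "\n").encode("utf-8")

def ship(host, port, payload):
    """Send the payload over an SSL-wrapped TCP connection."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(payload)

# Usage (hostname and port are placeholders):
# payload = format_events(events, {"context": {"foo": "bar"}})
# ship("<your_logstash_hostname>", 5000, payload)
```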
The provided Python script must be deployed into your AWS Lambda service. We will explain how in this step-by-step tutorial.
Select the s3-get-object-python blueprint. It is designed to listen for S3 object-created events and trigger actions.
Let's configure it now.
- Select the S3 event source and set:
- The bucket where your logs are located, and a prefix if one applies
- The event type: Object Created (All)
- Select "CloudWatch Logs" event source and:
- Select the LogGroup
- Set a name
- Give the function whatever name you want
- Set the Python 2.7 runtime and copy-paste the content of the
lambda_function.py
file into the Python editor of the AWS Lambda interface.
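For reference, the blueprint's handler receives S3 notifications shaped like the sample below. A minimal sketch of pulling out the bucket and key (Python 3 here, while the deployed script is Python 2.7; `parse_s3_event` is an illustrative name, not a function from the script):

```python
import urllib.parse  # object keys arrive URL-encoded in the event

def parse_s3_event(event):
    """Yield (bucket, key) pairs from an S3 'Object Created' event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        yield bucket, key

# Sample event (bucket and key names are made up):
sample = {"Records": [{"s3": {
    "bucket": {"name": "my-logs"},
    "object": {"key": "elb/2016/access+log.txt"},
}}]}
```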
At the top of the script you'll find a section called #Parameters
; that's where you need to edit the code.
#Parameters
host = "<your_logstash_hostname>"
metadata = {"context":{"foo": "bar"}}
- host:
Replace <your_logstash_hostname> with the hostname of your Logstash server.
- metadata:
You can optionally change the structured metadata. The metadata is merged into every log event sent by the Lambda script.
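For example, with the metadata above, a hypothetical log event is enriched like this (the event fields shown are illustrative):

```python
metadata = {"context": {"foo": "bar"}}
event = {"message": "GET /index.html 200", "s3_bucket": "my-logs"}

# The script merges the shared metadata into each event before shipping it;
# event fields take precedence on key collisions
enriched = dict(metadata)
enriched.update(event)
```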
To allow access to the S3 objects, you must define an S3 execution role and assign it to the Lambda function.
The role must be created in the IAM Management Console as follows:
- Select
Roles
- Create a new role
- Follow the instructions and select
AWS Lambda
, then the AWSLambdaExecute
policy
Then attach it to your Lambda function.
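If you prefer the AWS CLI over the console, the same role can be created and attached roughly like this (the role name is a placeholder, and the trust policy is the standard one allowing Lambda to assume a role):

```shell
# Trust policy allowing the Lambda service to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "lambda.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role and attach the AWSLambdaExecute managed policy
aws iam create-role --role-name logstash-lambda-s3 \
    --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name logstash-lambda-s3 \
    --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute

# Point the Lambda function at the new role
aws lambda update-function-configuration --function-name <your_function_name> \
    --role arn:aws:iam::<account_id>:role/logstash-lambda-s3
```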
Set the memory to the highest possible value, and set the timeout limit as well. Logmatic.io recommends the highest possible values in order to deal with big files.
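With the CLI, the memory and timeout can be raised like this (the values shown are examples; pick the highest limits your account allows):

```shell
aws lambda update-function-configuration --function-name <your_function_name> \
    --memory-size 1536 --timeout 300
```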
You are all set!
Testing should happen naturally if the monitored bucket(s) are filling up. Note that there may be some latency between the time an S3 log file is posted and the time the Lambda function wakes up.
If you have any suggestions you are more than welcome to comment or propose pull requests on this project!