Kotless configuration
The Kotless Gradle plugin configuration consists of a few globally applied parts:
- Terraform configuration;
- Kotless service configuration;
- Optimization configuration.
Here is a simple snippet of the whole configuration:
kotless {
    config {
        bucket = "kotless.s3.example.com"
        prefix = "dev"

        dsl {
            type = DSLType.Kotless
            //or for Ktor
            //type = DSLType.Ktor
            //or for Spring Boot
            //type = DSLType.SpringBoot
        }

        terraform {
            profile = "example-profile"
            region = "us-east-1"
        }

        optimization {
            mergeLambda = MergeLambda.All
        }
    }

    //<...> - Web Application config and Extensions config
}
Let us take a look at the items of this configuration one by one.
In the terraform configuration you can set up the version of Terraform used, the version of its AWS provider, and the bucket for the tfstate.
You will need to set up region and profile: the region to deploy to and the local AWS profile used for deployment, respectively.
Here is the simplest possible snippet with a setup of Terraform configuration:
terraform {
    //the bucket from the Kotless service configuration will be used for the Terraform state
    //default versions of Terraform and the AWS provider will be used
    profile = "example-profile"
    region = "us-east-1"
}
Note that the chosen version of Terraform will be downloaded automatically.
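If you do not want to rely on the defaults, the Terraform version can also be pinned in the same block. A minimal sketch, assuming the property is called version in your Kotless release (the exact property names may differ between plugin versions, so check the DSL reference of the version you use):

terraform {
    //assumption: `version` pins the Terraform distribution that Kotless downloads
    version = "0.12.29"
    profile = "example-profile"
    region = "us-east-1"
}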
In the dsl configuration you may define the type of DSL used: DSLType.Kotless, DSLType.Ktor, or DSLType.SpringBoot; otherwise, the type will be deduced from the DSL dependency used.
For Kotless DSL and Spring Boot, you may change the staticsRoot variable, the folder that Kotless uses as a root to resolve all static route files (see the sketch after the snippet below).
In the Ktor DSL it is defined in code via the staticRootFolder variable.
dsl {
    type = DSLType.Kotless
    //or for Ktor
    //type = DSLType.Ktor
    //or for Spring Boot
    //type = DSLType.SpringBoot
}
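As mentioned above, for Kotless DSL and Spring Boot the static root can be adjusted in the same block. A minimal sketch, assuming staticsRoot accepts a File pointing to a directory in your project (the property type may differ between Kotless versions):

dsl {
    type = DSLType.Kotless
    //resolve all static route files relative to this directory
    staticsRoot = file("src/main/resources/static")
}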
Kotless service configuration is a set of directories, an S3 bucket, and several values used by Kotless to deploy your application.
You will need to set up bucket, the name of the S3 bucket that Kotless will use to store all files; for example, it will store packed jars and static files there.
You can also set the prefix variable, a prefix prepended to the names of all resources created in the cloud. prefix can be used to deploy several environments of one application.
Here is a simple snippet with a setup of service configuration:
config {
    bucket = "kotless.s3.example.com"
    prefix = "dev"
    //<...>
}
Note that the bucket value will be used for the Terraform state if it is not set explicitly in the Terraform configuration.
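Since all resource names share the prefix, one convenient way to deploy several environments from the same build is to take the prefix from a Gradle project property. A small sketch, using a hypothetical env property that is not part of Kotless itself:

config {
    bucket = "kotless.s3.example.com"
    //`./gradlew deploy -Penv=prod` deploys resources prefixed with "prod"
    prefix = (findProperty("env") as String?) ?: "dev"
    //<...>
}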
We are doing our best to make Kotless-based lambdas as fast as possible.
There are plenty of optimizations embedded in the Kotless synthesizer and runtime. Some of them can be configured to align with your needs.
The lambda merge optimization defines whether and when different lambdas should be merged into one.
Basically, a lambda serving several endpoints is more likely to be warm.
There are 3 levels of merge optimization:
- None — lambdas will never be merged;
- PerPermissions — lambdas will be merged if they have equal permissions;
- All — all lambdas in context are merged into one.
Here is a simple snippet of optimization configuration:
optimization {
    mergeLambda = MergeLambda.All
}
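If merging everything into a single lambda is too coarse, for example when different endpoints need different permissions, the other levels are set the same way. A sketch, assuming the enum constants match the level names listed above:

optimization {
    //keep separate lambdas for endpoints with different permissions
    mergeLambda = MergeLambda.PerPermissions
    //or disable merging completely
    //mergeLambda = MergeLambda.None
}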
Kotless lambdas can be autowarmed: a scheduler will periodically (by default, every 5 minutes) call the lambda to make sure it is not displaced from the hot pool of the cloud provider.
This optimization sets up the timer that does the autowarming and makes cold starts less frequent.
Each timer event executes a warming sequence. This sequence triggers LambdaWarming objects and is described in the Lifecycle Control section.
optimization {
    //default config
    autowarm = Autowarm(enable = true, minutes = 5)
}
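The autowarming can be tuned or disabled in the same way, for example for rarely used development environments. A sketch based on the Autowarm constructor shown above:

optimization {
    //warm the lambda every 10 minutes instead of the default 5
    autowarm = Autowarm(enable = true, minutes = 10)
    //or disable the warming timer entirely
    //autowarm = Autowarm(enable = false, minutes = 5)
}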