A batteries-included Node.js API kickstarter
focused on extensibility and developer productivity.
What's inside? • Requirements • Usage • Developer experience • Customizing it • Troubleshooting • TODO
This project is based on Node.js and Koa (plus a few other libraries) and implements a REST API that authenticates users and lets them manage TODOs stored in a PostgreSQL database. It's easy to extend and aims for the best developer experience possible, with a lot of nifty things already configured.
It's meant to be used with a front-end consuming the API, such as a mobile app or an SPA built with, for example, React, Angular, or Vue.
Koa.js REST API | User authentication | SMTP + email templating | E2E tests | Kubernetes deployment | CI/CD with CircleCI
The boilerplate uses:
- Koa with async/await code to handle asynchronous tasks
- TypeScript for awesome Developer Experience
- TypeORM to manage SQL entities
- async_hooks to encapsulate each HTTP request in a different SQL transaction
- Koa-Router to separate the logic of each route
- Jest for routes testing with watch mode
- ESLint linting with a pre-commit check and auto-fixing
- Node-Config for configuration and environment variable handling
- Nodemon to auto-reload your server when saving
And includes:
- Email/password account creation API routes
- Support for adding other OAuth providers as login methods, with the ability to merge multiple auth providers into a single account
- Account confirmation with a link sent by email
- Email sending with your SMTP server (and templating also included)
- Object injection into the Koa context when referenced in the URL (e.g. `/todo/12` injects the Todo object with id 12 into `ctx.todo`)
- Error middleware with custom error classes and asserts to handle basic errors like Not Found
- Code coverage
- Full comments of the tricky parts of the code
- Kubernetes deployment with PostgreSQL
- Continuous integration / deployment with CircleCI
- Separated env on Kubernetes for each branch
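The object-injection feature listed above can be sketched like this. This is a self-contained illustration, not the boilerplate's real code: the `RouteCtx` shape, `findTodoById`, and the in-memory list are stand-ins for Koa's context and a TypeORM lookup.

```typescript
// Stand-in types: a trimmed-down Koa context and a Todo entity.
type Todo = { id: number; title: string };
type RouteCtx = { params: { todo?: string }; todo?: Todo };

// In-memory stand-in for a SQL lookup through TypeORM.
const todos: Todo[] = [{ id: 12, title: "buy milk" }];
async function findTodoById(id: number): Promise<Todo | undefined> {
  return todos.find((t) => t.id === id);
}

// koa-router-style param middleware: for a route like /todo/:todo,
// load the entity once and expose it on the context as ctx.todo.
async function injectTodo(ctx: RouteCtx, next: () => Promise<void>): Promise<void> {
  const todo = await findTodoById(Number(ctx.params.todo));
  if (!todo) throw new Error("Not Found"); // real code goes through the error middleware
  ctx.todo = todo;
  await next();
}
```

A GET on `/todo/12` would then reach its route handler with `ctx.todo` already populated.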
- Node.js (tested on v9.10.0) — see the Node.js installation instructions
- Git
To start working on this project, you need to make a private copy of it:
- Clone it on your computer:
git clone https://github.com/geekuillaume/koa-boilerplate.git
- If you are using GitHub, create a repository and set the Git origin with `git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO_NAME`
- Change the information about the code in the package.json file
- Add and commit the changed `package.json`: `git add package.json`, then `git commit`
- Push the repository content to your new repo with `git push origin master`
The server comes packed with some useful commands (defined in `package.json`):

- `npm run test`: Launch the Jest test suite
- `npm run test:watch`: Launch the Jest test suite in watch mode (the tests are executed after each file change)
- `npm run test:coverage`: Launch the Jest test suite and save coverage information in the `coverage` folder
- `npm run watch`: Start the project in watch mode, restarting it after each file change
- `npm run lint`: Analyze the project code with ESLint and show coding-style errors (executed before each commit)
- `npm run doc:generate`: Generate the documentation in the `apiDoc` folder
This project uses node-config to handle the different configuration options. I highly recommend reading that module's README to learn about the different ways to configure this project for different environments. The `config/local.js` file should be used for secrets in development.
Async_hooks are used to create a "request-global context". Everywhere inside your controllers or middleware, you can use the `getContext()` method from `src/lib/asyncContext.ts` to access an object specific to each request. It is used by the log module to append the unique request ID and the connected user ID to each log message. It's also used by the `src/models/db.ts` module to always return the same database transaction wrapper for each SQL query inside the middlewares and controllers.
You can use the `addToLoggerContext()` method to add fields to each log message in the current request context. You can also use the `getContext()` method directly and manipulate the returned context to pass resources between your controllers and middlewares.
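To make the idea concrete, here is a minimal sketch of such a per-request context. It uses Node's `AsyncLocalStorage` (which is itself built on async_hooks); the names `getContext` and `runWithContext` mirror the concept but are not the boilerplate's actual implementation, which lives in `src/lib/asyncContext.ts`.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Illustrative context shape: one object per HTTP request.
type RequestContext = { requestId: string; userId?: string };

const contextStorage = new AsyncLocalStorage<RequestContext>();

// Returns the context of the current asynchronous call chain,
// or undefined when called outside a request.
function getContext(): RequestContext | undefined {
  return contextStorage.getStore();
}

// Middleware-like wrapper: everything awaited inside `work`,
// however deeply nested, sees the same context object.
async function runWithContext<T>(requestId: string, work: () => Promise<T>): Promise<T> {
  return contextStorage.run({ requestId }, work);
}
```

Any helper called during the request (a logger, a DB wrapper) can then call `getContext()` without the context being threaded through every function signature.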
The SQL integration is made with TypeORM. By default, the PostgreSQL adapter is installed, but you can use another; look at the TypeORM documentation for more information about adapting to other SQL databases.
Each request creates a new transaction, which is committed just before the response is sent back to the client. If an error is thrown inside one of the controllers or middlewares, the transaction is rolled back. This prevents a lot of errors related to invalid database states.
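That commit/rollback flow can be sketched as a middleware. The `Tx` and `TxCtx` shapes below are simplified stand-ins for TypeORM's transaction API and Koa's context, so this is an illustration of the pattern, not the boilerplate's exact code.

```typescript
// Simplified stand-ins for a transaction handle and a Koa context.
type Tx = { commit(): Promise<void>; rollback(): Promise<void> };
type TxCtx = { tx?: Tx };

// Open a transaction per request, commit on success, and roll back
// if any downstream controller or middleware throws.
async function withTransaction(
  ctx: TxCtx,
  next: () => Promise<void>,
  startTransaction: () => Promise<Tx>,
): Promise<void> {
  const tx = await startTransaction();
  ctx.tx = tx; // every SQL query in this request goes through ctx.tx
  try {
    await next();
    await tx.commit(); // just before the response is sent
  } catch (err) {
    await tx.rollback(); // an invalid state never reaches the database
    throw err; // let the error middleware build the response
  }
}
```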
The E2E tests are configured to run in a transaction to allow fast testing. The DB is migrated and seeded at the start of the tests, so be sure not to run them against your production DB.
The SQL configuration is located in each file of the `config` folder.
Migrations are handled by the TypeORM migration tool. You can look at the default migration file for an example. Migrations are run before the tests, which wipes the database clean, so be careful never to use your production database when running them. To create a new migration, use the `npm run db:createMigration YOUR_MIGRATION_NAME` command. It will look at changes in your entities since the last migration and generate one for you as a new file in the `migrations/` folder. You should check this file and possibly edit it to define your migration steps. To run all pending migrations, use the `npm run db:migrate` command.
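For reference, a generated migration file looks roughly like this. The table name, column, and timestamped class name below are made up, and `QueryRunnerLike` is a stand-in so the sketch is self-contained; real migrations implement TypeORM's `MigrationInterface` and receive its `QueryRunner`.

```typescript
// Stand-in for TypeORM's QueryRunner, just enough for this sketch.
type QueryRunnerLike = { query(sql: string): Promise<unknown> };

class AddTodoDueDate1537000000000 {
  // up() applies the schema change...
  async up(queryRunner: QueryRunnerLike): Promise<void> {
    await queryRunner.query(`ALTER TABLE "todo" ADD "dueDate" TIMESTAMP`);
  }

  // ...and down() must undo exactly what up() did, so the
  // migration can be reverted.
  async down(queryRunner: QueryRunnerLike): Promise<void> {
    await queryRunner.query(`ALTER TABLE "todo" DROP COLUMN "dueDate"`);
  }
}
```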
This project includes a service to send emails to your users. It's used to send the activation link after account creation and to send the "Forgot password" link. In development mode, you can use the Ethereal.email service to debug the emails you're sending without actually having to send them. I've included instructions on how to create an account in the `config/development.js` file.
The email service uses the Nodemailer module and is compatible with all SMTP transactional email service providers. For more information about configuring it for your own usage, look at the documentation.
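As an illustration, the SMTP settings Nodemailer accepts look like this; the host and credentials are placeholders to be replaced with your provider's values.

```typescript
// Placeholder SMTP transport options in the shape Nodemailer accepts.
const smtpOptions = {
  host: "smtp.example.com", // your provider's SMTP host
  port: 587,
  secure: false, // connection is upgraded to TLS via STARTTLS on port 587
  auth: {
    user: "SMTP_USERNAME",
    pass: "SMTP_PASSWORD",
  },
};

// With nodemailer installed, the transport would be created with:
// const transporter = nodemailer.createTransport(smtpOptions);
// await transporter.sendMail({ from, to, subject, html });
```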
By default, every account created with an email/password combo is inactive, meaning the user is in read-only mode (you can change this behaviour according to your needs). To activate the account, the user should click the link sent to their email address. This link points to the API, which redirects the user with a `302` to the URL specified in the `activateCallbackUrl` config variable. The auth token is appended to this `activateCallbackUrl` as a `?auth_token=` query string. For example, if your `activateCallbackUrl` is `https://app.example.com/after_activation`, the user will be redirected to `https://app.example.com/after_activation?auth_token=AUTHENTICATION_TOKEN_FOR_THE_USER`. This token can then be used like a regular authentication token to access other API routes.
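The redirect URL construction described above can be sketched with the WHATWG `URL` API; the function name here is illustrative, not the boilerplate's.

```typescript
// Append the auth token to the configured callback URL as a
// ?auth_token= query parameter, as the activation route does.
function buildActivationRedirect(activateCallbackUrl: string, authToken: string): string {
  const url = new URL(activateCallbackUrl);
  url.searchParams.set("auth_token", authToken);
  return url.toString();
}
```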
Everywhere in this project, UUIDs are used instead of classic auto-incremented integer IDs. This way you don't expose the number of elements in your DB (like the number of users of your API). It can also help prevent bugs in your code, as you cannot guess the ID of a specific object and so cannot directly target it.
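For instance, a v4 UUID can be generated with Node's built-in `crypto.randomUUID()` (available since Node 14.17); in the boilerplate itself the IDs are generated on the database side, so this is just to show what such an ID looks like.

```typescript
import { randomUUID } from "node:crypto";

// A v4 UUID reveals nothing about row counts, unlike id = 1, 2, 3, ...
const todoId = randomUUID();
```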
The `npm run doc:generate` command can be used to generate the documentation in the `apiDoc` folder. Here's an example of the resulting documentation: ApiDoc.
You should look at the dedicated README for more information.
A fully functional Helm chart is provided to deploy this project; it installs PostgreSQL and the API on your Kubernetes cluster. A CircleCI integration is also provided to deploy the API on each push and create a new environment on each branch creation. The configuration doesn't depend on a specific cloud provider, so you can deploy to bare-metal servers too. No GCloud or AWS load balancers are used (which could be a limitation).
To start using it, you first need a Kubernetes cluster accessible from the outside world. It can be hard to find a good source on deploying a cluster from scratch when you don't have any Kubernetes experience; I used Rancher to deploy mine and highly recommend it. You also need Helm and kubectl installed on your machine.
If you use Rancher or have an Nginx Ingress Controller installed (Rancher installs one by default), you can easily add Let's Encrypt integration with cert-manager to get free SSL on your API. To do so, install the cert-manager Helm chart with `helm install --name cert-manager stable/cert-manager`. You then need to create a ClusterIssuer resource on your K8S cluster. I've included an example file that you should edit to include your email address (replace `YOUR_EMAIL_ADDRESS`), then create the resource with `kubectl create -f ./misc/clusterissuer.yaml`.
Next, we need a Docker registry to host the Docker images that we will build (or that CircleCI will build for us). You can use the public Docker registry, but if you are not creating an open-source project you probably want a private one. I've included a file (`misc/docker_registry.yaml`) containing the basic values needed to deploy a registry on your Kubernetes cluster. You need to edit it to include your hostname (something like `registry.your-awesome-project.com`) and your username/password combo (the command needed to generate the secret is documented in the file comments), then install it with `helm install stable/docker-registry --name registry --namespace registry -f ./misc/docker_registry.yaml`. You can now authenticate your local Docker client with `docker login YOUR_REGISTRY_HOSTNAME`.
By default, Helm executes your SQL migrations on install and on each upgrade. This may not be the behaviour you want if you need to control how they're executed. You can switch `autoMigrate` to `false` in the `helm/values.yaml` file to disable it.
When you install a release for the first time, a new JWT secret is created. To see how to access it, use the `helm status YOUR_RELEASE_NAME` command. To list the installed releases, use the `helm ls` command.
Now, you can either build and deploy the API from your own computer or configure CircleCI to do it for you.
I've included a functional CircleCI configuration file that tests the code, generates the test coverage and saves it as artifacts, builds the Docker image, pushes it to your Docker registry, and uses Helmfile to upgrade or install the API. Every branch is deployed on Kubernetes, each on its own subdomain with its own instance of the database. You should configure the domain names for the master branch and the other branches in `helmfile.yaml`. You also need to add four environment variables in your project's CircleCI configuration:
- DOCKER_REGISTRY: the hostname of your registry
- DOCKER_USER: the username you used to generate the secret in `misc/docker_registry.yaml`
- DOCKER_PASSWORD: the password you used to generate the secret in `misc/docker_registry.yaml`
- KUBE_CONFIG: your kubectl config, which you can get from your own machine in `$HOME/.kube/config`. You need to convert it to JSON because newlines are lost in the env variable. To do so, use an online service like json2yaml.
Once that's done, you just have to launch a build from the CircleCI interface or push to GitHub to trigger one.
To deploy from your dev machine, or to adapt the CI process to another CI provider, I suggest taking a look at the `.circleci/config.yml` file to get a sense of what it does.
This happens because you didn't change the `jwtSecret` in your config file. Add a `jwtSecret` to your `config/local.js` file with a random string (at least 15 characters).
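One quick way to generate a suitable secret is with Node's built-in `crypto` module; the 32-byte length here is a suggestion, comfortably above the 15-character minimum.

```typescript
import { randomBytes } from "node:crypto";

// 32 random bytes -> 64 hex characters, suitable as a jwtSecret value.
const jwtSecret = randomBytes(32).toString("hex");
console.log(jwtSecret); // copy the printed value into config/local.js
```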
- Sentry integration (optional)
- Password reset via email
- Upload coverage and documentation to AWS S3
- Prometheus integration
- Adding info in helm NOTES.txt
Icons made by Freepik and Nhor Phai from www.flaticon.com are licensed under CC 3.0 BY