
Server documentation

Server architecture

The server uses nginx as the web server, acting as a reverse proxy in front of gunicorn.

  • nginx config location: /etc/nginx/nginx.conf
  • The nginx config includes /etc/nginx/site-enabled

Gunicorn runs without a config file.

It is started from the git post-receive hook, located at /home/ubuntu/git/call-tool.git/hooks/post-receive
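To sanity-check this wiring on a running instance, you can use something like the following (a rough sketch; the grep assumes the site config uses a standard proxy_pass directive pointing at gunicorn):

    # Check that the nginx config parses and find where it proxies to
    sudo nginx -t
    grep -r proxy_pass /etc/nginx/nginx.conf /etc/nginx/site-enabled/

    # Confirm gunicorn worker processes are running
    ps aux | grep [g]unicorn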

Deployment

After cloning the repo, add the production remote:

git remote add production ubuntu@<ec2address.com>:/home/ubuntu/git/call-tool.git

At the time of writing, the EC2 address was ec2-54-219-216-34.us-west-1.compute.amazonaws.com

Then you can deploy the latest commit by running git push production master

This triggers /home/ubuntu/git/call-tool.git/hooks/post-receive
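If you need to check which remotes are configured, or deploy a branch other than master, standard git commands work (the branch name below is just an example):

    # List configured remotes and their URLs
    git remote -v

    # Deploy a feature branch by pushing it to the remote's master
    git push production my-feature-branch:master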

How the post-receive hook works

The post-receive hook does the following (a sketch of such a hook follows the list):

  1. Loads $HOME/.profile, which contains environment variables and secrets, including the Sunlight and Twilio API keys and the secret key required to access the /stats URL.

  2. Shuts down any running gunicorn processes.

  3. Runs the setup scripts.

  4. Starts gunicorn again.
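A minimal sketch of a hook that follows these steps is shown below; the real hook at /home/ubuntu/git/call-tool.git/hooks/post-receive may differ in its exact paths and commands (the app directory is taken from the gunicorn log path mentioned under Logging, and the pip install step is an assumption about what the setup scripts do):

    #!/bin/bash
    # Hypothetical post-receive sketch -- not the actual hook on the server.

    # 1. Load environment variables and secrets (Sunlight/Twilio keys, /stats secret)
    source $HOME/.profile

    # 2. Shut down any running gunicorn processes
    pkill gunicorn || true

    # 3. Check out the pushed code and run setup
    GIT_WORK_TREE=/home/ubuntu/apps/call-congress git checkout -f master
    cd /home/ubuntu/apps/call-congress
    pip install -r requirements.txt

    # 4. Start gunicorn again (flags match the worker-count line quoted below)
    /usr/bin/gunicorn --log-file gunicorn.log --workers=20 -D app:app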

Number of gunicorn workers

The number of gunicorn workers should be set based on the performance of the instance the server is running on. From experience, an EC2 micro instance can handle about 5 workers and an m2.medium instance about 20.

The number of workers is set in this line of the post-receive hook:

/usr/bin/gunicorn --log-file gunicorn.log --workers=20 -D app:app
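As general gunicorn guidance (not a rule from this project), worker counts are often derived from CPU count, e.g. (2 × cores) + 1; a hypothetical way to do that in the hook would be:

    # Derive the worker count from the number of CPU cores (common gunicorn heuristic)
    WORKERS=$(( 2 * $(nproc) + 1 ))
    /usr/bin/gunicorn --log-file gunicorn.log --workers=$WORKERS -D app:app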

Logging

  • Nginx logs can be found in /var/log/nginx
  • Gunicorn logs are disabled by default but can be enabled by adding the --log-file gunicorn.log flag in the post-receive file. When enabled, they can be found at /home/ubuntu/apps/call-congress/gunicorn.log
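To follow the logs while debugging, a simple tail works (this assumes the default nginx access.log/error.log filenames; the gunicorn log only exists when logging is enabled):

    # Follow nginx and gunicorn logs together
    tail -f /var/log/nginx/access.log /var/log/nginx/error.log \
        /home/ubuntu/apps/call-congress/gunicorn.log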

Restarting

The easiest way to restart the server is to re-run the post-receive hook.
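For example (this assumes the hook doesn't depend on the old/new ref arguments git normally passes it on stdin):

    # Re-run the deploy/restart steps by hand
    cd /home/ubuntu/git/call-tool.git
    bash hooks/post-receive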

Provisioning New Servers

There are no scripts to provision the server; simply create an image from within AWS and use the AMI to build new machines. Hopefully we can Dockerize this service in the future to reduce provisioning friction.
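A hypothetical AWS CLI sketch of that flow (all IDs, names, the instance type, and the key pair are placeholders):

    # Create an AMI from the existing server
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "call-tool-backup"

    # Launch a new instance from that AMI
    aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type <type> --key-name <key-pair>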
