SuttaCentral is a Python Flask server which serves a Progressive Web App (client) and its associated JSON API (server). The API pulls its data in real time from an ArangoDB instance populated periodically with data from the sc-data repository.
$ git clone git@github.com:suttacentral/suttacentral.git
$ cd suttacentral
$ git checkout production
$ make prepare-host
$ make run-production-env
-> Supply the needed env variables. If you chose random values, you will be prompted back with the generated values. Remember them! You will use some of them to access admin services.
cd /opt/suttacentral
git pull
make generate-env-variables
-> Supply needed env variables (only if env has been changed)
make run-prod-no-logs
-> run docker containers
make delete-database
-> OPTIONAL: skip this if data hasn't seriously changed (do it if texts have been deleted or renamed)
make migrate
-> only needed after delete-database, but harmless to run anyway
make load-data
-> load data into ArangoDB
make index-arangosearch
-> index ArangoSearch
make reload-uwsgi
-> make sure the Flask server is not serving cached stale data
If no containers need to be rebuilt then this is all that needs to be run:
cd /opt/suttacentral
git pull
make load-data
make reload-uwsgi
make index-arangosearch
make rebuild-frontend
To run a different branch of the code or data:
cd /opt/suttacentral
git checkout <code-branch>
cd server/sc-data
git checkout <data-branch>
Then run the commands for updating, probably including the make delete-database step.
- Install docker and docker-compose.
- Clone the repo: git clone git@github.com:suttacentral/suttacentral.git
- Cd into the repo: cd suttacentral
- Run make prepare-host in order to make some small adjustments on the host machine.
- Run make run-preview-env to build images, load data, index ArangoSearch and more. This will run the project for the first time.
- Run make run-dev.
When changes are made on bilara-data and sc-data, they do not automatically get updated in suttacentral. During initial setup in step 1.0.4 above, the raw data from those repositories is brought into suttacentral and added to the database. So if you make changes in bilara-data and sc-data you must run the steps below to see them in the build.
- Ensure the server is up and run make load-data.
- To index ArangoSearch, run make index-arangosearch.
API documentation is available at suttacentral.net/api/docs.
Swagger documentation is generated from docstrings in the API methods. The docstring should use the OpenAPI Specification 2.0 YAML format. This YAML docstring will be interpreted as OpenAPI's Operation Object.
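As a rough sketch of this convention (not copied from the real API: the resource, route, and response schema below are invented, and a plain Flask-RESTful setup is assumed), a view method with such a docstring could look like this:

from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)


class ExampleList(Resource):
    def get(self):
        """
        summary: Return a hypothetical list of examples
        responses:
            200:
                description: A list of example strings
                schema:
                    type: array
                    items:
                        type: string
        """
        # The docstring above is plain YAML describing the OpenAPI 2.0
        # Operation Object for this GET method.
        return ['first', 'second']


api.add_resource(ExampleList, '/examples')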
In this mode the server, nginx, and client dirs are mounted in Docker's containers, so that any local changes take place in the containers as well.
In addition, uWSGI+Flask exposes port 5001 on localhost, and ArangoDB exposes port 8529.
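As a quick, non-authoritative smoke test of those ports (the /api/menu path is only an illustrative guess; check /api/docs for the real routes), something like the following can be run on the host:

# Poke the services exposed by the development containers.
# Assumptions: the API path below may differ, and ArangoDB uses the
# development credentials root/test described later in this README.
import requests

flask_resp = requests.get('http://localhost:5001/api/menu')
print('Flask API:', flask_resp.status_code)

arango_resp = requests.get('http://127.0.0.1:8529/_api/version',
                           auth=('root', 'test'))
print('ArangoDB:', arango_resp.status_code)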
There is a Makefile with the following commands:
prepare-host - Set up client git-hooks.
run-dev - Run containers in development mode.
run-dev-no-logs - Run containers in development mode without output to the console.
run-prod - Run containers in production mode.
run-prod-no-logs - Run containers in production mode without output to the console.
migrate - Run migrations in the flask container.
clean-all - Remove all containers, volumes and built images.
reload-nginx - Reload Nginx.
reload-uwsgi - Reload uWSGI+Flask.
prepare-tests - Start containers in test mode and wait for start-ups to finish.
test - Run tests inside containers.
test-client - Run only frontend tests.
test-server - Run only server tests.
load-data - Pull the most recent data from GitHub and load it from the server/sc-data folder into the db.
delete-database - Delete the database from ArangoDB.
index-arangosearch - Index ArangoSearch with data from the db.
run-preview-env - Fully rebuild and run the most recent development version.
run-preview-env-no-search - Fully rebuild and run the most recent development version, but do not index ArangoSearch.
run-production-env - Fully rebuild and run the most recent production version. You will be prompted with questions regarding env variables.
generate-env-variables - Run the env_variables_setup.py script and generate env variables for the production version.
In addition, the following rebuilds the front end:
docker-compose build sc-frontend
then
make run-dev
Our project uses ArangoDB on the back-end. In development mode it exposes port 8529 on localhost. You can access its web interface at http://127.0.0.1:8529.
In code running inside the Docker containers you can access the database at the address sc-arangodb on the same port.
In development mode:
Login: root
Password: test
In order to change the password you have to change ARANGO_ROOT_PASSWORD in the env's .env file, e.g. if you want to change it in the development env you have to edit the .dev.env file.
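For illustration, a minimal connection sketch using the python-arango client with the development credentials above; the database name 'suttacentral' and a recent python-arango API are assumptions here, not guarantees:

# Connect to the dev ArangoDB instance from the host machine.
# Inside a container, swap 'localhost' for the 'sc-arangodb' service name.
from arango import ArangoClient

client = ArangoClient(hosts='http://localhost:8529')
# 'suttacentral' is an assumed database name used only for this example.
db = client.db('suttacentral', username='root', password='test')
print([c['name'] for c in db.collections()])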
Our project uses nginx as an HTTP reverse proxy. It is responsible for serving static files and passing the /api/* endpoints to the uWSGI+Flask server.
Flask is hidden behind uWSGI. uWSGI communicates with nginx over a unix socket. The socket file (uwsgi.sock) is in the socket-volume shared between nginx and flask+uwsgi.
In order to create a database migration in our app you have to follow these simple steps:
- In the server/server/migrations/migrations folder, create a file named <migration_name>_<id of the last migration + 1>.py.
- Add this line at the top of the file: from ._base import Migration.
- Create a class that inherits from the Migration class.
- Set the migration_id class attribute to match the file name.
- Create some tasks. Each task should be a separate method accepting only self as a parameter.
- Set tasks = ['first_task', 'second_task', ...] in the class attributes.
- You are good to go, just remember to never change the migration_id, otherwise your migrations might fail.
For example:
from common.arangodb import get_db
from migrations.base import Migration


class InitialMigration(Migration):
    migration_id = 'initial_migration_001'
    tasks = ['create_collections']

    def create_collections(self):
        """
        Creates collections of suttas and collection of edges between them.
        """
        db = get_db()
        graph = db.create_graph('suttas_graph')
        suttas = graph.create_vertex_collection('suttas')
        parallels = graph.create_edge_definition(
            name='parallels',
            from_collections=['suttas'],
            to_collections=['suttas']
        )
python manage.py migrate - Run migrations.
python manage.py list_routes - List all available routes/URLs.
- Follow PEP8 for Python code.
- Try to keep line width under 120 characters.
- Use formatted string literals for string formatting.
- Use type hints whenever possible (see the sketch after this list).
- In view methods (get, post, etc.) use the YAML OpenAPI 2.0 object format in docstrings.
- For the rest of the docstrings use Google style docstrings.
- Code for the API endpoints should be placed in the api folder, except for the search endpoint.
- Names, variables, docstrings, comments, etc. should be written in English.
- Test files should be placed in a tests dir in the directory where the tested file is.
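A short sketch tying several of these rules together (the function and its names are invented for illustration): formatted string literals, type hints, and a Google style docstring:

from typing import List


def format_titles(titles: List[str], language: str = 'en') -> List[str]:
    """Prefix each title with its language code.

    Args:
        titles: Sutta titles to format.
        language: ISO language code used as the prefix.

    Returns:
        The formatted titles, one per input title.
    """
    # A formatted string literal keeps the formatting readable.
    return [f'{language}: {title}' for title in titles]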
JS code style is based on the Airbnb JavaScript Style Guide. In addition:
- Use template strings.
- Use ES6 classes (class MyElement extends Polymer.Element) instead of the old Polymer({...}) syntax when declaring an element inside your <script> tags.
- Use const/let instead of var when declaring a variable.
- Use === and !== instead of == and != when comparing values to avoid type coercion.
- Comments explaining a function's purpose should be written on the line directly above the function declaration.
- Internal HTML imports should come after external ones (from bower_components) and be separated by a newline.
- When commenting components at the top level (above <dom-module>), keep the HTML comment tags (<!-- and -->) on their own separate lines.
- Try to keep line width under 120 characters.