This is the location for the OpenLMIS v3+ Reference Distribution.
The Reference Distribution utilizes Docker Compose to gather the published OpenLMIS Docker Images together and launch a running application. These official OpenLMIS images are updated frequently and published to our Docker Hub. These images cover all aspects of OpenLMIS: from server-side Services and infrastructure to the reference UI modules that a client's browser will consume.
The docker-compose files within this repository should be considered the authoritative OpenLMIS Reference Distribution, as well as a template for how OpenLMIS' services and UI modules should be put together in a deployed instance of OpenLMIS following our architecture.
Prerequisites:
- Docker Engine: 1.12+
- Docker Compose: 1.8+
Note that Docker on Mac and Windows hasn't always been as native as it is now with Docker for Mac and Docker for Windows. If you're using one of these, please note that there are some known issues:
- Docker Compose on Windows hasn't supported our development environment setup, so you can use Docker for Windows to run the Reference Distribution, but not to develop
- if you're on a Virtual Machine, finding your correct IP may have some caveats, especially for development
- Copy and configure your settings: edit VIRTUAL_HOST and BASE_URL to be your IP address (if you're behind a NAT, don't mistakenly use the router's address). You should only need to do this once, though as this is an actively developed application, you may need to check the environment file template for new additions.
$ cp settings-sample.env settings.env
Note that 'localhost' will not work here: it must be an actual IP address (like aaa.bbb.yyy.zzz) or domain name. This is because localhost would be interpreted relative to each container, but providing your workstation's IP address or domain name gives an absolute outside location that is reachable from each container. Also note that your BASE_URL will not need the port ":8080" that may be in the environment file template.
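For example, the relevant lines in settings.env might look like this (a sketch only, assuming a workstation IP of 192.168.1.50; follow the exact format shown in settings-sample.env):
# assumed example values - use your own IP address or domain name
VIRTUAL_HOST=192.168.1.50
BASE_URL=http://192.168.1.50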
- Update the API access configs in https://github.com/OpenLMIS/openlmis-ref-distro/blob/master/reporting/.env
- Pull all the services, and bring the Reference Distribution up. Since this is actively developed, you should pull the services frequently.
$ docker-compose pull
$ docker-compose up -d # drop the -d here to see console messages
- When the application is up and running, you should be able to access the Reference Distribution at:
http://<your ip-address>/
Note: if you get an HTTP 502: Bad Gateway error, it is probably still starting up all of the microservice containers. You can wait a few minutes for everything to start. You can also run docker stats to watch each container's CPU and memory use while starting.
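For example, to check on startup progress with standard Docker commands (run from the openlmis-ref-distro directory):
$ docker-compose ps            # list this project's containers and their current state
$ docker stats --no-stream     # one-time snapshot of CPU and memory per container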
By default the demo configuration (facilities, geographies, users, etc) is loaded on startup. To use that demo you may start with a demo account:
Username: administrator
Password: password
If you opted not to load the demo data, and instead need a bare-bones account to configure your system, de-activate the demo data and use the bootstrap account:
Username: admin
Password: password
If you are configuring a production instance, be sure to secure these accounts ASAP and refer to the Configuration Guide for more about the OpenLMIS setup process.
- To stop the application and clean up:
  - if you ran docker-compose up -d, stop the application with docker-compose down -v
  - if you ran docker-compose up (note the absence of -d), interrupt the application with Ctrl-C, then perform cleanup by removing containers. See our docker cheat sheet for help on manually removing containers; a minimal sketch follows below.
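A minimal sketch of that manual cleanup, using standard Docker commands (the docker cheat sheet covers more options):
$ docker ps -a                 # list all containers, including stopped ones
$ docker rm <container-id>     # remove a stopped container by ID or name
$ docker-compose down -v       # or remove this project's containers, networks and named volumes in one go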
- To enable unskipping previously skipped requisition line items during approval, add this flag to the settings.env file:
UNSKIP_REQUISITION_ITEM_WHEN_APPROVING=true
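For example (a sketch; it assumes settings.env is in the current directory and that the services are restarted so they pick up the flag):
$ echo 'UNSKIP_REQUISITION_ITEM_WHEN_APPROVING=true' >> settings.env
$ docker-compose up -d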
It's possible to load demo data using an environment variable. This variable is called spring.profiles.active. When this environment variable has demo-data as one of its values, the demo data for the service will be loaded. This variable may be set in the settings.env file or in your shell with:
$ export spring_profiles_active=demo-data
$ docker-compose up -d
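Alternatively, as noted above, the same profile can be set in the settings.env file instead of the shell, e.g. by adding the line:
spring_profiles_active=demo-data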
Performance data may also be optionally loaded and is defined by some Services. If you'd like to start a demo system with a lot of data, run this script instead of executing step #2 of the Quick Setup.
$ export spring_profiles_active=demo-data
$ ./demo-data-start.sh
See http://docs.openlmis.org/en/latest/conventions/performanceData.html for more.
The refresh-db deployment profile is used by a few services to help ensure that the database they're working against is in a good state. This profile should be set when:
- Manual updates to the database have been made (INSERT, UPDATE, DELETE) through SQL or another tool other than the HTTP REST API each service exposes.
- The Release Notes call for it to be run in an upgrade.
Using this profile means that extra checks and updates are performed. This uses extra resources such as memory and CPU. When set, Services will start slower, sometimes significantly slower.
Usually this profile only needs to be set before the service(s) starts once. If no further upgrades or manual database changes are made, the profile may be removed before subsequent starting of the service(s) to quicken startup time.
spring_profiles_active=refresh-db
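For example, a one-off start with this profile set might look like the following (a sketch mirroring the demo-data example above; drop the profile again before later restarts to keep startup fast):
$ export spring_profiles_active=refresh-db
$ docker-compose up -d
# once the services have started and finished their checks, unset it for future starts
$ unset spring_profiles_active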
The docker-compose.yml file may be customized to change:
- Versions of Services that should be deployed.
- Host ports that should be used for specific Services.
This may be configured in the included .env file or overridden by setting the same variable in the shell.
For example, to set the HTTP port to 8080 instead of the default 80:
export OL_HTTP_PORT=8080
./start-local.sh
A couple of conventions:
- The .env file has service versions. See the .env file for more.
- Port mappings have defaults in the docker-compose.yml:
- OL_HTTP_PORT - Host port on which the application will be made available.
- OL_FTP_PORT_20 - Host port that the included FTP's port 20 is mapped to.
- OL_FTP_PORT_21 - Host port that the included FTP's port 21 is mapped to.
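For example, to override a couple of these defaults in the shell before starting (a sketch; the port values here are arbitrary examples):
export OL_HTTP_PORT=8080
export OL_FTP_PORT_21=2121
./start-local.sh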
When a container needs configuration via a file (as opposed to an environment variable, for example), there is a special Docker image that's built as part of this Reference Distribution from the Dockerfile of the config/ directory. This image, which will also be deployed as a container, is only a vessel for providing a named volume from which each container may mount the /config directory in order to self-configure.
To add configuration:
- Create a new directory under config/. Use a unique and clear name, e.g. kannel.
- Add the configuration files in this directory, e.g. config/kannel/kannel.config.
- Add a COPY statement to config/Dockerfile which copies the configuration file to the container's /config, e.g. COPY kannel/kannel.config /config/kannel/kannel.config.
- Ensure that the container which will use this configuration file mounts the named volume service-config to /config, e.g.:
  kannel:
    image: ...
    volumes:
      - 'service-config:/config'
- Ensure the container uses/copies the configuration file from /config/....
- When you add new configuration, or change it, ensure you bring up this Reference Distribution with the --build flag, e.g. docker-compose up --build.
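Putting these steps together, the config/ directory might end up looking something like this (a sketch; kannel is the hypothetical example from the steps above, and log/ holds the logging configuration described below):
config/
  Dockerfile
  log/
    logback.xml
  kannel/
    kannel.config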
The logging configuration utilizes this method.
NOTE: the configuration container that's built here doesn't run. It is normal for its Status to be Exited.
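To confirm this (a sketch; the exact container name depends on your Docker Compose project name):
$ docker ps -a | grep config   # the config container's STATUS should read something like 'Exited (0)'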
Logging configuration is "passed" to each service as a file (logback.xml) through a named Docker volume: service-config. To change the logging configuration:
- update config/log/logback.xml
- bring the application up with docker-compose up --build. The --build option will re-build the configuration image.
Most logging is collected by way of rsyslog (in the log container), which writes to the named volume: log.
However, not every Docker container logs via rsyslog to this named volume. These services log either via Docker logging or to a file, for which a named-volume approach works well.
The log container runs rsyslog, to which Services running in their own containers may forward their logging messages. This helps centralize all the various Service logging into one location. This container writes all of these messages to the file /var/log/messages of the named volume syslog.
The steps below work for default settings so you don't have to edit any logback.xml files.
- Add log statements at the "DEBUG" level to the methods you want to trace in the service
- Build the code with sudo docker-compose run --service-ports <service-name> followed by gradle clean build integrationTest
- Build an image of the service you're working on with docker-compose -f docker-compose.builder.yml build image
- Change the service's version to the recently built one in the .env file, for example:
OL_REFERENCEDATA_VERSION=latest
- Bring the application up with docker-compose -f docker-compose.yml up
- Check the version of your openlmis/dev image
- To read the file with the logs, mount the syslog volume via:
docker run -it --rm -v openlmis-ref-distro_syslog:/var/log openlmis/dev:<your-image-version> bash
> tail /var/log/messages
Different versions of Docker and different deployment configurations can result in different names of the syslog volume. If openlmis-ref-distro_syslog doesn't work, run docker volume ls to see all volume names.
The default log format for the Services is below:
<timestamp> <container ID> <thread ID> <log level> <logger / Java class> <log message>
The format from the thread ID onwards can be changed in the config/log/logback.xml file.
The nginx container runs the nginx and consul-template processes. These two log to the named volumes:
- nginx-log under /var/log/nginx/log
- consul-template-log under /var/log/consul/template
e.g. to see Nginx's access log:
$ docker run -it --rm -v openlmis-ref-distro_nginx-log:/var/log/nginx/log openlmis/dev:3 bash
> tail /var/log/nginx/log/access.log
Different versions of Docker and different deployment configurations can result in different names for this volume. If openlmis-ref-distro_nginx-log doesn't work, run docker volume ls to see all volume names.
With Nginx it's also possible to use Docker's logging so that both logs are accessible via docker logs <nginx>. This is due to the configuration of the official Nginx image. To use this configuration, change the environment variable NGINX_LOG_DIR to NGINX_LOG_DIR=/var/log/nginx.
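A sketch of that switch (where exactly NGINX_LOG_DIR is set depends on your deployment; the container name below is a placeholder):
# in settings.env (or wherever your deployment defines NGINX_LOG_DIR)
NGINX_LOG_DIR=/var/log/nginx
$ docker-compose up -d
$ docker logs <nginx-container-name>   # find the actual name with 'docker ps'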
If using the postgres container, the logging is accessible via: docker logs openlmisrefdistro_db_1.
Sometimes it's useful to drop the database completely; for this, there is a script included that is able to do just that.
Note: this should never be used in production, nor should it ever be deployed.
To run this script, you'll first need the name of the Docker network that the database is using. If you're using this repository, it's usually the name openlmisrefdistro_default. With this, run the command:
docker run -it --rm --env-file=.env --network=openlmisrefdistro_default -v $(pwd)/cleanDb.sh:/cleanDb.sh openlmis/dev:3 /cleanDb.sh
Replace openlmisrefdistro_default with the proper network name if yours has changed.
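If you're not sure which network name to use, standard Docker commands will list them:
$ docker network ls            # look for the network ending in _default for this deployment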
Note that using this script against a remote Docker host is possible, though not advised.
When deploying the Reference Distribution as a production instance, you'll need to remember to set the following environment variable so the production database isn't first wiped when starting:
export spring_profiles_active="production"
docker-compose up --build -d
Documentation is built using Sphinx. Documents from other OpenLMIS repositories are collected and published on readthedocs.org nightly.
Documentation is available at: http://openlmis.readthedocs.io
When connecting locally to the UAT database, on some networks connections were being cut after a short amount of time. To resolve this, we added the following settings to the .env file:
spring.datasource.hikari.maxLifetime=180000
spring.datasource.hikari.idleTimeout=90000
When some requests are throwing 404 errors, it is possible that the NGINX_TIMEOUT value has to be adjusted in the .env file.