Merge pull request #506 from amazeeio/docs
Docs
Schnitzel authored Jul 17, 2018
2 parents 5db7217 + 2be6f65 commit 68375fa
Showing 10 changed files with 355 additions and 87 deletions.
4 changes: 2 additions & 2 deletions docs/using_lagoon/backups.md
@@ -6,7 +6,7 @@ Lagoon differentiates between three backup solutions: Short-, Mid- and Long-Term

These backups are provided by Lagoon itself and are implemented for databases only. Lagoon automatically instructs the `mariadb` and `postgres` [service types](./service_types.md) to set up a cron job which creates a backup once a day (see the example [backup script](https://github.com/amazeeio/lagoon/blob/docs/images/mariadb/mysql-backup.sh) for mariadb). These backups are kept for four days and automatically cleaned up after that.

These Backups are accessible for developers directly with connecting to the corresponding container (like `mariadb`) and checking the [folder](https://github.com/amazeeio/lagoon/blob/docs/images/mariadb/mysql-backup.sh#L24) where the backups are stored). They can then be downloaded, extracted or in any other way used.
These backups are accessible to developers directly by connecting via the [Remote Shell](./remote_shell.md) to the corresponding container (like `mariadb`) and checking the [folder](https://github.com/amazeeio/lagoon/blob/docs/images/mariadb/mysql-backup.sh#L24) where the backups are stored. They can then be downloaded, extracted, or used in any other way.
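
For example, a minimal sketch of fetching a dump (the project/environment name and the backup path are placeholders; check [Remote Shell](./remote_shell.md) for the exact connection syntax and the linked backup script for the real folder):

```
# open a remote shell into the mariadb container of the "master" environment (placeholder project name)
ssh -p 32222 -t myproject-master@ssh.lagoon.amazeeio.cloud service=mariadb

# inside the container: list the daily dumps
ls -lh /var/lib/mysql/backup/   # hypothetical path, see the backup script linked above
```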

## Mid-Term Backups

@@ -16,4 +16,4 @@ For amazee.io infrastructure: Every persistent storage and Docker Images are bac

## Long-Term Backups

Long-Term Backups referr to Backups that are kept for multiple months and years. These types of Backups also depend heavy on the underlining Infrastructure. Check with your Lagoon Administrator what Backups are created on your infrastructure.
Long-Term Backups refer to backups that are kept for multiple months or years. These types of backups also depend heavily on the underlying infrastructure. Check with your Lagoon administrator which backups are created on your infrastructure.
61 changes: 61 additions & 0 deletions docs/using_lagoon/docker_images/php-fpm.md
@@ -0,0 +1,61 @@
# php-fpm Image

amazee.io PHP 7 Alpine Dockerfile with php-fpm installed, based on the official PHP Alpine images (https://hub.docker.com/_/php/).

This Dockerfile is intended to be used as a base for any PHP needs within amazee.io. The image itself does not provide a webserver, only a php-fpm FastCGI listener; you may need to adapt the php-fpm pool config.

## amazee.io & OpenShift adaptations

This image is prepared to be used on amazee.io, which leverages OpenShift. Therefore, some things are already done:

- Folder permissions are automatically adapted with [`fix-permissions`](https://github.com/sclorg/s2i-base-container/blob/master/bin/fix-permissions), so this image works with a random user and therefore also on OpenShift.
- The `/usr/local/etc/php/php.ini` and `/usr/local/etc/php-fpm.conf` files, plus all files within `/usr/local/etc/php-fpm.d/`, are parsed through [envplate](https://github.com/kreuzwerker/envplate) with a container entrypoint.
- See the [Dockerfile](./Dockerfile) for the installed PHP extensions.
- To install further extensions, extend your Dockerfile from this image and install the extensions according to the official docs, under the heading ["How to install more PHP extensions"](https://hub.docker.com/_/php/) (see the sketch below).
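
As a hypothetical sketch, the commands you would put into `RUN` instructions of such an extending Dockerfile could look like this (the extension names and the Alpine build dependency are only examples, not something this image requires):

```
# install an Alpine build dependency needed by the intl extension (example only)
apk add --no-cache icu-dev
# use the helper shipped with the official PHP images to build and enable extensions
docker-php-ext-install intl bcmath
```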

## Included php config

The included PHP config contains sane values that make the creation of php-fpm pool configs easier. Here is a list of the most important ones; check `/usr/local/etc/php.ini` and `/usr/local/etc/php-fpm.conf` for all of them:

- `max_execution_time = 900` (changeable via `PHP_MAX_EXECUTION_TIME`)
- `realpath_cache_size = 256k` for handling big php projects
- `memory_limit = 400M` for big php projects (changeable via `PHP_MEMORY_LIMIT`)
- `opcache.memory_consumption = 265` for big php projects
- `opcache.enable_file_override = 1` and `opcache.huge_code_pages = 1` for faster php
- `display_errors = Off` and `display_startup_errors = Off` for sane production values (changeable via `PHP_DISPLAY_ERRORS` and `PHP_DISPLAY_STARTUP_ERRORS`)
- `upload_max_filesize = 2048M` for big file uploads
- `apc.shm_size = 32m` and `apc.enabled = 1` (changeable via `PHP_APC_SHM_SIZE` and `PHP_APC_ENABLED`)
- php-fpm error logging happens in stderr

Hint: If you don't like any of these configs, you have three possibilities:
1. If they are changeable via environment variables, use them (preferred option, see the list of environment variables below)
2. Create your own fpm-pool config and set the configs via `php_admin_value` and `php_admin_flag` in there (learn more about them [here](http://php.net/manual/en/configuration.changes.php) - yes, this refers to Apache, but it also applies to php-fpm). _Important:_
    1. If you want to provide your own php-fpm pool, overwrite the file `/usr/local/etc/php-fpm.d/www.conf` with your own config, or remove this file if you prefer another name. If you don't, the provided pool will be started!
    2. PHP values with the [`PHP_INI_SYSTEM` changeable mode](http://php.net/manual/en/configuration.changes.modes.php) cannot be changed via an fpm-pool config. They need to be changed either via the already provided environment variables, or:
3. Provide your own `php.ini` or `php-fpm.conf` file (least preferred option)

## default fpm-pool

This image ships with an fpm-pool config ([`php-fpm.d/www.conf`](./php-fpm.d/www.conf)) that creates an fpm-pool and listens on port 9000. We try to provide an image which already covers most PHP needs, so you don't need to create your own; you are of course welcome to do so if you like :) Here is a short description of what this file does:

- listens on port 9000 via IPv4 and IPv6
- uses the `dynamic` process manager and creates between 2 and 20 children
- respawns fpm pool children after 500 requests to prevent memory leaks
- replies with `pong` when making a FastCGI request to `/ping` (good for automated testing to check whether the pool started; see the sketch after this list)
- `catch_workers_output = yes` to see php errors
- `clear_env = no` to be able to inject PHP environment variables via regular Docker environment variables
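
A minimal sketch of such a ping check, assuming the `cgi-fcgi` binary (from the `fcgi` package, which is not necessarily installed in this image) is available:

```
# send a FastCGI request to the pool's ping endpoint; a healthy pool answers "pong"
SCRIPT_NAME=/ping SCRIPT_FILENAME=/ping REQUEST_METHOD=GET \
  cgi-fcgi -bind -connect 127.0.0.1:9000
```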

## Environment Variables

Environment variables are meant for common behavior changes of PHP.

| Environment Variable | Default | Description |
|--------|---------|---|
| `PHP_MAX_EXECUTION_TIME` | `900` | Maximum execution time of each script, in seconds, [see php.net](http://php.net/max-execution-time) |
| `PHP_MAX_INPUT_VARS` | `1000` | How many input variables will be accepted, [see php.net](http://php.net/manual/en/info.configuration.php#ini.max-input-vars) |
| `PHP_MEMORY_LIMIT` | `400M` | Maximum amount of memory a script may consume, [see php.net](http://php.net/memory-limit) |
| `PHP_DISPLAY_ERRORS` | `Off` | This determines whether errors should be printed to the screen as part of the output or if they should be hidden from the user, [see php.net](http://php.net/display-errors) |
| `PHP_DISPLAY_STARTUP_ERRORS` | `Off` | Even when display_errors is on, errors that occur during PHP's startup sequence are not displayed. It's strongly recommended to keep it off, except for debugging, [see php.net](http://php.net/display-startup-errors) |
| `PHP_APC_SHM_SIZE` | `32m` | The size of each shared memory segment given, [see php.net](http://php.net/manual/en/apc.configuration.php#ini.apc.shm-size) |
| `PHP_APC_ENABLED` | `1` | Can be set to 0 to disable APC, [see php.net](http://php.net/manual/en/apc.configuration.php#ini.apc.enabled) |
| `XDEBUG_ENABLED` | (not set) | Used to enable the xdebug extension, [see xdebug.org](https://xdebug.org/docs) |
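
A minimal sketch of overriding some of these values when running the image locally (the image name and tag are placeholders; on Lagoon you would set the variables via your environment configuration, e.g. `docker-compose.yml`):

```
# run the image with a higher memory limit and visible errors for local debugging
docker run -e PHP_MEMORY_LIMIT=800M \
           -e PHP_DISPLAY_ERRORS=On \
           amazeeio/php:7.2-fpm
```
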
81 changes: 81 additions & 0 deletions docs/using_lagoon/graphql_api.md
@@ -0,0 +1,81 @@
# GraphQL API

## Connect to GraphQL API

API interactions in Lagoon are done via GraphQL; we suggest the [GraphiQL App](https://github.com/skevy/graphiql-app) to connect. In order to authenticate to the API, we also need a JWT (JSON Web Token), which authenticates you against the API via your SSH public key. To generate such a token, use the Remote Shell via the `token` command:

```
ssh -p [PORT] -t lagoon@[HOST] token
```

Example for amazee.io:

```
ssh -p 32222 -t lagoon@ssh.lagoon.amazeeio.cloud token
```

This will return a long string, which is the JWT token.
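
If you plan to use the token from the command line, a small sketch for storing it in a shell variable (stripping the carriage return that the `-t` flag may add):

```
TOKEN=$(ssh -p 32222 -t lagoon@ssh.lagoon.amazeeio.cloud token | tr -d '\r')
echo "$TOKEN"
```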

We also need the URL of the API endpoint; ask your Lagoon administrator for this. On amazee.io this is https://api.lagoon.amazeeio.cloud/graphql

Now we need a GraphQL client. Technically this is just HTTP, but the GraphiQL App mentioned above provides a nice UI that allows you to write GraphQL requests with autocomplete. Download, install, and start it.

Enter the API Endpoint URL. Then click on "Edit HTTP Headers" and add a new Header:

* "Header name": `Authorization`
* "Header value": `Bearer [jwt token]` (make sure that the JWT token has no spaces, as this would not work)

Close the HTTP header overlay (press ESC); now we are ready to make our first GraphQL request!

Enter this in the left window:

```
query whatIsThere {
allProjects {
id
git_url
name
branches
pullrequests
production_environment
environments {
name
environment_type
}
}
}
```

And press the Play button (or press CTRL+ENTER). If all went well, you should see your first GraphQL response.
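
Since the API is plain HTTP underneath, the same query can also be sent without GraphiQL, for example with `curl`. A minimal sketch, assuming the token was stored in `$TOKEN` as shown earlier and using the amazee.io endpoint:

```
curl -s https://api.lagoon.amazeeio.cloud/graphql \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query":"query whatIsThere { allProjects { id name branches } }"}'
```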

## Mutations

The Lagoon GraphQL API can not only display and create objects, it also has the capability to update existing objects, all following GraphQL best practices.

Update the branches to deploy within a project:
```
mutation editProjectBranches {
updateProject(input:{id:109, patch:{branches:"^(prod|stage|dev|update)$"}}) {
id
}
}
```

Update the production environment within a project (Important: needs a redeploy in order for all changes to be reflected in the containers):
```
mutation editProjectProductionEnvironment {
updateProject(input:{id:109, patch:{production_environment:"master"}}) {
id
}
}
```

You can also combine multiple changes at once:

```
mutation editProjectProductionEnvironmentAndBranches {
updateProject(input:{id:109, patch:{production_environment:"master", branches:"^(prod|stage|dev|update)$"}}) {
id
}
}
```
30 changes: 15 additions & 15 deletions docs/using_lagoon/index.md
@@ -41,18 +41,18 @@ Some Docker Images and Containers need additional customizations from the provid

## Supported Services & Base Images by Lagoon

| Type | Versions | Dockerfile | Notes |
| ---------------| --------------| -------------------------------------------------------------------------------------------------------------| ---------------------|
| nginx | 1.12 | [nginx/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/nginx/Dockerfile) | |
| nginx-drupal | | [nginx-drupal/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/nginx-drupal/Dockerfile) | |
| php-fpm | 5.6, 7.0, 7.1 | [php/fpm/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/php/fpm/Dockerfile) | |
| php-cli | 5.6, 7.0, 7.1 | [php/cli/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/php/cli/Dockerfile) | |
| php-cli-drupal | 5.6, 7.0, 7.1 | [php/cli-drupal/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/php/cli-drupal/Dockerfile) | |
| mariadb | 10 | [mariadb/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/mariadb/Dockerfile) | |
| mariadb-drupal | 10 | [mariadb-drupal/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/mariadb-drupal/Dockerfile) | |
| mongo | 3.6 | [mongo/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/mongo/Dockerfile) | |
| solr | | | |
| solr-drupal | | | |
| redis | | | |
| varnish | | | |
| varnish-drupal | | | |
| Type | Versions | Dockerfile |
| ---------------| -------------------| -------------------------------------------------------------------------------------------------------------|
| nginx | 1.12 | [nginx/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/nginx/Dockerfile) |
| nginx-drupal | | [nginx-drupal/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/nginx-drupal/Dockerfile) |
| [php-fpm](docker_images/php-fpm.md) | 5.6, 7.0, 7.1, 7.2 | [php/fpm/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/php/fpm/Dockerfile) |
| php-cli | 5.6, 7.0, 7.1, 7.2 | [php/cli/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/php/cli/Dockerfile) |
| php-cli-drupal | 5.6, 7.0, 7.1, 7.2 | [php/cli-drupal/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/php/cli-drupal/Dockerfile) |
| mariadb | 10 | [mariadb/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/mariadb/Dockerfile) |
| mariadb-drupal | 10 | [mariadb-drupal/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/mariadb-drupal/Dockerfile) |
| mongo | 3.6 | [mongo/Dockerfile](https://github.com/amazeeio/lagoon/blob/master/images/mongo/Dockerfile) |
| solr | 5.5, 6.6 | |
| solr-drupal | 5.5, 6.6 | |
| redis | | |
| varnish | 5 | |
| varnish-drupal | 5 | |
13 changes: 8 additions & 5 deletions docs/using_lagoon/lagoon_yml.md
@@ -48,8 +48,8 @@ environments:
master:
routes:
- nginx:
- domain.com
- "www.domain.com":
- example.com
- "www.example.com":
tls-acme: 'true'
insecure: Redirect
cronjobs:
@@ -71,6 +71,7 @@ Tells the build script which docker-compose yaml file should be used in order to

#### `routes.insecure`
This allows you to define the behaviour of the automatically created routes (NOT the custom routes per environment; see below for those). You can define:

* `Allow` simply sets up both routes for http and https (this is the default).
* `Redirect` will redirect any http requests to https
* `None` will mean a route for http will _not_ be created, and no redirect
@@ -81,9 +82,11 @@ There are different types of tasks you can define, they differ when exactly they

#### `post_rollout.[i].run`
Here you can specify tasks which need to run against your project, _after_:

- all Images have been successfully built
- all Containers are updated with the new Images
- all Containers are running and have passed their readiness checks

Common uses are to run `drush updb`, `drush cim`, or clear various caches.

* `name`
Expand All @@ -103,9 +106,9 @@ In the route section we identify the domain names which the environment will res

The first element after the environment is the target service, `nginx` in our example. This is how we identify which service incoming requests will be sent to.

The simplest route is the `domain.com` example above. This will assume that you want a Let's Encrypt certificate for your route and no redirect from https to http.
The simplest route is the `example.com` example above. This will assume that you want a Let's Encrypt certificate for your route and no redirect from HTTP to HTTPS.

In the `"www.domain.com"` example, we see two more options (also see the `:` at the end of the route and that the route is wrapped in `"`, that's important!):
In the `"www.example.com"` example, we see two more options (also note the `:` at the end of the route and that the route is wrapped in `"`; that's important!):

* `tls-acme: 'true'` tells Lagoon to issue a Let's Encrypt certificate for that route; this is the default. If you don't want a Let's Encrypt certificate, set this to `tls-acme: 'false'`
* `insecure` can be set to `None`, `Allow` or `Redirect`.
@@ -178,7 +181,7 @@ additional-yaml:
ignore_error: true
```

Each definition is keyd by a unique name (`secrets` and `logs-db-secrets` in the example above), and takes these keys:
Each definition is keyed by a unique name (`secrets` and `logs-db-secrets` in the example above), and takes these keys:

* `path` - the path to the yaml file
* `command` - can either be `create` or `apply`, depending on whether you want to run `kubectl create -f [yamlfile]` or `kubectl apply -f [yamlfile]`
36 changes: 36 additions & 0 deletions docs/using_lagoon/logging.md
@@ -0,0 +1,36 @@
# Logging

Lagoon provides access to the following logs via Kibana:

- Logs from the OpenShift Routers, including every single HTTP and HTTPS request with:
- Source IP
- URL
- Path
- HTTP Verb
- Cookies
- Headers
- User Agent
- Project
- Container name
- Response Size
- Response Time
- Logs from Containers
- stdout and stderr messages
- Container name
- Project
- Lagoon Logs
- Webhooks parsing
- Build Logs
- Build Errors
- Any other Lagoon related Logs
- Application Logs (via Syslog) **WIP (not completed yet)**
- Any Logs sent by the running application via Syslog (Example: Drupal Watchdog)


To access the logs, please check with your Lagoon administrator to get the URL for the Kibana route (for amazee.io this is https://logs-db-ui-lagoon-master.ch.amazee.io/).

Each Lagoon account has its own login and will see only the logs for the projects it has access to.

Each account also has its own Kibana tenant, which means no saved searches or visualizations are shared with other accounts.

If you would like to know more about how to use Kibana: https://www.elastic.co/webinars/getting-started-kibana