diff --git a/.gitignore b/.gitignore
index ba0123cec..e8d960f41 100644
--- a/.gitignore
+++ b/.gitignore
@@ -20,3 +20,5 @@ devenv.local.nix
 
 # nix build output links
 result*
+
+.idea
diff --git a/Guide/deployment.markdown b/Guide/deployment.markdown
index 643e875b1..aa4e54b94 100644
--- a/Guide/deployment.markdown
+++ b/Guide/deployment.markdown
@@ -10,19 +10,151 @@ IHP comes with a standard command called `deploy-to-nixos`. This tool is a littl
 
 AWS EC2 is a good choice for deploying IHP in a professional setup.
 
-### Creating a new EC2 Instance
+### AWS infrastructure preparation
+
+#### Creating a new EC2 Instance
 
 Start a new EC2 instance and use the official NixOS AMI `NixOS-23.05.426.afc48694f2a-x86_64-linux`. You can find the latest NixOS AMI at https://nixos.org/download#nixos-amazon
 
 Example steps:
 
 - Visit [EC2 creation page](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#LaunchInstances:) in your desired region.
-    - Select AMI by name, it will appear under "Community AMIs" after searching by name.
-    - Select at least a `t3a.small` instance size to have enough RAM for the compilation
-    - Specify a generous root disk volume. By nature NixOS can consume lots of disk space as you trial-and-error your application deployment. As a minimum, we advise 60 GiB
-    - Under `Network settings`, allow SSH traffic from your IP address only, allow HTTPS and HTTP traffic from the internet. Due to the certificate validation for Let's Encrypt, even if your application does not need to have it, allow HTTP too.
-    - Make sure to attach SSH keys to the instance at creation time, that is available locally, so you can SSH to the EC2 instance without password later.
+    - Select the AMI by name; it will appear under "Community AMIs" after searching by name (there can be a slight delay before the result appears, as the search covers all community AMIs).
+    - Select at least a `t3a.small` instance size to have enough RAM for the compilation. For a real-world application, chances are you will need a `t3a.medium` to compile it successfully.
+    - Specify a generous root disk volume. By nature, NixOS can consume lots of disk space while you trial-and-error your application deployment. As a minimum, we advise 60 GiB.
+    - Under `Network settings`, allow SSH traffic from your IP address only, and allow HTTPS and HTTP traffic from the internet. Allow HTTP even if your application does not otherwise need it, because Let's Encrypt uses it for certificate validation.
+    - Make sure to attach an SSH key that is available locally to the instance at creation time, so you can SSH into the EC2 instance without a password later.
+        - Either import your existing keypair to [EC2 Key Pairs](https://us-east-1.console.aws.amazon.com/ec2/home?region=eu-west-1#ImportKeyPair:) before creating the EC2 instance, then select it on the EC2 creation page.
+        - Or let AWS create one on the fly: ![image](https://github.com/digitallyinduced/ihp/assets/114076/317b022a-ad6e-43ae-931d-8710db0b711c). Afterwards, you can download the private key file; it is referred to as `ihp-app.pem` later in this documentation.
+
+#### (Optional) Creating an RDS Instance
+
+For production systems, it is advised to use a fully managed PostgreSQL instance: it can be multi-region and fault-tolerant, and most of all, daily backups happen automatically with configurable retention.
+
+To switch from the local PostgreSQL instance to a managed one (you can do this before or after the initial deployment), execute the following steps:
+ - Visit the [RDS creation page](https://us-east-1.console.aws.amazon.com/rds/home?region=eu-west-1#launch-dbinstance:) in your desired region.
+ - Select PostgreSQL as the Engine Type.
+ - Select a compatible Engine Version; chances are good that the latest version will fit.
+ - At Templates, choose `Free Tier` for any non-live environment and `Production` for the live environment.
+ - (Optional) Choose `Auto generate password` to get a secure master password.
+ - Choose `Connect to an EC2 compute resource` and select your already existing EC2 instance.
+ - Then you can `Create database`. This process is slow; check back after 10 minutes or so. Note down the auto-generated password.
+ - Edit your `flake.nix`: under `flake.nixosConfigurations."ihp-app".services.ihp`, specify the database URL like `databaseUrl = lib.mkForce "postgresql://postgres:YOUR-PASSWORD@YOUR-HOSTNAME.amazonaws.com/postgres";` (see the sketch after this list). You can find the proper hostname on the RDS instance detail page once the initialization is complete.
+ - `pg_dump --no-owner --no-acl` your existing local database directly on the EC2 instance, then load the dump into the newly created instance via `psql`. `deploy-to-nixos` won't populate the initial schema into an existing remote database; that's why dumping, `scp`-ing, and loading via `psql` is necessary.
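+
+The following is a minimal sketch of where that override sits. The attribute path follows the standard IHP deployment flake; `YOUR-PASSWORD` and `YOUR-HOSTNAME` are placeholders for the values shown on the RDS detail page:
+
+```nix
+# Excerpt of flake.nix, inside the NixOS module where services.ihp is
+# configured (under flake.nixosConfigurations."ihp-app"). Assumes `lib`
+# is available as a module argument.
+services.ihp = {
+  # ... keep your existing options (domain, migrations, schema, ...) ...
+
+  # lib.mkForce overrides the databaseUrl that otherwise points at the
+  # PostgreSQL instance running locally on the EC2 machine:
+  databaseUrl = lib.mkForce "postgresql://postgres:YOUR-PASSWORD@YOUR-HOSTNAME.amazonaws.com/postgres";
+};
+```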
+
+#### (Optional) Creating an S3 bucket
+
+If your application needs to store files on AWS, it should use an S3 bucket for that.
+
+Infrastructure-side preparation:
+ - Visit the [S3 creation page](https://s3.console.aws.amazon.com/s3/bucket/create?region=eu-west-1) and create a bucket in the same region. Whether objects should be public or not is up to the application's business requirements. Note down the S3 [ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) from the S3 details page.
+ - Create a new IAM user for the S3 access, and create an [AWS access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for that IAM user.
+ - For that user, attach a policy that allows access to the bucket, for example:
+```json
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Sid": "VisualEditor1",
+            "Effect": "Allow",
+            "Action": "s3:*",
+            "Resource": [
+                "YOUR-BUCKET-ARN",
+                "YOUR-BUCKET-ARN/*"
+            ]
+        }
+    ]
+}
+```
+ - See the [Storage guide](https://ihp.digitallyinduced.com/Guide/file-storage.html#s3) on how to use the access key; a sketch of wiring it into `flake.nix` follows at the end of this section.
+
+If your application requires it, make the S3 bucket publicly available:
+ - Go to https://s3.console.aws.amazon.com/s3/buckets/YOUR-BUCKET?region=eu-west-1&bucketType=general&tab=permissions (the Permissions tab of the S3 bucket).
+ - Set `Block all public access` entirely off.
+ - Set a bucket policy like this:
+```json
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Sid": "PublicReadGetObject",
+            "Effect": "Allow",
+            "Principal": "*",
+            "Action": "s3:GetObject",
+            "Resource": "YOUR-BUCKET-ARN/*"
+        }
+    ]
+}
+```
+ - Test the access by locating a file in the bucket under Objects and opening its object URL ("Copy URL") in a browser.
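+
+To pass the access key to the running app, one option is the minimal sketch below. It assumes the IHP NixOS module runs your app as the systemd unit `app.service` (the same unit name used in the CloudWatch section below) and that the credentials are picked up from the standard AWS environment variables; check the Storage guide for the exact mechanism your IHP version uses. Note that values written here end up world-readable in the Nix store, so consider a secret-management solution for stricter setups:
+
+```nix
+# Excerpt of flake.nix, in the same NixOS module as services.ihp.
+# YOUR-IAM-ACCESS-KEY / YOUR-IAM-SECRET-KEY are the credentials of the
+# IAM user created above.
+systemd.services.app.environment = {
+  AWS_ACCESS_KEY_ID = "YOUR-IAM-ACCESS-KEY";
+  AWS_SECRET_ACCESS_KEY = "YOUR-IAM-SECRET-KEY";
+};
+```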
+
+#### (Optional) Connecting CloudWatch
+
+For a production system, logging is essential: it informs you about anomalies before customers complain, and it provides evidence when you investigate an incident.
+
+Mind the region of your EC2 instance for these steps.
+
+- [Create a CloudWatch log group](https://eu-west-1.console.aws.amazon.com/cloudwatch/home?region=eu-west-1#logsV2:log-groups/create-log-group) and note down its ARN.
+- Create a log stream inside the previously created log group, for instance `in`.
+- Create an IAM user with an access key and secret, and attach the following policy:
+```json
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Action": [
+                "logs:CreateLogStream",
+                "logs:PutLogEvents",
+                "logs:DescribeLogStreams"
+            ],
+            "Resource": [
+                "[YOUR-GROUP-ARN]",
+                "[YOUR-GROUP-ARN]:*"
+            ]
+        }
+    ]
+}
+```
+- Configure the `services.vector` part in your `flake.nix` to activate logging:
+```nix
+services.vector = {
+  enable = true;
+  journaldAccess = true;
+  settings = {
+    sources.journald = {
+      type = "journald";
+      include_units = ["app.service" "nginx.service" "worker.service"];
+    };
+    transforms.remap_remove_specific_keys = {
+      type = "remap";
+      inputs = ["journald"];
+      source = ''
+        del(._STREAM_ID)
+        del(._SYSTEMD_UNIT)
+        del(._BOOT_ID)
+        del(.source_type)
+      '';
+    };
+    sinks.out = {
+      auth = {
+        access_key_id = "YOUR-IAM-ACCESS-KEY";
+        secret_access_key = "YOUR-IAM-SECRET-KEY";
+      };
+      inputs = ["remap_remove_specific_keys"];
+      type = "aws_cloudwatch_logs";
+      compression = "gzip";
+      encoding.codec = "json";
+      region = "eu-west-1"; # the region of your log group
+      group_name = "YOUR-LOG-GROUP-NAME";
+      stream_name = "in";
+    };
+  };
+};
+```
+- Review the incoming log entries and adjust the remapping accordingly. You might want to remove or transform more entries to make the logs useful for alerts or accountability.
 
-### Connecting to the EC2 Instance
+### Connecting to the EC2 / Virtual Machine Instance
 
 After you've created the instance, configure your local SSH settings to point to the instance.
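+
+For example, a minimal `~/.ssh/config` entry could look like the sketch below. The host alias and the `root` user are assumptions (official NixOS AMIs typically accept root login with the attached key); substitute your instance's public IP and the `ihp-app.pem` key downloaded earlier:
+
+```
+Host ihp-app
+    HostName YOUR-EC2-PUBLIC-IP
+    User root
+    IdentityFile ~/.ssh/ihp-app.pem
+```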