Initiate CDK Pipeline #92
Changes from 19 commits
File 1 (ignore rules; the filename was not captured in this extract, but the patterns suggest .gitignore):

@@ -53,3 +53,6 @@ cdk.context.json
 data/
 
+Brewfile.lock.json
+*.xml
+
 target/
File 2 (new CDK app entry point, 21 added lines; likely under bin/):

@@ -0,0 +1,21 @@
+#!/usr/bin/env node
+import 'source-map-support/register';
+
+import * as cdk from 'aws-cdk-lib';
+import { StatelessPipelineStack } from '../lib/pipeline/orcabus-stateless-pipeline-stack';
+
+const AWS_TOOLCHAIN_ACCOUNT = '383856791668'; // Bastion
+const AWS_TOOLCHAIN_REGION = 'ap-southeast-2';
+
+const app = new cdk.App();
+
+new StatelessPipelineStack(app, `OrcaBusStatelessPipeline`, {
+  env: {
+    account: AWS_TOOLCHAIN_ACCOUNT,
+    region: AWS_TOOLCHAIN_REGION,
+  },
+  tags: {
+    'umccr-org:Stack': 'OrcaBusStatelessPipeline',
+    'umccr-org:Product': 'OrcaBus',
+  },
+});
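The StatelessPipelineStack implementation itself is not part of this diff. As a rough sketch of what a stack like this usually wires up with CDK Pipelines; the source repository, branch, and build commands below are assumptions, not taken from the PR:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

export class StatelessPipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Self-mutating pipeline: the Synth step builds the CDK app, and the
    // pipeline updates itself before deploying any application stages.
    new CodePipeline(this, 'StatelessPipeline', {
      synth: new ShellStep('Synth', {
        // Assumed source; CodePipelineSource.gitHub reads an OAuth token from
        // Secrets Manager (secret name 'github-token' by default).
        input: CodePipelineSource.gitHub('umccr/orcabus', 'main'),
        commands: ['yarn install --frozen-lockfile', 'yarn cdk synth'],
      }),
    });
    // Deployment stages (cdk.Stage instances) would be added via addStage().
  }
}

Running such a pipeline in the repurposed Bastion/toolchain account matches the discussion further down: CodePipeline performs the deploys, while the GHA PR build is limited to pre-flight checks.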
File 3 (environment configuration; filename not captured in this extract):

@@ -1,7 +1,7 @@
 import { OrcaBusStatefulConfig } from '../lib/workload/orcabus-stateful-stack';
 import { AuroraPostgresEngineVersion } from 'aws-cdk-lib/aws-rds';
 import { OrcaBusStatelessConfig } from '../lib/workload/orcabus-stateless-stack';
-import { Duration, aws_lambda } from 'aws-cdk-lib';
+import { Duration, aws_lambda, RemovalPolicy } from 'aws-cdk-lib';
 
 const regName = 'OrcaBusSchemaRegistry';
 const eventBusName = 'OrcaBusMain';
@@ -24,7 +24,7 @@ const orcaBusStatefulConfig = {
       defaultDatabaseName: 'orcabus',
       version: AuroraPostgresEngineVersion.VER_15_4,
       parameterGroupName: 'default.aurora-postgresql15',
-      username: 'admin',
+      username: 'postgres',
       dbPort: 5432,
       masterSecretName: rdsMasterSecretName,
       monitoring: {

Review thread on this hunk:

Comment: We have to create a per-app/service DB Secrets Manager record per database to isolate them better, instead of reusing this master (SA account). I wonder whether we can do this arrangement straight up now, at this point... By chance, did you happen to figure out any solution to it?

Comment: I agree that we should isolate on a per-service level, but that means the stateless stack needs to be aware of the consuming services, right? I think the initial thinking here was to allow access at the cluster level to start with, in order not to block any service development. Once we've figured out a better way, we could refactor. Or do you think it's worth working that out now and saving on the refactor?

Comment: I'd recommend we work this out now.

Comment: I agree with working this out. I could be wrong, but it doesn't look like CDK can create more than one database/user per cluster using the regular higher-level constructs (see issue aws/aws-cdk#13588)? Might have to create a migration-style Lambda function which executes

Comment: Or, each service could register its own migration function to create the databases/users? That way the databases wouldn't have to be registered somewhere higher up like in

Comment: Hm, I wonder if the stateless stack could create IAM roles/policies for each service that would allow those services access to only their specific DB(s) on the instance, e.g. passing on the permissions to do what the service needs to do (without control or knowledge of what that may be), as long as it happens in the designated DB. I haven't found any examples, but it looks like you can create IAM policies with conditions to restrict access by rds:DatabaseName.

Comment: Will track this at issue #99.
@@ -94,6 +94,7 @@ export const getEnvironmentConfig = (
         maxACU: 1,
         enhancedMonitoringInterval: Duration.seconds(60),
         enablePerformanceInsights: true,
+        removalPolicy: RemovalPolicy.DESTROY,
       },
       securityGroupProps: {
         ...orcaBusStatefulConfig.securityGroupProps,
@@ -123,6 +124,7 @@ export const getEnvironmentConfig = (
         maxACU: 1,
         enhancedMonitoringInterval: Duration.seconds(60),
         enablePerformanceInsights: true,
+        removalPolicy: RemovalPolicy.DESTROY,
       },
       securityGroupProps: {
         ...orcaBusStatefulConfig.securityGroupProps,
@@ -150,6 +152,7 @@ export const getEnvironmentConfig = (
         numberOfInstance: 1,
         minACU: 0.5,
         maxACU: 1,
+        removalPolicy: RemovalPolicy.RETAIN,
       },
       securityGroupProps: {
         ...orcaBusStatefulConfig.securityGroupProps,
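The three hunks above give the dev and staging clusters RemovalPolicy.DESTROY and the prod cluster RemovalPolicy.RETAIN. A minimal sketch of how the stateful stack might apply the configured value; the variable names and the RETAIN fallback are assumptions, not from this PR:

import { RemovalPolicy } from 'aws-cdk-lib';
import * as rds from 'aws-cdk-lib/aws-rds';

// Inside the stateful stack: apply the per-environment policy, defaulting to
// RETAIN so a misconfigured environment can never delete its cluster.
declare const cluster: rds.DatabaseCluster;                 // created elsewhere
declare const configuredPolicy: RemovalPolicy | undefined;  // from getEnvironmentConfig

cluster.applyRemovalPolicy(configuredPolicy ?? RemovalPolicy.RETAIN);

Worth noting: in aws-cdk-lib, DatabaseCluster enables deletionProtection by default when the removal policy is RETAIN, so the prod cluster gets an extra guard for free.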
Review thread on the toolchain account:

Comment: Why the Bastion account?

Comment: We opted to do CI/CD within CodePipeline, so the GHA PR build will only run pre-flight checks: lint, format, secret scanning, and dependency audits. We chose the Bastion account because we are still using our own AWS accounts; there, the Bastion account has been repurposed as the build/toolchain account.

Comment: As discussed; will re-enable the GHA PR build with #100.

Comment: Also, will tear down and re-deploy into UoM AWS accounts with #102.