Initiate CDK Pipeline #92
Conversation
williamputraintan commented on Feb 2, 2024
- Deployed 2 CDK pipelines (stateful and stateless) in the Bastion account
- Added CDK tests with cdk-nag (a minimal sketch of the wiring is below)
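For readers unfamiliar with cdk-nag: a minimal sketch, assuming the standard AwsSolutions rule pack, of how it is typically applied to a CDK app so that synth surfaces unsuppressed findings. The stack wiring is a placeholder, not necessarily what this PR does.

// Hedged sketch: apply cdk-nag's AwsSolutions rule pack across the whole app.
import { App, Aspects } from 'aws-cdk-lib';
import { AwsSolutionsChecks } from 'cdk-nag';

const app = new App();
// ... pipeline stacks are instantiated here ...
Aspects.of(app).add(new AwsSolutionsChecks({ verbose: true })); // report each rule hit with details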
Nice, first time I see cdk-nag being used!
Where can I see a successful run of this? In Bastion (why Bastion, btw)? I'm interested in the runtime, and whether the outputs are easily visible without auth (publicly and safely, like GHA).
import * as cdk from 'aws-cdk-lib';
import { StatelessPipelineStack } from '../lib/pipeline/orcabus-stateless-pipeline-stack';

const AWS_TOOLCHAIN_ACCOUNT = '383856791668'; // Bastion
Why the Bastion account?
We opted to do CI/CD within CodePipeline, so the GHA PR build will only do pre-flight checks: lint, format, secret scan, dependency audits.
We chose the Bastion account because we are still on our own AWS accounts; there, the Bastion account has been repurposed as the build/toolchain account. A sketch of that wiring is below.
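To make the arrangement concrete: a hedged sketch of pinning the pipeline stack to the toolchain account, reusing the constant from the diff above. The stack id, the region, and the assumption that the stack accepts standard StackProps are mine, not taken from this PR.

import * as cdk from 'aws-cdk-lib';
import { StatelessPipelineStack } from '../lib/pipeline/orcabus-stateless-pipeline-stack';

const AWS_TOOLCHAIN_ACCOUNT = '383856791668'; // Bastion, repurposed as build/toolchain

const app = new cdk.App();
// Hypothetical instantiation: pin the pipeline to the toolchain account.
new StatelessPipelineStack(app, 'OrcaBusStatelessPipeline', {
  env: { account: AWS_TOOLCHAIN_ACCOUNT, region: 'ap-southeast-2' }, // region assumed
});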
As discussed; will re-enable the GHA PR build with #100
Also will tear down and re-deploy into UoM AWS accounts with #102
I will review in a tick, soon.
Quick link: The last pipeline run is failing because it listens to …
Looking good, Will. Great job setting this up! A few minor comments to attend to and/or discuss further. Approving otherwise.
@@ -24,7 +24,7 @@ const orcaBusStatefulConfig = {
   defaultDatabaseName: 'orcabus',
   version: AuroraPostgresEngineVersion.VER_15_4,
   parameterGroupName: 'default.aurora-postgresql15',
-  username: 'admin',
+  username: 'postgres',
   dbPort: 5432,
   masterSecretName: rdsMasterSecretName,
We have to create a per-app/service DB Secrets Manager record per database to isolate them better, instead of reusing this master (SA account). I wonder whether we can set this arrangement up straight away at this point... By chance, have you figured out any solution for it? (A hedged sketch of what a per-service secret could look like follows.)
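For illustration only: a minimal sketch of minting one Secrets Manager record per service, assuming a hypothetical naming scheme and service-named DB users; none of these names come from this PR.

import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';
import { Construct } from 'constructs';

// Create one login secret per service instead of sharing the master secret.
function serviceDbSecret(scope: Construct, service: string): secretsmanager.Secret {
  return new secretsmanager.Secret(scope, `${service}DbSecret`, {
    secretName: `orcabus/${service}/rds-login`, // hypothetical naming scheme
    generateSecretString: {
      secretStringTemplate: JSON.stringify({ username: service }), // DB user named after the service
      generateStringKey: 'password',
      excludeCharacters: '/@" ', // avoid characters that complicate connection strings
    },
  });
}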
I agree that we should isolate at a per-service level, but that means the stateless stack needs to be aware of the consuming services, right?
And any database required by any service would have to be registered to and created by the stateless stack, IIRC. The stateless stack can create "user" credentials for each service, but I think they'd need to be linked to existing DBs? I don't yet have an understanding of what's possible...
I think the initial thinking here was to allow access at the cluster level to start with, in order to not block any service development. Once we've figured out a better way, we could refactor.
Or do you think it's worth working that out now and saving on the refactor?
I'd recommend we work this out now.
I agree with working this out. Could be wrong, but it doesn't look like CDK can create more than one database/user per cluster using the regular higher-level constructs (see issue aws/aws-cdk#13588)? We might have to create a migration-style Lambda function which executes create database ... after the RDS cluster is up and running, e.g. using something like this. Then, I think each service would need to register their preferred database name, user, and credentials somewhere (in constants.ts?). (A hedged sketch of such a handler follows.)
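A minimal sketch of what that migration-style handler could look like, assuming Node.js with the pg client and connection details passed via environment variables; the event shape and the secret plumbing are assumptions, not this repo's code.

import { Client } from 'pg';

// Hypothetical handler: create a service database after the cluster is up.
export const handler = async (event: { dbName: string }) => {
  const client = new Client({
    host: process.env.DB_HOST,
    port: Number(process.env.DB_PORT ?? 5432),
    user: process.env.DB_USER, // master user, e.g. 'postgres'
    password: process.env.DB_PASSWORD, // would be read from the master secret in practice
    database: 'postgres', // connect to the default DB to issue CREATE DATABASE
  });
  await client.connect();
  try {
    // CREATE DATABASE has no IF NOT EXISTS, so check pg_database first.
    const existing = await client.query(
      'SELECT 1 FROM pg_database WHERE datname = $1',
      [event.dbName]
    );
    if (existing.rowCount === 0) {
      // Identifiers cannot be bound as parameters; dbName must be validated upstream.
      await client.query(`CREATE DATABASE "${event.dbName}"`);
    }
  } finally {
    await client.end();
  }
};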
Or, each service could register its own migration function to create the databases/users? That way the databases wouldn't have to be registered somewhere higher up, like in constants.ts. Not sure if this makes more sense?
Hm, I wonder if the stateless stack could create IAM roles/policies for each service that would allow these services access to only their specific DB(s) on the instance, e.g. passing on the permissions to do what the service needs to do (without control/knowledge of what that may be), as long as it happens in the designated DB.
I haven't found any examples, but it looks like you can create IAM policies with conditions to restrict access by rds:DatabaseName. (One possible IAM-based sketch is below.)
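One commonly documented variant of this idea, sketched with placeholders: with RDS IAM database authentication enabled, rds-db:connect can be granted on a specific DB user ARN, which scopes access per service by DB user rather than by rds:DatabaseName. The resource id, region, account, and user name here are made up.

import * as iam from 'aws-cdk-lib/aws-iam';

// Hypothetical per-service statement: only allows connecting as this service's DB user.
const connectAsServiceUser = new iam.PolicyStatement({
  actions: ['rds-db:connect'],
  resources: [
    // Format: arn:aws:rds-db:<region>:<account>:dbuser:<DbiResourceId>/<db-user-name>
    'arn:aws:rds-db:ap-southeast-2:123456789012:dbuser:cluster-ABCDEFGHIJKL/filemanager',
  ],
});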
Will track this in issue #99
Folks, I would like to merge this sometime today please. Hope we are all good if there are no more comments hereafter.
@mmalenic Marrrko; is this OK on your side? Planning to merge this in a tick, next. Hopefully you can rebase onto it once merged to main.
Yes, go for it. I'll rebase/merge into my PRs.
Merging...