Update Python Lending Library and Item Tracker examples for compatibility with Aurora Serverless v2

Because both of these examples use the Data API, and the Data API for Serverless v2 currently supports Aurora PostgreSQL but not Aurora MySQL, modernizing to Serverless v2 also means switching database engines to a recent major version of Aurora PostgreSQL.

Adapt the Lending Library example to use Serverless v2 in the demo:

* Update wording in the README to focus less on the notion of a Serverless cluster.
* Enhance the code that implements "wait for cluster to be created" so that it also waits for the associated DB instance.
* Create a PostgreSQL wrapper for the library app, equivalent to the MySQL one, using PostgreSQL data types and SQL idioms, in particular the RETURNING clause on INSERT statements as the way to get back the auto-generated ID value(s) from new rows. Substantive changes are mainly around auto-increment columns: instead of adding the AUTO_INCREMENT keyword to the column definition, the autoincrement attribute in the Python source just indicates which column(s) to skip in INSERT statements. It is the caller's responsibility to specify one of the PostgreSQL auto-increment types such as SERIAL or BIGSERIAL.
* Add debugging output to show what is being executed for batch SQL statements.
* Add more debugging code around the interpretation of results from batch SQL statements.
* Make the insert() operation use a RETURNING * clause, since the Data API for PostgreSQL doesn't include data in the generatedFields pieces of the result set.
* Make the INSERT statement for the Authors table work from a single big string: supply all the VALUES data as part of the original SQL string submitted to ExecuteStatement, rather than as an array of parameters used with BatchExecuteStatement. If the VALUES string were constructed with single-quoted values, e.g. ('firstname','lastname'), it would be vulnerable to a syntax error for names like O'Quinn that contain a single quote, so the delimiters are $$first$$ and $$last$$ to avoid any possibility of collisions.
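The "wait for cluster to be created" enhancement above can be sketched with boto3 waiters. This is a minimal illustration, not the example's actual code; the helper name and identifiers are made up, and it assumes boto3's standard RDS waiters:

```python
def wait_for_cluster_ready(rds, cluster_id, instance_id):
    """Wait until the Aurora cluster AND its DB instance are both available.

    With Serverless v2 the DB instance is a separate resource from the
    cluster, so waiting on the cluster alone is not enough before running
    SQL against the database.
    """
    rds.get_waiter("db_cluster_available").wait(DBClusterIdentifier=cluster_id)
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=instance_id)


if __name__ == "__main__":
    import boto3  # assumes boto3 is installed and AWS credentials are configured

    wait_for_cluster_ready(
        boto3.client("rds"), "doc-example-cluster", "doc-example-instance"
    )
```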
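The single-big-string INSERT described above can be built with PostgreSQL dollar quoting. A sketch of the idea, using illustrative tag spellings ($first$/$last$), table name, and column names rather than the example's exact code:

```python
def dollar_quote(value: str, tag: str) -> str:
    """Wrap a value in PostgreSQL dollar quoting, e.g. $last$O'Quinn$last$,
    so embedded single quotes need no escaping."""
    return f"${tag}${value}${tag}$"


def build_authors_insert(names):
    """Build one INSERT ... VALUES statement from a list of (first, last) names."""
    rows = ", ".join(
        f"({dollar_quote(first, 'first')}, {dollar_quote(last, 'last')})"
        for first, last in names
    )
    return f"INSERT INTO authors (first_name, last_name) VALUES {rows};"
```

A name would only break this statement if it literally contained the delimiter text itself, which is the collision the distinct tags are guarding against.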
* Add more debugging output around submitting the SQL to lend or return books, plus exception/debug/tracing code to verify exactly which operations fail and what the parameters and return values are in that vicinity.
* Change IS NULL to IS NOT DISTINCT FROM NULL in get_borrowed_books() and return_book(), because substituting a parameter at the point of 'null' isn't allowed in PostgreSQL with the JDBC protocol, even though it is allowed in MySQL.
* Be more flexible about which date/time-related types get the DATE type hint. Don't cast today's date to a string, so it's recognized as a date/time type.

Set up the CDK path for the cross-service resources to use Serverless v2 instead of Serverless v1:

* Create a DatabaseCluster instead of a ServerlessCluster.
* Include the 'enable Data API' / 'enable HTTP endpoint' flag, recently added to DatabaseCluster in pull request aws/aws-cdk#29338.
* Update the CDK version to 2.132.1, a recent enough version to include that pull request.
* Switch from instanceProps to writer as the attribute of the DatabaseCluster, following the deprecation of instanceProps in Nov 2023.
* Change the VPC and subnets so the example works with existing network resources; the VPC and VPC-subnet attributes had to move from the instance up to the cluster level.
* Switch to a Serverless v2 instance for the writer: serverless vs. provisioned is now a different method call rather than a request for a different instance class.

Reformat the Python code with the 'black' formatter.

For the aurora_serverless_app CloudFormation stack:

* Make the cluster and its associated instance Aurora PostgreSQL version 15.
* Update the CFN stack to refer to the Serverless v2 scaling configuration attribute.
* Add MinCapacity and MaxCapacity fields with numeric values.
* Take out the AutoPause attribute.
* See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbcluster-serverlessv2scalingconfiguration.html
* Take out the engine mode entirely for the cluster.
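The null-handling and DATE-type-hint changes above can be illustrated with a small parameter builder for the Data API. The helper and its name are hypothetical (the real wrapper's type mapping is more complete); it shows the isNull parameter shape that pairs with a `col IS NOT DISTINCT FROM :name` predicate, and the DATE type hint for date values:

```python
import datetime


def to_sql_param(name, value):
    """Map a Python value to an RDS Data API parameter dict.

    None becomes an explicit isNull parameter, suitable for predicates
    written as `col IS NOT DISTINCT FROM :name` (which, unlike IS NULL,
    accepts a bound null in PostgreSQL). Dates keep the DATE type hint
    so the server treats them as date/time values, not free-form text.
    """
    if value is None:
        return {"name": name, "value": {"isNull": True}}
    if isinstance(value, datetime.date):
        return {
            "name": name,
            "value": {"stringValue": value.isoformat()},
            "typeHint": "DATE",
        }
    if isinstance(value, int):
        return {"name": name, "value": {"longValue": value}}
    return {"name": name, "value": {"stringValue": str(value)}}
```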
  (Serverless v2 uses provisioned engine mode, which is the default.)
* Add an RDSDBInstance section. In Serverless v2, the Aurora cluster does have DB instances; it's the DB instances that are marked as serverless, via the 'db.serverless' instance class.
* Add a DependsOn attribute so instance creation waits for the cluster to be created first.

In the Python item tracker code:

* Apply the same Serverless v2 + Data API change to the INSERT statement as in the PHP and Java examples, which were updated in earlier pull requests.
* Turn the DDL statement into a query, getting the auto-generated ID value back from "records" instead of "generatedFields".
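The "records instead of generatedFields" change amounts to reading the RETURNING row out of the ExecuteStatement response. A minimal sketch; the helper name and the sample response shape are illustrative:

```python
def new_row_id(response):
    """Extract the auto-generated ID from an ExecuteStatement response.

    With the Data API for Aurora PostgreSQL, an INSERT ... RETURNING ...
    statement delivers the new ID in 'records'; the generatedFields data
    stays empty, so it can't be used the way it was with Aurora MySQL.
    """
    return response["records"][0][0]["longValue"]


# Shape of a typical response to
# INSERT INTO work_items (...) VALUES (...) RETURNING id:
sample_response = {"records": [[{"longValue": 42}]]}
print(new_row_id(sample_response))  # → 42
```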