Generative AI Application Builder on AWS facilitates the development, rapid experimentation, and deployment of generative artificial intelligence (AI) applications without requiring deep experience in AI. The solution includes integrations with Amazon Bedrock and its included LLMs, such as Amazon Titan, and pre-built connectors for 3rd-party LLMs.

Generative AI Application Builder on AWS

The Generative AI Application Builder (GAAB) on AWS solution provides a web-based management dashboard to deploy customizable Generative AI (Gen AI) use cases. This Deployment dashboard allows customers to deploy, experiment with, and compare different combinations of Large Language Model (LLM) use cases. Once customers have successfully configured and optimized their use case, they can take their deployment into production and integrate it within their applications.

The Generative AI Application Builder is published under an Apache 2.0 license and is targeted for novice to experienced users who want to experiment and productionize different Gen AI use cases. The solution uses LangChain open-source software (OSS) to configure connections to your choice of Large Language Models (LLMs) for different use cases. The first release of GAAB allows users to deploy chat use cases which allow the ability to query over users' enterprise data in a chatbot-style User Interface (UI), along with an API to support custom end-user implementations.

Some of the features of GAAB are:

  • Rapid experimentation with ability to productionize at scale
  • Extendable and modularized architecture using nested Amazon CloudFormation stacks
  • Enterprise ready for company-specific data to tackle real-world business problems
  • Integration with Amazon Bedrock and select third-party LLM providers
  • Multi-LLM comparison and experimentation with metric tracking using Amazon CloudWatch dashboards
  • Growing list of model providers and Gen AI use cases

For a detailed solution walkthrough, refer to the Generative AI Application Builder on AWS Implementation Guide.

Architecture Overview

There are three unique user personas referred to in the solution walkthrough below:

  • The DevOps user is responsible for deploying the solution within the AWS account and for managing the infrastructure, updating the solution, monitoring performance, and maintaining the overall health and lifecycle of the solution.
  • The admin users are responsible for managing the content contained within the deployment. These users get access to the Deployment dashboard UI and are primarily responsible for curating the business user experience. This is our primary target customer.
  • The business users represent the individuals for whom the use case has been deployed. They are the consumers of the knowledge base and are responsible for evaluating and experimenting with the LLMs.

Deployment Dashboard

When the DevOps user deploys the Deployment Dashboard, the following components are deployed in the AWS account:

Diagram

  1. The admin users can log in to the deployed Deployment Dashboard UI.
  2. Amazon CloudFront delivers the web UI which is hosted in an Amazon S3 bucket.
  3. AWS WAF protects the APIs from attacks. This solution configures a set of rules called a web access control list (web ACL) that allows, blocks, or counts web requests based on configurable, user-defined web security rules and conditions.
  4. The web app leverages a set of REST APIs that are exposed using Amazon API Gateway.
  5. Amazon Cognito authenticates users and backs both the CloudFront web UI and API Gateway.
  6. AWS Lambda is used to provide the business logic for the REST endpoints. This Backing Lambda will manage and create the necessary resources to perform use case deployments using AWS CloudFormation.
  7. Amazon DynamoDB is used as a configuration store for the deployment details.
  8. When a new use case is created by the admin user, the Backing Lambda will initiate a CloudFormation stack creation event for the requested use case.
  9. If the configured deployment uses a third-party LLM, then a secret will be created in AWS Secrets Manager to store the API key.
  10. All of the LLM configuration options provided by the admin user in the deployment wizard are saved in AWS Systems Manager Parameter Store. This parameter is used by the deployment to configure the LLM at runtime.
  11. Using Amazon CloudWatch, operational metrics are collected by various services to generate custom dashboards used for monitoring the solution's health.
Note: Although the Deployment dashboard can be launched in most AWS regions, the deployed use cases have some restrictions based on service availability. See Supported AWS Regions in the Implementation Guide for more details.
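To make steps 7, 8, and 10 concrete, the following is a minimal sketch of the kind of use-case record the Backing Lambda could persist: a DynamoDB item tracking the deployment plus the LLM options destined for Parameter Store. All field names, the parameter key, and the model ID here are illustrative assumptions, not the solution's actual schema.

```python
import json

def build_use_case_record(use_case_name, model_provider, model_id, ssm_param_key):
    """Assemble a hypothetical deployment record (field names are
    illustrative, not the solution's actual schema)."""
    return {
        "UseCaseName": use_case_name,
        # Set when the Backing Lambda kicks off stack creation (step 8)
        "StackStatus": "CREATE_IN_PROGRESS",
        # Points at the runtime LLM configuration in Parameter Store (step 10)
        "SSMParameterKey": ssm_param_key,
        "LlmParams": {
            "ModelProvider": model_provider,
            "ModelId": model_id,
            "Temperature": 0.2,
            "RAGEnabled": True,
        },
    }

record = build_use_case_record(
    "hr-policy-chat",                     # hypothetical use case name
    "Bedrock",
    "amazon.titan-text-express-v1",
    "/gaab/use-case/hr-policy-chat",      # hypothetical parameter key
)
print(json.dumps(record, indent=2))
```

In practice the record would be written with the DynamoDB and SSM APIs; the sketch only shows the shape of the configuration that ties the dashboard entry to the runtime parameter.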

Use Cases

Once the Deployment Dashboard is deployed, the admin user can then deploy multiple use case stacks. When a use case stack is deployed by the admin user, the following components are deployed in the AWS account:

Diagram

  1. Business users can log in to the use case UI.
  2. Amazon CloudFront delivers the web UI which is hosted in an Amazon S3 bucket.
  3. The web UI leverages a WebSocket integration built using Amazon API Gateway. The API Gateway is backed by a custom Lambda Authorizer function, which returns the appropriate IAM policy based on the Amazon Cognito group the authenticating user is part of.
  4. Amazon Cognito authenticates users and backs both the CloudFront web UI and API Gateway.
  5. The LangChain Orchestrator is a collection of Lambda functions and layers that provide the business logic for fulfilling requests coming from the business user.
  6. The LangChain Orchestrator leverages Parameter Store and DynamoDB to get the configured LLM options and necessary session information (such as the chat history).
  7. If the deployment has enabled a knowledge base, then the LangChain Orchestrator will leverage Amazon Kendra to run a search query to retrieve document excerpts.
  8. Using the chat history, query, and context from Amazon Kendra, the LangChain Orchestrator creates the final prompt and sends the request to the LLM hosted on Amazon Bedrock.
  9. If using a third-party LLM outside of Amazon Bedrock, the API key is stored in AWS Secrets Manager and must be obtained before making the API call to the third-party LLM provider.
  10. As the response comes back from the LLM, the LangChain Orchestrator Lambda streams the response back through the API Gateway WebSocket to be consumed by the client application.
  11. Using Amazon CloudWatch, operational metrics are collected by various services to generate custom dashboards used for monitoring the deployment's health.
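The custom Lambda Authorizer pattern in step 3 can be sketched as below: the function returns an IAM policy that allows or denies execute-api:Invoke based on the caller's Cognito group. The response shape (principalId plus policyDocument) is the standard API Gateway Lambda authorizer format; the group name and ARN are illustrative assumptions, and a real authorizer would first validate the Cognito JWT rather than trusting claims on the event.

```python
def handler(event, context=None):
    """Hypothetical authorizer: map the Cognito group to an IAM policy.
    In the real flow the claims come from a validated Cognito JWT."""
    claims = event.get("claims", {})
    groups = claims.get("cognito:groups", [])
    # "business-users" is an assumed group name for illustration
    allowed = "business-users" in groups
    return {
        "principalId": claims.get("sub", "anonymous"),
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }

policy = handler({
    "claims": {"sub": "user-123", "cognito:groups": ["business-users"]},
    "methodArn": "arn:aws:execute-api:us-east-1:123456789012:abc/$default/$connect",
})
```

API Gateway caches this policy per caller, so group membership effectively gates every WebSocket connection without per-message checks.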

Deployment

NOTE:

  • To use Amazon Bedrock, you must request access to models before they are available for use. Refer to Model access in the Amazon Bedrock User Guide for more details.
  • You can also test the UI project locally by deploying the API endpoints and the rest of the infrastructure. To do so, follow either of the two options below, and then refer to the Deployment Dashboard and Chat UI projects for details.

There are two options for deployment into your AWS account:

1. Using cdk deploy

The following prerequisites are required to build and deploy locally:

Note: Configure the AWS CLI with your AWS credentials or have them exported in the CLI terminal environment. If the credentials are invalid or expired, running cdk deploy produces an error.

After cloning the repo from GitHub, complete the following steps:

  cd <project-directory>/source/infrastructure
  npm install
  npm run build
  cdk deploy DeploymentPlatformStack --parameters AdminUserEmail=<replace with admin user's email>

For the deployment dashboard to deploy LLM chat use cases, you additionally need to stage assets from the source/infrastructure/cdk.out directory. To simplify staging, you can use the source/stage-assets.sh script, which should be run from the source directory.

cd <project-directory>/source
./stage-assets.sh

2. Using a custom build

Refer to the Creating a custom build section below.

Source code

Project structure

├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Config
├── LICENSE.txt
├── NOTICE.txt
├── README.md
├── buildspec.yml
├── deployment
│   ├── build-open-source-dist.sh
│   ├── build-s3-dist.sh
│   ├── cdk-solution-helper
│   ├── clean-for-scan.sh
│   ├── get-cdk-version.js
│   └── manifest.yaml
├── pyproject.toml
├── pytest.ini
├── sonar-project.properties
└── source
    ├── images
    ├── infrastructure                       [CDK infrastructure]
    ├── lambda                               [Lambda functions for the application]
    ├── pre-build-lambda-layers.sh           [pre-builds lambda layers for the project]
    ├── run-all-tests.sh                     [shell script that can run unit tests for the entire project]
    ├── stage-assets.sh
    ├── test
    ├── ui-chat                              [Web App project for chat UI]
    └── ui-deployment                        [Web App project for deployment dashboard UI]

Creating a custom build

1. Clone the repository

Run the following command:

git clone https://github.com/aws-solutions/<repository_name>

2. Build the solution for deployment

  1. Install the dependencies:

     cd <rootDir>/source/infrastructure
     npm install

  2. (Optional) Run the unit tests:

     Note: To run the unit tests, docker must be installed and running, and valid AWS credentials must be configured.

     cd <rootDir>/source
     chmod +x ./run-all-tests.sh
     ./run-all-tests.sh

  3. Configure the bucket name of your target Amazon S3 distribution bucket:

     export DIST_OUTPUT_BUCKET=my-bucket-name
     export VERSION=my-version

  4. Build the distributable:

     cd <rootDir>/deployment
     chmod +x ./build-s3-dist.sh
     ./build-s3-dist.sh $DIST_OUTPUT_BUCKET $SOLUTION_NAME $VERSION $CF_TEMPLATE_BUCKET_NAME

Parameter details:

  • $DIST_OUTPUT_BUCKET - The global name of the distribution. The AWS Region is appended to this name (for example, my-bucket-name-us-east-1) to create a regional bucket. The Lambda artifacts should be uploaded to the regional buckets for the CloudFormation template to pick them up for deployment.
  • $SOLUTION_NAME - The name of this solution (for example, generative-ai-application-builder-on-aws)
  • $VERSION - The version number of the change
  • $CF_TEMPLATE_BUCKET_NAME - The name of the S3 bucket where the CloudFormation templates should be uploaded

When you create and use buckets, we recommend that you:

  • Use randomized names or a UUID as part of your bucket naming strategy.
  • Ensure that buckets aren't public.
  • Verify bucket ownership prior to uploading templates or code artifacts.
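The randomized-name recommendation above can be sketched as follows; the base name is a placeholder, and the hex suffix keeps the result within S3's lowercase naming rules.

```python
import uuid

def randomized_bucket_name(base: str) -> str:
    """Append a short UUID-derived suffix so the bucket name is hard to
    guess or pre-register by a third party."""
    # uuid4().hex is lowercase hexadecimal, which satisfies S3 naming rules
    return f"{base}-{uuid.uuid4().hex[:12]}"

name = randomized_bucket_name("my-bucket-name")
```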
  5. Deploy the distributable to an Amazon S3 bucket in your account.

Note: You must have the AWS CLI installed.

aws s3 cp ./global-s3-assets/ s3://my-bucket-name-<aws_region>/generative-ai-application-builder-on-aws/<my-version>/ --recursive --acl bucket-owner-full-control --profile aws-cred-profile-name
aws s3 cp ./regional-s3-assets/ s3://my-bucket-name-<aws_region>/generative-ai-application-builder-on-aws/<my-version>/ --recursive --acl bucket-owner-full-control --profile aws-cred-profile-name

Anonymized data collection

This solution collects anonymized operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the implementation guide.


Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
