Building a WhatsApp generative AI assistant with Amazon Bedrock and Python.

With this WhatsApp app, you can chat in any language with a large language model (LLM) on Amazon Bedrock, send voice notes, and receive transcriptions. With a minor change to the code, you can also send the transcription to the model.

Your data is stored securely in your AWS account and is not shared or used for model training. Even so, it is not recommended to share private information, because the security of data within WhatsApp is not guaranteed.

UPDATE: Now powered by Anthropic's Claude 3.5.

βœ… AWS Level: Advanced - 300

Prerequisites:

πŸ’° Cost to complete:

How The App Works


1- Message input:


  1. WhatsApp receives the message: voice/text/image.
  2. Amazon API Gateway receives the message from the WhatsApp webhook (previously authenticated).
  3. Then, an AWS Lambda function named whatsapp_in processes the message and writes it to an Amazon DynamoDB table named whatsapp-metadata for storage (a sketch of this step follows the list).
  4. The whatsapp-metadata table has a DynamoDB stream configured, which triggers the process_stream Lambda function.
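
For illustration only, here is a minimal sketch of what a handler like whatsapp_in can do with boto3. The table key, attribute names, and environment variable are assumptions for the example, not the repository's actual code:

import json
import os
import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical environment variable; the CDK stack wires in the real table name.
table = dynamodb.Table(os.environ.get("WHATSAPP_METADATA_TABLE", "whatsapp-metadata"))

def lambda_handler(event, context):
    # API Gateway proxies the WhatsApp webhook payload in the request body.
    payload = json.loads(event["body"])
    table.put_item(Item={
        "id": payload["entry"][0]["id"],   # WhatsApp Business Account id from the webhook entry
        "payload": json.dumps(payload),    # raw message for the DynamoDB stream consumer
    })
    return {"statusCode": 200, "body": "ok"}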

2- Message processing:

Text Message:

The process_stream Lambda function sends the text of the message to an agent Lambda function.

This application includes two Lambda functions that can fulfill this role: langchain_agent_text, which uses LangChain to handle the conversation and Amazon Bedrock to invoke the LLM, and agent_text_v3, which calls the Amazon Bedrock API directly with Claude 3 Sonnet. Which one to use is up to you.
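
For reference, a direct Bedrock invocation in the style of agent_text_v3 can look like the following minimal sketch; the model ID, token limit, and helper name are assumptions, so check the repository code for the actual implementation:

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def ask_claude(prompt):
    # Invoke Claude 3 Sonnet through the Bedrock Messages API.
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [
                {"role": "user", "content": [{"type": "text", "text": prompt}]}
            ],
        }),
    )
    body = json.loads(response["body"].read())
    return body["content"][0]["text"]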

Voice Message:


  1. The audio_job_transcriptor Lambda function is triggered. This Lambda function downloads the WhatsApp audio from the link in the message into an Amazon S3 bucket (authenticating with the WhatsApp token), then converts the audio to text using the Amazon Transcribe start_transcription_job API, which leaves the transcript file in an output Amazon S3 bucket.

The function that starts the transcription job looks like this:

import boto3

transcribe_client = boto3.client('transcribe')

def start_job_transciptor(jobName, s3Path_in, OutputKey, codec):
    # Start an asynchronous transcription job; Amazon Transcribe detects the
    # spoken language and writes the transcript to the output bucket
    # (BucketName is read from the Lambda environment).
    response = transcribe_client.start_transcription_job(
        TranscriptionJobName=jobName,
        IdentifyLanguage=True,
        MediaFormat=codec,
        Media={'MediaFileUri': s3Path_in},
        OutputBucketName=BucketName,
        OutputKey=OutputKey
    )

πŸ’‘ Notice that the IdentifyLanguage parameter is set to True, so Amazon Transcribe automatically determines the primary language in the audio.


  2. The transcriber_done Lambda function is triggered by an Amazon S3 Event Notification (object created) once the transcription job is complete. It extracts the transcript from the output S3 bucket and sends it to the whatsapp_out Lambda function to respond to WhatsApp. A sketch of the transcript extraction follows.
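
For reference, pulling the transcript text out of the Transcribe output JSON can look like this minimal sketch; the bucket and key arguments are placeholders, while the results.transcripts path is the standard Amazon Transcribe output format:

import json
import boto3

s3_client = boto3.client("s3")

def get_transcript_text(bucket, key):
    # Download the Transcribe output JSON and return the transcript text.
    obj = s3_client.get_object(Bucket=bucket, Key=key)
    data = json.loads(obj["Body"].read())
    return data["results"]["transcripts"][0]["transcript"]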

βœ… You have the option to uncomment the code in the transcriber_done Lambda function and send the voice note transcription to the langchain_agent_text Lambda function:

# Inside the transcriber_done handler: lambda_client = boto3.client('lambda'),
# and ClientError is imported from botocore.exceptions.
try:
    # Invoke the text agent asynchronously with the transcription payload.
    response_3 = lambda_client.invoke(
        FunctionName=LAMBDA_AGENT_TEXT,
        InvocationType='Event',  # use 'RequestResponse' for a synchronous call
        Payload=json.dumps({
            'whats_message': text,
            'whats_token': whats_token,
            'phone': phone,
            'phone_id': phone_id,
            'messages_id': messages_id
        })
    )

    print(f'\nResponse: {response_3}')

    return response_3

except ClientError as e:
    err = e.response
    print(err.get("Error", {}).get("Code"))
    return f"An error occurred invoking {LAMBDA_AGENT_TEXT}"

Image Message:


The process_stream Lambda function sends the message to a Lambda function named agent_image_v3.
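
To illustrate what an image-capable call can look like, here is a minimal sketch of sending an image to Claude 3 through the Bedrock Messages API; the helper name, model ID, and prompt are assumptions, not the repository's agent_image_v3 code:

import base64
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def describe_image(image_bytes, question="Describe this image."):
    # Claude 3 accepts base64-encoded images as content blocks.
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "image", "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": base64.b64encode(image_bytes).decode("utf-8"),
                    }},
                    {"type": "text", "text": question},
                ],
            }],
        }),
    )
    body = json.loads(response["body"].read())
    return body["content"][0]["text"]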

3- LLM Processing:


The agent receives the text and performs the following:

  1. Queries the Amazon DynamoDB table called user_metadata to see whether the session has expired. If the session is active, it recovers the SessionID needed for the next step; if it has expired, it creates a new session timer (a sketch of this check follows the list). In the Lambda function named langchain_agent_text the chat history is managed with the LangChain memory class; in the Lambda functions agent_text_v3 and agent_image_v3 it is handled with a JSON array fed with the conversation history.
  2. Queries the Amazon DynamoDB session table to see whether there is any previous conversation history.
  3. Queries the LLM through Amazon Bedrock using the following prompt:

    The following is a friendly conversation between a human and an AI.
    The AI is talkative and provides lots of specific details from its context.
    If the AI does not know the answer to a question, it truthfully says it does not know.
    Always reply in the original user language.

  4. Sends the response to WhatsApp through the whatsapp_out Lambda function.
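
A minimal sketch of the session-expiry check described in step 1, assuming a user_metadata table keyed by phone number with an epoch-seconds timestamp; the key names, attribute names, and helper are illustrative, not the repository's exact schema:

import time
import uuid
import boto3

SESSION_SECONDS = 240  # matches the session time set later in the deployment step
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user_metadata")

def get_or_create_session(phone):
    now = int(time.time())
    item = table.get_item(Key={"phone": phone}).get("Item")
    if item and now - int(item["last_seen"]) <= SESSION_SECONDS:
        session_id = item["session_id"]    # session still active: reuse it
    else:
        session_id = str(uuid.uuid4())     # expired or first contact: start a new session
    table.put_item(Item={"phone": phone, "session_id": session_id, "last_seen": now})
    return session_id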

πŸ’‘ The phrase "Always reply in the original user language" ensures that the model always responds in the user's original language; the multilingual capability is provided by Anthropic Claude, the model used in this application.

Let's build!

Step 0: Activate a WhatsApp account in Facebook Developers

1- Get Started with the New WhatsApp Business Platform

2- How To Generate a Permanent Access Token β€” WhatsApp API

3- Get started with the Messenger API for Instagram

Step 1: App Set Up

βœ… Clone the repo

git clone https://github.com/build-on-aws/building-gen-ai-whatsapp-assistant-with-amazon-bedrock-and-python

βœ… Go to:

cd private-assistant

Step 2: Deploy architecture with CDK.

In private_assistant_stack.py, edit this line with the WhatsApp Facebook Developer app number:

DISPLAY_PHONE_NUMBER = 'YOUR-WHATSAPP-NUMBER'


The agent manages conversation memory, and you must set the session duration in this line:

if diferencia > 240: # session time in seconds

Tip: Kenton Blacutt, an AWS Associate Cloud App Developer, collaborated with LangChain to create the Amazon DynamoDB-based memory class that lets us store the history of a LangChain agent in an Amazon DynamoDB table.
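
As an illustration of that memory class, here is a minimal sketch using LangChain's DynamoDB-backed chat history; the import path and table name are assumptions (adjust to the versions pinned in requirements.txt, and the table must already exist):

from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

# Store the conversation for one session in a DynamoDB table.
history = DynamoDBChatMessageHistory(
    table_name="SessionTable",             # hypothetical table name
    session_id="whatsapp-5491100000000",   # one history per WhatsApp session
)
history.add_user_message("Hello, how are you?")
history.add_ai_message("I'm doing great, how can I help you today?")
print(history.messages)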

To use the langchain_agent_text Lambda function, change the LAMBDA_AGENT_TEXT environment variable of the process_stream Lambda function in private_assistant_stack:

#Line 77
Fn.process_stream.add_environment(key='ENV_LAMBDA_AGENT_TEXT', value=Fn.langchain_agent_text.function_name)


βœ… Create the virtual environment by following the steps in the README:

python3 -m venv .venv
source .venv/bin/activate

For Windows:

.venv\Scripts\activate.bat

βœ… Install The Requirements:

pip install -r requirements.txt

βœ… Synthesize the CloudFormation template with the following command:

cdk synth

βœ…πŸš€ The Deployment:

cdk deploy

Deployment Time

Step 3: WhatsApp Configuration

Edit the WhatsApp configuration values from Facebook Developer in the AWS Secrets Manager console.


βœ… The verification token can be any value, but it must be the same in steps 3 and 4.
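
For context, WhatsApp verifies the webhook with a GET request carrying hub.mode, hub.verify_token, and hub.challenge. A minimal sketch of how a Lambda function behind API Gateway can answer it; the environment variable name is an assumption, not the repository's whatsapp_in code:

import os

def lambda_handler(event, context):
    # Meta sends hub.mode=subscribe with the token you configured; echo back
    # hub.challenge if the token matches, otherwise reject the request.
    params = event.get("queryStringParameters") or {}
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == os.environ["VERIFICATION_TOKEN"]):
        return {"statusCode": 200, "body": params.get("hub.challenge", "")}
    return {"statusCode": 403, "body": "forbidden"}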

Step 4: Webhook Configuration

  1. Go to Amazon API Gateway Console
  2. Click on myapi.
  3. Go to Stages -> prod -> /cloudapi -> GET, and copy Invoke URL.


  4. Configure the webhook in the Facebook Developer application:
    • Set the Invoke URL.
    • Set the verification token.



Enjoy the app!

βœ… Chat and ask follow-up questions. Test the multi-language skills.


βœ… Send and transcribe voice notes. Test the app's capabilities for transcribing multiple languages.


βœ… Send photos and test the app's capabilities to describe and identify what's in the images. Play with the prompts.


πŸš€ Keep testing the app, play with the prompt in the langchain_agent_text Lambda function, and adjust it to your needs.

Clean the house!

When you finish testing and want to clean up the application, just follow these two steps:

  1. Delete the files from the Amazon S3 bucket created in the deployment (a sketch for emptying the bucket follows below).
  2. Run this command in your terminal:

cdk destroy
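
If you prefer to empty the bucket programmatically, here is a minimal boto3 sketch; the bucket name is a placeholder, so use the one created by the stack:

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("YOUR-DEPLOYMENT-BUCKET-NAME")  # placeholder: the bucket created by the stack

# Delete every object so CloudFormation can remove the bucket during cdk destroy.
bucket.objects.all().delete()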

Conclusion:

In this tutorial, you deployed a serverless WhatsApp application that allows users to interact with an LLM through Amazon Bedrock. The architecture uses API Gateway as the connection between WhatsApp and the application, AWS Lambda functions run the code that handles conversations, and Amazon DynamoDB tables manage and store message information, session details, and conversation history.

You now have the essential code to improve the application. One option moving forward is to incorporate Retrieval-Augmented Generation (RAG) to generate more sophisticated responses depending on the context.

To handle customer service scenarios, the application could connect to Amazon Connect and transfer calls to an agent if the LLM cannot resolve an issue.

With further development, this serverless architecture demonstrates how conversational AI can power engaging and useful chat experiences on popular messaging platforms.

🚨 Did you like this blog? πŸ‘©πŸ»β€πŸ’» Do you have comments? 🎀 Tell me everything here.

πŸš€ Some links for you to continue learning and building:


πŸ‡»πŸ‡ͺπŸ‡¨πŸ‡± Thank you!


Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
