The “INquery” tool allows hospital administrators to upload procedure pre-authorization guidelines to an Amazon S3 bucket that serves as a central repository. An Amazon SageMaker foundation model is leveraged to generate embeddings when storing and querying the PDF documents. This lets hospital administrators use natural language to search procedure-specific insurance company pre-authorization guidelines, thereby reducing the time and resources required to verify information before medical procedures.
The Arizona State University Cloud Innovation Center (CIC) has built a tool that lets hospital administrators query medical documents using natural language processing.
For a large hospital, categorizing and querying medical documents is a challenge. Medical facilities allocate staff to sort through cluttered document systems, which results in long lead times for administrative tasks and frustrating patient experiences. When the health of children is involved, an organized and easily searchable system for medical documents can be invaluable.
For example, insurance company guidelines on procedure pre-authorization are constantly changing. To protect patients from unexpected charges or denied claims, hospital administrators manually check insurance websites for the pre-authorization requirements of every procedure.
The architectural diagram for the full-stack application is:
The architectural diagram for the API reference is:
For the prerequisites, we need:

- `npm` as the package manager to install dependencies
- `aws cdk` installed
- `docker` to create Lambda layers in AWS CDK
- Access to Amazon Bedrock models, configured from the AWS console
- An IAM user with access to the Bedrock service
- The key pair that you want to use for the EC2 deployment
-> First we need to deploy the backend, then wire its resources, such as the APIs, into the frontend.
- Use `aws configure` to either select an existing profile or enter an IAM access key and secret key to configure the AWS CLI
- Enter the values in `.env` for these:
  - `AWS_ACCOUNT="XXXXXXXXXXX"` - your AWS account
  - `AWS_REGION="us-east-1"` - the AWS Region you want
  - `KEY_NAME_VALUE` - the key pair value for EC2
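A filled-in backend `.env` might look like this (all values below are placeholders for illustration, not real account details):

```shell
AWS_ACCOUNT="123456789012"       # your 12-digit AWS account ID
AWS_REGION="us-east-1"           # the Region to deploy to
KEY_NAME_VALUE="my-ec2-keypair"  # name of the EC2 key pair
```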
- Enter the values in `kendralangchainattribute.py` for these:
  - `AWS_ACCESS_ID="XXXXXXXXXXXXXXXXXXXX"` - the IAM Bedrock role access key
  - `AWS_SECRET_ACCESS="XXXXXXXXXXXXXXXXXXXX"` - the IAM Bedrock role secret key
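In `kendralangchainattribute.py` these values end up as plain module-level assignments (the exact layout is an assumption for illustration; the placeholders are shown as-is, and real keys should never be committed to the repo):

```python
# kendralangchainattribute.py -- credential placeholders.
# Replace with the IAM Bedrock role's keys; do not commit real keys.
AWS_ACCESS_ID = "XXXXXXXXXXXXXXXXXXXX"      # IAM Bedrock role access key
AWS_SECRET_ACCESS = "XXXXXXXXXXXXXXXXXXXX"  # IAM Bedrock role secret key
```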
- After cloning the repo, go to the `Backend` folder.
- Run the command `npm i` to install all the dependencies.
- Run the command `cdk synth` to generate the CloudFormation template in the `cdk.out` folder.
- Run the command `cdk bootstrap` to bootstrap the project environment before deploying it.
- Run the command `cdk deploy` to deploy the project and get the outputs.
- You will get three outputs:
  - API for Storage: `dev_get_preSignedURL_API`
  - API for Kendra create attribute: `dev_create_attribute_API`
  - API for Query: `dev_llm_generate_attribute_api`
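You can also capture these outputs to a file with `cdk deploy --outputs-file outputs.json` and read them programmatically instead of copying them by hand. A minimal sketch, assuming the usual CDK outputs-file shape of `{stack: {output_name: value}}` (the stack name and exact output keys below are invented for illustration; check your own `outputs.json`):

```python
import json

# Example shape of outputs.json from `cdk deploy --outputs-file outputs.json`.
# Stack name and output keys are assumptions for illustration only.
sample = json.loads("""
{
  "BackendStack": {
    "devgetpreSignedURLAPI": "https://aaaa.execute-api.us-east-1.amazonaws.com/dev/",
    "devcreateattributeAPI": "https://bbbb.execute-api.us-east-1.amazonaws.com/dev/",
    "devllmgenerateattributeapi": "https://cccc.execute-api.us-east-1.amazonaws.com/dev/"
  }
}
""")

def extract_apis(outputs: dict) -> dict:
    """Flatten {stack: {output_name: url}} into {output_name: url}."""
    return {name: url for stack in outputs.values() for name, url in stack.items()}

apis = extract_apis(sample)
```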
-> Now, we need to deploy the frontend:
- Go to the `Frontend` folder, and enter the values in the `.env` file:
  - `S3_ACCESS_KEY="XXXXXXXXXXXXXXXXXXX"` - the IAM access key to access the bucket
  - `S3_SECRET_KEY="XXXXXXXXXXXXXXXXXXX"` - the IAM secret key to access the bucket
  - `BUCKET_NAME="XXXXXXXXXXXXXXXXXXXXXXXXXXX"` - the name of the bucket we created
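A filled-in frontend `.env` might look like this (all values are placeholders; the bucket name is whatever the backend stack created):

```shell
S3_ACCESS_KEY="<IAM-access-key>"     # IAM access key with bucket access
S3_SECRET_KEY="<IAM-secret-key>"     # matching IAM secret key
BUCKET_NAME="<backend-bucket-name>"  # the bucket created by the backend stack
```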
- Go to the `Frontend` folder, and enter the values in the `Query.py` file:
  - `LLM_api="XXXXXXXXXXXXXXXXXXX"` - the Query API we created in the backend
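Once `LLM_api` is set, the frontend posts natural-language questions to it. A minimal sketch of that call using only the standard library, assuming a JSON body with `query` and `attribute` fields (the real field names and response shape are stack-specific and may differ):

```python
import json
from urllib import request

def build_query_payload(question: str, attribute: str) -> bytes:
    """JSON body for the Query API; field names are assumptions for illustration."""
    return json.dumps({"query": question, "attribute": attribute}).encode("utf-8")

def ask(llm_api: str, question: str, attribute: str) -> dict:
    """POST a natural-language question to the deployed Query API."""
    req = request.Request(
        llm_api,
        data=build_query_payload(question, attribute),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # response shape is stack-specific
        return json.loads(resp.read())
```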
- After creating a Python virtual environment and activating it, run `pip install -r requirements.txt` to install all the requirements locally.
-> You can run the Streamlit app locally with the command `streamlit run Query.py`
-> For deploying it on EC2, you can follow the instructions in `/assets/docs/Deploying Streamlit App on EC2.pdf`
-> For the users of the frontend:
The application has two tabs:

- `Query`: This tab can be used to query the documents uploaded from the `Upload` tab. We can select the type of document we want to query from the left side, as defined in the `Upload` tab.
- `Upload`: This tab can be used to upload a document with a particular attribute. That attribute can later be used to scope queries.
Note: To start testing, you need to upload a file with a particular attribute. The time needed to index a document depends on the file size: a small PDF will take a few seconds, but a 300-page file might take a few minutes.
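The upload flow behind the `Upload` tab can be sketched in two steps: ask the Storage API (`dev_get_preSignedURL_API`) for a presigned S3 URL, then PUT the PDF bytes to it. This is a sketch using only the standard library; the field names (`file_name`, `attribute`, `upload_url`) are assumptions, not the stack's confirmed contract:

```python
import json
from urllib import request

def build_presign_body(filename: str, attribute: str) -> bytes:
    """Request body for the Storage API; field names are assumptions."""
    return json.dumps({"file_name": filename, "attribute": attribute}).encode("utf-8")

def get_presigned_url(storage_api: str, filename: str, attribute: str) -> str:
    """Ask the Storage API for a presigned S3 upload URL."""
    req = request.Request(
        storage_api,
        data=build_presign_body(filename, attribute),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["upload_url"]  # key name is an assumption

def upload_pdf(presigned_url: str, path: str) -> int:
    """PUT the PDF to the presigned URL; returns the HTTP status code."""
    with open(path, "rb") as f:
        data = f.read()
    req = request.Request(presigned_url, data=data, method="PUT",
                          headers={"Content-Type": "application/pdf"})
    with request.urlopen(req) as resp:
        return resp.status
```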
-> For API documentation:
You can find the API documentation in `/assets/docs/API_Postman_Guide.pdf` and follow these steps to use the APIs in your application as well as test them in Postman.
Developer: Loveneet Singh
Developer: Vishnusai Kandati
Sr. Program Manager, AWS: Jubleen Vilku
Architect: Arun Arunachalam
General Manager, ASU: Ryan Hendrix
This project is designed and developed with guidance and support from the ASU Cloud Innovation Center and the City of Phoenix, Arizona teams.