The NLM Powered Search Processor is a web-based application that leverages OpenAI's GPT and Whisper models to provide an enhanced search experience. This document serves as a comprehensive guide to setting up, deploying, and using the application.
Before you begin, ensure you have Python 3.8 or higher installed on your system. You will also need Docker for containerization.
- Clone the repository containing the source code to your local machine.
- Navigate to the project directory and install the required dependencies using pip:
```shell
pip install -r requirements.txt
```
To securely manage the OpenAI API key, you will need to generate a Fernet key:
```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
print(key.decode())
```
Set the generated key as the environment variable FERNET_KEY.
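With FERNET_KEY exported, the application can validate the key at startup simply by constructing a Fernet instance, which raises ValueError for a malformed key. A minimal sketch (the in-process assignment below stands in for the shell export):

```python
import os

from cryptography.fernet import Fernet

# One-time step: generate a key and export it before starting the app, e.g.
#   export FERNET_KEY="<generated key>"
# Here we set it in-process so the sketch is self-contained.
os.environ["FERNET_KEY"] = Fernet.generate_key().decode()

# Constructing a Fernet instance validates the key (ValueError if invalid).
fernet = Fernet(os.environ["FERNET_KEY"].encode())

# Quick sanity check: an encrypt/decrypt round trip.
token = fernet.encrypt(b"sanity check")
assert fernet.decrypt(token) == b"sanity check"
```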
- Start the Flask application:
```shell
python main.py
```
- Open your web browser and navigate to http://localhost:5000 to access the application. (The Flask development server serves plain HTTP, not HTTPS.)
- Enter your search query in the provided text field.
- Click the 'Search' button to process your query using GPT.
- The search results will be displayed on the webpage.
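The text search flow above boils down to wrapping the query in a chat request to GPT. The helper below is a hypothetical sketch (the function name and system prompt are illustrative, not the actual code in main.py), with the API call itself shown in comments since it requires the openai package and a valid key:

```python
def build_search_messages(query: str) -> list:
    """Build chat messages for a GPT-backed search request.

    The system prompt is illustrative; the prompt used by main.py may differ.
    """
    return [
        {"role": "system",
         "content": "You are a search assistant. Return concise, relevant results."},
        {"role": "user", "content": query},
    ]

# The request itself would then look roughly like:
#
#   from openai import OpenAI
#   client = OpenAI(api_key=api_key)
#   response = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_search_messages("latest NLM datasets"),
#   )
#   print(response.choices[0].message.content)
```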
- Click the 'Voice Search' button and speak your query.
- The application will transcribe your voice input using Whisper and display the transcription.
- The search results based on your voice query will be displayed on the webpage.
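The voice flow records audio in the browser and sends it to Whisper for transcription. A hypothetical upload-validation helper is sketched below (the function name is illustrative; the format list reflects what OpenAI's transcription endpoint accepted at the time of writing, so check the current OpenAI documentation), with the API call itself shown in comments:

```python
from pathlib import Path

# Audio formats accepted by OpenAI's Whisper transcription endpoint
# (verify against the current OpenAI docs).
SUPPORTED_AUDIO_FORMATS = {".mp3", ".mp4", ".mpeg", ".mpga", ".m4a", ".wav", ".webm"}

def is_supported_audio(filename: str) -> bool:
    """Return True if the recorded file has a format Whisper can transcribe."""
    return Path(filename).suffix.lower() in SUPPORTED_AUDIO_FORMATS

# The transcription request itself would look roughly like this (requires the
# openai package and a valid API key):
#
#   from openai import OpenAI
#   client = OpenAI(api_key=api_key)
#   with open("query.webm", "rb") as audio:
#       transcript = client.audio.transcriptions.create(
#           model="whisper-1", file=audio,
#       )
#   print(transcript.text)
```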
- Upon first use, the application will prompt you to enter your OpenAI API key.
- Enter your key and submit it. The application will validate and store the key securely for subsequent API requests.
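The storage step can be sketched as follows, assuming the app encrypts the key with Fernet and persists it to a local file (the file path and function names here are illustrative, not the actual ones in main.py):

```python
from pathlib import Path

from cryptography.fernet import Fernet

KEY_FILE = Path("openai_api_key.enc")  # illustrative path

def store_api_key(api_key: str, fernet: Fernet) -> None:
    """Encrypt the OpenAI API key and persist it to disk."""
    KEY_FILE.write_bytes(fernet.encrypt(api_key.encode()))

def load_api_key(fernet: Fernet) -> str:
    """Read and decrypt the stored OpenAI API key."""
    return fernet.decrypt(KEY_FILE.read_bytes()).decode()

# Example round trip with a throwaway key; in the app, the Fernet instance
# would be built from the FERNET_KEY environment variable.
fernet = Fernet(Fernet.generate_key())
store_api_key("sk-example", fernet)
assert load_api_key(fernet) == "sk-example"
```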
- Build the Docker image using the provided Dockerfile:
```shell
docker build -t nlm-search-processor .
```
- Run the Docker container:
```shell
docker run -p 80:80 nlm-search-processor
```
- Access the application at http://localhost in your web browser.
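For orientation, a minimal Dockerfile for a Flask application of this shape might look like the following. The repository's provided Dockerfile is authoritative; every detail below (base image, port, entry point) is an assumption:

```dockerfile
# Sketch only -- use the Dockerfile shipped with the repository.
FROM python:3.8-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Assumes main.py binds to port 80 inside the container,
# matching `docker run -p 80:80`.
EXPOSE 80
CMD ["python", "main.py"]
```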
To deploy the application on a server, use the Docker image you have built. Ensure the server has Docker installed, then either transfer the image (for example, with docker save and docker load) or build it on the server from the Dockerfile.
Comprehensive documentation is provided in the form of code comments and this manual. To test the application, follow the instructions outlined in the testing.md file (not provided here, but it should be part of your project documentation).
- A fully functional web-based demonstration of the NLM powered search processor ready for live testing.
- A deployment package including all source code, Dockerfiles for containerization, and deployment instructions.
- Comprehensive documentation outlining the system's features, development journey, and guidance for setup and testing.
For any further assistance or troubleshooting, please refer to the troubleshooting.md file (not provided here, but it should be part of your project documentation) or contact our support team.