# OpenAI-Request-Executor-MVP

A Python backend service for effortless OpenAI API interactions with natural language requests.

Developed with FastAPI, Python, PostgreSQL, and OpenAI.

## 📑 Table of Contents

- 📍 Overview
- 📦 Features
- 📂 Structure
- 💻 Installation
- 🏗️ Usage
- 🌐 Hosting
- 📄 License
- 👏 Authors

πŸ“ Overview

This repository contains a Minimum Viable Product (MVP) called "OpenAI-Request-Executor-MVP." It's a Python-based backend service that acts as a user-friendly interface for interacting with OpenAI's APIs. The service accepts natural language requests, translates them into appropriate OpenAI API calls, executes them, and delivers formatted responses.
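
At its core, this flow is a thin wrapper around OpenAI's chat completion API. The sketch below illustrates the idea only; it assumes the `openai>=1.0` client, and the function name and model choice are illustrative rather than taken from the repository's code.

```python
# services/openai_service.py -- illustrative sketch, not the repository's actual code.
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def execute_request(text: str) -> str:
    """Translate a natural-language request into a chat completion call."""
    completion = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": text}],
    )
    return completion.choices[0].message.content or ""
```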

## 📦 Features

| Feature | Description |
|---------|-------------|
| ⚙️ Architecture | The service employs a layered architecture, separating the presentation, business logic, and data access layers for improved maintainability and scalability. |
| 📄 Documentation | The repository includes a README file that provides a detailed overview of the MVP, its dependencies, and usage instructions. |
| 🔗 Dependencies | Essential Python packages include FastAPI, Pydantic, uvicorn, psycopg2-binary, SQLAlchemy, requests, PyJWT, and openai for API interaction, authentication, and database operations. |
| 🧩 Modularity | The code is organized into modules for efficient development and maintenance, including models, services, and utils. |
| 🧪 Testing | The MVP includes unit tests for core modules (main.py, services/openai_service.py, models/request.py) using Pytest, ensuring code quality and functionality. |
| ⚡️ Performance | The backend is optimized for efficient request processing and response retrieval, using asynchronous programming with asyncio and caching for improved speed and responsiveness. |
| 🔐 Security | Security measures include secure communication over HTTPS, authentication with JWTs, and data encryption. |
| 🔀 Version Control | Uses Git for version control, allowing changes to be tracked and enabling collaborative development. |
| 🔌 Integrations | Integrates with the OpenAI API via the openai Python library and with PostgreSQL via SQLAlchemy, using the requests library for HTTP communication. |
| 📶 Scalability | The service is designed for scalability, targeting cloud hosting such as AWS or GCP and optimized to handle increasing request volumes. |
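
As a rough illustration of the authentication approach listed above, verifying a JWT with PyJWT might look like the following; the helper name and claim handling are assumptions, not code from the repository.

```python
# Hedged sketch of bearer-token verification with PyJWT (names are illustrative).
import os

import jwt  # provided by the PyJWT package

JWT_SECRET = os.environ["JWT_SECRET"]

def verify_token(token: str) -> dict:
    """Decode a JWT, raising jwt.InvalidTokenError if it is expired or tampered with."""
    return jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
```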

## 📂 Structure

```
├── main.py
├── models
│   └── request.py
├── services
│   └── openai_service.py
├── utils
│   └── logger.py
├── tests
│   ├── test_main.py
│   ├── test_openai_service.py
│   └── test_models.py
├── startup.sh
├── commands.json
└── requirements.txt
```
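
Based on the request and response payloads shown in the Usage section below, `models/request.py` plausibly defines Pydantic schemas along these lines; any field beyond `text`, `request_id`, and `status` would be an assumption.

```python
# models/request.py -- plausible schema sketch inferred from the Usage examples below.
from pydantic import BaseModel

class RequestCreate(BaseModel):
    """Incoming natural-language request."""
    text: str

class RequestStatus(BaseModel):
    """Status returned once a request has been accepted and executed."""
    request_id: int
    status: str
```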

## 💻 Installation

### 🔧 Prerequisites

- Python 3.9+
- PostgreSQL 14+
- Docker 20.10+

### 🚀 Setup Instructions

1. Clone the repository:

   ```bash
   git clone https://github.com/coslynx/OpenAI-Request-Executor-MVP.git
   cd OpenAI-Request-Executor-MVP
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up the database:

   - Create the database:

     ```bash
     createdb openai_executor
     ```

   - Connect to the database and enable the encryption extension:

     ```bash
     psql -U postgres -d openai_executor -c "CREATE EXTENSION IF NOT EXISTS pgcrypto"
     ```

4. Configure environment variables:

   - Create a `.env` file:

     ```bash
     cp .env.example .env
     ```

   - Fill in your OpenAI API key, PostgreSQL connection string, and JWT secret key (see the example `.env` sketch after these steps).

5. Start the application with Docker:

   ```bash
   docker-compose up -d
   ```
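
A minimal `.env` sketch, using the variable names referenced in the Hosting section below; the values shown are placeholders:

```
OPENAI_API_KEY=your_openai_api_key
DATABASE_URL=postgresql://your_user:your_password@localhost:5432/openai_executor
JWT_SECRET=your_secret_key
```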

πŸ—οΈ Usage

πŸƒβ€β™‚οΈ Running the MVP

- The application will be accessible at `http://localhost:8000`.
- Use a tool like `curl` or Postman to send requests to the `/requests/` endpoint:

  ```bash
  curl -X POST http://localhost:8000/requests/ \
    -H "Content-Type: application/json" \
    -d '{"text": "Write a short story about a cat"}'
  ```

- The response will contain a request ID and status:

  ```json
  {
    "request_id": 1,
    "status": "completed"
  }
  ```

- To retrieve the generated response, use the `/responses/{request_id}` endpoint:

  ```bash
  curl -X GET http://localhost:8000/responses/1
  ```

- The response will contain the generated text:

  ```json
  {
    "response": "Once upon a time, in a cozy little cottage..."
  }
  ```
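
The same two-step flow can be scripted with the `requests` library. This is a minimal client sketch against the endpoints documented above, assuming the server is running locally:

```python
# Minimal client sketch for the /requests/ and /responses/{request_id} endpoints.
import requests

BASE_URL = "http://localhost:8000"

# Submit a natural-language request.
created = requests.post(
    f"{BASE_URL}/requests/",
    json={"text": "Write a short story about a cat"},
    timeout=30,
)
request_id = created.json()["request_id"]

# Fetch the generated response.
result = requests.get(f"{BASE_URL}/responses/{request_id}", timeout=30)
print(result.json()["response"])
```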

## 🌐 Hosting

### 🚀 Deployment Instructions

#### Deploying to Heroku (Example)

The steps below are one example; adjust them to match your chosen hosting platform.

1. Create a Heroku app:

   ```bash
   heroku create openai-request-executor-mvp-production
   ```

2. Set up environment variables (replace the placeholder values with your actual credentials):

   ```bash
   heroku config:set OPENAI_API_KEY=your_openai_api_key
   heroku config:set DATABASE_URL=postgresql://your_user:your_password@your_host:your_port/your_database_name
   heroku config:set JWT_SECRET=your_secret_key
   ```

3. Deploy the code:

   ```bash
   git push heroku main
   ```

4. Run database migrations (if applicable):

   - Set up and run whatever migrations your PostgreSQL schema requires.

5. Start the application:

   - Heroku starts the application automatically based on the `Procfile` (see the example sketch below).
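
A typical `Procfile` for a FastAPI service deployed this way is a single line like the one below; it assumes the application object is named `app` in `main.py`, which is an assumption rather than something confirmed by this README:

```
web: uvicorn main:app --host 0.0.0.0 --port $PORT
```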

## 📄 License & Attribution

### 📄 License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.

### 🤖 AI-Generated MVP

This MVP was entirely generated using artificial intelligence through CosLynx.com.

No human was directly involved in the coding process of the OpenAI-Request-Executor-MVP repository.

### 📞 Contact

For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:

🌐 CosLynx.com

Create Your Custom MVP in Minutes With CosLynxAI!

