
MUCGPT



MUCGPT provides a web interface to large language models (LLMs). The interface currently connects to one or more OpenAI-compatible LLM endpoints and lets users chat, summarize text, brainstorm ideas, and translate text into plain or easy language. The chat function allows text to be generated and refined over several turns. Summarizing condenses PDFs or plain text and makes it more concise. Brainstorming creates mind maps for different topics. Simplified language translates a text into plain or easy language, resulting in a more understandable and easier-to-read text.

In addition, custom GPTs can be created and saved. A custom GPT is an assistant for a specific task, defined by its own system prompt.

Why should you use MUCGPT? See for yourself:

Essay written by MUCGPT to convince you to use it!

Built With

The project is built with technologies we use in our projects (see requirements-dev.txt):

Backend: Python, FastAPI, LangChain

Frontend: React, TypeScript

Deployment: Docker, Microsoft Azure (App Service, PostgreSQL, Azure OpenAI)


Roadmap

See the open issues for a full list of proposed features (and known issues).

Getting started

Install deps

Sync the Python environment for development:

uv sync --all-extras # installs dev/test dependencies
# if you only want to run mucgpt without using development deps
uv sync

Install frontend deps

cd app/frontend
npm install

Configure

Configure your environment in config/default.json. The schema of the configuration is described in config/mucgpt_config.schema.json. Insert the model endpoint and API key for your connection to an OpenAI completions endpoint or an Azure OpenAI completions endpoint.
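
A minimal sketch of what such a configuration might contain is shown below; the field names are placeholders for illustration only, the authoritative structure is defined in config/mucgpt_config.schema.json:

{
  "models": [
    {
      "llm_name": "gpt-4o-mini",
      "endpoint": "https://your-resource.openai.azure.com",
      "api_key": "YOUR_API_KEY"
    }
  ]
}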

Run locally

cd app\frontend
npm run buildlocal
cd ..\backend
$env:MUCGPT_CONFIG="path to default.json"
$env:MUCGPT_BASE_CONFIG="path to base.json"
uv run app.py
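
The commands above assume PowerShell on Windows. On Linux or macOS, the same steps would roughly look like this (a sketch assuming a POSIX shell and the same paths):

cd app/frontend
npm run buildlocal
cd ../backend
export MUCGPT_CONFIG="path to default.json"      # same environment variables as above
export MUCGPT_BASE_CONFIG="path to base.json"
uv run app.py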

Run with docker

  1. Build an image: docker build --tag mucgpt-local . --build-arg fromconfig="./config/default.json"
  2. Run the image: docker run --detach --publish 8080:8000 mucgpt-local
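
With the port mapping above, the application should then be reachable on the host at port 8080, for example:

curl http://localhost:8080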

Documentation

Architecture

The architecture of MUCGPT is divided into two parts, the frontend and the backend. MUCGPT is deployed on Microsoft Azure as an App Service with a PostgreSQL database and an Azure OpenAI resource.

The frontend is based on a template from Microsoft Azure and is implemented using React, TypeScript and JavaScript.

The framework used to implement the backend of MUCGPT is called FastAPI. It is a modern, fast (high-performance) web framework for building APIs with Python, based on standard Python type hints. The backend uses LangChain to connect to LLMs. In the config file, you can provide various LLM options for the user to select from in the frontend.
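
To illustrate the general pattern (this is a sketch, not MUCGPT's actual code): a FastAPI route that uses LangChain's ChatOpenAI client to talk to an OpenAI-compatible endpoint could look roughly like this; all names and values below are placeholders:

# sketch.py -- illustrative only, not part of the MUCGPT codebase
from fastapi import FastAPI
from langchain_openai import ChatOpenAI  # LangChain client for OpenAI-compatible endpoints
from pydantic import BaseModel

app = FastAPI()

# Any OpenAI-compatible endpoint (e.g. Azure OpenAI or a self-hosted gateway) works here.
llm = ChatOpenAI(
    model="gpt-4o-mini",                # placeholder model name
    base_url="https://example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",             # placeholder key
)

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(request: ChatRequest) -> dict:
    # Forward the user message to the LLM and return the generated text.
    answer = llm.invoke(request.message)
    return {"content": answer.content}

Such a sketch could be served locally with, for example, uv run uvicorn sketch:app.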

For more information about all the features of MUCGPT, click here.

A cheat sheet for using MUCGPT is located here.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Open an issue with the tag "enhancement"
  2. Fork the Project
  3. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  4. Commit your Changes (git commit -m 'Add some AmazingFeature')
  5. Push to the Branch (git push origin feature/AmazingFeature)
  6. Open a Pull Request

More about this can be found in the CODE_OF_CONDUCT file.

License

Distributed under the MIT License. See the LICENSE file for more information.

Contact

it@M - itm.kicc@muenchen.de