Release please branches main #2120
Open
Egor4ik888 wants to merge 62 commits into zylon-ai:fix/docs-info from Egor4ik888:release-please--branches--main
+6,902 −3,801
Conversation
…aphy deps (zylon-ai#1841)
* Fixed "no such group" error in Dockerfile; added docx2txt to Poetry so docx parsing works out of the box in Docker containers
* Added cryptography dependency for PDF parsing
Update repo url
…1835)
* Moved prompt_style into the main LLM settings, since all LLMs from llama_index can use it. Also added temperature, context window size, max_tokens, and max_new_tokens to openailike so its settings stay consistent with the other implementations.
* Removed prompt_style from llamacpp entirely
* Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp
…ocal computer (like LM Studio). (zylon-ai#1858)
feat(llm): Improve settings of the OpenAILike LLM
…l) (zylon-ai#1920)
* Allow parameterizing the OpenAI embeddings component (api_base, key, model)
* Update settings
* Update description
* fix: mistral ignoring assistant messages
* fix: typing
* fix: fix tests
* Support for Google Gemini LLMs and Embeddings: initial support for Gemini, enabling use of Google LLMs and embedding models (see settings-gemini.yaml). Install via poetry install --extras "llms-gemini embeddings-gemini". Notes:
  * had to bump llama-index-core to a later version that supports Gemini
  * poetry --no-update did not work: Gemini/llama_index seem to require more (transitive) updates to make it work
* fix: crash when Gemini is not selected
* docs: add Gemini LLM
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
…1883)
* Added ClickHouse vector store support
* port fix
* updated lock file
* fix: mypy
* fix: mypy
Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
* Update settings.mdx
* docs: add cmd
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
…i#1779)
* Fix/update concepts.mdx reference to the installation page: the link for `/installation` is broken on the "Main Concepts" page; the correct path would be `./installation` or perhaps `/installation/getting-started/installation`
* fix: docs
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
Co-authored-by: chdeskur <chdeskur@gmail.com>
* docs: update project links ...
* docs: update citation
zylon-ai#1998)
* docs: add troubleshooting
* fix: pass HF token to setup script and prevent tokenizer download when it is empty
* fix: improve log and disable specific tokenizer by default
* chore: change HF_TOKEN environment variable to align with default config
* fix: mypy
* docs: add missing configurations
* docs: change HF embeddings to Ollama
* docs: add disclaimer about Gradio UI
* docs: improve readability in concepts
* docs: reorder `Fully Local Setups`
* docs: improve setup instructions
* docs: prevent duplicate documentation and use a table to show the different options
* docs: rename privateGpt to PrivateGPT
* docs: update ui image
* docs: remove useless header
* docs: convert ingestion disclaimers to alerts
* docs: add UI alternatives
* docs: reference UI alternatives in disclaimers
* docs: fix table
* chore: update doc preview version
* chore: add permissions
* chore: remove useless line
* docs: fixes ...
* integrate Milvus into PrivateGPT
* adjust milvus settings
* update doc info and reformat
* adjust milvus initialization
* adjust import error
* minor update
* adjust format
* adjust the db storage path
* update doc
* Update README.md: remove the outdated contact form and point to the Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT
* Update README.md: update text to address review comments
* Update README.md: improve text
* chore: add pull request template
* chore: add issue templates
* chore: require more information in bugs
* fix: ffmpy dependency
* fix: pin ffmpy to a commit sha
* chore: update ollama (llm)
* feat: allow autopulling ollama models
* fix: mypy
* chore: always install ollama client
* refactor: move connection check and pull-ollama method to utils
* docs: update ollama config with autopulling info
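The "autopull" change above lets PrivateGPT download missing Ollama models automatically instead of failing at startup. The PR's actual code is not shown in this page; as a rough sketch of the idea, assuming the standard Ollama HTTP API (GET /api/tags lists local models, POST /api/pull downloads one) and hypothetical helper names:

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default endpoint


def missing_models(installed: list[str], required: list[str]) -> list[str]:
    """Return required model names not yet present locally.

    Tags are compared loosely: "llama3.1" matches "llama3.1:latest".
    """
    have = {name.split(":")[0] for name in installed}
    return [m for m in required if m.split(":")[0] not in have]


def autopull(required: list[str]) -> None:
    """Pull any missing models via the Ollama HTTP API."""
    with urllib.request.urlopen(f"{OLLAMA_BASE}/api/tags") as resp:
        installed = [m["name"] for m in json.load(resp).get("models", [])]
    for model in missing_models(installed, required):
        req = urllib.request.Request(
            f"{OLLAMA_BASE}/api/pull",
            data=json.dumps({"name": model}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req).read()  # streams progress until the pull finishes
```

Keeping the "which models are missing" decision in a pure function makes that part testable without a running Ollama server.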
* fix: when two user messages were sent
* fix: add source divider
* fix: add favicon
* fix: add zylon link
* refactor: update label
* added llama3 prompt
* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14
* fix: new llama3 prompt
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
* feat: change ollama default model to llama3.1
* chore: bump versions
* feat: change default model in local mode to llama3.1
* chore: make sure the latest poetry version is used
* fix: mypy
* fix: do not add BOS (with the latest llamacpp-python version)
* feat: unify embedding model to nomic
* docs: add embedding dimensions mismatch
* docs: fix fern
* feat: add summary recipe
* test: add summary tests
* docs: move all recipes docs
* docs: add recipes and summarize doc
* docs: update openapi reference
* refactor: split method into two methods (summary)
* feat: add initial summarize ui
* feat: add mode explanation
* fix: mypy
* feat: allow configuring the async property in summarize
* refactor: move modes to an enum and update mode explanations
* docs: fix url
* docs: remove list-llm pages
* docs: remove double header
* fix: summary description
* fix: allow configuring trust_remote_code, based on zylon-ai#1893 (comment)
* fix: nomic hf embeddings
* docs: update Readme
* style: refactor image
* docs: change important to tip
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…i#2037)
* chore: update docker-compose with profiles
* docs: add quick start doc
Fixing the error I encountered while using the azopenai mode
* chore: update docker-compose with profiles
* docs: add quick start doc
* chore: generate docker release when a new version is released
* chore: add dockerhub image in docker-compose
* docs: update quickstart with local/remote images
* chore: update docker tag
* chore: refactor dockerfile names
* chore: update docker-compose names
* docs: update llamacpp naming
* fix: naming
* docs: fix llamacpp command
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* chore: block matplotlib to fix installation on Windows machines
* chore: remove workaround, just update poetry.lock
* fix: update matplotlib to the latest version
* docs: add numpy issue to troubleshooting
* fix: troubleshooting link ...
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ylon-ai#2062)
* Fix: rectify ffmpy 0.3.2 poetry config
* keep optional set to false for ffmpy
* update ffmpy to version 0.4.0
* remove comment about a fix
* feat: add retry connection to ollama: when Ollama runs inside docker-compose, Traefik is sometimes not yet ready to route the request, and the request fails
* fix: mypy
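The retry behavior described above, waiting for a service that is starting up behind a proxy, follows a common pattern. This is a generic sketch, not the PR's actual code; wait_for is a hypothetical helper name:

```python
import time


def wait_for(check, retries: int = 5, delay: float = 1.0) -> bool:
    """Call `check()` up to `retries` times, sleeping `delay` seconds
    between failed attempts; return True as soon as it succeeds."""
    for attempt in range(retries):
        try:
            if check():
                return True
        except OSError:
            pass  # connection refused and similar errors mean "not ready yet"
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```

A caller would pass a cheap probe, e.g. a lambda that performs a small request against the Ollama endpoint, and abort startup if wait_for returns False.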
* fix: missing depends_on
* chore: update copy permissions
* chore: update entrypoint
* Revert "chore: update entrypoint" (reverts commit f73a36a)
* Revert "chore: update copy permissions" (reverts commit fabc3f6)
* style: fix docker warning
* fix: multiple fixes
* fix: user permissions when writing the local_data folder
* Adding MistralAI mode
* Update embedding_component.py, ui.py, settings.py, llm_component.py, and settings-mistral.yaml (several iterations)
* Delete settings-mistral.yaml
Co-authored-by: SkiingIsFun123 <101684827+SkiingIsFun123@users.noreply.github.com>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
* Add default mode option to settings
* Revise default_mode to a Literal (enum) and add it to settings.yaml
* Revise to pass make check/test
* Default mode: RAG
Co-authored-by: Jason <jason@sowinsight.solutions>
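Moving default_mode to a Literal makes an invalid configuration fail fast instead of silently falling back. A minimal sketch of the pattern; the mode names and the helper are illustrative, not necessarily what settings.py actually contains:

```python
from typing import Literal, get_args

# Illustrative mode names; the real set lives in the project's settings module
Mode = Literal["RAG", "Search", "Basic", "Summarize"]


def parse_default_mode(value: str) -> Mode:
    """Validate a configured default-mode string against the allowed set."""
    allowed = get_args(Mode)
    if value not in allowed:
        raise ValueError(f"default_mode must be one of {allowed}, got {value!r}")
    return value  # type: ignore[return-value]
```

With a pydantic settings model the same effect comes for free by declaring the field as `default_mode: Mode = "RAG"`.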
* Sanitize null bytes before ingestion * Added comments
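Null bytes in extracted text commonly break downstream stores; PostgreSQL, for one, rejects \x00 inside text columns. The commit's exact code is not shown on this page; a minimal version of the idea:

```python
def sanitize(text: str) -> str:
    """Strip NUL (\\x00) characters, which many text stores reject,
    before the document is handed to the ingestion pipeline."""
    return text.replace("\x00", "")
```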
* chore: update libraries
* fix: mypy
* chore: more updates
* fix: mypy/black
* chore: fix docker warnings
* fix: mypy
* fix: black
When running PrivateGPT against an external Ollama API, the ollama service returns 503 on startup because the ollama routing service (Traefik) might not be ready yet.
* Add a healthcheck to the ollama service that tests the connection to external Ollama
* Make the private-gpt-ollama service depend on ollama being service_healthy
Co-authored-by: Koh Meng Hui <kohmh@duck.com>
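In docker-compose terms, the two points above might look roughly like this; the service names and probe command are assumptions, not the PR's exact file:

```yaml
services:
  ollama:
    # Probe the (external) Ollama endpoint until it answers
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://ollama:11434/ || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 20
  private-gpt-ollama:
    depends_on:
      ollama:
        condition: service_healthy
```

With `condition: service_healthy`, Compose delays starting private-gpt-ollama until the healthcheck has passed, rather than merely until the ollama container exists.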
Can you explain which changes are in this PR?
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce them. Please also list any relevant details of your test configuration.
Test Configuration:
Checklist:
Run make check; make test to ensure mypy checks and tests pass.