Add llm integration with Intel Gaudi in llama-index-llms-gaudi (#16308)
jeanyu-habana authored Oct 8, 2024
1 parent f16200b commit 55c50a2
Showing 12 changed files with 1,735 additions and 0 deletions.
153 changes: 153 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-gaudi/.gitignore
llama_index/_static
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
3 changes: 3 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-gaudi/BUILD
poetry_requirements(
    name="poetry",
)
17 changes: 17 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-gaudi/Makefile
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help: ## Show all Makefile targets.
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format: ## Run code autoformatters (black).
	pre-commit install
	git ls-files | xargs pre-commit run black --files

lint: ## Run linters: pre-commit (black, ruff, codespell) and mypy.
	pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

test: ## Run tests via pytest.
	pytest tests

watch-docs: ## Build and watch documentation.
	sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
55 changes: 55 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-gaudi/README.md
# LlamaIndex Llms Integration with Intel Gaudi

## Installation

```bash
pip install --upgrade-strategy eager optimum[habana]
pip install llama-index-llms-gaudi
pip install llama-index-llms-huggingface
```

## Usage

```python
import argparse
import logging

from llama_index.core import PromptTemplate
from llama_index.llms.gaudi import GaudiLLM

logger = logging.getLogger(__name__)


def messages_to_prompt(messages):
    # Minimal formatter mapping chat messages into the zephyr chat template.
    prompt = ""
    for message in messages:
        prompt += f"<|{message.role}|>\n{message.content}</s>\n"
    return prompt + "<|assistant|>\n"


def setup_parser(parser):
    parser.add_argument(...)
    args = parser.parse_args()
    return args


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="GaudiLLM Basic Usage Example"
    )
    args = setup_parser(parser)
    args.model_name_or_path = "HuggingFaceH4/zephyr-7b-alpha"

    llm = GaudiLLM(
        args=args,
        logger=logger,
        model_name="HuggingFaceH4/zephyr-7b-alpha",
        tokenizer_name="HuggingFaceH4/zephyr-7b-alpha",
        query_wrapper_prompt=PromptTemplate(
            "<|system|>\n</s>\n<|user|>\n{query_str}</s>\n<|assistant|>\n"
        ),
        context_window=3900,
        max_new_tokens=256,
        generate_kwargs={"temperature": 0.7, "top_k": 50, "top_p": 0.95},
        messages_to_prompt=messages_to_prompt,
        device_map="auto",
    )

    query = "Is the ocean blue?"
    print("\n----------------- Complete ------------------")
    completion_response = llm.complete(query)
    print(completion_response.text)
```
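The `query_wrapper_prompt` template above determines the exact string the zephyr model receives for a completion call. As a standalone illustration of that wrapping (plain Python, no llama-index or Gaudi dependency; `wrap_query` is a hypothetical stand-in for `PromptTemplate.format`, not part of the integration):

```python
# Standalone sketch of what the query_wrapper_prompt template produces.
# wrap_query is a hypothetical helper mirroring PromptTemplate.format(...).
ZEPHYR_QUERY_TEMPLATE = (
    "<|system|>\n</s>\n<|user|>\n{query_str}</s>\n<|assistant|>\n"
)


def wrap_query(query_str: str) -> str:
    """Wrap a raw query string in the zephyr chat template."""
    return ZEPHYR_QUERY_TEMPLATE.format(query_str=query_str)


if __name__ == "__main__":
    print(wrap_query("Is the ocean blue?"))
```

The wrapped string places the user query between `<|user|>` and `<|assistant|>` markers, so the model's generation continues from the assistant turn.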

## Examples

- [More Examples](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/llms/llama-index-llms-gaudi/examples)
python_sources()
# GaudiLLM Examples

This folder contains examples showcasing how to use LlamaIndex with the `gaudi` LLM integration, `llama_index.llms.gaudi.GaudiLLM`.

## Installation

### On Intel Gaudi

Install `llama-index-llms-gaudi` together with `optimum[habana]`, which provides the Intel Gaudi (Habana) runtime support:

```bash
pip install --upgrade-strategy eager optimum[habana]
pip install llama-index-llms-gaudi
```

## List of Examples

### Basic Example

The example [basic.py](./basic.py) shows how to run `GaudiLLM` on Intel Gaudi and perform tasks such as text completion. Run the example as follows:

```bash
python basic.py
```

> Please note that this example uses the [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) model for demonstration, which requires the `transformers` and `tokenizers` packages.
>
> ```bash
> pip install -U transformers tokenizers
> ```