
# Testing

This document outlines tests related to the LeapfrogAI API and backends.

Please see the documentation in the LeapfrogAI UI sub-directory for Svelte UI Playwright tests.

## API

For the unit and integration tests within this directory, the following components must be running and accessible: the LeapfrogAI API, a Supabase instance, and a model backend (e.g., Repeater, vLLM, or llama-cpp-python).

If you are running everything in a UDS Kubernetes cluster, you must port-forward your model backend (e.g., Repeater, vLLM) using the following command:

```bash
# may be named repeater OR repeater-model depending on the rendered Helm manifests
uds zarf connect --name=repeater-model --namespace=leapfrogai --local-port=50051 --remote-port=50051
```
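
Once the port-forward is in place, a quick TCP probe can confirm that the model's gRPC port is reachable before kicking off the tests. This is just a convenience check, assuming netcat is installed locally:

```bash
# sanity-check that the forwarded gRPC port is open (assumes netcat is available)
nc -vz localhost 50051
```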

If you are running everything via Docker containers or in a local Python environment, ensure those components are accessible per the test configurations in each testing target's sub-directory.
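
For a Docker or local setup, a simple HTTP probe against the API can catch configuration problems early. The port and path below are placeholders, not guaranteed routes; substitute whatever host, port, and health endpoint your configuration uses:

```bash
# probe a locally running LeapfrogAI API
# NOTE: port 8080 and the /healthz path are assumptions; adjust to your setup
curl -fsS http://localhost:8080/healthz
```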

Please see the Makefile for more details on turning tests on and off and for setting test parameters such as the default model. Below is a quick synopsis of the available Make targets, all of which are run from the root of the repository:

```bash
# Install the python dependencies
make install

# create a test user for the tests
# prompts for a password and email
make test-user

# set up the environment variables for the tests
# prompts for the previous step's password and email
make test-env

# run the unit tests
make test-api-unit

# run the integration tests
# choices: vllm or llama-cpp-python
LEAPFROGAI_MODEL=vllm make test-api-integration
# OR
LEAPFROGAI_MODEL=llama-cpp-python make test-api-integration
```
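
These targets wrap pytest under the hood, so once `make test-env` has been run you can also point pytest at individual files while iterating, mirroring the idiom used for the E2E tests below. The path shown here is purely illustrative; substitute the actual test file you are working on:

```bash
# run a subset of the tests directly; the path shown is illustrative
env $(cat .env | xargs) python -m pytest tests/unit -vvv
```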

## Load Tests

Please see the Load Test documentation and directory for more details.

## End-to-End Tests

End-to-End (E2E) tests are located in the e2e/ sub-directory. Each E2E test runs independently based on the model backend under test.

The E2E tests run in CI pipelines, with the exception of vLLM, which requires a GPU runner.

For the E2E tests, the following components must be running and accessible in a UDS Kubernetes cluster: the LeapfrogAI API, a Supabase instance, and the model backend under test.

An example of running the vLLM E2E tests locally is as follows:

```bash
# Install the python dependencies
make install

# create a test user for the tests
# prompts for a password and email
make test-user

# set up the environment variables for the tests
# prompts for the previous step's password and email
make test-env

# run the e2e tests associated with a package
# below is a non-exhaustive list of example test runs
env $(cat .env | xargs) python -m pytest tests/e2e/test_api.py -vvv
env $(cat .env | xargs) LEAPFROGAI_MODEL=vllm python -m pytest tests/e2e/test_llm_generation.py -vvv
```
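
The `env $(cat .env | xargs)` prefix exports each `KEY=VALUE` line from the generated `.env` file into the environment of the pytest process. As a rough illustration of what `make test-env` produces (the variable names are taken from the conformance section below; the values are placeholders):

```bash
# illustrative .env contents; the real file is generated by `make test-env`
LEAPFROGAI_API_URL="https://leapfrogai-api.uds.dev/openai/v1"
LEAPFROGAI_API_KEY="<api key for the test user>"
```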

### Running Tests

Run the tests on an existing UDS Kubernetes cluster with the applicable backend already deployed.

For example, the following sequence of commands runs the tests against the llama-cpp-python backend:

```bash
# Build and Deploy the LFAI API
make build-api
uds zarf package deploy packages/api/zarf-package-leapfrogai-api-*.tar.zst

# Build and Deploy the model backend you want to test.
# NOTE: In this case we are showing llama-cpp-python
make build-llama-cpp-python
uds zarf package deploy packages/llama-cpp-python/zarf-package-llama-cpp-python-*.tar.zst

# Install the python dependencies
python -m pip install ".[dev]"

# Run the tests!
# NOTE: Each model backend has its own e2e test files
python -m pytest tests/e2e/test_llama.py -v

# Cleanup after yourself
k3d cluster delete uds
```
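
If the tests cannot reach the API or the model, it is worth confirming that the pods actually came up before digging further. Assuming standard kubectl access to the cluster, using the same namespace as the port-forward example above:

```bash
# confirm the API and model backend pods are running
kubectl get pods --namespace=leapfrogai
```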

## Conformance Testing

We include a set of conformance tests that verify our API against the OpenAI spec, guaranteeing interoperability with tools that support OpenAI's API (Mattermost, Continue.dev, etc.) and SDKs (Vercel, Azure, etc.). To run these tests, the following environment variables need to be set:

```bash
LEAPFROGAI_API_KEY="<api key>" # this can be created via the LeapfrogAI UI or Supabase
LEAPFROGAI_API_URL="https://leapfrogai-api.uds.dev/openai/v1" # this is the default when using a UDS bundle locally
LEAPFROGAI_MODEL="vllm" # or whatever model you have installed
OPENAI_API_KEY="<api key>" # you need a funded OpenAI account for this
OPENAI_MODEL="gpt-4o-mini" # or whatever model you prefer
```

To run the tests, from the root directory of the LeapfrogAI project:

```bash
make install # to ensure all python dependencies are installed

make test-conformance # runs the entire suite
```
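
If you keep the variables above in a local `.env` file rather than exporting them in your shell, the same idiom used for the E2E tests works here too; this is a convenience pattern, not a requirement of the target:

```bash
# export the conformance variables from a local .env file for a single run
env $(cat .env | xargs) make test-conformance
```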