Orchnex is a specialized orchestration system that combines the capabilities of Google's Gemini and Meta's Llama models. This Phase 1 release focuses on optimizing the interaction between these two powerful LLMs to provide enhanced results through prompt refinement and quality control.
System Initialization → Enhanced Prompt → Initial Response → Meta Feedback-1 → Refined Response-1 → Meta Feedback-2 → Refined Response-2 → Final Result
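The stages above can be sketched as a simple hand-off between the two models. This is a minimal, illustrative sketch only: the stub functions below stand in for real Gemini/Llama calls and are not part of the actual Orchnex API.

```python
# Illustrative sketch of the Orchnex flow; the stub functions stand in
# for real Gemini/Llama API calls and are NOT the actual implementation.

def llama_enhance(prompt):       # PromptMaster: refine the raw prompt
    return f"[enhanced] {prompt}"

def gemini_generate(prompt):     # Phoenix: produce a response
    return f"[response to] {prompt}"

def llama_feedback(response):    # Quality control: critique the response
    return f"[feedback on] {response}"

def run_flow(user_prompt, rounds=2):
    enhanced = llama_enhance(user_prompt)
    response = gemini_generate(enhanced)
    for _ in range(rounds):      # Meta Feedback-1/2 → Refined Response-1/2
        feedback = llama_feedback(response)
        response = gemini_generate(feedback)
    return response

print(run_flow("Explain quantum computing"))
```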
- Google AI Python SDK

  ```shell
  pip install google-generativeai
  ```

- OpenAI SDK (used to call Llama through NVIDIA NIM's OpenAI-compatible endpoint)

  ```shell
  pip install openai
  ```
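After installing, you can sanity-check that both SDKs import cleanly with a short snippet like this (the module names match the pip packages above):

```python
import importlib

def available(name: str) -> bool:
    """Return True if the module can be imported in this environment."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# Import names corresponding to the pip packages installed above
for mod in ("google.generativeai", "openai"):
    print(mod, "OK" if available(mod) else "missing -- rerun pip install")
```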
- GEMINI API = https://aistudio.google.com/app/apikey
  - Log in with your Google account
  - Create an API key
  - Select your project
  - Your API key will be created; it will look like: AIzaSxxxxxxxxxxh09xxLwCA
  - Store it safely
- NIM API = https://build.nvidia.com/explore/discover
  - Log in or create an NVIDIA account to access it
  - You can also explore various other models here, but this project uses llama-3_1-8b-instruct
  - llama-3_1-8b-instruct = https://build.nvidia.com/meta/llama-3_1-8b-instruct
  - Click "Build with this NIM"
  - Generate an API key; it will look like: nvapi-056fxxxxxxxxxheZxxxxxxxxxxxxxxxxxxxxbg0
  - Store it safely
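In practice, "store it safely" usually means keeping keys out of source code. A minimal sketch, assuming you export the keys as environment variables; the variable names `GEMINI_API_KEY` and `NVIDIA_API_KEY` are a convention chosen here, not something Orchnex requires:

```python
import os

def load_key(var_name: str, expected_prefix: str) -> str:
    """Read an API key from the environment and sanity-check its shape."""
    key = os.environ.get(var_name, "")
    if key and not key.startswith(expected_prefix):
        raise ValueError(f"{var_name} does not look like a valid key")
    return key

# Gemini keys start with 'AIza', NVIDIA NIM keys with 'nvapi-'
gemini_key = load_key("GEMINI_API_KEY", "AIza")
nvidia_key = load_key("NVIDIA_API_KEY", "nvapi-")
```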
- Choose a directory to clone into
- Open a terminal and change into it:

  ```shell
  cd your_directory
  ```

- Clone the repository:

  ```shell
  git clone https://github.com/harshalmore31/orchnex.git
  ```

- Open the project in a code editor, e.g. VS Code
- Open requirements.txt and install the necessary requirements and dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- To see it in action, run src/main.py:

  ```shell
  python src/main.py
  ```

- Enter the saved API keys
- Enter a prompt and follow the whole flow to view the interaction between the two LLMs
*(Flowchart: Orchnex orchestration between Gemini and Llama)*
- 🤖 Dual-LLM Orchestration: Seamless coordination between Gemini and Llama
- 🔄 PromptMaster Enhancement: Automatic prompt optimization using Llama
- ✨ Phoenix Response Generation: High-quality responses via Gemini
- 📊 Quality Control Loop: Automated refinement process
- 📈 Performance Metrics: Detailed orchestration insights
```python
from orchnex import MultiLLMOrchestrator, LLMConfig

# Initialize with your API keys
config = LLMConfig(
    gemini_api_key="your_gemini_key",
    nvidia_api_key="your_nvidia_key"  # For Llama access
)

# Create orchestrator
orchestrator = MultiLLMOrchestrator(config)

# Process input with visualization
result = orchestrator.process_input(
    "Explain quantum computing",
    verbose=True
)
print(result)
```
- **PromptMaster (Llama)**
  - Analyzes and enhances input prompts
  - Provides structured enhancement strategies
- **Phoenix (Gemini)**
  - Generates high-quality initial responses
  - Refines based on feedback
- **Quality Control Loop**
  - Llama analyzes response quality
  - Gemini implements refinements
  - The process continues until the quality threshold is met
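The quality control loop can be sketched as follows. This is an illustrative sketch, not the Orchnex implementation: `score()` and `refine()` are stand-ins for Llama's critique and Gemini's refinement, and the threshold and round-limit values are invented for the example.

```python
# Sketch of a quality-control loop; score() and refine() are stand-ins
# for Llama's quality analysis and Gemini's refinement. The threshold
# and MAX_ROUNDS values are illustrative, not Orchnex defaults.

QUALITY_THRESHOLD = 0.9
MAX_ROUNDS = 3  # guard against endless refinement

def score(response: str) -> float:
    # Stand-in for Llama's quality analysis; here, longer == "better"
    return min(len(response) / 100, 1.0)

def refine(response: str) -> str:
    # Stand-in for Gemini applying Llama's feedback
    return response + " [refined]"

def quality_loop(response: str) -> str:
    for _ in range(MAX_ROUNDS):
        if score(response) >= QUALITY_THRESHOLD:
            break
        response = refine(response)
    return response
```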
- Currently supports only Gemini and Llama models
- Requires both API keys to function
- Optimized for specific use cases
Phase 2 will include:
- Support for additional LLM providers
- Flexible provider interface
- Custom orchestration patterns
- Advanced configuration options
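Purely as speculation on what a "flexible provider interface" could look like (Phase 2 is unreleased; every name here is invented for illustration), one common pattern is a structural protocol that any backend can satisfy:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Hypothetical provider contract; not part of the current Orchnex API."""
    def generate(self, prompt: str) -> str: ...

class EchoProvider:
    """Toy provider used only to show the shape of the interface."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

def run(provider: LLMProvider, prompt: str) -> str:
    # Any object with a matching generate() method can be plugged in
    return provider.generate(prompt)
```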
Visit our documentation for:
- Detailed setup instructions
- API reference
- Usage examples
- Performance optimization guides