LM-Kit/lm-kit-net-samples

🎉 Claim Your Free Community License!

Get started with the LM-Kit Community Edition today and gain access to powerful AI tools for free. Whether you're a hobbyist, startup, or open-source developer, the Community Edition is here to help you innovate and experiment without limitations.

👉 Claim Your Free License Now!

🚀 NEW

Enterprise-Grade .NET SDK for Integrating Generative AI Capabilities | Demo repository

With LM-Kit.NET, integrating or building AI is no longer complex.

LM-Kit.NET is a state-of-the-art, cross-platform SDK that offers a wide range of advanced Generative AI capabilities.
It enables seamless orchestration of multiple AI models through a single API, tailored to meet specific business needs.
The SDK provides AI capabilities across a wide range of domains, including text completion, function calling, chat assistance, coding assistance, content retrieval, text analysis, translation, and more.

📘 Documentation and resources can be found in the LM-Kit Docs area hosted at: https://docs.lm-kit.com

Comprehensive Feature Set

LM-Kit.NET offers a collection of highly optimized low-level APIs, making it easier to develop fully tailored Large Language Model (LLM) inference pipelines.

Moreover, LM-Kit.NET provides a broad spectrum of high-level AI functionalities across multiple domains, such as:

  • 📝 Text Generation: Automatically produce coherent and contextually appropriate text.
  • 📋 Structured Output Generation: Generate structured output that conforms to a JSON schema.
  • 📊 Text Quality Evaluation: Evaluate the quality metrics of generated content.
  • 🔗 Function Calling: Dynamically trigger specific functions in your application.
  • 🌐 Language Detection: Accurately identify the language of text input.
  • 🔄 Text Translation: Seamlessly translate text across various languages.
  • ✍️ Text Correction: Correct grammar and spelling in texts of any length.
  • 🔄 Text Rewriting: Revise text to match a specific communication style.
  • 💻 Code Analysis: Execute various programming code processing tasks.
  • 🛠️ Model Fine-Tuning: Adapt pre-trained models to better meet specific needs.
  • ⚙️ Model Quantization: Optimize models for efficient inference.
  • 🔍 Retrieval-Augmented Generation (RAG): Boost text generation with information sourced from a large corpus.
  • 🔢 Text Embeddings: Convert text into numerical representations that capture semantic meanings.
  • ❓ Question Answering: Respond to queries with support for both single-turn and multi-turn interactions.
  • 🏷️ Custom Text Classification: Categorize text into predefined classes based on content.
  • 😊 Sentiment Analysis: Detect and interpret the emotional tone within text.
  • 😄 Emotion Detection: Identify specific emotions expressed in text.
  • 😏 Sarcasm Detection: Detect instances of sarcasm in written content.
  • 🚀 And More: Discover additional features to extend your application's capabilities.

This continuously evolving feature set lets you integrate advanced AI solutions, tailored to diverse needs, through a single SDK.
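As a hedged illustration of the high-level API, the sketch below drives a short on-device chat session. The type and member names (`LM`, `MultiTurnConversation`, `Submit`, `Completion`) follow the patterns in the LM-Kit samples, but treat them as assumptions and verify against the current API reference at https://docs.lm-kit.com; the model path is a placeholder.

```csharp
using System;
using LMKit.Model;          // LM: on-device model container (assumed name)
using LMKit.TextGeneration; // MultiTurnConversation: chat assistance (assumed name)

class ChatDemo
{
    static void Main()
    {
        // Placeholder path: point this at any supported quantized GGUF model.
        LM model = new LM(@"C:\models\my-model.gguf");

        // Multi-turn chat: conversation state is kept by the SDK between calls.
        var chat = new MultiTurnConversation(model);

        var answer = chat.Submit("Summarize what RAG is in one sentence.");
        Console.WriteLine(answer.Completion);
    }
}
```

The same conversation object can accept follow-up `Submit` calls, which is how the multi-turn question-answering capability listed above is exercised.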

Run Local LLMs on Any Device

The LM-Kit.NET model inference system is powered by llama.cpp, offering state-of-the-art performance across a wide array of hardware with minimal setup and zero dependencies.
LM-Kit.NET operates exclusively on-device (edge computing), providing full control and precise tuning of the inference process.
Additionally, LM-Kit.NET supports a growing list of model architectures, including LLaMA-2, LLaMA-3, Mistral, Falcon, Phi, and more.
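A minimal sketch of on-device model loading, assuming (as in the LM-Kit samples) that the `LM` constructor accepts a path to a local GGUF file; the file path is a placeholder for any supported model.

```csharp
using System;
using LMKit.Model; // LM: loads and holds the model in-process (assumed name)

class LoadLocalModel
{
    static void Main()
    {
        // Placeholder path: any quantized GGUF model supported by llama.cpp.
        LM model = new LM(@"C:\models\mistral-7b-q4_k_m.gguf");

        // From here on, inference runs entirely in-process on this device;
        // no data leaves the machine.
        Console.WriteLine("Model loaded.");
    }
}
```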

Maximized Performance

1. 🚀 Optimized for a Variety of GPUs and CPUs

LM-Kit.NET is designed to exploit the strengths of diverse hardware configurations, ensuring peak performance across all supported platforms. Its cross-platform optimization leverages each device's unique capabilities: it automatically uses CUDA on NVIDIA GPUs to significantly accelerate computation, Metal on Apple devices to enhance graphics and processing tasks, and Vulkan to efficiently tap the power of multiple GPUs, including those from AMD, Intel, and NVIDIA, in diverse environments.

2. ⚙️ Advanced Architectural Foundations

LM-Kit.NET's core system has been meticulously optimized to efficiently handle a broad spectrum of scenarios.
Its sophisticated internal caching and recycling mechanisms are designed to maintain high performance levels, even under varied operational conditions.
Whether your application is running a single instance or multiple concurrent instances, LM-Kit.NET's advanced core system seamlessly orchestrates all requests, delivering fast performance while minimizing resource usage.

3. 🌟 Unmatched Performance

LM-Kit.NET delivers model inference speeds up to 5x faster, thanks to cutting-edge technologies that are continuously refined and benchmarked to keep you ahead of the curve.

Become an Early Adopter of Generative AI Innovations

LM-Kit.NET is developed by industry experts using a strategy of continuous innovation.
It is designed to swiftly address emerging market demands and introduce new capabilities to modernize existing applications.
By leveraging state-of-the-art AI technologies, LM-Kit.NET provides a modern, user-friendly, and intuitive API suite, making advanced AI accessible for all types of applications.

Retain Complete Control Over Your Data

Ensuring complete control over your data is essential for both privacy and security.
By using LM-Kit.NET, which performs model inference directly on-device, you ensure that sensitive data remains within your controlled environment, never crossing external networks.
Here are some key benefits of this approach:

1. 🔒 Enhanced Privacy

Since all data processing occurs locally on your device, there's no need to send data to a remote server.
This greatly reduces the risk of exposure or leakage of sensitive information, keeping your data secure.

2. 🛡️ Increased Security

Because no data ever leaves the device, the risk of interception in transit is eliminated.
This closed system approach minimizes vulnerabilities, offering a more secure solution.

3. ⚡ Faster Response Times

Processing data locally cuts down on the latency typically associated with sending data to a remote server and awaiting a response.
This results in faster model inferences, leading to quicker decision-making and a better user experience.

4. 📉 Reduced Bandwidth Usage

By eliminating the need to transfer large amounts of data over the internet, LM-Kit.NET significantly reduces bandwidth consumption.
This is especially beneficial in environments with limited or costly data connectivity.

5. ✅ Full Compliance with Data Regulations

Local processing helps ensure compliance with strict data protection regulations, such as GDPR or HIPAA, which often require certain types of data to be stored and processed within specific geographic locations.
By leveraging LM-Kit.NET's on-device processing, organizations can achieve higher levels of data autonomy and protection while benefiting from advanced computational models and real-time analytics.

Easy Integration and Simple Deployment

LM-Kit.NET features an exceptionally streamlined deployment model, packaged as a single NuGet for all supported platforms.
Integrating LM-Kit.NET into any .NET application is a straightforward process, typically requiring just a few clicks.
LM-Kit.NET combines managed C# with native C++ code and is carefully engineered to run without any external dependencies.
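Assuming the NuGet package ID is `LM-Kit.NET` (verify on nuget.org), adding the SDK to a project is a single command:

```shell
# Add the LM-Kit.NET NuGet package to the current .NET project.
dotnet add package LM-Kit.NET
```

The single package carries the native binaries for all supported platforms, so no further setup step is needed.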

1. 🔧 Simplified Integration

LM-Kit.NET requires no external containers or complex deployment procedures, making integration incredibly straightforward.
This approach drastically reduces development time and simplifies the learning curve, enabling a wider range of developers to effectively deploy and use the technology.

2. 🚀 Streamlined Deployment

LM-Kit.NET is built for efficiency and simplicity. It runs directly within the same application process that invokes it, avoiding the complexity and resource overhead commonly associated with containerized systems.
This direct, in-process integration enhances performance and simplifies incorporation into existing applications.

3. ⚙️ Efficient Resource Management

Operating in-process, LM-Kit.NET minimizes its impact on system resources, making it ideal for devices with limited capacity or scenarios where maximizing computing efficiency is crucial.

4. 🌟 Enhanced Reliability

By avoiding dependence on external services or containers, LM-Kit.NET provides more stable and predictable performance.
This reliability is critical for applications that require consistent, rapid data processing without external dependencies.

Supported Operating Systems

LM-Kit.NET is fully compatible with a broad range of operating systems, ensuring smooth and reliable performance on all supported platforms:

  • 🪟 Windows: Compatible with versions from Windows 7 up to the latest release.
  • 🍏 macOS: Supports macOS 11 and all subsequent versions.
  • 🐧 Linux: Optimized for distributions with glibc version 2.27 or newer.

Supported .NET Frameworks

LM-Kit.NET is compatible with a wide range of .NET targets, from .NET Framework 4.6.2 up to .NET 8.

Hugging Face Integration

The LM-Kit section on Hugging Face offers state-of-the-art quantized models that have been rigorously tested with the LM-Kit SDK. Additionally, LM-Kit allows you to load models directly from Hugging Face repositories using the Hugging Face API, making it easy to integrate and deploy the latest models into your applications.
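As a sketch of the Hugging Face integration, assuming the `LM` constructor also accepts a model URI as shown in the LM-Kit samples; the repository URL below is illustrative, not a verified model card, and should be replaced with a real GGUF file hosted on Hugging Face.

```csharp
using System;
using LMKit.Model; // LM: model loading (assumed name)

class LoadFromHuggingFace
{
    static void Main()
    {
        // Illustrative URI only: substitute the URL of an actual GGUF model file.
        var modelUri = new Uri(
            "https://huggingface.co/lm-kit/example-model-gguf/resolve/main/example-q4_k_m.gguf");

        // The SDK fetches and caches the model locally, then loads it on-device.
        LM model = new LM(modelUri);

        Console.WriteLine("Model downloaded and loaded.");
    }
}
```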