diff --git a/README.md b/README.md
index e756c3b..76eb01f 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,11 @@
# OpenHosta
v2.1.4 - Open-Source Project
+ Basic Usage - Synthetic Data - local LLM (phi-4)
+
+ Simple AI Agent with gpt-4o
+
+
**- The future of development is human -**
Welcome to the OpenHosta documentation, a powerful tool that facilitates the integration of LLMs into the development environment. OpenHosta is used to emulate functions using AI, while respecting Python's native paradigm and syntax.
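
The agent notebook added in this diff (`docs/openhosta_agent.ipynb`) builds a loop that repeatedly asks an LLM-emulated router which function to call next. Below is a minimal offline sketch of that loop, with the router replaced by a deterministic stub; in the real notebook, `find_best_step_to_execute` is implemented with OpenHosta's `emulate()` and chooses among several functions.

```python
# Sketch of the agent loop from docs/openhosta_agent.ipynb.
# The LLM-backed router is replaced by a deterministic stub so the
# control flow runs offline; in the notebook, find_best_step_to_execute
# is implemented with OpenHosta's emulate().
from typing import List, Tuple


def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


def find_best_step_to_execute(stack: List[Tuple[str, list, object]],
                              target: str) -> Tuple[str, list]:
    # Stub: first add the two numbers, then report that we are done.
    if not stack:
        return "add", [22.0, 100055.0]
    return "Done", []


actions = {"add": add}

stack: list = []
output = None
while True:
    next_action, params = find_best_step_to_execute(stack, "add 22 and 100055")
    if next_action == "Done":
        break
    output = actions[next_action](*params)
    stack.append((next_action, params, output))

print(output)  # 100077.0
```

The notebook's router also exposes `number_to_string` and `string_to_number`, so the LLM can parse numbers written in words and verbalize the final result.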
diff --git a/docs/openhosta_agent.ipynb b/docs/openhosta_agent.ipynb
new file mode 100644
index 0000000..7e3dbc8
--- /dev/null
+++ b/docs/openhosta_agent.ipynb
@@ -0,0 +1,257 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# OpenHosta Agent with gpt-4o\n",
+ "\n",
+ "This Colab demonstrates simple use cases of OpenHosta. You need an OpenAI API key to run it."
+ ],
+ "metadata": {
+ "id": "2ywSpiksruBs"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Basic Usage of AI Agents with OpenHosta"
+ ],
+ "metadata": {
+ "id": "0x6AUM3osYK2"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!pip install OpenHosta"
+ ],
+ "metadata": {
+ "id": "L9EUaVsR8x7x",
+ "outputId": "f8345a96-13c8-4737-9730-ad292079ffb4",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "execution_count": 5,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Requirement already satisfied: OpenHosta in /usr/local/lib/python3.10/dist-packages (2.1.4)\n",
+ "Requirement already satisfied: requests>=2.32.3 in /usr/local/lib/python3.10/dist-packages (from OpenHosta) (2.32.3)\n",
+ "Requirement already satisfied: typing_extensions>=4.12.2 in /usr/local/lib/python3.10/dist-packages (from OpenHosta) (4.12.2)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (3.4.1)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (3.10)\n",
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (2.3.0)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (2024.12.14)\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Configure the LLM that you want to use"
+ ],
+ "metadata": {
+ "id": "1ZEB71Ytsk4B"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from OpenHosta import config\n",
+ "\n",
+ "# If you have an OpenAI API key you can use it like this:\n",
+ "# Use default model (gpt-4o) through API key\n",
+ "# config.set_default_apiKey(<>)\n",
+ "\n",
+ "# Comment this if you use OpenAI models\n",
+ "# Use Microsoft local Phi-4 through ollama\n",
+ "# my_model=config.Model(\n",
+ "# base_url=\"http://localhost:11434/v1/chat/completions\",\n",
+ "# model=\"phi4\", api_key=\"none\", timeout=120\n",
+ "# )\n",
+ "# config.set_default_model(my_model)\n",
+ "\n",
+ "config.set_default_apiKey(\"<>\") # Ask me for one through LinkedIn, I will provide one for testing purposes!"
+ ],
+ "metadata": {
+ "id": "SBC5-L0zqH1c"
+ },
+ "execution_count": 4,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Agent structure\n",
+ "\n",
+ "Ask a question, then select the best function to answer it"
+ ],
+ "metadata": {
+ "id": "T62JVKqU0rMg"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Emulate functions using the selected LLM"
+ ],
+ "metadata": {
+ "id": "FauFveKWsp3G"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from OpenHosta import emulate"
+ ],
+ "metadata": {
+ "id": "SEpo1gfJT8Bg"
+ },
+ "execution_count": 6,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Mix of Python and OpenHosta functions that the agent will use\n",
+ "def add(a:float, b:float)->float:\n",
+ " \"\"\"\n",
+ " add two numbers\n",
+ " \"\"\"\n",
+ " return a+b\n",
+ "\n",
+ "def number_to_string(number:float)->str:\n",
+ " \"\"\"\n",
+ " Convert a number to a string that represents the number in letters\n",
+ " \"\"\"\n",
+ " return emulate()\n",
+ "\n",
+ "def string_to_number(string:str)->float:\n",
+ " \"\"\"\n",
+ " Convert a string to a number\n",
+ " \"\"\"\n",
+ " return emulate()"
+ ],
+ "metadata": {
+ "id": "icgBCk9j00-2"
+ },
+ "execution_count": 7,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Agent main router\n",
+ "from typing import Literal, List, Tuple\n",
+ "\n",
+ "Actions = {\n",
+ " \"add\": add,\n",
+ " \"number_to_string\": number_to_string,\n",
+ " \"string_to_number\": string_to_number\n",
+ "}\n",
+ "ActionType = Literal[\"add\", \"number_to_string\", \"string_to_number\", \"Done\"]\n",
+ "\n",
+ "def find_best_step_to_execute(stack:List[Tuple[ActionType, list, str]], target:str)->Tuple[ActionType, list]:\n",
+ " \"\"\"\n",
+ " Select the best action to get elements needed to achieve target.\n",
+ "\n",
+ " You first print a strategy based on possible actions,\n",
+ " then you look at the already executed actions in the stack,\n",
+ " then you decide on the next action to take and its parameters.\n",
+ "\n",
+ " :param stack: a list of already taken actions with (ActionType, list of params, Action returned value)\n",
+ " :param target: the overall target objective.\n",
+ "\n",
+ " :return: The next action to take and its parameters. Return \"Done\" if there is no more action to take\n",
+ " \"\"\"\n",
+ " return emulate()"
+ ],
+ "metadata": {
+ "id": "gZ02NiW_1ooR"
+ },
+ "execution_count": 8,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Agent main loop"
+ ],
+ "metadata": {
+ "id": "bVuS-CC6szQM"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "request = \"can you add twenty two and hundred thousant fifity five and print the result in letters\"\n",
+ "\n",
+ "stack=[]\n",
+ "output = \"\"\n",
+ "\n",
+ "Done=False\n",
+ "while not Done:\n",
+ " next_action, params = find_best_step_to_execute(stack, request)\n",
+ " if next_action == \"Done\":\n",
+ " Done=True\n",
+ " else:\n",
+ " next_function = Actions[next_action]\n",
+ " print(f\"Executing {next_action} with params {params}\")\n",
+ " output = next_function(*params)\n",
+ " stack.append([next_action, params, output])\n",
+ "\n",
+ "print(output)\n"
+ ],
+ "metadata": {
+ "id": "GXiKTdYm_GGp",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "9b04eaab-96f8-47ea-ceac-897362854481"
+ },
+ "execution_count": 9,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Executing string_to_number with params ['twenty two']\n",
+ "Executing string_to_number with params ['hundred thousant fifity five']\n",
+ "Executing add with params [22.0, 100055.0]\n",
+ "Executing number_to_string with params [100077.0]\n",
+ "one hundred thousand seventy-seven\n"
+ ]
+ }
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/docs/openhosta_phi4.ipynb b/docs/openhosta_phi4.ipynb
new file mode 100644
index 0000000..d9caf7c
--- /dev/null
+++ b/docs/openhosta_phi4.ipynb
@@ -0,0 +1,520 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "gpuType": "T4",
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# OpenHosta basic example with a local Phi-4 or gpt-4o\n",
+ "\n",
+ "This Colab demonstrates simple use cases of OpenHosta. If you do not have an OpenAI (or other) API key, you can run the first part, **Install a local Phi-4**. Otherwise, jump directly to step 2: **Basic Usage**."
+ ],
+ "metadata": {
+ "id": "2ywSpiksruBs"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Install a local Phi-4"
+ ],
+ "metadata": {
+ "id": "1DjZKsvksUPf"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!apt install -y screen"
+ ],
+ "metadata": {
+ "id": "5NCvoTi3phgq",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "ea5c5e5e-b01b-4ace-e57f-64c7a0a690bb"
+ },
+ "execution_count": 55,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Reading package lists... Done\n",
+ "Building dependency tree... Done\n",
+ "Reading state information... Done\n",
+ "screen is already the newest version (4.9.0-1).\n",
+ "0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!curl -fsSL https://ollama.com/install.sh | sh"
+ ],
+ "metadata": {
+ "id": "dPQ1TFFGp4_w",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "cf8051bb-5e82-418c-8990-65a2aa7407a9"
+ },
+ "execution_count": 56,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ ">>> Cleaning up old version at /usr/local/lib/ollama\n",
+ ">>> Installing ollama to /usr/local\n",
+ ">>> Downloading Linux amd64 bundle\n",
+ "############################################################################################# 100.0%\n",
+ ">>> Adding ollama user to video group...\n",
+ ">>> Adding current user to ollama group...\n",
+ ">>> Creating ollama systemd service...\n",
+ "\u001b[1m\u001b[31mWARNING:\u001b[m systemd is not running\n",
+ "\u001b[1m\u001b[31mWARNING:\u001b[m Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.\n",
+ ">>> The Ollama API is now available at 127.0.0.1:11434.\n",
+ ">>> Install complete. Run \"ollama\" from the command line.\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!screen -dmS ollama ollama serve"
+ ],
+ "metadata": {
+ "id": "Rczlf_0Dp9b9"
+ },
+ "execution_count": 57,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Can take 5 minutes\n",
+ "!ollama run phi4 hello --verbose 2>&1 | grep -E \"([^0]0%|Bonjour|:)\""
+ ],
+ "metadata": {
+ "id": "ivzgDLEuqEcI",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "67d0bff2-fa2e-4425-a476-8b0a32519abd"
+ },
+ "execution_count": 54,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "pulling dde5aa3fc5ff... 0% ▕ ▏ 0 B/2.0 GB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 0% ▕ ▏ 0 B/2.0 GB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 0% ▕ ▏ 0 B/2.0 GB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 0% ▕ ▏ 334 KB/2.0 GB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 0% ▕ ▏ 4.9 MB/2.0 GB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 10% ▕█ ▏ 194 MB/2.0 GB 167 MB/s 10s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 10% ▕█ ▏ 200 MB/2.0 GB 167 MB/s 10s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 20% ▕███ ▏ 409 MB/2.0 GB 167 MB/s 9s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 30% ▕████ ▏ 599 MB/2.0 GB 241 MB/s 5s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 30% ▕████ ▏ 608 MB/2.0 GB 241 MB/s 5s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 40% ▕██████ ▏ 812 MB/2.0 GB 245 MB/s 4s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 70% ▕███████████ ▏ 1.4 GB/2.0 GB 248 MB/s 2s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling dde5aa3fc5ff... 80% ▕████████████ ▏ 1.6 GB/2.0 GB 242 MB/s 1s\u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling 966de95ca8a6... 0% ▕ ▏ 0 B/1.4 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling 966de95ca8a6... 0% ▕ ▏ 0 B/1.4 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling 966de95ca8a6... 0% ▕ ▏ 0 B/1.4 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling fcc5a6bec9da... 0% ▕ ▏ 0 B/7.7 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling fcc5a6bec9da... 0% ▕ ▏ 0 B/7.7 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling a70ff7e570d9... 0% ▕ ▏ 0 B/6.0 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling a70ff7e570d9... 0% ▕ ▏ 0 B/6.0 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling a70ff7e570d9... 0% ▕ ▏ 0 B/6.0 KB \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling 56bb8bd477a5... 0% ▕ ▏ 0 B/ 96 B \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling 56bb8bd477a5... 0% ▕ ▏ 0 B/ 96 B \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "pulling 34bb5ab01051... 0% ▕ ▏ 0 B/ 561 B \u001b[?25h\u001b[?25l\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1G\u001b[A\u001b[2K\u001b[1Gpulling manifest \n",
+ "total duration: 2m9.758814725s\n",
+ "load duration: 2.475263371s\n",
+ "prompt eval count: 26 token(s)\n",
+ "prompt eval duration: 1m1.681s\n",
+ "prompt eval rate: 0.42 tokens/s\n",
+ "eval count: 25 token(s)\n",
+ "eval duration: 1m5.601s\n",
+ "eval rate: 0.38 tokens/s\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 58,
+ "metadata": {
+ "id": "onr1w8jbEcby",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "5318fb11-b3fa-4f6c-bf9e-a1173ef53a9e"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Requirement already satisfied: OpenHosta in /usr/local/lib/python3.10/dist-packages (2.1.4)\n",
+ "Requirement already satisfied: requests>=2.32.3 in /usr/local/lib/python3.10/dist-packages (from OpenHosta) (2.32.3)\n",
+ "Requirement already satisfied: typing_extensions>=4.12.2 in /usr/local/lib/python3.10/dist-packages (from OpenHosta) (4.12.2)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (3.4.1)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (3.10)\n",
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (2.3.0)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->OpenHosta) (2024.12.14)\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install OpenHosta"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Basic Usage of OpenHosta"
+ ],
+ "metadata": {
+ "id": "0x6AUM3osYK2"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Configure the LLM that you want to use"
+ ],
+ "metadata": {
+ "id": "1ZEB71Ytsk4B"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from OpenHosta import config\n",
+ "\n",
+ "# If you have an OpenAI API key you can use it like this:\n",
+ "# Use default model (gpt-4o) through API key\n",
+ "# config.set_default_apiKey(<>)\n",
+ "\n",
+ "# Comment this if you use OpenAI models\n",
+ "# Use Microsoft local Phi-4 through ollama\n",
+ "my_model=config.Model(\n",
+ " base_url=\"http://localhost:11434/v1/chat/completions\",\n",
+ " model=\"phi4\", api_key=\"none\", timeout=120\n",
+ ")\n",
+ "config.set_default_model(my_model)\n"
+ ],
+ "metadata": {
+ "id": "SBC5-L0zqH1c"
+ },
+ "execution_count": 59,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Emulate functions using the selected LLM"
+ ],
+ "metadata": {
+ "id": "FauFveKWsp3G"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from OpenHosta import emulate"
+ ],
+ "metadata": {
+ "id": "SEpo1gfJT8Bg"
+ },
+ "execution_count": 60,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def translate(text:str, language:str)->str:\n",
+ " \"\"\"\n",
+ " This function translates the text in the “text” parameter into the language specified in the “language” parameter.\n",
+ " \"\"\"\n",
+ " return emulate()\n",
+ "\n",
+ "result = translate(\"Hello World!\", \"French\")\n",
+ "print(result)"
+ ],
+ "metadata": {
+ "id": "gE1ecyDzUVFE",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "1cc84320-72f3-4245-db78-92d1aa192a25"
+ },
+ "execution_count": 61,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Bonjour le monde!\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# You can select another model like this\n",
+ "#my_model = config.Model(\n",
+ "# model=\"gpt-4o-mini\",\n",
+ "# base_url=\"https://api.openai.com/v1/chat/completions\",\n",
+ "# api_key=<>\n",
+ "#)"
+ ],
+ "metadata": {
+ "id": "avGsfrxQUsCb"
+ },
+ "execution_count": 62,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def find_name_age(sentence:str, id:dict)->dict:\n",
+ " \"\"\"\n",
+ " This function finds the name and the age of a person in a text.\n",
+ "\n",
+ " Args:\n",
+ " sentence: The text in which to search for information\n",
+ " id: The dictionary to fill in with information\n",
+ "\n",
+ " Return:\n",
+ " A dictionary identical to the one passed as a parameter, but filled with the information.\n",
+ " If the information is not found, fill the values with “None”.\n",
+ " \"\"\"\n",
+ " return emulate(model=my_model)\n",
+ "\n",
+ "return_dict = {\"name\": \"\", \"age\": \"\"}\n",
+ "result = find_name_age(\"Hello, I'm John Wick, i'm 30 and I live in New York\", return_dict)\n",
+ "print(result)"
+ ],
+ "metadata": {
+ "id": "I_AJQzoKU192",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "98cdec58-8857-470a-d5e0-9216515b0e12"
+ },
+ "execution_count": 63,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "{'name': 'John Wick', 'age': '30'}\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Specify advanced return types"
+ ],
+ "metadata": {
+ "id": "bVuS-CC6szQM"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from typing import Dict, Tuple, List\n",
+ "\n",
+ "def analyze_text(text: str) -> Dict[str, List[Tuple[int, str]]]:\n",
+ " \"\"\"Analyze text to map each word to a list of tuples containing word length and word.\"\"\"\n",
+ " return emulate()\n",
+ "\n",
+ "# Example usage\n",
+ "analysis = analyze_text(\"Hello, World!\")\n",
+ "\n",
+ "print(analysis)\n",
+ "print(type(analysis))"
+ ],
+ "metadata": {
+ "id": "GXiKTdYm_GGp",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "854dd9c7-95fb-4c69-8da7-37549df9f355"
+ },
+ "execution_count": 64,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "{'hello': [[5, 'Hello']], 'world': [[5, 'World']]}\n",
+ "<class 'dict'>\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Specify pydantic return structures\n",
+ "\n",
+ "OpenHosta is compatible with pydantic. You can specify pydantic input and output types and OpenHosta will propagate schema and Field documentation to the LLM."
+ ],
+ "metadata": {
+ "id": "R5qFAS06s7Eq"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!pip install pydantic"
+ ],
+ "metadata": {
+ "id": "-w2PyozHrklT",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "b15695d6-3111-4c3c-bca6-244f2671245e"
+ },
+ "execution_count": 65,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Requirement already satisfied: pydantic in /usr/local/lib/python3.10/dist-packages (2.10.4)\n",
+ "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.10/dist-packages (from pydantic) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.27.2 in /usr/local/lib/python3.10/dist-packages (from pydantic) (2.27.2)\n",
+ "Requirement already satisfied: typing-extensions>=4.12.2 in /usr/local/lib/python3.10/dist-packages (from pydantic) (4.12.2)\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from pydantic import BaseModel, Field\n",
+ "\n",
+ "class Person(BaseModel):\n",
+ " name: str = Field(..., description = \"The full name\")\n",
+ " age: int\n",
+ "\n",
+ "def find_name_age_pydantic(sentence:str)->Person:\n",
+ " \"\"\"\n",
+ " This function finds the name and the age of a person in a text.\n",
+ "\n",
+ " Args:\n",
+ " sentence: The text in which to search for information\n",
+ "\n",
+ " Return:\n",
+ " A Pydantic model filled with the information.\n",
+ " If the information is not found, fill the values with “None”.\n",
+ " \"\"\"\n",
+ " return emulate()\n",
+ "\n",
+ "result = find_name_age_pydantic(\"Luke Skywalker is very surprising: he's only 27 when he becomes a Jedi.\")\n",
+ "print(result)"
+ ],
+ "metadata": {
+ "id": "wnn5ed4QWXYs",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "0f23ccc0-c6e3-4a4d-c9da-3b3a56822cf1"
+ },
+ "execution_count": 66,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "name='Luke Skywalker' age=27\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Limitations\n",
+ "\n",
+ "The emulation is limited by the LLM's capabilities. Try to have it count the r's in \"strawberry\" and you will get into trouble ;-).\n",
+ "Make sure the LLM is capable and not hallucinating before you implement an emulated function."
+ ],
+ "metadata": {
+ "id": "VvcHBChjtyys"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def find_occurrence_of_a_word(word: str, text: str) -> int:\n",
+ " \"\"\"\n",
+ " This function takes a word and a text and returns\n",
+ " the number of times the word appears in the text.\n",
+ " \"\"\"\n",
+ " return emulate()\n",
+ "\n",
+ "find_occurrence_of_a_word(\"Hello\", \"Hello World Hello!\")\n"
+ ],
+ "metadata": {
+ "id": "E5eTPiUyXjXR",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "2b46e14d-b456-421e-dda0-2c6f9f1fbf65"
+ },
+ "execution_count": 67,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "2"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 67
+ }
+ ]
+ }
+ ]
+}
\ No newline at end of file