Build natural language solutions with Azure OpenAI Service

  • Choose and deploy a model

    1. Text or Generative Pre-trained Transformer (GPT) - Models that understand and generate natural language and some code. These models are best at general tasks, conversations, and chat formats.
    2. Code - Code models are built on top of GPT models and trained on millions of lines of code. They can understand and generate code, including interpreting comments or natural language to generate code.
    3. Embeddings - These models convert text into embeddings, a special numeric format of data that machine learning models and algorithms can consume.
  • Available endpoints

    1. Completion - model takes an input prompt, and generates one or more predicted completions
    2. ChatCompletion - model takes input in the form of a chat conversation (where roles are specified with the message they send), and the next chat completion is generated
    3. Embeddings - model takes input and returns a vector representation of that input
  • Use Azure OpenAI REST API

    1. Azure OpenAI (AOAI) offers a REST API for generating responses, which developers can use to add AI functionality to their applications. Every request needs three values: YOUR_ENDPOINT_NAME (the base endpoint, like sample.openai.azure.com), YOUR_API_KEY, and YOUR_DEPLOYMENT_NAME.

    2. Completions - the completions endpoint generates a completion of your prompt.

          curl https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01 \
            -H "Content-Type: application/json" \
            -H "api-key: YOUR_API_KEY" \
            -d "{
            \"prompt\": \"Your favorite Shakespeare is\",
            \"max_tokens\": 5
          }"
      
      The JSON response:
          {
              "id": "<id>",
              "object": "text_completion",
              "created": 1679001781,
              "model": "text-davinci-003",
              "choices": [
                  {
                      "text": "Macbeth",
                      "index": 0,
                      "logprobs": null,
                      "finish_reason": "stop"
                  }
              ]
          }
      
      1. choices[].text - contains the generated completion text.
      2. finish_reason - in this example, stop.
      3. Other possibilities for finish_reason include length, which means the response used up the max_tokens specified in the request, or content_filter, which means the system detected that harmful content was generated from the prompt. If harmful content is included in the prompt itself, the API request returns an error. A minimal Python sketch that makes this call and checks finish_reason follows the embeddings example below.
    3. Chat completions - chat/completions generates a completion to your prompt, but works best when that prompt is a chat exchange.

          curl https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-03-15-preview \
            -H "Content-Type: application/json" \
            -H "api-key: YOUR_API_KEY" \
            -d '{"messages":[{"role": "system", "content": "You are a helpful assistant, teaching people about AI."},
          {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
          {"role": "assistant", "content": "Yes, Azure OpenAI supports several languages, and can translate between them."},
          {"role": "user", "content": "Do other Azure Cognitive Services support translation too?"}]}'
      
      The JSON response:
          {
              "id": "chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
              "object": "chat.completion",
              "created": 1679001781,
              "model": "gpt-35-turbo",
              "usage": {
                  "prompt_tokens": 95,
                  "completion_tokens": 84,
                  "total_tokens": 179
              },
              "choices": [
                  {
                      "message":
                          {
                              "role": "assistant",
                              "content": "Yes, other Azure Cognitive Services also support translation. ..."
                          },
                      "finish_reason": "stop",
                      "index": 0
                  }
              ]
          }
      
    4. Both completion endpoints allow you to specify other optional input parameters, such as temperature and max_tokens.

    5. Embeddings - embeddings represent text in a numeric format that's easily consumed by machine learning models. To generate embeddings from input text, POST a request to the embeddings endpoint.

          curl https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-12-01 \
            -H "Content-Type: application/json" \
            -H "api-key: YOUR_API_KEY" \
            -d "{\"input\": \"The food was delicious and the waiter...\"}"
      
      When generating embeddings, be sure to use a model in AOAI meant for embeddings. Those models start with text-embedding or text-similarity, depending on what functionality you're looking for.
      
      The JSON response:
        {
          "object": "list",
          "data": [
            {
              "object": "embedding",
              "embedding": [
                0.0172990688066482523,
                -0.0291879814639389515,
                ....
                0.0134544348834753042,
              ],
              "index": 0
            }
          ],
          "model": "text-embedding-ada:002"
        }
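
      Tying the REST examples together, here is a minimal Python sketch (not from the module; it assumes the requests library and the same YOUR_ENDPOINT_NAME, YOUR_API_KEY, and YOUR_DEPLOYMENT_NAME placeholders described above) that calls the completions endpoint with the optional temperature and max_tokens parameters and checks finish_reason:

          import requests

          endpoint = "YOUR_ENDPOINT_NAME"      # base endpoint host, e.g. sample.openai.azure.com (placeholder)
          api_key = "YOUR_API_KEY"             # placeholder
          deployment = "YOUR_DEPLOYMENT_NAME"  # placeholder

          url = (f"https://{endpoint}/openai/deployments/{deployment}"
                 f"/completions?api-version=2022-12-01")

          body = {
              "prompt": "Your favorite Shakespeare is",
              "max_tokens": 5,       # optional parameter
              "temperature": 0.7     # optional parameter
          }

          response = requests.post(url, headers={"api-key": api_key}, json=body)
          result = response.json()

          choice = result["choices"][0]
          print(choice["text"])
          if choice["finish_reason"] == "length":
              print("Response was cut off at max_tokens.")

      The same header and URL pattern applies to the chat/completions and embeddings endpoints; only the request body changes.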
      

Use Azure OpenAI SDK

  • Python Code
      # Install libraries
      pip install openai
      
      # Configure app to access Azure OpenAI resource
      # Add OpenAI library
      import openai
    
      openai.api_key = '<YOUR_API_KEY>'
      openai.api_base =  '<YOUR_ENDPOINT_NAME>' 
      openai.api_type = 'azure' # Necessary for using the OpenAI library with Azure OpenAI
      openai.api_version = '2022-12-01' # This likely will change with future releases
    
      deployment_name = '<YOUR_DEPLOYMENT_NAME>' # SDK calls this "engine", but naming
                                                 # it "deployment_name" for clarity
                                                 
      # Call Azure OpenAI resource - send your prompt to the Completion, ChatCompletion, or Embeddings endpoint
      # (an Embeddings sketch follows the C# example below).
      prompt = 'What is Azure OpenAI?'
      response = openai.Completion.create(engine=deployment_name, prompt=prompt)
    
      print(response.choices[0].text)
      
      # If using ChatCompletion, sending the prompt is formed differently.
      response = openai.ChatCompletion.create(
          engine=deployment_name,
          messages=[
              {"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "What is Azure OpenAI?"}
          ]
      )
    
      print(response['choices'][0]['message']['content'])
    
      # The response object contains several values, such as total_tokens and finish_reason. 
    
  • C# Code
        dotnet add package Azure.AI.OpenAI --prerelease
    
          // Add OpenAI library
          using Azure.AI.OpenAI;
    
          // Define parameters and initialize the client
          string endpoint = "<YOUR_ENDPOINT_NAME>";
          string key = "<YOUR_API_KEY>";
          string deploymentName = "<YOUR_DEPLOYMENT_NAME>"; //SDK calls this "engine", but naming
                                                            // it "deploymentName" for clarity
    
          OpenAIClient client = new OpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));
    
          string prompt = "What is Azure OpenAI?";
    
          Response<Completions> completionsResponse = client.GetCompletions(deploymentName, prompt);
          string completion = completionsResponse.Value.Choices[0].Text;
          Console.WriteLine($"Chatbot: {completion}");
    
          // Build completion options object
          ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions()
          {
              Messages = 
              {
                  new ChatMessage(ChatRole.System, "You are a helpful AI bot."), 
                  new ChatMessage(ChatRole.User, "What is Azure OpenAI?")
              }
          };
    
          // Send request to Azure OpenAI model
          ChatCompletions chatCompletionsResponse = client.GetChatCompletions(
              deploymentName, 
              chatCompletionsOptions);
    
          ChatMessage chatCompletion = chatCompletionsResponse.Choices[0].Message;
          Console.WriteLine($"Chatbot: {chatCompletion.Content}");
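
  The SDK snippets above cover Completion and ChatCompletion; the Embeddings endpoint follows the same pattern. Here is a minimal Python sketch (not from the module; it assumes the openai 0.x configuration shown in the Python example above and a hypothetical embeddings deployment name):

      # Assumes openai is imported and api_key / api_base / api_type / api_version
      # are already configured as in the Python example above.
      embedding_deployment = '<YOUR_EMBEDDING_DEPLOYMENT_NAME>'  # placeholder: a deployment of an
                                                                 # embeddings model (e.g. text-embedding-ada-002)

      response = openai.Embedding.create(
          engine=embedding_deployment,
          input="The food was delicious and the waiter..."
      )

      vector = response['data'][0]['embedding']
      print(len(vector), vector[:3])  # vector length and its first few values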
    

Exercise - Integrate Azure OpenAI into your app

  • https://microsoftlearning.github.io/mslearn-openai/Instructions/Labs/02-natural-language-azure-openai.html

  • Cloud Shell terminal - Bash

    1. download the sample application
           rm -r azure-openai -f
           git clone https://github.com/MicrosoftLearning/mslearn-openai azure-openai
      
           cd azure-openai/Labfiles/02-nlp-azure-openai 
      
           code .
      
    2. Open the configuration file for your language - C#: appsettings.json OR Python: .env
    3. Update the configuration with the endpoint and key from your Azure OpenAI resource and the model deployment name (text-turbo). Save the file.
    4. Install the necessary packages for your language:
      # C#
      cd CSharp
      dotnet add package Azure.AI.OpenAI --prerelease
    
      # Python 
      cd Python
      pip install python-dotenv
      pip install openai
    
    5. Navigate to your language folder, open the code file, and add the necessary libraries.

        // Add Azure OpenAI package
        # C#
        using Azure.AI.OpenAI;
      
        #python
        import openai
      
    6. Add code that specifies the various parameters for your model, such as the prompt and temperature.

        # c# 
          // Initialize the Azure OpenAI client
          OpenAIClient client = new OpenAIClient(new Uri(oaiEndpoint), new AzureKeyCredential(oaiKey));
      
         // Build completion options object
         ChatCompletionsOptions chatCompletionsOptions = new ChatCompletionsOptions()
         {
             Messages =
             {
                 new ChatMessage(ChatRole.System, "You are a helpful assistant. Summarize the following text in 60 words or less."),
                 new ChatMessage(ChatRole.User, text),
             },
             MaxTokens = 120,
             Temperature = 0.7f,
         };
      
         // Send request to Azure OpenAI model
         ChatCompletions response = client.GetChatCompletions(
             deploymentOrModelName: oaiModelName, 
             chatCompletionsOptions);
         string completion = response.Choices[0].Message.Content;
      
       Console.WriteLine("Summary: " + completion + "\n");
      
       #Python
        # Set OpenAI configuration settings
        openai.api_type = "azure"
        openai.api_base = azure_oai_endpoint
        openai.api_version = "2023-03-15-preview"
        openai.api_key = azure_oai_key
      
        # Send request to Azure OpenAI model
        print("Sending request for summary to Azure OpenAI endpoint...\n\n")
        response = openai.ChatCompletion.create(
           engine=azure_oai_model,
           temperature=0.7,
           max_tokens=120,
           messages=[
               {"role": "system", "content": "You are a helpful assistant. Summarize the following text in 60 words or less."},
               {"role": "user", "content": text}
           ]
        )
      
        print("Summary: " + response.choices[0].message.content + "\n")
      
    7. Run your application

          C#: dotnet run
          Python: python test-openai-model.py
      
    8. Observe the summarization of the sample text file. Then change the temperature value to 1, save the file, run the application again, and observe the output.

    9. Increasing the temperature often causes the summary to vary, even when given the same input text, because of the increased randomness. A short sketch comparing two temperature settings follows this list.
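
  To see the effect directly, here is a minimal Python sketch (not part of the lab; it reuses the azure_oai_model variable and the openai configuration from the exercise code above, with a hypothetical sample text) that requests the same summary at two temperature settings:

      # Reuses azure_oai_model and the openai configuration from the exercise code above.
      sample_text = "Azure OpenAI provides REST API access to OpenAI's language models."  # hypothetical input

      for temp in (0.0, 1.0):
          response = openai.ChatCompletion.create(
              engine=azure_oai_model,
              temperature=temp,
              max_tokens=120,
              messages=[
                  {"role": "system", "content": "You are a helpful assistant. Summarize the following text in 60 words or less."},
                  {"role": "user", "content": sample_text}
              ]
          )
          print(f"temperature={temp}: {response.choices[0].message.content}\n")

  At temperature 0 repeated runs usually produce nearly identical summaries; at temperature 1 the wording typically changes from run to run.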

Quiz

  1. What resource values are required to make requests to your Azure OpenAI resource?
  • Chat, Embedding, and Completion
  • Key, Endpoint, and Deployment name
  • Summary, Deployment name, and Endpoint
  2. What are the three available endpoints for interacting with a deployed Azure OpenAI model?
  • Completion, ChatCompletion, and Translation
  • Completion, ChatCompletion, and Embeddings
  • Deployment, Summary, and Similarity
  3. What is the best available endpoint to model the next completion of a conversation in Azure OpenAI?
  • ChatCompletion
  • Embeddings
  • TranslateCompletion