
Integrating with function calling

You've learned a fair bit in the previous lessons, but we can improve further. Two things we can address are getting a more consistent response format, so the response is easier to work with downstream, and adding data from other sources to further enrich our application.

These are the problems this chapter looks to address.

Introduction

This lesson will cover:

  • What function calling is and its use cases.
  • How to create a function call using Azure OpenAI.
  • How to integrate a function call into an application.

Learning Goals

After completing this lesson, you will be able to:

  • Explain the purpose of using function calling.
  • Set up a function call using the Azure OpenAI Service.
  • Design effective function calls for your application's use case.

Scenario: improving our chatbot with functions

For this lesson, we want to build a feature for our education startup that allows users to use a chatbot to find technical courses. We will recommend courses that fit their skill level, current role and technology of interest.

To complete this scenario we will use a combination of:

  • Azure OpenAI to create a chat experience for the user.
  • Microsoft Learn Catalog API to help users find courses based on the user's request.
  • Function Calling to take the user's query and send it to a function to make the API request.

To get started, let's look at why we would want to use function calling in the first place:

Why Function Calling

Before function calling, responses from an LLM were unstructured and inconsistent. Developers had to write complex validation code to handle every variation of a response. Users also could not get answers to questions like "What is the current weather in Stockholm?", because models were limited to the data they were trained on.

Function Calling is a feature of the Azure OpenAI Service designed to overcome the following limitations:

  • Consistent response format. If we can better control the response format we can more easily integrate the response downstream to other systems.
  • External data. The ability to use data from an application's other sources in a chat context.

Illustrating the problem through a scenario

We recommend using the included notebook if you want to run the scenario below. You can also just read along, as we're illustrating a problem that functions can help address.

Let's look at the example that illustrates the response format problem:

Let's say we want to create a database of student data so we can suggest the right course to them. Below we have two descriptions of students that are very similar in the data they contain.

  1. Create a connection to our Azure OpenAI resource:

    import os
    import json
    from openai import AzureOpenAI
    from dotenv import load_dotenv

    load_dotenv()

    client = AzureOpenAI(
        azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT'],  # read from the AZURE_OPENAI_ENDPOINT env var if omitted
        api_key=os.environ['AZURE_OPENAI_KEY'],
        api_version="2023-07-01-preview"
    )

    deployment = os.environ['AZURE_OPENAI_DEPLOYMENT']

    The code above configures our connection to Azure OpenAI, setting the endpoint, api_key and api_version for our resource.
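
    For reference, here is a minimal sketch of the .env file that load_dotenv() reads. The variable names match the code above; the values are placeholders to replace with your own:

    AZURE_OPENAI_KEY=<your-api-key>
    AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
    AZURE_OPENAI_DEPLOYMENT=<your-deployment-name>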

  2. Create two student descriptions using the variables student_1_description and student_2_description:

    student_1_description="Emily Johnson is a sophomore majoring in computer science at Duke University. She has a 3.7 GPA. Emily is an active member of the university's Chess Club and Debate Team. She hopes to pursue a career in software engineering after graduating."
    
    student_2_description = "Michael Lee is a sophomore majoring in computer science at Stanford University. He has a 3.8 GPA. Michael is known for his programming skills and is an active member of the university's Robotics Club. He hopes to pursue a career in artificial intelligence after finishing his studies."

    We want to send the above student descriptions to an LLM to parse the data. This data can later be used in our application and be sent to an API or stored in a database.

  3. Let's create two identical prompts in which we instruct the LLM on what information we are interested in:

    prompt1 = f'''
    Please extract the following information from the given text and return it as a JSON object:
    
    name
    major
    school
    grades
    club
    
    This is the body of text to extract the information from:
    {student_1_description}
    '''
    
    prompt2 = f'''
    Please extract the following information from the given text and return it as a JSON object:
    
    name
    major
    school
    grades
    club
    
    This is the body of text to extract the information from:
    {student_2_description}
    '''

    The above prompts instruct the LLM to extract information and return the response in JSON format.

  4. After setting up the prompts and the connection to Azure OpenAI, we will now send the prompts to the LLM by calling client.chat.completions.create. We store the prompt in the messages variable and assign it the role user. This mimics a message from a user being written to a chatbot.

    # response from prompt one
    openai_response1 = client.chat.completions.create(
        model=deployment,
        messages=[{'role': 'user', 'content': prompt1}]
    )
    openai_response1.choices[0].message.content

    # response from prompt two
    openai_response2 = client.chat.completions.create(
        model=deployment,
        messages=[{'role': 'user', 'content': prompt2}]
    )
    openai_response2.choices[0].message.content

Now we can send both requests to the LLM and examine the response we receive by reading openai_response1.choices[0].message.content.

  5. Lastly, we can parse the response into a Python object by calling json.loads:

    # Loading the response as a JSON object
    json_response1 = json.loads(openai_response1.choices[0].message.content)
    json_response1

    Response 1:

    {
      "name": "Emily Johnson",
      "major": "computer science",
      "school": "Duke University",
      "grades": "3.7",
      "club": "Chess Club"
    }

    Response 2:

    {
      "name": "Michael Lee",
      "major": "computer science",
      "school": "Stanford University",
      "grades": "3.8 GPA",
      "club": "Robotics Club"
    }

    Even though the prompts are the same and the descriptions are similar, we see the values of the grades property formatted differently: we can get, for example, 3.7 or 3.8 GPA.

    This result is because the LLM takes unstructured data in the form of the written prompt and also returns unstructured data. We need a structured format so that we know what to expect when storing or using this data.
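
    To see the inconsistency directly, we can compare the grades values from the two parsed responses. A minimal sketch, assuming both responses above were parsed with json.loads:

    json_response2 = json.loads(openai_response2.choices[0].message.content)

    print(json_response1["grades"])  # e.g. "3.7"
    print(json_response2["grades"])  # e.g. "3.8 GPA"

    # Downstream code that assumes a plain number breaks on "3.8 GPA":
    # float(json_response2["grades"])  # would raise ValueError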

So how do we solve the formatting problem? By using function calling, we can make sure that we receive structured data back. When using function calling, the LLM does not actually call or run any functions. Instead, we create a structure for the LLM to follow in its responses. We then use those structured responses to know which function to run in our applications.

[Diagram: function calling flow]

We can then take what is returned from the function and send it back to the LLM. The LLM will then respond in natural language to answer the user's query.

Use Cases for Function Calls

There are many different use cases where function calls can improve your app, such as:

  • Calling External Tools. Chatbots are great at providing answers to questions from users. By using function calling, a chatbot can use messages from users to complete certain tasks. For example, a student can ask the chatbot to "Send email to my instructor saying I need more assistance with this subject". This can make a function call to send_email(to: string, body: string) (see the sketch after this list).

  • Create API or Database Queries. Users can find information using natural language that gets converted into a formatted query or API request. An example of this could be a teacher who requests "Who are the students that completed the last assignment?", which could call a function named get_completed(student_name: string, assignment: int, current_status: string).

  • Creating Structured Data. Users can take a block of text or CSV and use the LLM to extract important information from it. For example, a student can convert a Wikipedia article about peace agreements into AI flash cards. This can be done by using a function called get_important_facts(agreement_name: string, date_signed: string, parties_involved: list).
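
As an illustration of the first use case, here's a sketch of what a send_email function definition could look like, using the function-definition format covered in "Creating Your First Function Call" below (the function and its parameters are hypothetical):

send_email_function = {
   "name": "send_email",
   "description": "Sends an email to the given recipient with the given body",
   "parameters": {
      "type": "object",
      "properties": {
         "to": {
            "type": "string",
            "description": "The email address of the recipient"
         },
         "body": {
            "type": "string",
            "description": "The content of the email"
         }
      },
      "required": ["to", "body"]
   }
}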

Creating Your First Function Call

The process of creating a function call includes three main steps:

  1. Calling the Chat Completions API with a list of your functions and a user message.
  2. Reading the model's response to perform an action, i.e., execute a function or API call.
  3. Making another call to the Chat Completions API with the response from your function, so the model can use that information to create a response to the user.

[Diagram: LLM function calling flow]

Step 1 - creating messages

The first step is to create a user message. This can be assigned dynamically by taking the value of a text input, or you can assign a value here. If this is your first time working with the Chat Completions API, note that we need to define the role and the content of each message.

The role can be either system (creating rules), assistant (the model) or user (the end-user). For function calling, we will assign the role user and use an example question as the content.

messages= [ {"role": "user", "content": "Find me a good course for a beginner student to learn Azure."} ]

Assigning different roles makes it clear to the LLM whether it's the system or the user saying something, which helps build a conversation history the LLM can build upon.
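
For example, a minimal sketch combining a system message (hypothetical wording) with the user question above:

messages = [
    {"role": "system", "content": "You are a helpful assistant that recommends technical courses."},
    {"role": "user", "content": "Find me a good course for a beginner student to learn Azure."}
]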

Step 2 - creating functions

Next, we will define a function and its parameters. We will use just one function here, called search_courses, but you can create multiple functions.

Important: Functions are included in the system message to the LLM and count against the number of tokens you have available.

Below, we create the functions as an array of items. Each item is a function and has properties name, description and parameters:

functions = [
   {
      "name":"search_courses",
      "description":"Retrieves courses from the search index based on the parameters provided",
      "parameters":{
         "type":"object",
         "properties":{
            "role":{
               "type":"string",
               "description":"The role of the learner (i.e. developer, data scientist, student, etc.)"
            },
            "product":{
               "type":"string",
               "description":"The product that the lesson is covering (i.e. Azure, Power BI, etc.)"
            },
            "level":{
               "type":"string",
               "description":"The level of experience the learner has prior to taking the course (i.e. beginner, intermediate, advanced)"
            }
         },
         "required":[
            "role"
         ]
      }
   }
]

Let's describe each of these properties in more detail:

  • name - The name of the function that we want to have called.
  • description - This is the description of how the function works. Here it's important to be specific and clear.
  • parameters - A description of the values and format that you want the model to produce in its response. The parameters object consists of the following properties:
    1. type - The data type the parameters object is stored in (here, object).
    2. properties - The list of specific values that the model will use in its response. Each one has:
      1. name - The key is the name of the property that the model will use in its formatted response, for example, product.
      2. type - The data type of this property, for example, string.
      3. description - A description of the specific property.

There's also an optional property required, which lists the properties that must be supplied for the function call to be completed.
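
Because parameters follows JSON Schema, you can sanity-check arguments locally before using them. A minimal sketch using the jsonschema package (an assumption; install it separately) against the search_courses schema above:

from jsonschema import validate, ValidationError

schema = functions[0]["parameters"]

# Passes: "role" is present, so the required constraint is satisfied
validate({"role": "student", "product": "Azure"}, schema)

try:
    # Fails: the required "role" property is missing
    validate({"product": "Azure"}, schema)
except ValidationError as e:
    print(e.message)  # 'role' is a required property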

Step 3 - Making the function call

After defining a function, we now need to include it in the call to the Chat Completions API. We do this by adding functions to the request, in this case functions=functions.

There is also an option to set function_call to auto. This means we will let the LLM decide which function should be called based on the user message rather than assigning it ourselves.

Below we call client.chat.completions.create; note how we set functions=functions and function_call="auto", thereby giving the LLM the choice of when to call the functions we provide:

response = client.chat.completions.create(
    model=deployment,
    messages=messages,
    functions=functions,
    function_call="auto"
)

print(response.choices[0].message)

The response coming back now looks like so:

{
  "role": "assistant",
  "function_call": {
    "name": "search_courses",
    "arguments": "{\n  \"role\": \"student\",\n  \"product\": \"Azure\",\n  \"level\": \"beginner\"\n}"
  }
}

Here we can see how the function search_courses was called and with what arguments, as listed in the arguments property in the JSON response.

We can conclude that the LLM was able to find the data to fit the arguments of the function, extracting it from the value provided to the messages parameter in the chat completion call. Below is a reminder of the messages value:

messages= [ {"role": "user", "content": "Find me a good course for a beginner student to learn Azure."} ]

As you can see, student, Azure and beginner were extracted from messages and set as input to the function. Using functions this way is a great way to extract information from a prompt, and also to provide structure to the LLM and have reusable functionality.
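
Since arguments comes back as a JSON string, we can parse it into a Python dictionary with json.loads, a preview of what we'll do in the integration section below:

args = json.loads(response.choices[0].message.function_call.arguments)
print(args["role"], args["product"], args["level"])  # student Azure beginner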

Next, we need to see how we can use this in our app.

Integrating Function Calls into an Application

Now that we have tested the formatted response from the LLM, we can integrate it into an application.

Managing the flow

To integrate this into our application, let's take the following steps:

  1. First, let's make the call to the Azure OpenAI service and store the message in a variable called response_message:

    response_message = response.choices[0].message
  2. Now we will define the function that will call the Microsoft Learn API to get a list of courses:

    import requests

    def search_courses(role, product, level):
        # Query the Microsoft Learn Catalog API with the parameters the LLM extracted
        url = "https://learn.microsoft.com/api/catalog/"
        params = {
            "role": role,
            "product": product,
            "level": level
        }
        response = requests.get(url, params=params)
        modules = response.json()["modules"]
        # Return the first five matching modules as a string the LLM can read
        results = []
        for module in modules[:5]:
            title = module["title"]
            url = module["url"]
            results.append({"title": title, "url": url})
        return str(results)

    Note how we now create an actual Python function that maps to the function name introduced in the functions variable. We're also making real external API calls to fetch the data we need; in this case, we call the Microsoft Learn Catalog API to search for training modules.
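
    You can test the function directly before wiring it up to the LLM, using the arguments we saw the model produce earlier (this makes a live network call to the Learn API):

    print(search_courses(role="student", product="Azure", level="beginner"))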

OK, so we created the functions variable and a corresponding Python function. How do we tell the LLM to map the two together so that our Python function is called?

  3. To see if we need to call a Python function, we look at the LLM response, check whether function_call is part of it, and then call the function it points to. Here's how you can make that check:

    # Check if the model wants to call a function
    if response_message.function_call:
        print("Recommended Function call:")
        print(response_message.function_call.name)
        print()

        # Call the function
        function_name = response_message.function_call.name

        available_functions = {
            "search_courses": search_courses,
        }
        function_to_call = available_functions[function_name]

        function_args = json.loads(response_message.function_call.arguments)
        function_response = function_to_call(**function_args)

        print("Output of function call:")
        print(function_response)
        print(type(function_response))

        # Add the assistant response and function response to the messages
        messages.append(  # adding assistant response to messages
            {
                "role": response_message.role,
                "function_call": {
                    "name": function_name,
                    "arguments": response_message.function_call.arguments,
                },
                "content": None
            }
        )
        messages.append(  # adding function response to messages
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )

    These three lines ensure we extract the function name and the arguments, and make the call:

    function_to_call = available_functions[function_name]
    
    function_args = json.loads(response_message.function_call.arguments)
    function_response = function_to_call(**function_args)
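
    The ** in function_to_call(**function_args) unpacks the parsed dictionary into keyword arguments. A minimal illustration:

    function_args = {"role": "student", "product": "Azure", "level": "beginner"}
    # Equivalent to: search_courses(role="student", product="Azure", level="beginner")
    function_response = search_courses(**function_args)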

    Below is the output from running our code:

    Output

    {
      "name": "search_courses",
      "arguments": "{\n  \"role\": \"student\",\n  \"product\": \"Azure\",\n  \"level\": \"beginner\"\n}"
    }
    
    Output of function call:
    [{'title': 'Describe concepts of cryptography', 'url': 'https://learn.microsoft.com/training/modules/describe-concepts-of-cryptography/?WT.mc_id=api_CatalogApi'}, {'title': 'Introduction to audio classification with TensorFlow', 'url': 'https://learn.microsoft.com/en-us/training/modules/intro-audio-classification-tensorflow/?WT.mc_id=api_CatalogApi'}, {'title': 'Design a Performant Data Model in Azure SQL Database with Azure Data Studio', 'url': 'https://learn.microsoft.com/training/modules/design-a-data-model-with-ads/?WT.mc_id=api_CatalogApi'}, {'title': 'Getting started with the Microsoft Cloud Adoption Framework for Azure', 'url': 'https://learn.microsoft.com/training/modules/cloud-adoption-framework-getting-started/?WT.mc_id=api_CatalogApi'}, {'title': 'Set up the Rust development environment', 'url': 'https://learn.microsoft.com/training/modules/rust-set-up-environment/?WT.mc_id=api_CatalogApi'}]
    <class 'str'>
    
  4. Now we will send the updated messages list to the LLM so we can receive a natural language response instead of an API JSON formatted response.

    print("Messages in next request:")
    print(messages)
    print()
    
    second_response = client.chat.completions.create(
        messages=messages,
        model=deployment,
        function_call="auto",
        functions=functions,
        temperature=0
    )  # get a new response from GPT where it can see the function response
    
    
    print(second_response.choices[0].message)

    Output

    {
      "role": "assistant",
      "content": "I found some good courses for beginner students to learn Azure:\n\n1. [Describe concepts of cryptography] (https://learn.microsoft.com/training/modules/describe-concepts-of-cryptography/?WT.mc_id=api_CatalogApi)\n2. [Introduction to audio classification with TensorFlow](https://learn.microsoft.com/training/modules/intro-audio-classification-tensorflow/?WT.mc_id=api_CatalogApi)\n3. [Design a Performant Data Model in Azure SQL Database with Azure Data Studio](https://learn.microsoft.com/training/modules/design-a-data-model-with-ads/?WT.mc_id=api_CatalogApi)\n4. [Getting started with the Microsoft Cloud Adoption Framework for Azure](https://learn.microsoft.com/training/modules/cloud-adoption-framework-getting-started/?WT.mc_id=api_CatalogApi)\n5. [Set up the Rust development environment](https://learn.microsoft.com/training/modules/rust-set-up-environment/?WT.mc_id=api_CatalogApi)\n\nYou can click on the links to access the courses."
    }

Assignment

To continue your learning of Azure OpenAI function calling, you can build:

  • More parameters for the function that might help learners find more courses.
  • Another function call that takes more information from the learner, like their native language.
  • Error handling for when the function call and/or API call does not return any suitable courses.

Hint: Follow the Learn API reference documentation page to see how and where this data is available.
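
As a starting point for the error-handling item, here is a hedged sketch of how search_courses could guard against network failures and empty results (the messages and timeout value are illustrative choices):

import requests

def search_courses(role, product, level):
    url = "https://learn.microsoft.com/api/catalog/"
    params = {"role": role, "product": product, "level": level}
    try:
        response = requests.get(url, params=params, timeout=10)
        response.raise_for_status()
    except requests.RequestException as e:
        # Network error or non-2xx status: return a message the LLM can relay to the user
        return f"The course catalog could not be reached ({e})."
    modules = response.json().get("modules", [])
    if not modules:
        return "No suitable courses were found for these criteria."
    results = [{"title": m["title"], "url": m["url"]} for m in modules[:5]]
    return str(results)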

Great Work! Continue the Journey

After completing this lesson, check out our Generative AI Learning collection to continue leveling up your Generative AI knowledge!

Head over to Lesson 12 where we will look at how to design UX for AI applications!