Code quality improvements. Added documentation to README file.
goventur committed Oct 12, 2024
1 parent c2f8947 commit 5972249
Showing 2 changed files with 206 additions and 80 deletions.
120 changes: 120 additions & 0 deletions README.md
@@ -359,6 +359,126 @@ Configure your settings using the table below.
|AZURE_COSMOSDB_ENABLE_FEEDBACK|No|False|Whether or not to enable message feedback on chat history messages|


#### Enable Azure OpenAI function calling via Azure Functions

Refer to this article to learn more about [function calling with Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling).

1. Update the `AZURE_OPENAI_*` environment variables as described in the [basic chat experience](#basic-chat-experience) above.

2. Add any additional configuration (described in the previous sections) needed for chatting with your data.

3. To enable function calling via remote Azure Functions, you will need an Azure Functions resource. Follow this [instruction guide](https://learn.microsoft.com/azure/azure-functions/functions-create-function-app-portal?pivots=programming-language-python) to create one.

4. You will need to create the following Azure Functions to implement function calling logic:

* Create one routed function, e.g. `/tools`, that returns a JSON array with the function definitions.
* Create a second routed function, e.g. `/tool`, that executes a function with the given arguments.
The request body is a JSON structure with the name and arguments of the function to be executed.
Use this sample request body to test your function:

```json
{
    "tool_name": "get_current_weather",
    "tool_arguments": {"location": "Lamego"}
}
```

* Create unrouted functions that implement each of the functions declared in the JSON definitions.
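Once the function app is deployed, the `/tool` endpoint can be exercised with a plain HTTP POST; with FUNCTION-level auth, Azure Functions reads the key from the `x-functions-key` header. A minimal sketch that only builds such a request (the host name and key are placeholders, and nothing is actually sent):

```python
import json
import urllib.request

# Placeholders: substitute your function app name and function key.
TOOL_URL = "https://myfuncapp.azurewebsites.net/api/tool"
FUNCTION_KEY = "<your-function-key>"

body = {
    "tool_name": "get_current_weather",
    "tool_arguments": {"location": "Lamego"},
}

# Build (but do not send) the POST request; the function key travels
# in the x-functions-key header.
request = urllib.request.Request(
    TOOL_URL,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "x-functions-key": FUNCTION_KEY,
    },
    method="POST",
)

print(request.get_method())                    # POST
print(json.loads(request.data)["tool_name"])   # get_current_weather
```

Sending it is then a single `urllib.request.urlopen(request)` call against the deployed endpoint.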

Sample code for the Azure Functions:

```python
import azure.functions as func
import logging
import json
import random

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

azure_openai_tools_json = """[{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city name, e.g. San Francisco"
                }
            },
            "required": ["location"]
        }
    }
}]"""
azure_openai_available_tools = ["get_current_weather"]

@app.route(route="tools")
def tools(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('tools function processed a request.')
    return func.HttpResponse(
        azure_openai_tools_json,
        status_code=200
    )

@app.route(route="tool")
def tool(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('tool function processed a request.')

    tool_name = req.params.get('tool_name')
    if not tool_name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            tool_name = req_body.get('tool_name')

    tool_arguments = req.params.get('tool_arguments')
    if not tool_arguments:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            tool_arguments = req_body.get('tool_arguments')

    if tool_name and tool_arguments:
        if tool_name in azure_openai_available_tools:
            logging.info('tool function: tool_name and tool_arguments are valid.')
            # Arguments arrive as a dict from a JSON body, or as a JSON
            # string from a query parameter.
            if isinstance(tool_arguments, str):
                tool_arguments = json.loads(tool_arguments)
            result = globals()[tool_name](**tool_arguments)
            return func.HttpResponse(
                result,
                status_code=200
            )

    logging.info('tool function: tool_name or tool_arguments are invalid.')
    return func.HttpResponse(
        "The tool function executed, but the tool name or arguments were invalid.",
        status_code=400
    )

def get_current_weather(location: str) -> str:
    logging.info('get_current_weather function processed a request.')
    temperature = random.randint(10, 30)
    weather = random.choice(["sunny", "cloudy", "rainy", "windy"])
    return f"The current weather in {location} is {temperature}°C and {weather}."
```
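The dispatch performed by the `tool` function above can be exercised outside the Functions host. The sketch below mirrors it with an explicit dispatch dict in place of `globals()` (a stylistic substitution for illustration, not the sample's exact code):

```python
import json
import random

def get_current_weather(location: str) -> str:
    # Same stub as the sample above: returns a random reading.
    temperature = random.randint(10, 30)
    weather = random.choice(["sunny", "cloudy", "rainy", "windy"])
    return f"The current weather in {location} is {temperature}°C and {weather}."

# Explicit allow-list mapping tool names to callables.
AVAILABLE_TOOLS = {"get_current_weather": get_current_weather}

def dispatch(request_body: str) -> str:
    """Parse a /tool-style request body and execute the named tool."""
    payload = json.loads(request_body)
    tool = AVAILABLE_TOOLS.get(payload["tool_name"])
    if tool is None:
        raise ValueError(f"unknown tool: {payload['tool_name']}")
    return tool(**payload["tool_arguments"])

result = dispatch('{"tool_name": "get_current_weather", "tool_arguments": {"location": "Lamego"}}')
print(result)
```

An allow-list dict gives the same guarantee as the sample's `azure_openai_available_tools` check: only explicitly registered functions are callable, never arbitrary names.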
5. Configure the function calling settings as described in the table below:

| App Setting | Required? | Default Value | Note |
| ----------- | --------- | ------------- | ---- |
| AZURE_OPENAI_FUNCTION_CALL_AZURE_FUNCTIONS_ENABLED | No | | Set to `True` to enable function calling via remote Azure Functions |
| AZURE_OPENAI_FUNCTION_CALL_AZURE_FUNCTIONS_TOOL_BASE_URL | Only if using function calling | | The base URL of your Azure Function "tool", e.g. `https://<azure-function-name>.azurewebsites.net/api/tool` |
| AZURE_OPENAI_FUNCTION_CALL_AZURE_FUNCTIONS_TOOL_KEY | Only if using function calling | | The function key used to access the Azure Function "tool" |
| AZURE_OPENAI_FUNCTION_CALL_AZURE_FUNCTIONS_TOOLS_BASE_URL | Only if using function calling | | The base URL of your Azure Function "tools", e.g. `https://<azure-function-name>.azurewebsites.net/api/tools` |
| AZURE_OPENAI_FUNCTION_CALL_AZURE_FUNCTIONS_TOOLS_KEY | Only if using function calling | | The function key used to access the Azure Function "tools" |

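How `app.py` combines the base URL and key is not shown in this diff; one common pattern (an assumption for illustration, not necessarily the repo's code) is to pass the key as the `code` query parameter, which Azure Functions accepts as an alternative to the `x-functions-key` header:

```python
from urllib.parse import urlencode

def build_tool_url(base_url: str, function_key: str) -> str:
    # Hypothetical helper: appends the function key as the "code"
    # query parameter accepted by Azure Functions.
    return f"{base_url}?{urlencode({'code': function_key})}"

print(build_tool_url("https://myfuncapp.azurewebsites.net/api/tool", "abc123"))
# https://myfuncapp.azurewebsites.net/api/tool?code=abc123
```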
#### Common Customization Scenarios (e.g. updating the default chat logo and headers)
The interface allows for easy adaptation of the UI by modifying certain elements, such as the title and logo, through the use of the following environment variables.
166 changes: 86 additions & 80 deletions app.py
@@ -412,7 +412,7 @@ def process_function_call(response):
return messages

return None

async def send_chat_request(request_body, request_headers):
filtered_messages = []
messages = request_body.get("messages", [])
@@ -462,92 +462,98 @@ async def complete_chat_request(request_body, request_headers):

return non_streaming_response

class AzureOpenaiFunctionCallStreamState:
def __init__(self):
self.tool_calls = []
self.current_tool_call = None
self.tool_arguments_stream = ""
self.function_messages = []
self.tool_name = ""
self.tool_call_streaming_state = "INITIAL"


def process_function_call_stream(completionChunk, function_call_stream_state, request_body, request_headers, history_metadata, apim_request_id):
if hasattr(completionChunk, "choices") and len(completionChunk.choices) > 0:
response_message = completionChunk.choices[0].delta

# Function calling stream processing
if response_message.tool_calls and function_call_stream_state.tool_call_streaming_state in ["INITIAL", "STREAMING"]:
function_call_stream_state.tool_call_streaming_state = "STREAMING"
for tool_call_chunk in response_message.tool_calls:
# New tool call
if tool_call_chunk.id:
if function_call_stream_state.current_tool_call:
function_call_stream_state.tool_arguments_stream += tool_call_chunk.function.arguments if tool_call_chunk.function.arguments else ""
function_call_stream_state.current_tool_call["tool_arguments"] = function_call_stream_state.tool_arguments_stream
function_call_stream_state.tool_arguments_stream = ""
function_call_stream_state.tool_name = ""
function_call_stream_state.tool_calls.append(function_call_stream_state.current_tool_call)

function_call_stream_state.current_tool_call = {
"tool_id": tool_call_chunk.id,
"tool_name": tool_call_chunk.function.name if function_call_stream_state.tool_name == "" else function_call_stream_state.tool_name
}
else:
function_call_stream_state.tool_arguments_stream += tool_call_chunk.function.arguments if tool_call_chunk.function.arguments else ""

# Function call - Streaming completed
elif response_message.tool_calls is None and function_call_stream_state.tool_call_streaming_state == "STREAMING":
function_call_stream_state.current_tool_call["tool_arguments"] = function_call_stream_state.tool_arguments_stream
function_call_stream_state.tool_calls.append(function_call_stream_state.current_tool_call)

for tool_call in function_call_stream_state.tool_calls:
tool_response = openai_remote_azure_function_call(tool_call["tool_name"], tool_call["tool_arguments"])

function_call_stream_state.function_messages.append({
"role": "assistant",
"function_call": {
"name" : tool_call["tool_name"],
"arguments": tool_call["tool_arguments"]
},
"content": None
})
function_call_stream_state.function_messages.append({
"tool_call_id": tool_call["tool_id"],
"role": "function",
"name": tool_call["tool_name"],
"content": tool_response,
})

function_call_stream_state.tool_call_streaming_state = "COMPLETED"
return function_call_stream_state.tool_call_streaming_state

else:
return function_call_stream_state.tool_call_streaming_state


async def stream_chat_request(request_body, request_headers):
response, apim_request_id = await send_chat_request(request_body, request_headers)
history_metadata = request_body.get("history_metadata", {})

messages = []

async def generate(apim_request_id, history_metadata):
tool_calls = []
current_tool_call = None
tool_arguments_stream = ""
function_messages = []
tool_name = ""
tool_call_streaming_state = "INITIAL"

async for completionChunk in response:
if app_settings.azure_openai.function_call_azure_functions_enabled:
if hasattr(completionChunk, "choices") and len(completionChunk.choices) > 0:
response_message = completionChunk.choices[0].delta

# Function calling stream processing
if response_message.tool_calls and tool_call_streaming_state in ["INITIAL", "STREAMING"]:
tool_call_streaming_state = "STREAMING"
for tool_call_chunk in response_message.tool_calls:
# New tool call
if tool_call_chunk.id:
if current_tool_call:
tool_arguments_stream += tool_call_chunk.function.arguments if tool_call_chunk.function.arguments else ""
current_tool_call["tool_arguments"] = tool_arguments_stream
tool_arguments_stream = ""
tool_name = ""
tool_calls.append(current_tool_call)

current_tool_call = {
"tool_id": tool_call_chunk.id,
"tool_name": tool_call_chunk.function.name if tool_name == "" else tool_name
}
else:
tool_arguments_stream += tool_call_chunk.function.arguments if tool_call_chunk.function.arguments else ""

# Function call - Streaming completed
elif response_message.tool_calls is None and tool_call_streaming_state == "STREAMING":
current_tool_call["tool_arguments"] = tool_arguments_stream
tool_calls.append(current_tool_call)

for tool_call in tool_calls:
tool_response = openai_remote_azure_function_call(tool_call["tool_name"], tool_call["tool_arguments"])

function_messages.append({
"role": "assistant",
"function_call": {
"name" : tool_call["tool_name"],
"arguments": tool_call["tool_arguments"]
},
"content": None
})
function_messages.append({
"tool_call_id": tool_call["tool_id"],
"role": "function",
"name": tool_call["tool_name"],
"content": tool_response,
})

# Reset for the next tool call
messages = function_messages
function_messages = []
tool_calls = []
current_tool_call = None
tool_arguments_stream = ""
tool_name = ""
tool_id = None
tool_call_streaming_state = "COMPLETED"

request_body["messages"].extend(messages)

function_response, apim_request_id = await send_chat_request(request_body, request_headers)

async for functionCompletionChunk in function_response:
yield format_stream_response(functionCompletionChunk, history_metadata, apim_request_id)

else:
# No function call, assistant response
yield format_stream_response(completionChunk, history_metadata, apim_request_id)

else:
if app_settings.azure_openai.function_call_azure_functions_enabled:
# Maintain state during function call streaming
function_call_stream_state = AzureOpenaiFunctionCallStreamState()

async for completionChunk in response:
stream_state = process_function_call_stream(completionChunk, function_call_stream_state, request_body, request_headers, history_metadata, apim_request_id)

# No function call, assistant response
if stream_state == "INITIAL":
yield format_stream_response(completionChunk, history_metadata, apim_request_id)

# Function call stream completed, functions were executed.
# Append function calls and results to history and send to OpenAI, to stream the final answer.
if stream_state == "COMPLETED":
request_body["messages"].extend(function_call_stream_state.function_messages)
function_response, apim_request_id = await send_chat_request(request_body, request_headers)
async for functionCompletionChunk in function_response:
yield format_stream_response(functionCompletionChunk, history_metadata, apim_request_id)

else:
async for completionChunk in response:
yield format_stream_response(completionChunk, history_metadata, apim_request_id)

return generate(apim_request_id=apim_request_id, history_metadata=history_metadata)
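The streaming logic above buffers fragmented tool-call deltas until the model stops emitting `tool_calls`. A stripped-down, self-contained sketch of that accumulation pattern (stub objects stand in for the OpenAI stream deltas; this is not the repo's exact code):

```python
from types import SimpleNamespace

def accumulate_tool_calls(deltas):
    """Collect (id, name, argument-fragment) deltas into complete calls.

    Mirrors the streaming state above: a delta carrying an id starts a
    new tool call; deltas without an id append argument fragments.
    """
    calls, current, args = [], None, ""
    for delta in deltas:
        if delta.id:                      # new tool call begins
            if current:                   # flush the previous call first
                current["tool_arguments"] = args
                calls.append(current)
                args = ""
            current = {"tool_id": delta.id, "tool_name": delta.function.name}
        else:                             # continuation: argument fragment
            args += delta.function.arguments or ""
    if current:                           # stream ended: flush last call
        current["tool_arguments"] = args
        calls.append(current)
    return calls

# Stub deltas imitating how the model streams one call in pieces.
chunks = [
    SimpleNamespace(id="call_1", function=SimpleNamespace(name="get_current_weather", arguments=None)),
    SimpleNamespace(id=None, function=SimpleNamespace(name=None, arguments='{"location":')),
    SimpleNamespace(id=None, function=SimpleNamespace(name=None, arguments=' "Lamego"}')),
]
print(accumulate_tool_calls(chunks))
# [{'tool_id': 'call_1', 'tool_name': 'get_current_weather', 'tool_arguments': '{"location": "Lamego"}'}]
```

The "COMPLETED" transition in the real code corresponds to the final flush here: only once the arguments string is whole can it be parsed and handed to the remote Azure Function.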

