Add Qwen handler and fix mean_latency calculation error for OSS models (ShishirPatil#642)

Hello, there may be an anomaly in the mean latency calculation for OSS models, and I have attempted to fix it.

---------

Co-authored-by: ai_user <ai@digitalchina.com>
Co-authored-by: Huanzhi (Hans) Mao <huanzhimao@gmail.com>
3 people authored and VishnuSuresh27 committed Nov 11, 2024
1 parent d7c791d commit b471497
Showing 5 changed files with 56 additions and 0 deletions.
5 changes: 5 additions & 0 deletions berkeley-function-call-leaderboard/CHANGELOG.md
@@ -2,6 +2,11 @@

All notable changes to the Berkeley Function Calling Leaderboard will be documented in this file.

- [Oct 5, 2024] [#642](https://github.com/ShishirPatil/gorilla/pull/642): Add the following new models to the leaderboard:
  - `Qwen/Qwen2.5-7B-Instruct`
  - `Qwen/Qwen2.5-1.5B-Instruct`
  - `Qwen/Qwen2-7B-Instruct`
  - `Qwen/Qwen2-1.5B-Instruct`
- [Oct 4, 2024] [#653](https://github.com/ShishirPatil/gorilla/pull/653): Add new model `Team-ACE/ToolACE-8B` to the leaderboard.
- [Oct 4, 2024] [#671](https://github.com/ShishirPatil/gorilla/pull/671): Speed up locally-hosted model's inference process by parallelizing the inference requests.
- [Sept 27, 2024] [#640](https://github.com/ShishirPatil/gorilla/pull/640): Add the following new models to the leaderboard:
2 changes: 2 additions & 0 deletions berkeley-function-call-leaderboard/README.md
@@ -186,6 +186,8 @@ Below is _a table of models we support_ to run our leaderboard evaluation against
|ibm-granite/granite-20b-functioncalling 💻| Function Calling|
|yi-large-fc | Function Calling|
|MadeAgents/Hammer-7b 💻| Function Calling|
|Qwen/Qwen2.5-{1.5B,7B}-Instruct 💻| Prompt|
|Qwen/Qwen2-{1.5B,7B}-Instruct 💻| Prompt|
|Team-ACE/ToolACE-8B 💻| Function Calling|

Here, {MODEL} 💻 means the model needs to be hosted locally and served via vLLM, while {MODEL} without the symbol means the model is accessed through API calls. A trailing `-FC` means the model supports native function calling. You can check out the table summarizing feature support across different models [here](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html#prompt).
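For context on what "hosted locally and served via vLLM" looks like in practice, here is a minimal, illustrative sketch using vLLM's offline inference API. The model name, prompt, and sampling values are examples only; the leaderboard harness normally performs this step itself.

from vllm import LLM, SamplingParams

# Load the locally hosted model with vLLM (illustrative; BFCL manages this internally).
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")

# Prompt-mode models receive a fully formatted ChatML prompt as plain text.
prompt = "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n<|im_start|>assistant\n"
outputs = llm.generate([prompt], SamplingParams(temperature=0.001, max_tokens=128))
print(outputs[0].outputs[0].text)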
@@ -581,6 +581,30 @@
"Microsoft",
"MIT",
],
"Qwen/Qwen2-1.5B-Instruct": [
"Qwen2-1.5B-Instruct (Prompt)",
"https://huggingface.co/Qwen/Qwen2-1.5B-Instruct",
"Qwen",
"apache-2.0",
],
"Qwen/Qwen2-7B-Instruct": [
"Qwen2-7B-Instruct (Prompt)",
"https://huggingface.co/Qwen/Qwen2-7B-Instruct",
"Qwen",
"apache-2.0",
],
"Qwen/Qwen2.5-1.5B-Instruct": [
"Qwen2.5-1.5B-Instruct (Prompt)",
"https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct",
"Qwen",
"apache-2.0",
],
"Qwen/Qwen2.5-7B-Instruct": [
"Qwen2.5-7B-Instruct (Prompt)",
"https://huggingface.co/Qwen/Qwen2.5-7B-Instruct",
"Qwen",
"apache-2.0",
],
"Team-ACE/ToolACE-8B": [
"ToolACE-8B (FC)",
"https://huggingface.co/Team-ACE/ToolACE-8B",
@@ -9,6 +9,7 @@
from bfcl.model_handler.oss_model.llama_fc import LlamaFCHandler
from bfcl.model_handler.oss_model.phi import PhiHandler
from bfcl.model_handler.oss_model.salesforce import SalesforceHandler
from bfcl.model_handler.oss_model.qwen import QwenHandler
from bfcl.model_handler.proprietary_model.claude import ClaudeHandler
from bfcl.model_handler.proprietary_model.cohere import CohereHandler
from bfcl.model_handler.proprietary_model.databricks import DatabricksHandler
@@ -108,6 +109,10 @@
"ibm-granite/granite-20b-functioncalling": GraniteHandler,
# "MadeAgents/Hammer-7b": HammerHandler, # TODO: Update handler once they have a multi-turn format
"THUDM/glm-4-9b-chat": GLMHandler,
"Qwen/Qwen2-1.5B-Instruct": QwenHandler,
"Qwen/Qwen2-7B-Instruct": QwenHandler,
"Qwen/Qwen2.5-1.5B-Instruct": QwenHandler,
"Qwen/Qwen2.5-7B-Instruct": QwenHandler,
"Team-ACE/ToolACE-8B": LlamaHandler,

# Deprecated/outdated models, no longer on the leaderboard
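To show roughly how these new entries are consumed, below is a minimal sketch. The variable name `handler_map` and the temperature value are assumptions; the constructor signature `(model_name, temperature)` comes from the new `QwenHandler` shown further down.

from bfcl.model_handler.oss_model.qwen import QwenHandler

# Excerpt of the mapping above (assumed to be named handler_map in the real module).
handler_map = {
    "Qwen/Qwen2-1.5B-Instruct": QwenHandler,
    "Qwen/Qwen2-7B-Instruct": QwenHandler,
    "Qwen/Qwen2.5-1.5B-Instruct": QwenHandler,
    "Qwen/Qwen2.5-7B-Instruct": QwenHandler,
}

model_name = "Qwen/Qwen2.5-7B-Instruct"
handler = handler_map[model_name](model_name, temperature=0.001)  # temperature is illustrative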
@@ -0,0 +1,20 @@
from bfcl.model_handler.oss_model.base_oss_handler import OSSHandler


class QwenHandler(OSSHandler):
    def __init__(self, model_name, temperature) -> None:
        super().__init__(model_name, temperature)

    def _format_prompt(self, messages, function):
        # Qwen is used in its prompting mode here, not its tool-use mode, so this
        # method does not render the `function` list into the prompt itself.
        # The official Qwen chat template is kept below for reference:
        """
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
"""
formatted_prompt = ""

for message in messages:
formatted_prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"

formatted_prompt += "<|im_start|>assistant\n"

return formatted_prompt
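As a quick sanity check of the formatting above, here is a standalone sketch (no BFCL imports needed) that reproduces the same loop on a made-up conversation and shows the ChatML string it yields:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
]

formatted_prompt = ""
for message in messages:
    formatted_prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
formatted_prompt += "<|im_start|>assistant\n"

print(formatted_prompt)
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# What is 2 + 2?<|im_end|>
# <|im_start|>assistant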
