[ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling
Python · Updated Jul 10, 2024
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM models, execute structured function calls, and get structured output. It also works with models that are not fine-tuned for JSON output and function calls.
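Structured function calling of the kind described above generally works by having the model emit JSON that names a function and its arguments, which the host program then parses and dispatches. A minimal sketch of that dispatch step, assuming a hypothetical `get_weather` tool and a hard-coded model reply in place of real LLM output:

```python
import json

# Hypothetical tool the LLM is allowed to call.
def get_weather(city: str) -> str:
    # Stand-in for a real weather lookup.
    return f"Sunny in {city}"

# Registry mapping function names to callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse the model's JSON function call and run the named tool."""
    call = json.loads(model_output)
    func = TOOLS[call["name"]]          # look up the requested function
    return func(**call["arguments"])    # invoke with model-supplied arguments

# Simulated model reply (in practice this comes from the LLM).
reply = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch(reply))  # -> Sunny in Berlin
```

A real framework adds schema validation and error handling around this loop, but the parse-lookup-invoke cycle is the core of it.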
A sample app demonstrating function calling using the latest format in both the Chat Completions API and the Assistants API.
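In the Chat Completions tools format, a single assistant message can carry several entries in its `tool_calls` field, and the client is free to execute them concurrently rather than one by one. A minimal sketch of that parallel dispatch, using a simulated response and hypothetical `get_time`/`get_stock` tools instead of a live API call:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools; a real app would call external services here.
def get_time(tz: str) -> str:
    return f"12:00 in {tz}"

def get_stock(symbol: str) -> str:
    return f"{symbol}: 100.0"

TOOLS = {"get_time": get_time, "get_stock": get_stock}

# Simulated assistant message with two independent tool calls
# (the shape mirrors the Chat Completions `tool_calls` field).
tool_calls = [
    {"id": "call_1", "function": {"name": "get_time",
                                  "arguments": '{"tz": "UTC"}'}},
    {"id": "call_2", "function": {"name": "get_stock",
                                  "arguments": '{"symbol": "ACME"}'}},
]

def run_call(call):
    fn = TOOLS[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    # Return a tool-role result keyed to the originating call id.
    return {"tool_call_id": call["id"], "content": fn(**args)}

# Execute both calls concurrently instead of sequentially.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_call, tool_calls))

print(results)
```

Each result is then sent back to the model as a `tool`-role message matched by `tool_call_id`; running the calls in parallel is what LLMCompiler-style approaches optimize for.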