Releases · svilupp/PromptingTools.jl
v0.62.0
PromptingTools v0.62.0
Added
- Added a new Claude 3.5 Haiku model (`claude-3-5-haiku-latest`) and updated the alias `claudeh` with it.
- Added support for XAI's Grok 2 beta model (`grok-beta`) and updated the alias `grok` with it. Set your ENV API key `XAI_API_KEY` to use it.
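Both aliases plug into the usual `ai*` functions. A minimal sketch (prompts are illustrative; valid API keys are assumed):

```julia
using PromptingTools

# Claude 3.5 Haiku via the updated alias (requires ENV["ANTHROPIC_API_KEY"])
msg = aigenerate("Say hi in one word."; model = "claudeh")

# XAI's Grok 2 beta -- requires ENV["XAI_API_KEY"]
ENV["XAI_API_KEY"] = "<your-xai-api-key>"
msg = aigenerate("Say hi in one word."; model = "grok")
```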
v0.61.0
PromptingTools v0.61.0
Added
- Added a new `extras` field to `ToolRef` to enable additional parameters in the tool signature (eg, `display_width_px`, `display_height_px` for the `:computer` tool).
- Added a new kwarg `unused_as_kwargs` to `execute_tool` to enable passing unused args as kwargs (see `?execute_tool` for more information). Helps with using kwarg-based functions.
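A sketch of how these two additions might be used; the exact `ToolRef` keyword names and the Symbol-keyed argument dict for `execute_tool` are assumptions (check `?ToolRef` and `?execute_tool`):

```julia
using PromptingTools

# `extras` attaches provider-specific parameters to a pre-trained tool reference
computer_tool = ToolRef(; ref = :computer,
    extras = Dict("display_width_px" => 1024, "display_height_px" => 768))

# `unused_as_kwargs = true` forwards arguments that are not positional as keyword arguments
scale(x; factor = 1) = x * factor
PromptingTools.execute_tool(scale, Dict(:x => 3, :factor => 10); unused_as_kwargs = true)  # 30
```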
Updated
- Updated the compat bounds for `StreamCallbacks` to enable both v0.4 and v0.5 (Fixes Julia 1.9 compatibility).
- Updated the return type of `tool_call_signature` to `Dict{String, AbstractTool}` to enable better interoperability with different tool types.
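For illustration, the updated return type as it might look when generating a signature from a struct (the struct itself is made up):

```julia
using PromptingTools

# Illustrative struct to generate a tool schema from
struct WeatherQuery
    location::String
    unit::String
end

tool_map = PromptingTools.tool_call_signature(WeatherQuery)
# tool_map isa Dict{String, PromptingTools.AbstractTool}; keys are the tool names (eg, "WeatherQuery")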
v0.60.0
PromptingTools v0.60.0
Added
- Added new Claude 3.5 Sonnet model (`claude-3-5-sonnet-latest`) and updated the aliases `claude` and `claudes` with it.
- Added support for Ollama streaming with schema `OllamaSchema` (see `?StreamCallback` for more information). Schema `OllamaManagedSchema` is NOT supported (it's legacy and will be removed in the future).
- Moved the implementation of streaming callbacks to a new `StreamCallbacks` package.
- Added new error types for tool execution to enable better error handling and reporting (see `?AbstractToolError`).
- Added support for Anthropic's new pre-trained tools via `ToolRef` (see `?ToolRef`); to enable the feature, use the `:computer_use` beta header (eg, `aitools(..., betas = [:computer_use])`).
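A sketch combining two of the additions above; the model aliases, the prompts, and the `ToolRef` keyword names are illustrative assumptions:

```julia
using PromptingTools

# Stream an Ollama-hosted model's response to stdout
# (assumes the alias is registered with `OllamaSchema` and Ollama is running locally)
msg = aigenerate("Write a haiku about Julia."; model = "llama3", streamcallback = stdout)

# Anthropic computer-use tool via `ToolRef`, opted in with the beta header
msg = aitools("Take a screenshot of the display.";
    model = "claudes",
    tools = [ToolRef(; ref = :computer,
        extras = Dict("display_width_px" => 1024, "display_height_px" => 768))],
    betas = [:computer_use])
```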
Fixed
- Fixed a bug in `call_cost` where the cost was not calculated if any non-AIMessages were provided in the conversation.
v0.59.1
PromptingTools v0.59.1
Fixed
- Fixed a bug in multi-turn tool calls for OpenAI models where an empty tools array could have been sent, which causes an API error.
v0.59.0
PromptingTools v0.59.0
Breaking Changes
- New field `name` introduced in `AbstractChatMessage` and `AIToolRequest` messages to enable role-based workflows. It initializes to `nothing`, so it is backward compatible.
Added
- Extends support for structured extraction with multiple "tools" definitions (see `?aiextract`).
- Added new primitives `Tool` (to re-use tool definitions) and a function `aitools` to support mixed structured and non-structured workflows, eg, agentic workflows (see `?aitools`).
- Added a field `name` to `AbstractChatMessage` and `AIToolRequest` messages to enable role-based workflows.
- Added support for partial argument execution with the `execute_tool` function (provide your own context to override the arg values).
- Added support for SambaNova hosted models (set your ENV `SAMBANOVA_API_KEY`).
- Added many new models from Mistral, Groq, Sambanova, OpenAI.
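A sketch of the new mixed workflow; the weather function and prompt are made up, and wrapping a plain function in `Tool` plus the Symbol-keyed argument dict are assumptions (see `?aitools` and `?execute_tool`):

```julia
using PromptingTools

"Get the current weather for a given location."
get_weather(location::String) = "It is sunny in $location."

# Mixed workflow: the model may answer directly or request the tool
msg = aitools("What is the weather in Prague?"; tools = [Tool(get_weather)])

# Partial argument execution: run the tool yourself with your own argument values
out = PromptingTools.execute_tool(get_weather, Dict(:location => "Prague"))
```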
Updated
- Renamed `function_call_signature` to `tool_call_signature` to better reflect that it's used for tools, but kept a link to the old name for back-compatibility.
- Improves structured extraction for Anthropic models (now you can use the `tool_choice` keyword argument to specify which tool to use or re-use your parsed tools).
- When log probs are requested, we will now also log the raw information in the `AIMessage.extras[:log_prob]` field (previously we logged only the full sum). This enables more nuanced log-probability calculations for individual tokens.
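Sketches of the two behavioural changes above; the struct, the model alias, the `tool_choice` value format, and the `logprobs` api kwarg are assumptions:

```julia
using PromptingTools

# Illustrative extraction type
struct Sentiment
    label::String
    confidence::Float64
end

# Force the Anthropic model to use a specific tool via `tool_choice`
msg = aiextract("I absolutely love Julia!";
    return_type = Sentiment, model = "claudeh", tool_choice = "Sentiment")

# With log probs requested (OpenAI), the raw data is kept in `msg.extras[:log_prob]`
msg = aigenerate("Hi!"; model = "gpt-4o-mini", api_kwargs = (; logprobs = true))
```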
v0.58.0
PromptingTools v0.58.0
Added
- Added support for Cerebras hosted models (set your ENV `CEREBRAS_API_KEY`). Available model aliases: `cl3` (Llama3.1 8bn), `cl70` (Llama3.1 70bn).
- Added a kwarg to `aiclassify` to provide a custom token ID mapping (`token_ids_map`) to work with custom tokenizers.
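A sketch of the Cerebras aliases and the new `token_ids_map` kwarg; the mapping shape (choice string to token ID) is an assumption and the IDs below are placeholders, not real tokenizer values:

```julia
using PromptingTools

ENV["CEREBRAS_API_KEY"] = "<your-cerebras-api-key>"
msg = aigenerate("Hello!"; model = "cl3")   # Llama3.1 8bn hosted on Cerebras

# Map each classification choice to the token ID produced by the model's tokenizer
choices = ["true", "false", "unknown"]
token_ids_map = Dict("true" => 837, "false" => 905, "unknown" => 9987)  # placeholder IDs
answer = aiclassify("Is Julia a compiled language?"; choices, token_ids_map, model = "cl3")
```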
Updated
- Improved the implementation of `airetry!` to concatenate feedback from all ancestor nodes ONLY IF `feedback_inplace=true` (because otherwise the LLM can see it in the message history).
Fixed
- Fixed a potential bug in `airetry!` where the `aicall` object was not properly validated to ensure it has been `run!` first.
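A sketch of `airetry!` with in-place feedback, per the notes above; the prompt and condition are illustrative, and `airetry!` and friends live in the experimental `AgentTools` submodule:

```julia
using PromptingTools
using PromptingTools.Experimental.AgentTools: AIGenerate, run!, airetry!, last_output

aicall = AIGenerate("Answer in exactly one word: what is the capital of France?")
run!(aicall)   # `airetry!` now validates that the call has been `run!` first

airetry!(x -> occursin("Paris", last_output(x)), aicall,
    "Your answer must mention Paris."; feedback_inplace = true)
```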
v0.57.0
PromptingTools v0.57.0
Added
- Support for Azure OpenAI API. Requires two environment variables to be set: `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_HOST` (i.e., `https://<resource-name>.openai.azure.com`). Thanks to @pabvald!
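A sketch of the setup; the deployment/model name is illustrative and the exact schema type (`AzureOpenAISchema`) and registration flow should be checked against the docs:

```julia
using PromptingTools
const PT = PromptingTools

ENV["AZURE_OPENAI_API_KEY"] = "<your-azure-api-key>"
ENV["AZURE_OPENAI_HOST"] = "https://<resource-name>.openai.azure.com"

# Register your Azure deployment under a model name of your choice
PT.register_model!(; name = "my-azure-gpt4o", schema = PT.AzureOpenAISchema())
msg = aigenerate("Hello from Azure!"; model = "my-azure-gpt4o")
```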
v0.56.1
v0.56.0
PromptingTools v0.56.0
Updated
- Enabled Streaming for OpenAI-compatible APIs (eg, DeepSeek Coder)
- If streaming to stdout, also print a newline at the end of streaming (to separate multiple outputs).
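A sketch of the streaming interface; the model alias is illustrative (the same `streamcallback` kwarg now works for OpenAI-compatible providers such as DeepSeek), and `StreamCallback` is assumed to be accessible from PromptingTools:

```julia
using PromptingTools

# Stream straight to stdout; a trailing newline is printed when streaming finishes
msg = aigenerate("Count from 1 to 5."; model = "gpt-4o-mini", streamcallback = stdout)

# Or keep the chunks for inspection with an explicit StreamCallback
cb = StreamCallback()
msg = aigenerate("Count from 1 to 5."; model = "gpt-4o-mini", streamcallback = cb)
```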
Fixed
- Relaxed the type-assertions in `StreamCallback` to allow for more flexibility.
Merged pull requests:
- Tidy up streaming callbacks (#209) (@svilupp)
- Enable Streaming for OpenAI-compatible models (#210) (@svilupp)
Closed issues:
- Implement Prompt Caching Feature for Anthropic API Calls (#196)
v0.55.0
PromptingTools v0.55.0
Added
- Added support for OpenAI's JSON mode for `aiextract` (just provide kwarg `json_mode=true`). Reference: Structured Outputs.
- Added support for OpenRouter's API (you must set ENV `OPENROUTER_API_KEY`) to provide access to more models like Cohere Command R+ and OpenAI's o1 series. Reference: OpenRouter.
- Added new OpenRouter hosted models to the model registry (prefixed with `or`): `oro1` (OpenAI's o1-preview), `oro1m` (OpenAI's o1-mini), `orcop` (Cohere's command-r-plus), `orco` (Cohere's command-r). The `or` prefix is to avoid conflicts with existing models and OpenAI's aliases; the goal is to provide 2 letters for each model and 1 letter for an additional qualifier (eg, "p" for plus, "m" for mini) -> `orcop` (OpenRouter, Cohere's COmmand-r-Plus).
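A sketch of both additions; the extraction struct and prompts are illustrative, and valid API keys are assumed:

```julia
using PromptingTools

# Illustrative extraction type
struct Person
    name::String
    age::Int
end

# OpenAI JSON mode / structured outputs for extraction
msg = aiextract("Jane is 31 years old."; return_type = Person, json_mode = true)

# OpenRouter-hosted models (requires ENV["OPENROUTER_API_KEY"])
ENV["OPENROUTER_API_KEY"] = "<your-openrouter-api-key>"
msg = aigenerate("Say hi!"; model = "oro1m")   # OpenAI's o1-mini via OpenRouter
```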
Updated
- Updated FAQ with instructions on how to access new OpenAI o1 models via OpenRouter.
- Updated FAQ with instructions on how to add custom APIs (with an example in `examples/adding_custom_API.jl`).
Fixed
- Fixed a bug in `aiclassify` for the OpenAI GPT4o models that have a different tokenizer. Unknown model IDs will throw an error.