Merge branch 'main' of github.com:box/developer.box.com into trusted-models
bszwarc committed Oct 28, 2024
2 parents eb39c79 + 862d791 commit 5489ae0
Showing 32 changed files with 1,463 additions and 403 deletions.
4 changes: 3 additions & 1 deletion .spelling
Original file line number Diff line number Diff line change
Expand Up @@ -331,4 +331,6 @@ summarization
GPT-4o
Anthropic
GPT-4o-2024-05-13
text-embedding-ada-002
text-embedding-ada-002
params
GPT-4o-mini
4 changes: 4 additions & 0 deletions content/guides/api-calls/api-versioning-strategy.md
Original file line number Diff line number Diff line change
Expand Up @@ -214,6 +214,10 @@ Breaking changes in the Box API occur within versioned releases, typically accom
We use [oasdiff](https://github.com/Tufin/oasdiff/blob/main/BREAKING-CHANGES-EXAMPLES.md) tool to detect most of the possible breaking changes.
</Message>

## AI agent configuration versioning

[AI agent](g://box-ai/ai-agents) versioning gives developers more control over model version management and ensures consistent responses. For details, see the [AI agent configuration versioning guide](g://box-ai/ai-agents/ai-agent-versioning).

## Support policy and deprecation information

When new versions of the Box APIs and Box SDKs are released, earlier versions will be retired. Box marks a version as `deprecated` at least 24 months before retiring it. In other words, a deprecated version cannot become end-of-life
Expand Down
1 change: 1 addition & 0 deletions content/guides/api-calls/pagination/marker-based.md
Original file line number Diff line number Diff line change
Expand Up @@ -3,6 +3,7 @@ rank: 2
related_endpoints:
- get_folders_id_items
- get_files_id_collaborations
- get_folders_id_collaborations
- get_webhooks
- get_metadata_templates_enterprise
- get_recent_items
Expand Down
212 changes: 212 additions & 0 deletions content/guides/box-ai/ai-agents/ai-agent-versioning.md

Large diffs are not rendered by default.

237 changes: 222 additions & 15 deletions content/guides/box-ai/ai-agents/get-agent-default-config.md
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
---
rank: 7
rank: 2
related_endpoints:
- get_ai_agent_default
- post_ai_text_gen
Expand All @@ -8,23 +8,18 @@ related_guides:
- box-ai/prerequisites
- box-ai/ask-questions
- box-ai/generate-text
- box-ai/extract-metadata
- box-ai/extract-metadata-structured
---

# Get default AI agent configuration

<Message type="notice">
Box AI API is currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.

Endpoints related to metadata extraction are currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
</Message>

The `GET /2.0/ai_agent_default` endpoint allows you to fetch the default configuration for AI services.
Once you get the configuration details, you can override them using the `ai_agent` parameter available in the [`POST /2.0/ai/ask`][ask] and [`POST /2.0/ai/text_gen`][text-gen] requests.

Override examples include:

* Replacing the default LLM with a custom one based on your organization's needs.
* Tweaking the base prompt to allow a more customized user experience.
* Changing a parameter, such as `temperature`, to make the results more or less creative.
Once you get the configuration details, you can override them using the [`ai_agent`][ai-agent-config] parameter.
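
As an illustrative sketch (the helper name and file ID are made up; the `ai_agent` field names mirror the default configuration object this endpoint returns), an override that pins the `basic_text` model in an `/ai/ask` request body could be assembled like this:

```python
def with_model_override(payload, model, temperature=None):
    """Return a copy of an /ai/ask request body with an ai_agent override.

    Only the fields you set are overridden; anything you omit keeps its
    default value from the agent configuration.
    """
    basic_text = {"model": model}
    if temperature is not None:
        basic_text["llm_endpoint_params"] = {
            "type": "openai_params",
            "temperature": temperature,
        }
    return {**payload, "ai_agent": {"type": "ai_agent_ask", "basic_text": basic_text}}

# Hypothetical request body; the file ID is a placeholder.
ask_body = {
    "mode": "single_item_qa",
    "prompt": "Summarize this document.",
    "items": [{"id": "1233039227512", "type": "file"}],
}
request_body = with_model_override(ask_body, "azure__openai__gpt_4o_mini", temperature=0.2)
```

Keeping the override as a small transform on the payload makes it easy to apply the same pinned model across all your Box AI requests.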

## Send a request

Expand All @@ -44,10 +39,222 @@ To make a call, you must pass the following parameters. Mandatory parameters are
| Parameter| Description| Example|
|--------|--------|-------|
|`language`| The language code the agent configuration is returned for. If the language is not supported, the default configuration is returned. | `ja-JP`|
|**`mode`**|The mode used to filter the agent configuration. The value can be `ask` or `text_gen`. |`ask`|
|`model`|The model you want to get the configuration for. To make sure your chosen model is supported, see the [list of models][models].| `openai__gpt_3_5_turbo`|
|**`mode`**|The mode used to filter the agent configuration. The value can be `ask`, `text_gen`, `extract`, or `extract_structured` depending on the result you want to achieve. |`ask`|
|`model`|The model you want to get the configuration for. To make sure your chosen model is supported, see the [list of models][models].| `azure__openai__gpt_3_5_turbo_16k`|
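
Putting the table above together, a minimal sketch of fetching the default configuration could look like this (the helper names are illustrative; only `mode` is mandatory):

```python
import json
import urllib.parse
import urllib.request

BOX_API = "https://api.box.com/2.0"
SUPPORTED_MODES = {"ask", "text_gen", "extract", "extract_structured"}

def build_params(mode, language=None, model=None):
    """Query parameters for GET /2.0/ai_agent_default; only mode is mandatory."""
    if mode not in SUPPORTED_MODES:
        raise ValueError(f"unsupported mode: {mode!r}")
    params = {"mode": mode}
    if language is not None:
        params["language"] = language
    if model is not None:
        params["model"] = model
    return params

def get_default_agent_config(token, mode, **optional):
    """Fetch the default agent configuration (performs a network call)."""
    query = urllib.parse.urlencode(build_params(mode, **optional))
    req = urllib.request.Request(
        f"{BOX_API}/ai_agent_default?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `get_default_agent_config(token, "ask", language="ja-JP")` requests the `ask` configuration for Japanese, falling back to the default configuration if the language is not supported.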

## Responses

The response to this call varies depending on the `mode` parameter value you choose.

<Tabs>

<Tab title='Ask'>

When you set the `mode` parameter to `ask`, the response is as follows:

```json
{
  "type": "ai_agent_ask",
  "basic_text": {
    "model": "azure__openai__gpt_4o_mini",
    "system_message": "",
    "prompt_template": "{user_question}Write it in an informal way.{content}",
    "num_tokens_for_completion": 6000,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 1.5,
      "stop": "<|im_end|>",
      "type": "openai_params"
    }
  },
  "long_text": {
    "model": "azure__openai__gpt_4o_mini",
    "system_message": "",
    "prompt_template": "{user_question}Write it in an informal way.{content}",
    "num_tokens_for_completion": 6000,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 1.5,
      "stop": "<|im_end|>",
      "type": "openai_params"
    },
    "embeddings": {
      "model": "azure__openai__text_embedding_ada_002",
      "strategy": {
        "id": "basic",
        "num_tokens_per_chunk": 64
      }
    }
  },
  "basic_text_multi": {
    "model": "azure__openai__gpt_4o_mini",
    "system_message": "",
    "prompt_template": "Current date: {current_date}\n\nTEXT FROM DOCUMENTS STARTS\n{content}\nTEXT FROM DOCUMENTS ENDS\n\nHere is how I need help from you: {user_question}\n.",
    "num_tokens_for_completion": 6000,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 1.5,
      "stop": "<|im_end|>",
      "type": "openai_params"
    }
  },
  "long_text_multi": {
    "model": "azure__openai__gpt_4o_mini",
    "system_message": "Role and Goal: You are an assistant designed to analyze and answer a question based on provided snippets from multiple documents, which can include business-oriented documents like docs, presentations, PDFs, etc. The assistant will respond concisely, using only the information from the provided documents.\n\nConstraints: The assistant should avoid engaging in chatty or extensive conversational interactions and focus on providing direct answers. It should also avoid making assumptions or inferences not supported by the provided document snippets.\n\nGuidelines: When answering, the assistant should consider the file's name and path to assess relevance to the question. In cases of conflicting information from multiple documents, it should list the different answers with citations. For summarization or comparison tasks, it should concisely answer with the key points. It should also consider the current date to be the date given.\n\nPersonalization: The assistant's tone should be formal and to-the-point, suitable for handling business-related documents and queries.\n",
    "prompt_template": "Current date: {current_date}\n\nTEXT FROM DOCUMENTS STARTS\n{content}\nTEXT FROM DOCUMENTS ENDS\n\nHere is how I need help from you: {user_question}\n.",
    "num_tokens_for_completion": 6000,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 1.5,
      "stop": "<|im_end|>",
      "type": "openai_params"
    },
    "embeddings": {
      "model": "azure__openai__text_embedding_ada_002",
      "strategy": {
        "id": "basic",
        "num_tokens_per_chunk": 64
      }
    }
  }
}
```

</Tab>

<Tab title='Text gen'>

When you set the `mode` parameter to `text_gen`, the response is as follows:

```json
{
  "type": "ai_agent_text_gen",
  "basic_gen": {
    "model": "azure__openai__gpt_3_5_turbo_16k",
    "system_message": "\nIf you need to know today's date to respond, it is {current_date}.\nThe user is working in a collaborative document creation editor called Box Notes.\nAssume that you are helping a business user create documents or to help the user revise existing text.\nYou can help the user in creating templates to be reused or update existing documents, you can respond with text that the user can use to place in the document that the user is editing.\nIf the user simply asks to \"improve\" the text, then simplify the language and remove jargon, unless the user specifies otherwise.\nDo not open with a preamble to the response, just respond.\n",
    "prompt_template": "{user_question}",
    "num_tokens_for_completion": 12000,
    "llm_endpoint_params": {
      "temperature": 0.1,
      "top_p": 1,
      "frequency_penalty": 0.75,
      "presence_penalty": 0.75,
      "stop": "<|im_end|>",
      "type": "openai_params"
    },
    "embeddings": {
      "model": "azure__openai__text_embedding_ada_002",
      "strategy": {
        "id": "basic",
        "num_tokens_per_chunk": 64
      }
    },
    "content_template": "`````{content}`````"
  }
}
```

</Tab>

<Tab title='Extract'>

When you set the `mode` parameter to `extract`, the response is as follows:

```json
{
  "type": "ai_agent_extract",
  "basic_text": {
    "model": "google__gemini_1_5_flash_001",
    "system_message": "Respond only in valid json. You are extracting metadata that is name, value pairs from a document. Only output the metadata in valid json form, as {\"name1\":\"value1\",\"name2\":\"value2\"} and nothing else. You will be given the document data and the schema for the metadata, that defines the name, description and type of each of the fields you will be extracting. Schema is of the form {\"fields\": [{\"key\": \"key_name\", \"displayName\": \"key display name\", \"type\": \"string\", \"description\": \"key description\"}]}. Leverage key description and key display name to identify where the key and value pairs are in the document. In certain cases, key description can also indicate the instructions to perform on the document to obtain the value. Prompt will be in the form of Schema is ``schema`` \n document is ````document````",
    "prompt_template": "If you need to know today's date to respond, it is {current_date}. Schema is ``{user_question}`` \n document is ````{content}````",
    "num_tokens_for_completion": 4096,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "top_k": null,
      "type": "google_params"
    }
  },
  "long_text": {
    "model": "google__gemini_1_5_flash_001",
    "system_message": "Respond only in valid json. You are extracting metadata that is name, value pairs from a document. Only output the metadata in valid json form, as {\"name1\":\"value1\",\"name2\":\"value2\"} and nothing else. You will be given the document data and the schema for the metadata, that defines the name, description and type of each of the fields you will be extracting. Schema is of the form {\"fields\": [{\"key\": \"key_name\", \"displayName\": \"key display name\", \"type\": \"string\", \"description\": \"key description\"}]}. Leverage key description and key display name to identify where the key and value pairs are in the document. In certain cases, key description can also indicate the instructions to perform on the document to obtain the value. Prompt will be in the form of Schema is ``schema`` \n document is ````document````",
    "prompt_template": "If you need to know today's date to respond, it is {current_date}. Schema is ``{user_question}`` \n document is ````{content}````",
    "num_tokens_for_completion": 4096,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "top_k": null,
      "type": "google_params"
    },
    "embeddings": {
      "model": "azure__openai__text_embedding_ada_002",
      "strategy": {
        "id": "basic",
        "num_tokens_per_chunk": 64
      }
    }
  }
}
```

</Tab>

<Tab title='Extract structured'>

When you set the `mode` parameter to `extract_structured`, the response is as follows:

```json
{
  "type": "ai_agent_extract_structured",
  "basic_text": {
    "model": "google__gemini_1_5_flash_001",
    "system_message": "Respond only in valid json. You are extracting metadata that is name, value pairs from a document. Only output the metadata in valid json form, as {\"name1\":\"value1\",\"name2\":\"value2\"} and nothing else. You will be given the document data and the schema for the metadata, that defines the name, description and type of each of the fields you will be extracting. Schema is of the form {\"fields\": [{\"key\": \"key_name\", \"prompt\": \"prompt to extract the value\", \"type\": \"date\"}]}. Leverage prompt for each key to identify where the key and value pairs are in the document. In certain cases, prompt can also indicate the instructions to perform on the document to obtain the value. Prompt will be in the form of Schema is ``schema`` \n document is ````document````",
    "prompt_template": "If you need to know today's date to respond, it is {current_date}. Schema is ``{user_question}`` \n document is ````{content}````",
    "num_tokens_for_completion": 4096,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "top_k": null,
      "type": "google_params"
    }
  },
  "long_text": {
    "model": "google__gemini_1_5_flash_001",
    "system_message": "Respond only in valid json. You are extracting metadata that is name, value pairs from a document. Only output the metadata in valid json form, as {\"name1\":\"value1\",\"name2\":\"value2\"} and nothing else. You will be given the document data and the schema for the metadata, that defines the name, description and type of each of the fields you will be extracting. Schema is of the form {\"fields\": [{\"key\": \"key_name\", \"prompt\": \"prompt to extract the value\", \"type\": \"date\"}]}. Leverage prompt for each key to identify where the key and value pairs are in the document. In certain cases, prompt can also indicate the instructions to perform on the document to obtain the value. Prompt will be in the form of Schema is ``schema`` \n document is ````document````",
    "prompt_template": "If you need to know today's date to respond, it is {current_date}. Schema is ``{user_question}`` \n document is ````{content}````",
    "num_tokens_for_completion": 4096,
    "llm_endpoint_params": {
      "temperature": 0,
      "top_p": 1,
      "top_k": null,
      "type": "google_params"
    },
    "embeddings": {
      "model": "google__textembedding_gecko_003",
      "strategy": {
        "id": "basic",
        "num_tokens_per_chunk": 64
      }
    }
  }
}
```

</Tab>

</Tabs>

[prereq]: g://box-ai/prerequisites
[ask]: e://post_ai_ask#param_ai_agent
[text-gen]: e://post_ai_text_gen#param_ai_agent
[models]: g://box-ai/ai-models/index
[models]: g://box-ai/ai-models
[ai-agent-config]: g://box-ai/ai-agents/overrides-tutorial
[override-tutorials]: g://box-ai/ai-agents/overrides-tutorial
27 changes: 22 additions & 5 deletions content/guides/box-ai/ai-agents/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,16 +2,33 @@
rank: 1
related_endpoints:
- get_ai_agent_default
- post_ai_text_gen
- post_ai_ask
- post_ai_extract
- post_ai_extract_structured
related_guides:
- box-ai/index
- box-ai/ai-agents/get-agent-default-config
- box-ai/ai-agents/overrides-tutorial
---

# AI agent configuration
# AI model overrides

You can use the `ai_agent` parameter available in the [`POST /2.0/ai/ask`][ask] and [`POST /2.0/ai/text_gen`][text-gen] requests to override the default agent configuration and introduce your own custom settings.
<Message type="notice">
Endpoints related to metadata extraction are currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
</Message>

For details, see [AI agent default configuration][agent-default].
Box regularly updates the default models across the endpoints to keep up with the most advanced options.

If your implementation is based on Box AI, a new default model might alter the results in a way that could break or change a downstream process. Pinning a specific model version can help you avoid such issues.

Selecting a specific model may also produce better results for your use case. For this reason, you can switch to any model included in the [supported models][models] list.

Apart from switching models, you can pass options to further customize the agents used in your Box AI implementation and get responses that suit your use case.
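
For instance, a minimal sketch of pinning a model in an `/ai/ask` request body (the model choice, prompt, and file ID are illustrative placeholders):

```python
# Instead of relying on the current default model, name one explicitly so
# a later change of the default cannot alter downstream results.
PINNED_MODEL = "azure__openai__gpt_4o_mini"  # illustrative choice

ask_body = {
    "mode": "single_item_qa",
    "prompt": "List the key dates in this contract.",
    "items": [{"id": "1233039227512", "type": "file"}],  # placeholder file ID
    "ai_agent": {
        "type": "ai_agent_ask",
        "basic_text": {"model": PINNED_MODEL},
    },
}
```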

To see specific use cases, check the [overrides tutorial][overrides].

[ask]: e://post_ai_ask#param_ai_agent
[text-gen]: e://post_ai_text_gen#param_ai_agent
[agent-default]: g://box-ai/ai-agents/get-agent-default-config
[agent-default]: g://box-ai/ai-agents/get-agent-default-config
[overrides]: g://box-ai/ai-agents/overrides-tutorial
[models]: g://box-ai/supported-models