Merge branch 'main' into DDOC-1163-embed-hubs
bszwarc authored Oct 25, 2024
2 parents e201a05 + 38faad8 commit e91bdd9
Showing 4 changed files with 19 additions and 11 deletions.
7 changes: 4 additions & 3 deletions content/guides/box-ai/ai-agents/overrides-tutorial.md
@@ -126,11 +126,11 @@ The set of parameters available for `ask`, `text_gen`, `extract`, `extract_struc

### LLM endpoint params

- The `llm_endpoint_params` configuration options differ depending on the overall AI model being [Google][google-params] or [OpenAI][openai-params] based.
+ The `llm_endpoint_params` configuration options differ depending on the overall AI model being [Google][google-params], [OpenAI][openai-params] or [AWS][aws-params] based.

For example, both `llm_endpoint_params` objects accept a `temperature` parameter, but the outcome differs depending on the model.

- For Google models, the [`temperature`][google-temp] is used for sampling during response generation, which occurs when `top-P` and `top-K` are applied. Temperature controls the degree of randomness in the token selection.
+ For Google and AWS models, the [`temperature`][google-temp] is used for sampling during response generation, which occurs when `top-P` and `top-K` are applied. Temperature controls the degree of randomness in the token selection.

For OpenAI models, [`temperature`][openai-temp] is the sampling temperature with values between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. When introducing your own configuration, use `temperature` or `top_p` but not both.
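
As a sketch, an override like the one described above travels in the `ai_agent` field of the request body. The structure below follows the documented `ask` override schema; the model name, file ID, and sampling value are illustrative assumptions, not recommendations:

```python
# Sketch of an ai_agent override for the Box AI ask endpoint.
# Field names follow the documented override schema; the model name and
# temperature value below are illustrative assumptions.

def build_ask_payload(prompt: str, file_id: str, temperature: float) -> dict:
    """Build an ask request body that overrides llm_endpoint_params.

    Per the guidance above, an OpenAI-based model should set either
    `temperature` or `top_p`, but not both.
    """
    return {
        "mode": "single_item_qa",
        "prompt": prompt,
        "items": [{"id": file_id, "type": "file"}],
        "ai_agent": {
            "type": "ai_agent_ask",
            "basic_text": {
                "model": "azure__openai__gpt_4o_mini",
                "llm_endpoint_params": {
                    "type": "openai_params",
                    # OpenAI-style params: temperature OR top_p, not both.
                    "temperature": temperature,
                },
            },
        },
    }

payload = build_ask_payload("Summarize this contract.", "1234567890", 0.2)
```

The helper only assembles the JSON body; sending it to the API (and authenticating) is left out of the sketch.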

@@ -353,4 +353,5 @@ Using this model results in a response listing more metadata entries:
[openai-tokens]: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
[agent]: e://get_ai_agent_default
[google-temp]: https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters
[openai-temp]: https://community.openai.com/t/temperature-top-p-and-top-k-for-chatbot-responses/295542
+ [aws-params]: r://ai-llm-endpoint-params-aws
12 changes: 7 additions & 5 deletions content/guides/box-ai/supported-models.md
@@ -19,13 +19,11 @@ Make sure you use **two underscores** after the provider name.

<Message type='notice'>
The list may change depending on the model availability.
- **Preview** means you can use the model, but the access to all its features
- may be limited.
+ Models offered in **Preview** mode have not been fully performance-tested at scale and are made available on an as-is basis. You may experience variability in model/output quality, availability, and accuracy.
</Message>

| Provider | Family |Availability| API Name | External documentation | Capability |
| --------------- | ------ |-----| --------------------------------------- | ----------------------------------------------------------------------- | ---------- |
- | Microsoft Azure | GPT |available| `azure__openai__gpt_3_5_turbo_16k` | [Azure OpenAI GPT-3.5 model documentation][azure-ai-model-gpt35] | Chat |
| Microsoft Azure | GPT |available| `azure__openai__gpt_4o_mini` | [Azure OpenAI GPT-4o-mini model documentation][azure-ai-model-gpt40] | Chat |
| Microsoft Azure | GPT |available| `azure__openai__text_embedding_ada_002` | [Azure OpenAI embeddings models documentation][azure-ai-embeddings] | Embeddings |
| GCP Vertex | Gecko | available |`google__textembedding_gecko` | [Google Vertex AI embeddings models documentation][vertex-ai-model] | Embeddings |
@@ -36,12 +34,14 @@ may be limited.
| GCP Vertex | PaLM | available |`google__text_unicorn` | [Google PaLM 2 for Text model documentation][vertex-text-models] | Chat |
| GCP Vertex | PaLM | available |`google__text_bison` | [Google PaLM 2 for Text model documentation][vertex-text-models] | Chat |
| GCP Vertex | PaLM |available| `google__text_bison_32k` | [Google PaLM 2 for Text model documentation][vertex-text-models] | Chat |
+ | AWS | Claude |preview | `aws__claude_3_haiku` | [Amazon Claude model documentation][aws-claude] | Chat |
+ | AWS | Claude |preview | `aws__claude_3_sonnet` | [Amazon Claude model documentation][aws-claude] | Chat |
+ | AWS | Claude |preview | `aws__claude_3_5_sonnet` | [Amazon Claude model documentation][aws-claude] | Chat |
+ | AWS | Titan |preview | `aws__titan_text_lite` | [Amazon Titan model documentation][aws-titan] | Chat |
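
The two-underscore naming convention called out above can be sketched as a small parser; the helper and its error message are illustrative, not part of any Box SDK:

```python
# Sketch: split a Box AI model API name, assuming the documented
# "provider__model_name" convention (two underscores after the provider).
# Note the model part may itself contain "__" (e.g. azure__openai__...),
# so only the first "__" separates the provider.

def split_model_name(api_name: str) -> tuple[str, str]:
    """Return (provider, model) from a model API name like aws__claude_3_5_sonnet."""
    provider, sep, model = api_name.partition("__")
    if not sep or not model:
        raise ValueError(f"expected provider__model with two underscores, got {api_name!r}")
    return provider, model

provider, model = split_model_name("aws__claude_3_5_sonnet")
```

For example, `azure__openai__gpt_4o_mini` splits into provider `azure` and model `openai__gpt_4o_mini`.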

[ask]: e://post_ai_ask
[text-gen]: e://post_ai_text_gen
[agent]: e://get_ai_agent_default
- [openai-gpt-3-5-model]: https://platform.openai.com/docs/models/gpt-3-5-turbo
[azure-ai-model-gpt35]: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35
[azure-ai-model-gpt40]: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-4o-and-gpt-4-turbo
[vertex-ai-model]: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#models
[vertex-ai-gemini-models]: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models
@@ -50,3 +50,5 @@ may be limited.
[azure-ai-embeddings]: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#embeddings
[openai-embeddings]: https://platform.openai.com/docs/models/embeddings
[ai-model]: e://get-ai-agent-default#param-model
+ [aws-claude]: https://aws.amazon.com/bedrock/claude/
+ [aws-titan]: https://aws.amazon.com/bedrock/titan/
3 changes: 1 addition & 2 deletions content/guides/embed/ui-elements/preview.md
@@ -597,10 +597,9 @@ more, see [Dedicated Scopes for Box UI Elements][scopes].
[buie]: https://github.com/box/box-ui-elements/releases/tag/v16.0.0
[annotationsguide]: g://embed/ui-elements/annotations.md
[previewlib]: https://github.com/box/box-content-preview
- [ainpm]: https://www.npmjs.com/package/box-ui-elements/v/19.0.0-beta.34
[expiredembed]: r://file--full/#param-expiring_embed_link
[token]: g://authentication/tokens/developer-tokens
- [aipackage]: https://github.com/box/box-ui-elements/releases/tag/v20.0.0-beta.17
+ [aipackage]: https://www.npmjs.com/package/box-ui-elements/v/22.0.0
[installation]: g://embed/ui-elements/installation
[blueprint-web]: https://www.npmjs.com/package/@box/blueprint-web
[box-ai-content-answers]: https://www.npmjs.com/package/@box/box-ai-content-answers
8 changes: 7 additions & 1 deletion content/pages/ai-dev-zone/index.md
@@ -100,6 +100,12 @@ view sample code, explore Box AI use cases, and more!
href="https://python.langchain.com/v0.2/docs/integrations/providers/box/">
Include Box content in your LLM workflows with Box loader for LangChain.

<strong style="background-color: #e8e8e8">New</strong>
</Tile>
+ <Tile type="box-brown" title="Pinecone"
+ href="https://medium.com/box-developer-blog/demo-box-pinecone-f03783c412bb">
+ Connect Box and Pinecone to customize vector embeddings and get more relevant answers from the LLM.
+ 
+ <strong style="background-color: #e8e8e8">New</strong>
+ </Tile>
</TileGrid>
@@ -147,4 +153,4 @@ view sample code, explore Box AI use cases, and more!
<More secondary="true" to='https://www.youtube.com/watch?v=amhOj0YRVRQ&list=PLCSEWOlbcUyI2ta24oRr75_4igvMzKJ9q' center>
View all videos
</More>
</Centered>