Minor adjustments to the documentation
- Avoid Quarkus prefix and double extension name in UI (this was
  discussed with Holly a long time ago but I fixed it in
  the template only yesterday)
- Various minor adjustments/fixes
gsmet committed Nov 14, 2023
1 parent ee88f87 commit 353041d
Showing 12 changed files with 32 additions and 32 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -2,7 +2,7 @@

[![Version](https://img.shields.io/maven-central/v/io.quarkiverse.langchain4j/quarkus-langchain4j?logo=apache-maven&style=flat-square)](https://search.maven.org/artifact/io.quarkiverse.langchain4j/quarkus-langchain4j)

This repository contains Quarkus extensions that facilitate seamless integration between Quarkus and [Langchain4j](https://github.com/langchain4j/langchain4j), enabling easy incorporation of Large Language Models (LLMs) into your Quarkus applications.
This repository contains Quarkus extensions that facilitate seamless integration between Quarkus and [LangChain4j](https://github.com/langchain4j/langchain4j), enabling easy incorporation of Large Language Models (LLMs) into your Quarkus applications.

## Features

@@ -46,7 +46,7 @@ or, to use hugging face:
</dependency>
```

Make sure to replace {latest-version} with the most recent release version available on [Maven Central](https://search.maven.org/artifact/io.quarkiverse.langchain4j/quarkus-langchain4j).
Make sure to replace `{latest-version}` with the most recent release version available on [Maven Central](https://search.maven.org/artifact/io.quarkiverse.langchain4j/quarkus-langchain4j).

## Contributing

2 changes: 1 addition & 1 deletion docs/antora.yml
@@ -1,5 +1,5 @@
name: quarkus-langchain4j
title: Quarkus Langchain4j
title: LangChain4j
version: dev
nav:
- modules/ROOT/nav.adoc
2 changes: 1 addition & 1 deletion docs/modules/ROOT/nav.adoc
@@ -1,4 +1,4 @@
* xref:index.adoc[Quarkus Langchain4j]
* xref:index.adoc[Getting started]
** AI Services
*** xref:ai-services.adoc[AI Services]
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/agent-and-tools.adoc
@@ -71,7 +71,7 @@ However, ensure each method name is unique among all declared tools.

== Providing Tool Access

In the link:./ai-services.adoc[AI Service], you can specify the tools accessible to the agent. By default, **no tools** are available. Hence, ensure to define the list of tools you wish to make accessible:
In the xref:ai-services.adoc[AI Service], you can specify the tools accessible to the agent. By default, **no tools** are available. Hence, make sure to define the list of tools you wish to make accessible:

[source, java]
----
5 changes: 3 additions & 2 deletions docs/modules/ROOT/pages/ai-services.adoc
@@ -43,7 +43,7 @@ String writeAPoem(String topic, int lines);
[#_system_message]
=== System Message

The `@SystemMessage` annotation defines the scope and initial instructions, serving as the first message sent to the LLM.It delineates the AI service's role in the interaction:
The `@SystemMessage` annotation defines the scope and initial instructions, serving as the first message sent to the LLM. It delineates the AI service's role in the interaction:

[source,java]
----
@@ -52,6 +52,7 @@ The `@SystemMessage` annotation defines the scope and initial instructions, serv
"""
)
----

=== User Message (Prompt)

The `@UserMessage` annotation defines primary instructions dispatched to the LLM. It typically encompasses requests and the expected response format:
@@ -165,7 +166,7 @@ public class MyChatModelSupplier implements Supplier<ChatLanguageModel> {
[#memory]
== Configuring the Context (Memory)

As LLMs are stateless, the memory (comprising the interaction context) must be exchanged each time. To prevent storing excessive messages, it's crucial to evict older messages.
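The eviction idea can be sketched without the extension itself; the following is a minimal, self-contained illustration of a bounded message window (the class and method names here are hypothetical, not the extension's API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of "message window" eviction: keep only the N most
// recent messages so the context sent to the LLM stays bounded.
class MessageWindowMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    MessageWindowMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        // Evict the oldest messages once the window is exceeded.
        while (messages.size() > maxMessages) {
            messages.removeFirst();
        }
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}

public class Main {
    public static void main(String[] args) {
        MessageWindowMemory memory = new MessageWindowMemory(3);
        for (String m : new String[] {"m1", "m2", "m3", "m4", "m5"}) {
            memory.add(m);
        }
        System.out.println(memory.messages()); // [m3, m4, m5]
    }
}
```

The real extension delegates this policy to a chat memory provider; the sketch only shows why a bound on retained messages keeps each request's context small.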

The `chatMemoryProviderSupplier` attribute of the `@RegisterAiService` annotation enables configuring the memory provider:

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/chroma-store.adoc
@@ -2,7 +2,7 @@

include::./includes/attributes.adoc[]

When implementing Retrieval Augmented Generation (RAG), a robust document store is crucial. This guide demonstrates how to leverage a [Chroma](https://www.trychroma.com/) database as the document store.
When implementing Retrieval Augmented Generation (RAG), a robust document store is crucial. This guide demonstrates how to leverage a https://www.trychroma.com/[Chroma] database as the document store.

== Leveraging the Chroma Document Store

17 changes: 8 additions & 9 deletions docs/modules/ROOT/pages/index.adoc
@@ -4,7 +4,7 @@ include::./includes/attributes.adoc[]

_Large Language Models_ (LLMs) are AI-based systems designed to understand, generate, and manipulate human language, showcasing advanced natural language processing capabilities.
The dynamic LLM landscape is reshaping our interactions with applications and the very construction of these applications.
The Quarkus Langchain4j extension seamlessly integrates LLMs into Quarkus applications, enabling the harnessing of LLM capabilities for the development of more intelligent applications.
The Quarkus LangChain4j extension seamlessly integrates LLMs into Quarkus applications, enabling the harnessing of LLM capabilities for the development of more intelligent applications.

For instance, an application utilizing this extension can:

@@ -13,7 +13,7 @@ For instance, an application utilizing this extension can:
- Construct chatbots for system interaction
- Generate personalized text such as emails or reports
This extension is built upon the https://github.com/langchain4j/langchain4j[langchain4j library].
This extension is built upon the https://github.com/langchain4j/langchain4j[LangChain4j library].
It offers a declarative approach to interact with diverse LLMs like OpenAI, Hugging Face, or Ollama. It facilitates LLM-invoked functions within Quarkus applications and allows document loading within the LLM "context".
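As a quick illustration of that configuration step, selecting a provider amounts to adding its extension dependency plus a property; a minimal sketch assuming the OpenAI extension (property name as used by this extension, verify against the current documentation):

```properties
# Assumes the quarkus-langchain4j-openai extension is on the classpath.
# The API key is read from an environment variable rather than hard-coded.
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```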

== Quick Overview
@@ -58,11 +58,10 @@ Once you've added the dependency and configuration, the next step involves creat
----
include::{examples-dir}/io/quarkiverse/langchain4j/samples/MyAiService.java[]
----

1. The `@RegisterAiService` annotation registers the _AI service_.
2. The `chatMemoryProviderSupplier` attribute specifies the _chat memory_ provider, managing how the LLM retains conversation history (the "context").
3. The `tools` attribute defines the _tools_ the LLM can employ.
<1> The `@RegisterAiService` annotation registers the _AI service_.
<2> The `chatMemoryProviderSupplier` attribute specifies the _chat memory_ provider, managing how the LLM retains conversation history (the "context").
<3> The `tools` attribute defines the _tools_ the LLM can employ.
During interaction, the LLM can invoke these tools and reflect on their output.
4. The `@SystemMessage` annotation registers a _system message_, setting the initial context or "scope".
5. The `@UserMessage` annotation serves as the _prompt_.
6. The method invokes the LLM, initiating an exchange between the LLM and the application, beginning with the system message and then the user message. Your application triggers this method and receives the response.
<4> The `@SystemMessage` annotation registers a _system message_, setting the initial context or "scope".
<5> The `@UserMessage` annotation serves as the _prompt_.
<6> The method invokes the LLM, initiating an exchange between the LLM and the application, beginning with the system message and then the user message. Your application triggers this method and receives the response.
8 changes: 4 additions & 4 deletions docs/modules/ROOT/pages/llms.adoc
@@ -16,11 +16,11 @@ Continued research and development in LLMs are constantly pushing the boundaries

LLMs are a core component of the Quarkus LangChain4j extension.
The extension does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs such as OpenAI GPT-3/4, Hugging Face, and Ollama.
This link:../ai-services.adoc[interface] is designed to be simple and intuitive, allowing developers to quickly integrate LLMs into their applications.
This xref:ai-services.adoc[interface] is designed to be simple and intuitive, allowing developers to quickly integrate LLMs into their applications.

Note that each LLM has a different feature set.
Please check the specific documentation for the LLM you are using to see what features are available:

- link:../openai.adoc[OpenAI (GT-3/4)]
- link:../hugging-face.adoc[Hugging Face]
- link:../ollama.adoc[Ollama]
- xref:openai.adoc[OpenAI (GPT-3/4)]
- xref:hugging-face.adoc[Hugging Face]
- xref:ollama.adoc[Ollama]
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/openai.adoc
@@ -22,7 +22,7 @@ To employ OpenAI LLMs, integrate the following dependency into your project:
</dependency>
----

If no other LLM extension is installed, link:../ai-services.adoc[AI Services] will automatically utilize the configured OpenAI model.
If no other LLM extension is installed, xref:ai-services.adoc[AI Services] will automatically utilize the configured OpenAI model.

=== Configuration

16 changes: 8 additions & 8 deletions docs/modules/ROOT/pages/prompt-engineering.adoc
@@ -9,7 +9,7 @@ This page lists a few techniques that can be used to guide the model in performi

Input delimiters play a vital role in structuring the instructions for the language model. They mark the boundaries between different sections or segments of the prompt, guiding the model in processing distinct pieces of information.

For instance, hyphens (---) or similar markers delineate various components within the prompt, aiding the model in discerning user input, intermediate steps, and the final actions to be executed.
For instance, hyphens (`---`) or similar markers delineate various components within the prompt, aiding the model in discerning user input, intermediate steps, and the final actions to be executed.

[source, text]
----
@@ -37,7 +37,7 @@ Using a few-shot technique guides a model to understand sentiment analysis based
Provide sentiment labels for the statements delimited by ---
The response must be either 'Positive', 'Neutral', or 'Negative'.
Here is a few examples:
Here are a few examples:
- 'I love this product' - Positive
- 'Not bad, but could be better' - Neutral
- 'I'm thoroughly disappointed' - Negative
@@ -47,11 +47,11 @@ Here is a few examples:
---
----

In this example, the model is presented with a few labeled statements to learn sentiment analysis, followed by a new \{text} to analyze.
In this example, the model is presented with a few labeled statements to learn sentiment analysis, followed by a new `\{text}` to analyze.

== Passing a list of actions

To instruct a language model to perform a sequence of actions, a carefully structured prompt containing a list of actions is essential. This involves delineating each action along with its associated link:./agent-and-tools.adoc[tool] and parameters within the prompt.
To instruct a language model to perform a sequence of actions, a carefully structured prompt containing a list of actions is essential. This involves delineating each action along with its associated xref:agent-and-tools.adoc[tool] and parameters within the prompt.

For example, a structured list could entail a set of instructions such as:

@@ -99,12 +99,12 @@ JSON Structure:
}
}
Provided Data: John Doe, 30, john@example.com, 123 Main St, New York, 10001"
Provided Data: John Doe, 30, john@example.com, 123 Main St, New York, 10001
----

In this example, the prompt specifies the expected structure for the JSON response, comprising keys like 'name', 'age', 'email', and 'address', with their respective value types. The 'Provided Data' segment serves as the input to be formatted into the requested JSON structure.
In this example, the prompt specifies the expected structure for the JSON response, comprising keys like `name`, `age`, `email`, and `address`, with their respective value types. The `Provided Data` segment serves as the input to be formatted into the requested JSON structure.

Being able to describe the JSON output is essential when an xref:./ai-services.adoc#_ai_method_return_type[AI method] return an object.
Being able to describe the JSON output is essential when an xref:ai-services.adoc#_ai_method_return_type[AI method] returns an object.
The Quarkus LangChain4j extension will use the JSON structure to create an instance of the return type.
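To illustrate how such a JSON structure can line up with a method's return type, here is a plain-Java sketch; the record names are hypothetical and chosen only to mirror the JSON keys from the prompt, they are not part of the extension:

```java
// Hypothetical return type mirroring the requested JSON structure: the
// extension (per the docs) builds an instance of the method's return type
// from the model's JSON answer, matching keys to components.
record Address(String street, String city, String zipcode) {}
record Person(String name, int age, String email, Address address) {}

public class Main {
    public static void main(String[] args) {
        // What a deserialized answer for the "Provided Data" could look like:
        Person p = new Person("John Doe", 30, "john@example.com",
                new Address("123 Main St", "New York", "10001"));
        System.out.println(p.name() + " lives in " + p.address().city());
    }
}
```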

== Control tokens and prefixes
@@ -134,7 +134,7 @@ These techniques offer additional methods to direct language models effectively,
== Giving a role to the AI

In relation to the previous technique, giving a role to the AI is a technique that involves assigning a specific role to the language model, such as a teacher, student, or assistant.
This is generally done in the xref:./ai-services.adoc#_system_message[system message] to guide the model's behavior:
This is generally done in the xref:ai-services.adoc#_system_message[system message] to guide the model's behavior:

[source, text]
----
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/redis-store.adoc
@@ -19,7 +19,7 @@ To utilize the Redis document store, you'll need to include the following depend

This extension relies on the Quarkus Redis client. Ensure the default Redis datasource is configured appropriately. For detailed guidance, refer to the link:https://quarkus.io/guides/redis[Quarkus Redis Quickstart] and the link:https://quarkus.io/guides/redis-reference[Quarkus Redis Reference].

IMPORTANT: The Redis document store's functionality is built on the Redis JSON and Redis Search modules. Ensure these modules are installed, or consider using the Redis Stack. If you want to use the Quarkus DevServices for Redis, make sure to configure `quarkus.redis.devservices.image-name=redis/redis-stack:latest` in your `application.properties` file.
IMPORTANT: The Redis document store's functionality is built on the Redis JSON and Redis Search modules. Ensure these modules are installed, or consider using the Redis Stack. If you want to use the Quarkus Dev Services for Redis, make sure to configure `quarkus.redis.devservices.image-name=redis/redis-stack:latest` in your `application.properties` file.

Upon installing the extension, you can utilize the Redis document store using the following code:

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/retrievers.adoc
@@ -38,7 +38,7 @@ A more complex scenario involves creating a Document from a CSV line:
include::{examples-dir}/io/quarkiverse/langchain4j/samples/DocumentCreationExample.java[]
----

Following document creation, the documents need to be ingested. The Quarkus Lang4J extension offers _ingestor_ components for database storage.
Following document creation, the documents need to be ingested. The Quarkus LangChain4j extension offers _ingestor_ components for database storage.
For instance, quarkus-langchain4j-redis stores data in a Redis database, while quarkus-langchain4j-chroma uses a Chroma database.

The following code demonstrates document ingestion in a Redis database:
