diff --git a/README.md b/README.md
index b41a8ca8f..f8bc47283 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 [![Version](https://img.shields.io/maven-central/v/io.quarkiverse.langchain4j/quarkus-langchain4j?logo=apache-maven&style=flat-square)](https://search.maven.org/artifact/io.quarkiverse.langchain4j/quarkus-langchain4j)
-This repository contains Quarkus extensions that facilitate seamless integration between Quarkus and [Langchain4j](https://github.com/langchain4j/langchain4j), enabling easy incorporation of Large Language Models (LLMs) into your Quarkus applications.
+This repository contains Quarkus extensions that facilitate seamless integration between Quarkus and [LangChain4j](https://github.com/langchain4j/langchain4j), enabling easy incorporation of Large Language Models (LLMs) into your Quarkus applications.
 ## Features
@@ -46,7 +46,7 @@
 or, to use hugging face:
 ```
-Make sure to replace {latest-version} with the most recent release version available on [Maven Central](https://search.maven.org/artifact/io.quarkiverse.langchain4j/quarkus-langchain4j).
+Make sure to replace `{latest-version}` with the most recent release version available on [Maven Central](https://search.maven.org/artifact/io.quarkiverse.langchain4j/quarkus-langchain4j).
 ## Contributing
diff --git a/docs/antora.yml b/docs/antora.yml
index 0f1a92b5d..268d4c093 100644
--- a/docs/antora.yml
+++ b/docs/antora.yml
@@ -1,5 +1,5 @@
 name: quarkus-langchain4j
-title: Quarkus Langchain4j
+title: LangChain4j
 version: dev
 nav:
 - modules/ROOT/nav.adoc
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 1f4a5a448..85281c160 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -1,4 +1,4 @@
-* xref:index.adoc[Quarkus Langchain4j]
+* xref:index.adoc[Getting started]
 ** AI Services
 *** xref:ai-services.adoc[AI Services]
diff --git a/docs/modules/ROOT/pages/agent-and-tools.adoc b/docs/modules/ROOT/pages/agent-and-tools.adoc
index 89bb1ebaf..8e3f2fab5 100644
--- a/docs/modules/ROOT/pages/agent-and-tools.adoc
+++ b/docs/modules/ROOT/pages/agent-and-tools.adoc
@@ -71,7 +71,7 @@ However, ensure each method name is unique among all declared tools.
 == Providing Tool Access
-In the link:./ai-services.adoc[AI Service], you can specify the tools accessible to the agent. By default, **no tools** are available. Hence, ensure to define the list of tools you wish to make accessible:
+In the xref:ai-services.adoc[AI Service], you can specify the tools accessible to the agent. By default, **no tools** are available. Hence, ensure to define the list of tools you wish to make accessible:
 [source, java]
 ----
diff --git a/docs/modules/ROOT/pages/ai-services.adoc b/docs/modules/ROOT/pages/ai-services.adoc
index 26655f7b8..8e0d12bd1 100644
--- a/docs/modules/ROOT/pages/ai-services.adoc
+++ b/docs/modules/ROOT/pages/ai-services.adoc
@@ -43,7 +43,7 @@ String writeAPoem(String topic, int lines);
 [#_system_message]
 === System Message
-The `@SystemMessage` annotation defines the scope and initial instructions, serving as the first message sent to the LLM.It delineates the AI service's role in the interaction:
+The `@SystemMessage` annotation defines the scope and initial instructions, serving as the first message sent to the LLM.
+It delineates the AI service's role in the interaction:
 [source,java]
 ----
@@ -52,6 +52,7 @@ The `@SystemMessage` annotation defines the scope and initial instructions, serv
     """
 )
 ----
+
 === User Message (Prompt)
 The `@UserMessage` annotation defines primary instructions dispatched to the LLM. It typically encompasses requests and the expected response format:
@@ -165,7 +166,7 @@ public class MyChatModelSupplier implements Supplier {
 [#memory]
 == Configuring the Context (Memory)
-As LLMs are stateless, the memory—comprising the interaction context—must be exchanged each time. To prevent storing excessive messages, it's crucial to evict older messages.
+As LLMs are stateless, the memory — comprising the interaction context — must be exchanged each time. To prevent storing excessive messages, it's crucial to evict older messages.
 The `chatMemoryProviderSupplier` attribute of the `@RegisterAiService` annotation enables configuring the memory provider:
diff --git a/docs/modules/ROOT/pages/chroma-store.adoc b/docs/modules/ROOT/pages/chroma-store.adoc
index df3278323..55da9377c 100644
--- a/docs/modules/ROOT/pages/chroma-store.adoc
+++ b/docs/modules/ROOT/pages/chroma-store.adoc
@@ -2,7 +2,7 @@
 include::./includes/attributes.adoc[]
-When implementing Retrieval Augmented Generation (RAG), a robust document store is crucial. This guide demonstrates how to leverage a [Chroma](https://www.trychroma.com/) database as the document store.
+When implementing Retrieval Augmented Generation (RAG), a robust document store is crucial. This guide demonstrates how to leverage a https://www.trychroma.com/[Chroma] database as the document store.
 == Leveraging the Chroma Document Store
diff --git a/docs/modules/ROOT/pages/index.adoc b/docs/modules/ROOT/pages/index.adoc
index 3151a764a..791806a82 100644
--- a/docs/modules/ROOT/pages/index.adoc
+++ b/docs/modules/ROOT/pages/index.adoc
@@ -4,7 +4,7 @@
 include::./includes/attributes.adoc[]
 _Large Language Models_ (LLMs) are AI-based systems designed to understand, generate, and manipulate human language, showcasing advanced natural language processing capabilities. The dynamic LLM landscape is reshaping our interactions with applications and the very construction of these applications.
-The Quarkus Langchain4j extension seamlessly integrates LLMs into Quarkus applications, enabling the harnessing of LLM capabilities for the development of more intelligent applications.
+The Quarkus LangChain4j extension seamlessly integrates LLMs into Quarkus applications, enabling the harnessing of LLM capabilities for the development of more intelligent applications.
@@ -13,7 +13,7 @@ For instance, an application utilizing this extension can:
 - Construct chatbots for system interaction
 - Generate personalized text such as emails or reports
-This extension is built upon the https://github.com/langchain4j/langchain4j[langchain4j library].
+This extension is built upon the https://github.com/langchain4j/langchain4j[LangChain4j library].
 It offers a declarative approach to interact with diverse LLMs like OpenAI, Hugging Face, or Ollama. It facilitates LLM-invoked functions within Quarkus applications and allows document loading within the LLM "context".
 == Quick Overview
@@ -58,11 +58,10 @@ Once you've added the dependency and configuration, the next step involves creat
 ----
 include::{examples-dir}/io/quarkiverse/langchain4j/samples/MyAiService.java[]
 ----
-
-1. The `@RegisterAiService` annotation registers the _AI service_.
-2. The `chatMemoryProviderSupplier` attribute specifies the _chat memory_ provider, managing how the LLM retains conversation history (the "context").
-3. The `tools` attribute defines the _tools_ the LLM can employ.
+<1> The `@RegisterAiService` annotation registers the _AI service_.
+<2> The `chatMemoryProviderSupplier` attribute specifies the _chat memory_ provider, managing how the LLM retains conversation history (the "context").
+<3> The `tools` attribute defines the _tools_ the LLM can employ.
 During interaction, the LLM can invoke these tools and reflect on their output.
-4. The `@SystemMessage` annotation registers a _system message_, setting the initial context or "scope".
-5. The `@UserMessage` annotation serves as the _prompt_.
-6. The method invokes the LLM, initiating an exchange between the LLM and the application, beginning with the system message and then the user message. Your application triggers this method and receives the response.
+<4> The `@SystemMessage` annotation registers a _system message_, setting the initial context or "scope".
+<5> The `@UserMessage` annotation serves as the _prompt_.
+<6> The method invokes the LLM, initiating an exchange between the LLM and the application, beginning with the system message and then the user message. Your application triggers this method and receives the response.
diff --git a/docs/modules/ROOT/pages/llms.adoc b/docs/modules/ROOT/pages/llms.adoc
index 70e21c893..28cec48d8 100644
--- a/docs/modules/ROOT/pages/llms.adoc
+++ b/docs/modules/ROOT/pages/llms.adoc
@@ -16,11 +16,11 @@ Continued research and development in LLMs are constantly pushing the boundaries
 LLMs are a core component of the Quarkus LangChain4j extension. The extension does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs such as OpenAI GPT-3/4, Hugging Face, and Ollama.
-This link:../ai-services.adoc[interface] is designed to be simple and intuitive, allowing developers to quickly integrate LLMs into their applications.
+This xref:ai-services.adoc[interface] is designed to be simple and intuitive, allowing developers to quickly integrate LLMs into their applications.
 Note that each LLM has a different feature set. Please check the specific documentation for the LLM you are using to see what features are available:
-- link:../openai.adoc[OpenAI (GT-3/4)]
-- link:../hugging-face.adoc[Hugging Face]
-- link:../ollama.adoc[Ollama]
\ No newline at end of file
+- xref:openai.adoc[OpenAI (GPT-3/4)]
+- xref:hugging-face.adoc[Hugging Face]
+- xref:ollama.adoc[Ollama]
\ No newline at end of file
diff --git a/docs/modules/ROOT/pages/openai.adoc b/docs/modules/ROOT/pages/openai.adoc
index 40a616ff4..d5f371de1 100644
--- a/docs/modules/ROOT/pages/openai.adoc
+++ b/docs/modules/ROOT/pages/openai.adoc
@@ -22,7 +22,7 @@ To employ OpenAI LLMs, integrate the following dependency into your project:
 ----
-If no other LLM extension is installed, link:../ai-services.adoc[AI Services] will automatically utilize the configured OpenAI model.
+If no other LLM extension is installed, xref:ai-services.adoc[AI Services] will automatically utilize the configured OpenAI model.
 === Configuration
diff --git a/docs/modules/ROOT/pages/prompt-engineering.adoc b/docs/modules/ROOT/pages/prompt-engineering.adoc
index 9eb8e61e4..f6526a4e5 100644
--- a/docs/modules/ROOT/pages/prompt-engineering.adoc
+++ b/docs/modules/ROOT/pages/prompt-engineering.adoc
@@ -9,7 +9,7 @@ This page lists a few techniques that can be used to guide the model in performi
 Input delimiters play a vital role in structuring the instructions for the language model. They mark the boundaries between different sections or segments of the prompt, guiding the model in processing distinct pieces of information.
-For instance, hyphens (---) or similar markers delineate various components within the prompt, aiding the model in discerning user input, intermediate steps, and the final actions to be executed.
+For instance, hyphens (`---`) or similar markers delineate various components within the prompt, aiding the model in discerning user input, intermediate steps, and the final actions to be executed.
 [source, text]
 ----
@@ -37,7 +37,7 @@ Using a few-shot technique guides a model to understand sentiment analysis based
 Provide sentiment labels for the statements delimited by ---
 The response must be either 'Positive', 'Neutral', or 'Negative'.
-Here is a few examples:
+Here are a few examples:
 - 'I love this product' - Positive
 - 'Not bad, but could be better' - Neutral
 - 'I'm thoroughly disappointed' - Negative
@@ -47,11 +47,11 @@ Here is a few examples:
 ---
 ----
-In this example, the model is presented with a few labeled statements to learn sentiment analysis, followed by a new \{text} to analyze.
+In this example, the model is presented with a few labeled statements to learn sentiment analysis, followed by a new `\{text}` to analyze.
 == Passing a list of actions
-To instruct a language model to perform a sequence of actions, a carefully structured prompt containing a list of actions is essential. This involves delineating each action along with its associated link:./agent-and-tools.adoc[tool] and parameters within the prompt.
+To instruct a language model to perform a sequence of actions, a carefully structured prompt containing a list of actions is essential. This involves delineating each action along with its associated xref:agent-and-tools.adoc[tool] and parameters within the prompt.
 For example, a structured list could entail a set of instructions such as:
@@ -99,12 +99,12 @@ JSON Structure:
 }
 }
-Provided Data: John Doe, 30, john@example.com, 123 Main St, New York, 10001"
+Provided Data: John Doe, 30, john@example.com, 123 Main St, New York, 10001
 ----
-In this example, the prompt specifies the expected structure for the JSON response, comprising keys like 'name', 'age', 'email', and 'address', with their respective value types. The 'Provided Data' segment serves as the input to be formatted into the requested JSON structure.
+In this example, the prompt specifies the expected structure for the JSON response, comprising keys like `name`, `age`, `email`, and `address`, with their respective value types. The `Provided Data` segment serves as the input to be formatted into the requested JSON structure.
-Being able to describe the JSON output is essential when an xref:./ai-services.adoc#_ai_method_return_type[AI method] return an object.
+Being able to describe the JSON output is essential when an xref:ai-services.adoc#_ai_method_return_type[AI method] returns an object.
 The Quarkus LangChain4j extension will use the JSON structure to create an instance of the return type.
 == Control tokens and prefixes
@@ -134,7 +134,7 @@ These techniques offer additional methods to direct language models effectively,
 == Giving a role to the AI
 In relation to the previous technique, giving a role to the AI is a technique that involves assigning a specific role to the language model, such as a teacher, student, or assistant.
-This is generally done in the xref:./ai-services.adoc#_system_message[system message] to guide the model's behavior:
+This is generally done in the xref:ai-services.adoc#_system_message[system message] to guide the model's behavior:
 [source, text]
 ----
diff --git a/docs/modules/ROOT/pages/redis-store.adoc b/docs/modules/ROOT/pages/redis-store.adoc
index 5f7854d66..7e52da56d 100644
--- a/docs/modules/ROOT/pages/redis-store.adoc
+++ b/docs/modules/ROOT/pages/redis-store.adoc
@@ -19,7 +19,7 @@ To utilize the Redis document store, you'll need to include the following depend
 This extension relies on the Quarkus Redis client. Ensure the default Redis datasource is configured appropriately. For detailed guidance, refer to the link:https://quarkus.io/guides/redis[Quarkus Redis Quickstart] and the link:https://quarkus.io/guides/redis-reference[Quarkus Redis Reference].
-IMPORTANT: The Redis document store's functionality is built on the Redis JSON and Redis Search modules. Ensure these modules are installed, or consider using the Redis Stack. If you want to use the Quarkus DevServices for Redis, make sure to configure `quarkus.redis.devservices.image-name=redis/redis-stack:latest` in your `application.properties` file.
+IMPORTANT: The Redis document store's functionality is built on the Redis JSON and Redis Search modules. Ensure these modules are installed, or consider using the Redis Stack. If you want to use the Quarkus Dev Services for Redis, make sure to configure `quarkus.redis.devservices.image-name=redis/redis-stack:latest` in your `application.properties` file.
 Upon installing the extension, you can utilize the Redis document store using the following code:
diff --git a/docs/modules/ROOT/pages/retrievers.adoc b/docs/modules/ROOT/pages/retrievers.adoc
index b69b2c2a9..f724b851e 100644
--- a/docs/modules/ROOT/pages/retrievers.adoc
+++ b/docs/modules/ROOT/pages/retrievers.adoc
@@ -38,7 +38,7 @@ A more complex scenario involves creating a Document from a CSV line:
 include::{examples-dir}/io/quarkiverse/langchain4j/samples/DocumentCreationExample.java[]
 ----
-Following document creation, the documents need to be ingested. The Quarkus Lang4J extension offers _ingestor_ components for database storage.
+Following document creation, the documents need to be ingested. The Quarkus LangChain4j extension offers _ingestor_ components for database storage.
 For instance, quarkus-langchain4j-redis stores data in a Redis database, while quarkus-langchain4j-chroma uses a Chroma database.
 The following code demonstrates document ingestion in a Redis database:
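The few-shot prompt edited in the prompt-engineering.adoc hunks above can also be assembled programmatically before it is handed to an AI service method. Below is a minimal plain-Java sketch; the `FewShotPrompt` class and its `build` method are illustrative helpers, not part of the Quarkus LangChain4j API, and the example text is taken verbatim from the documentation page.

```java
public class FewShotPrompt {

    // Builds the few-shot sentiment prompt from the docs: fixed instructions,
    // three labeled examples, and the new text wrapped in --- delimiters so
    // the model can tell instructions apart from user input.
    static String build(String text) {
        return String.join("\n",
                "Provide sentiment labels for the statements delimited by ---",
                "The response must be either 'Positive', 'Neutral', or 'Negative'.",
                "Here are a few examples:",
                "- 'I love this product' - Positive",
                "- 'Not bad, but could be better' - Neutral",
                "- 'I'm thoroughly disappointed' - Negative",
                "---",
                text,
                "---");
    }

    public static void main(String[] args) {
        System.out.println(build("The delivery was fast and the packaging flawless"));
    }
}
```

In a real application this string would typically live in a `@UserMessage` template with a `{text}` placeholder rather than being concatenated by hand; the sketch only makes the prompt structure explicit.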