This project is a Question Answering application built with Large Language Models (LLMs) and Amazon DocumentDB (with MongoDB Compatibility). An application using the Retrieval Augmented Generation (RAG) approach retrieves the information most relevant to the user's request from the enterprise knowledge base or content, bundles it as context along with the user's request into a prompt, and then sends it to the LLM to get a GenAI response.
LLMs have limits on the size of the input prompt, so choosing the right passages from among thousands or millions of documents in the enterprise has a direct impact on the LLM's accuracy.
In this project, Amazon DocumentDB (with MongoDB Compatibility) is used as the knowledge base.
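The retrieve-then-prompt flow described above can be sketched as follows. This is a minimal illustration, not code from this repository: `retrieve_top_k` is a hypothetical placeholder for the vector search against the knowledge base, and the prompt template is an assumption.

```python
# Minimal sketch of the RAG flow: retrieve relevant passages, bundle them as
# context with the user's question, and send the prompt to an LLM.
# All names here (retrieve_top_k, the template) are hypothetical placeholders.

def retrieve_top_k(question: str, k: int = 3) -> list[str]:
    """Placeholder: a real implementation would run a vector similarity
    search against the knowledge base (here, Amazon DocumentDB)."""
    return ["passage one", "passage two", "passage three"][:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Bundle the retrieved passages as context along with the question."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt("What is RAG?", retrieve_top_k("What is RAG?"))
```

Because only the top-k passages are included, the prompt stays within the LLM's input limit regardless of how large the knowledge base is.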
The overall architecture is as follows:
- Deploy the CDK stacks (for more information, see here):
  - An Amazon DocumentDB (with MongoDB Compatibility) cluster to store embeddings.
  - A SageMaker Studio domain for the RAG application and for data ingestion into Amazon DocumentDB (with MongoDB Compatibility).
- Open JupyterLab in SageMaker Studio and then open a new terminal.
- Run the following command in the terminal to clone the code repository for this project:
  `git clone --depth=1 https://github.com/aws-samples/rag-with-amazon-bedrock-and-documentdb.git`
- Open the `data_ingestion_to_documentdb.ipynb` notebook and run it. (For more information, see here.)
- Run the Streamlit application. (For more information, see here.)
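Once the embeddings are ingested, the application retrieves passages with a vector similarity query against DocumentDB. The sketch below builds such a query as a PyMongo aggregation pipeline; the collection layout (fields `embedding` and `text`), database names, and connection string are illustrative assumptions, and the exact `vectorSearch` options should be checked against the current AWS documentation:

```python
# Sketch of a vector similarity query against Amazon DocumentDB, assuming a
# collection whose documents store their embedding vector under "embedding"
# and their passage text under "text". Names are illustrative only.

def build_vector_search_pipeline(query_embedding: list[float], k: int = 5) -> list[dict]:
    """Build a $search/vectorSearch aggregation pipeline (DocumentDB syntax)."""
    return [
        {
            "$search": {
                "vectorSearch": {
                    "vector": query_embedding,
                    "path": "embedding",      # field holding the stored embeddings
                    "similarity": "cosine",   # or "euclidean" / "dotProduct"
                    "k": k,                   # number of nearest neighbours to return
                }
            }
        },
        {"$project": {"_id": 0, "text": 1}},
    ]

# To run against a real cluster (requires pymongo and the cluster's TLS setup):
# from pymongo import MongoClient
# client = MongoClient("mongodb://<user>:<password>@<cluster-endpoint>:27017/?tls=true")
# docs = list(client["ragdb"]["passages"].aggregate(
#     build_vector_search_pipeline(query_embedding=[0.1] * 1536)))

pipeline = build_vector_search_pipeline([0.0, 1.0], k=3)
```

The returned passages are then bundled into the LLM prompt as described earlier.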
- Vector search for Amazon DocumentDB
- Vector search for Amazon DocumentDB (with MongoDB compatibility) is now generally available (2023-11-29)
- Build a powerful question answering bot with Amazon SageMaker, Amazon OpenSearch Service, Streamlit, and LangChain (2023-05-25)
- Build Streamlit apps in Amazon SageMaker Studio (2023-04-11)
- LangChain - A framework for developing applications powered by language models.
- Streamlit - A faster way to build and share data apps.
- rag-with-amazon-kendra - Question Answering application with Large Language Models (LLMs) and Amazon Kendra
- rag-with-amazon-postgresql-using-pgvector - Question Answering application with Large Language Models (LLMs) and Amazon Aurora PostgreSQL
- rag-with-amazon-opensearch - Question Answering application with Large Language Models (LLMs) and Amazon OpenSearch Service with LangChain
- rag-with-amazon-opensearch-serverless - Question Answering application with Large Language Models (LLMs) and Amazon OpenSearch Serverless with LangChain
- rag-with-haystack-and-amazon-opensearch - Question Answering application with Large Language Models (LLMs) and Amazon OpenSearch Service with Haystack
- rag-with-amazon-documentdb-and-sagemaker - Question Answering application with Large Language Models (LLMs) and Amazon DocumentDB (with MongoDB Compatibility) with LangChain
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.