AI-Research-Paper-Explainer transforms complex research papers into clear, digestible explanations using advanced Large Language Models (LLMs).
- PDF Upload: Upload and process research papers in PDF format.
- Multi-LLM Support: Choose from a wide range of LLMs, including:
- OpenAI models (GPT-4, GPT-3.5-turbo)
- Anthropic models (Claude-3 Opus, Sonnet, Haiku)
- Google models (Gemini Pro, Gemini 1.5 Pro, Gemini 1.5 Flash)
- Groq models (Mixtral, LLaMA variants)
- Mistral AI models
- Adaptive Explanations: Tailor explanations from High School to Expert level.
- Comprehensive Insights:
- Main ideas and methodology
- Concrete examples
- Prerequisites
- Mathematical concepts
- Efficient Processing:
- Async mode for faster processing (with compatible LLMs)
- Non-async mode with optional sleep for rate-limited APIs
- Intuitive Streamlit Interface
- Flexible Summarization: Choose between the map_reduce, refine, and stuff methods
- Customizable Execution:
- Async/Non-async processing
- Sleep option for rate limiting (useful for free-tier API usage)
The starting page of the UI, showing a summary of the uploaded paper.
Display of a specific chunk from the paper.
Explanation of prerequisites for understanding the current chunk.
Detailed explanation of the content within the chunk.
Breakdown of mathematical concepts present in the chunk.
Practical examples related to the concepts in the chunk.
1. Clone the repository:
   ```bash
   git clone https://github.com/rd-serendipity/ai-research-paper-explainer.git
   cd ai-research-paper-explainer
   ```
2. Set up and activate a virtual environment:
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Set up environment variables:
   ```bash
   cp .env.example .env
   ```
   Edit `.env` and add your LLM API keys.
5. Run the app:
   ```bash
   streamlit run src/app.py
   ```
6. Open http://localhost:8501 in your browser.
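As an illustration, a filled-in `.env` might look like the sketch below. The variable names shown are the conventional ones for each provider's SDK, but they are assumptions here; check `.env.example` for the exact names the app expects.

```shell
# Hypothetical example — see .env.example for the exact variable names
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
GROQ_API_KEY=gsk_...
MISTRAL_API_KEY=...
```

You only need keys for the providers you plan to select in the UI.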
1. Upload Your Paper: Use the sidebar to upload a PDF of the research paper.
2. Choose Your LLM: Select from a variety of LLM providers and models.
3. Configure Explanation Options:
   - Set the difficulty level (High School to Expert)
   - Choose whether to include examples, prerequisites, and mathematical explanations
   - (Future feature) Option to find and summarize similar papers
4. Advanced Settings:
   - Execution Mode:
     - Async: Faster processing, ideal for paid API tiers
     - Non-Async: Suitable for free API tiers or rate-limited usage
   - Sleep Option: When using Non-Async mode, enable this to add pauses between API calls (helps with rate limits)
   - Summarization Method: Choose between the map_reduce, refine, and stuff algorithms
5. Process the Paper: Click "Process Paper" to start the explanation generation.
6. Review Results: The app will display:
   - A summary of the entire paper
   - Chunk-by-chunk breakdowns including:
     - Original text
     - Prerequisites (if selected)
     - Detailed explanation
     - Examples (if selected)
     - Mathematical concepts (if selected and present in the chunk)
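The two execution modes differ only in how the per-chunk LLM calls are scheduled. A minimal Python sketch of the idea, with a stand-in `explain_chunk` function in place of a real LLM call (names here are illustrative, not the app's actual internals):

```python
import asyncio
import time

def explain_chunk(chunk: str) -> str:
    # Stand-in for a real LLM API call
    return f"explanation of {chunk!r}"

async def explain_chunk_async(chunk: str) -> str:
    # Async wrapper around the same stand-in call
    return explain_chunk(chunk)

def run_non_async(chunks, sleep_s: float = 0.0):
    # Non-Async mode: one call at a time; an optional sleep between
    # calls helps stay under free-tier rate limits
    results = []
    for chunk in chunks:
        results.append(explain_chunk(chunk))
        if sleep_s:
            time.sleep(sleep_s)
    return results

async def run_async(chunks):
    # Async mode: all per-chunk calls are issued concurrently
    return await asyncio.gather(*(explain_chunk_async(c) for c in chunks))

chunks = ["intro", "methods", "results"]
print(run_non_async(chunks))
print(asyncio.run(run_async(chunks)))
```

Both modes return the same explanations; Async simply overlaps the network waits, which is why it is faster on paid tiers that tolerate concurrent requests.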
- Free Tier Users: If you're using free tier APIs (e.g., Groq or Gemini):
- Select "Non-Async" execution mode
- Enable the "Include Sleeps" option in Additional Settings
- This will add pauses between API calls to help manage rate limits
- Paid Tier Users: For faster processing, use the "Async" execution mode.
- Difficulty Level: Adjust based on your target audience or personal understanding.
- Summarization Method:
  - `map_reduce`: Good for longer papers; summarizes in parts, then combines them
  - `refine`: Iteratively refines the summary; good for nuanced papers
  - `stuff`: Best for shorter papers; processes all text at once
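Conceptually, the three methods differ only in control flow. The toy sketch below substitutes a trivial `summarize` function for the LLM call purely to show that flow; it is not the app's actual implementation, which uses LangChain-style summarization chains.

```python
def summarize(text: str, max_words: int = 12) -> str:
    # Stand-in for an LLM summarization call: keep the first few words
    return " ".join(text.split()[:max_words])

def stuff(chunks):
    # stuff: concatenate everything and summarize once (short papers only,
    # since the whole text must fit in one context window)
    return summarize(" ".join(chunks))

def map_reduce(chunks):
    # map_reduce: summarize each chunk independently (map),
    # then summarize the concatenated partial summaries (reduce)
    partials = [summarize(c) for c in chunks]
    return summarize(" ".join(partials))

def refine(chunks):
    # refine: start from the first chunk's summary, then fold in
    # each subsequent chunk, updating the running summary
    summary = summarize(chunks[0])
    for chunk in chunks[1:]:
        summary = summarize(summary + " " + chunk)
    return summary
```

map_reduce parallelizes well (each map step is independent), while refine is sequential but can carry nuance forward from earlier chunks.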
- Find and summarize related papers
- Improved prompt engineering
- Enhanced visualizations
- User feedback integration
We welcome contributions! See our CONTRIBUTING.md for guidelines on how to get involved.
This project is licensed under the MIT License - see the LICENSE file for details.
⭐ If you find this project useful, please consider giving it a star on GitHub!