AI-Research-Paper-Explainer 📚🔬

Python Version Streamlit LangChain License: MIT

AI-Research-Paper-Explainer transforms complex research papers into clear, digestible explanations using advanced Large Language Models (LLMs).

🌟 Features

  • 📄 PDF Upload: Upload and process research papers in PDF format.
  • 🧠 Multi-LLM Support: Choose from a wide range of LLMs (see the provider-selection sketch after this list), including:
    • OpenAI models (GPT-4, GPT-3.5-turbo)
    • Anthropic models (Claude-3 Opus, Sonnet, Haiku)
    • Google models (Gemini Pro, Gemini 1.5 Pro, Gemini 1.5 Flash)
    • Groq models (Mixtral, LLaMA variants)
    • Mistral AI models
  • 🎓 Adaptive Explanations: Tailor explanations from High School to Expert level.
  • ๐Ÿ” Comprehensive Insights:
    • Main ideas and methodology
    • Concrete examples
    • Prerequisites
    • Mathematical concepts
  • ⚡ Efficient Processing:
    • Async mode for faster processing (with compatible LLMs)
    • Non-async mode with optional sleep for rate-limited APIs
  • ๐Ÿ–ฅ๏ธ Intuitive Streamlit Interface
  • 📊 Flexible Summarization: Choose from map_reduce, refine, or stuff methods
  • 🔄 Customizable Execution:
    • Async/Non-async processing
    • Sleep option for rate limiting (useful for free-tier API usage)
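
A minimal sketch of how this multi-provider selection might be wired with LangChain's chat-model integrations; the function name, provider keys, and example models below are illustrative assumptions, not the app's actual configuration:

    # Hypothetical provider-selection sketch (not the app's actual code).
    # Assumes the langchain-openai, langchain-anthropic, and
    # langchain-google-genai integration packages are installed.
    from langchain_openai import ChatOpenAI
    from langchain_anthropic import ChatAnthropic
    from langchain_google_genai import ChatGoogleGenerativeAI

    def get_llm(provider: str, model: str):
        if provider == "openai":
            return ChatOpenAI(model=model)              # e.g. "gpt-4"
        if provider == "anthropic":
            return ChatAnthropic(model=model)           # e.g. "claude-3-opus-20240229"
        if provider == "google":
            return ChatGoogleGenerativeAI(model=model)  # e.g. "gemini-1.5-pro"
        raise ValueError(f"Unsupported provider: {provider}")

    llm = get_llm("openai", "gpt-4")
    print(llm.invoke("Explain self-attention in one sentence.").content)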

📸 Screenshots

Initial UI with Paper Summary

Front Page: The starting page of the UI, showing a summary of the uploaded paper.

Chunk Display

Chunk Display: Display of a specific chunk from the paper.

Prerequisites Explanation

Prerequisites: Explanation of prerequisites for understanding the current chunk.

Content Explanation

Content Explanation: Detailed explanation of the content within the chunk.

Mathematical Concepts

Math Concepts: Breakdown of mathematical concepts present in the chunk.

Examples

Examples: Practical examples related to the concepts in the chunk.

🚀 Quick Start

  1. Clone the repository:

    git clone https://github.com/rd-serendipity/ai-research-paper-explainer.git
    cd ai-research-paper-explainer
    
  2. Set up and activate a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
    
  3. Install dependencies:

    pip install -r requirements.txt
    
  4. Set up environment variables:

    cp .env.example .env
    

    Edit .env and add your LLM API keys (see the example sketch after this list).

  5. Run the app:

    streamlit run src/app.py
    
  6. Open http://localhost:8501 in your browser.
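
The exact variables expected by .env.example are not reproduced here; as an assumption, they likely follow the providers' standard key names, so the file would look something like this (add only the keys you plan to use):

    # Illustrative .env sketch; variable names are assumptions based on the
    # providers' usual conventions, not necessarily the app's exact names.
    OPENAI_API_KEY=sk-...
    ANTHROPIC_API_KEY=...
    GOOGLE_API_KEY=...
    GROQ_API_KEY=...
    MISTRAL_API_KEY=...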

📊 Using the Explainer

  1. Upload Your Paper: Use the sidebar to upload a PDF of the research paper.

  2. Choose Your LLM: Select from a variety of LLM providers and models.

  3. Configure Explanation Options:

    • Set the difficulty level (High School to Expert)
    • Choose to include examples, prerequisites, and mathematical explanations
    • (Future feature) Option to find and summarize similar papers
  4. Advanced Settings:

    • Execution Mode:
      • Async: Faster processing, ideal for paid API tiers
      • Non-Async: Suitable for free API tiers or rate-limited usage
    • Sleep Option: When using Non-Async mode, enable this to add pauses between API calls to help with rate limits (see the processing sketch after this list)
    • Summarization Method: Choose between map_reduce, refine, or stuff algorithms
  5. Process the Paper: Click "Process Paper" to start the explanation generation.

  6. Review Results: The app will display:

    • A summary of the entire paper
    • Chunk-by-chunk breakdowns including:
      • Original text
      • Prerequisites (if selected)
      • Detailed explanation
      • Examples (if selected)
      • Mathematical concepts (if selected and present in the chunk)
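
A rough sketch of the per-chunk pipeline described above, in Non-Async mode with the sleep option enabled. The loader, splitter settings, prompt, and sleep interval are illustrative assumptions rather than the app's actual implementation:

    # Hypothetical per-chunk processing sketch (Non-Async mode with sleeps).
    import time
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4")

    # Load the uploaded PDF and split it into overlapping chunks.
    docs = PyPDFLoader("paper.pdf").load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=2000, chunk_overlap=200
    ).split_documents(docs)

    # Explain each chunk in turn, pausing between calls to respect rate limits.
    for chunk in chunks:
        reply = llm.invoke(
            f"Explain this passage for a high-school reader:\n\n{chunk.page_content}"
        )
        print(reply.content)
        time.sleep(2)  # the "Include Sleeps" option adds pauses like this

In Async mode, the same per-chunk calls would instead be issued concurrently (for example with the runnable's abatch method), which is what makes it faster on paid API tiers.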

💡 Tips for Usage

  • Free Tier Users: If you're using free tier APIs (e.g., Groq or Gemini):
    • Select "Non-Async" execution mode
    • Enable the "Include Sleeps" option in Additional Settings
    • This will add pauses between API calls to help manage rate limits
  • Paid Tier Users: For faster processing, use the "Async" execution mode.
  • Difficulty Level: Adjust based on your target audience or personal understanding.
  • Summarization Method (see the chain sketch after this list):
    • map_reduce: Good for longer papers, summarizes in parts then combines
    • refine: Iteratively refines the summary, good for nuanced papers
    • stuff: Best for shorter papers, processes all text at once
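
These three methods correspond to LangChain's built-in summarization chain types. A minimal sketch assuming the classic load_summarize_chain helper, which may not match the app's internals exactly:

    # Illustrative summarization sketch; chain_type mirrors the
    # map_reduce / refine / stuff options exposed in the UI.
    from langchain.chains.summarize import load_summarize_chain
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4")
    docs = PyPDFLoader("paper.pdf").load()

    chain = load_summarize_chain(llm, chain_type="map_reduce")  # or "refine", "stuff"
    print(chain.invoke({"input_documents": docs})["output_text"])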

🔮 Future Developments

  • Find and summarize related papers
  • Improved prompt engineering
  • Enhanced visualizations
  • User feedback integration

๐Ÿค Contributing

We welcome contributions! See our CONTRIBUTING.md for guidelines on how to get involved.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


โญ๏ธ If you find this project useful, please consider giving it a star on GitHub!

Report Bug • Request Feature
