Diffbot GraphRAG LLM

1. Introduction

Recently, large language models (LLMs) have been trained with more and more data, leading to an increase in the number of parameters and the compute power needed. But what if, instead of feeding the model more data, we purposefully trained it to rely less on its pretraining data and more on its ability to find external knowledge?

To test this idea, we fine-tuned Llama 3.3 70B to be an expert tool user of a real-time Knowledge Graph API, providing the first open-source implementation of a GraphRAG system that outperforms Google Gemini and ChatGPT.

2. Features

Real-time web URL extraction

[Screenshot: real-time extraction example]

As a RAG system, Diffbot LLM can summarize a web document in real time, appropriately crediting the original source.

Expert Retriever of Factual Citations

Example prompt: "Mission statement of the FAA"

Diffbot LLM is explicitly trained to align the cited text with the reference source.

Knowledge Graph Querying

Example prompt: "which state contains J?"

Diffbot LLM is an expert tool user of the Diffbot (Knowledge Graph) Query Language.

Image Entailment

Example prompt: "How to draw baby shark"

Diffbot LLM can also entail images.

Code Interpreter Tool Use

[Screenshot: the "strawberry" problem]

Instead of relying on the model weights for performing empirical calculations, Diffbot LLM is an expert tool user of a JavaScript interpreter that it can use to inform its response.

Example prompt: "is 9.11 or 9.9 larger"

Fun stuff

Example prompt: "weather in Menlo park"

Diffbot LLM is an expert maker of ASCII-art weather forecasts, grounded in real sources.

3. Model Download

Available on HuggingFace: diffbot/Llama-3.1-Diffbot-Small-2412 (diffbot-small) and diffbot/Llama-3.3-Diffbot-Small-XL-2412 (diffbot-small-xl).

4. Accuracy Benchmarks

FreshQA Dataset

Accuracy for FreshQA 2024 queries

FreshQA is a benchmark that measures real-time accuracy for search RAG systems. Diffbot LLM outperforms gpt-4o (no web access), ChatGPT (with web access), Google Gemini, and Perplexity on real-time factual accuracy.

In this evaluation, we focus on 130 FreshQA questions whose answers changed in 2024, which is after the knowledge cutoff for all evaluated models as of December 2024.

MMLU-Pro

MMLU-Pro is a more difficult version of the MMLU benchmark that tests for static knowledge of 57 academic subjects using 10-choice multiple-choice questions. See the MMLU-Pro Leaderboard.

The table below shows the MMLU-Pro scores of diffbot-small and diffbot-small-xl alongside the base models they were fine-tuned from.

Model                    Accuracy (CoT 5-shot)
diffbot-small-xl         72.89
Llama-3.3-70B Instruct   65.92
diffbot-small            48.64
Llama-3.1-8B Instruct    44.25

Note: This is a measurement of the Diffbot GraphRAG LLM API end-to-end, not a measure of the knowledge contained in the weights. The lift in its performance over the base model comes from its ability to access external tools.
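For reference, the lift can be computed directly from the scores in the table above (plain arithmetic on the reported numbers; no new data):

```python
# MMLU-Pro scores reported above (CoT 5-shot accuracy, %)
scores = {
    "diffbot-small-xl": 72.89,
    "Llama-3.3-70B Instruct": 65.92,
    "diffbot-small": 48.64,
    "Llama-3.1-8B Instruct": 44.25,
}

# Lift of each fine-tuned model over its base model, in percentage points
xl_lift = round(scores["diffbot-small-xl"] - scores["Llama-3.3-70B Instruct"], 2)
small_lift = round(scores["diffbot-small"] - scores["Llama-3.1-8B Instruct"], 2)
print(xl_lift, small_lift)  # 6.97 4.39
```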

5. Demo

Try Diffbot LLM using the demo app at https://diffy.chat

6. Running Locally

Tested minimum hardware configurations:

  • Nvidia A100 40G for diffbot-small
  • 2× Nvidia H100 80G for diffbot-small-xl @ FP8

Using Docker image and models in huggingface

  1. Pull docker image: docker pull docker.io/diffbot/diffbot-llm-inference:latest
  2. Run docker image. Note: The model weights will be automatically downloaded from huggingface. This might take a few minutes.

Model: diffbot-small

docker run --runtime nvidia --gpus all -p 8001:8001 --ipc=host -e VLLM_OPTIONS="--model diffbot/Llama-3.1-Diffbot-Small-2412 --served-model-name diffbot-small --enable-prefix-caching"  docker.io/diffbot/diffbot-llm-inference:latest 

Model: diffbot-small-xl

docker run --runtime nvidia --gpus all -p 8001:8001 --ipc=host -e VLLM_OPTIONS="--model diffbot/Llama-3.3-Diffbot-Small-XL-2412 --served-model-name diffbot-small-xl --enable-prefix-caching --quantization fp8 --tensor-parallel-size 2"  docker.io/diffbot/diffbot-llm-inference:latest 

The Diffbot server leverages vLLM to serve the model, and it is ready to receive requests once vLLM outputs the following message:

INFO:  Application startup complete.
INFO:  Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

You can now use the endpoint http://localhost:8001/rag/v1, which works exactly like the Serverless API below.
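As a quick sanity check, the local endpoint can be called with nothing but the Python standard library. This is a sketch: it assumes the container above is running on port 8001 and that the server exposes the usual OpenAI-style /chat/completions route under /rag/v1.

```python
import json
import urllib.request

def build_request(prompt, model="diffbot-small",
                  base_url="http://localhost:8001/rag/v1"):
    """Build an OpenAI-compatible chat completion request for the
    local Diffbot LLM endpoint (not sent until urlopen is called)."""
    payload = {
        "model": model,
        "temperature": 0,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To send the request once the server is up:
#   with urllib.request.urlopen(build_request("What is GraphRAG?")) as resp:
#       print(json.load(resp))
```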

7. Using the Serverless API

Get a free Diffbot developer token at https://app.diffbot.com/get-started

from openai import OpenAI

client = OpenAI(
    base_url = "https://llm.diffbot.com/rag/v1",
    api_key  = "<diffbot_token>" 
)

completion = client.chat.completions.create(
    model="diffbot-small-xl",
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": "What is the Diffbot Knowledge Graph?"
        }
    ]
)
print(completion)

Contact support@diffbot.com if you need more credits or higher limits.

8. Adding Custom Tools

To extend the Diffbot LLM Inference Server with new tools, please refer to this tutorial.
