feat(context managers): added Context Managers to help with tracing (#83)

todo
- [x] greenlight this with shahul about the UI

Added a context manager to group together runs in order to make it
easier to visualise what is happening inside ragas.

future steps
- connect the output values along with the context too
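
A rough illustration of the idea (the names and structure below are invented for this sketch and are not the code added in this commit): a context manager opens a trace group, every run started inside the `with` block is attached to that group, and the group is closed when the block exits, so related runs show up together in the trace view.

```python
# illustrative sketch only; the real tracing code in this commit may differ
from contextlib import contextmanager
from uuid import uuid4

_active_group: list[str | None] = [None]  # simple stack of active trace-group ids

@contextmanager
def trace_group(name: str):
    """Group every run started inside the `with` block under one trace."""
    group_id = f"{name}-{uuid4().hex[:8]}"
    _active_group.append(group_id)
    try:
        yield group_id
    finally:
        _active_group.pop()

def log_run(run_name: str) -> None:
    print(f"run {run_name!r} recorded under group {_active_group[-1]!r}")

with trace_group("ragas-evaluation"):
    log_run("faithfulness")
    log_run("answer_relevancy")
```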
jjmachan authored Aug 2, 2023
1 parent 186ccfa commit 40ad4f7
Showing 12 changed files with 504 additions and 223 deletions.
Binary file added docs/assets/langsmith-tracing-faithfullness.png
Binary file added docs/assets/langsmith-tracing-overview.png
176 changes: 176 additions & 0 deletions docs/integrations/langsmith.ipynb
@@ -0,0 +1,176 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "98727749",
"metadata": {},
"source": [
"# Langsmith Integrations\n",
"\n",
"[Langsmith](https://docs.smith.langchain.com/) is a platform for building production-grade LLM applications from the langchain team. It helps you with tracing, debugging and evaluating LLM applications.\n",
"\n",
"The langsmith + ragas integration offers 2 features:\n",
"1. View the traces of ragas `evaluator` \n",
"2. Use ragas metrics in langchain evaluation (coming soon)\n",
"\n",
"\n",
"### Tracing ragas metrics\n",
"\n",
"Since ragas uses langchain under the hood, all you have to do is set up langsmith and your traces will be logged.\n",
"\n",
"To set up langsmith, make sure the following env-vars are set (you can read more in the [langsmith docs](https://docs.smith.langchain.com/#quick-start)):\n",
"\n",
"```bash\n",
"export LANGCHAIN_TRACING_V2=true\n",
"export LANGCHAIN_ENDPOINT=https://api.smith.langchain.com\n",
"export LANGCHAIN_API_KEY=<your-api-key>\n",
"export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to \"default\"\n",
"```\n",
"\n",
"Once langsmith is set up, just run the evaluations as you normally would."
]
},
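{
"cell_type": "markdown",
"id": "a1f2b3c4",
"metadata": {},
"source": [
"Alternatively (a minimal sketch, in case you prefer to configure things from inside the notebook rather than in your shell), the same variables can be set with `os.environ` before running the evaluation, since langchain reads them from the environment:\n",
"\n",
"```python\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = \"<your-api-key>\"\n",
"os.environ[\"LANGCHAIN_PROJECT\"] = \"<your-project>\"  # optional, defaults to 'default'\n",
"```"
]
},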
{
"cell_type": "code",
"execution_count": 1,
"id": "27947474",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Found cached dataset fiqa (/home/jjmachan/.cache/huggingface/datasets/explodinggradients___fiqa/ragas_eval/1.0.0/3dc7b639f5b4b16509a3299a2ceb78bf5fe98ee6b5fee25e7d5e4d290c88efb8)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "dc5a62b3aebb45d690d9f0dcc783deea",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/1 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"evaluating with [context_relavency]\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.90s/it]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"evaluating with [faithfulness]\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|████████████████████████████████████████████████████████████| 1/1 [00:21<00:00, 21.01s/it]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"evaluating with [answer_relevancy]\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|████████████████████████████████████████████████████████████| 1/1 [00:07<00:00, 7.36s/it]\n"
]
},
{
"data": {
"text/plain": [
"{'ragas_score': 0.1837, 'context_relavency': 0.0707, 'faithfulness': 0.8889, 'answer_relevancy': 0.9403}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from datasets import load_dataset\n",
"from ragas.metrics import context_relevancy, answer_relevancy, faithfulness\n",
"from ragas import evaluate\n",
"\n",
"\n",
"fiqa_eval = load_dataset(\"explodinggradients/fiqa\", \"ragas_eval\")\n",
"\n",
"result = evaluate(\n",
" fiqa_eval[\"baseline\"].select(range(3)), \n",
" metrics=[context_relevancy, faithfulness, answer_relevancy]\n",
")\n",
"\n",
"result"
]
},
{
"cell_type": "markdown",
"id": "0b862b5d",
"metadata": {},
"source": [
"Voila! Now you can head over to your project and see the traces.\n",
"\n",
"![](../assets/langsmith-tracing-overview.png)\n",
"This shows the langsmith tracing dashboard overview.\n",
"\n",
"![](../assets/langsmith-tracing-faithfullness.png)\n",
"This shows the traces for the faithfulness metric. As you can see, being able to view the reasons behind each score makes it much easier to understand and debug the evaluations."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "febeef63",
"metadata": {},
"outputs": [],
"source": [
"\"../assets/langsmith-tracing-overview.png\"\n",
"\"../assets/langsmith-tracing-faithfullness.png\""
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
16 changes: 11 additions & 5 deletions docs/metrics.md
@@ -1,7 +1,8 @@
# Metrics

### `Faithfulness`

1. `faithfulness` : measures the factual consistency of the generated answer against the given context. This is done using a multi step paradigm that includes creation of statements from the generated answer followed by verifying each of these statements against the context. The answer is scaled to (0,1) range. Higher the better.
This measures the factual consistency of the generated answer against the given context. This is done using a multi-step paradigm that includes creating statements from the generated answer and then verifying each of these statements against the context. The answer is scaled to the (0,1) range; higher is better.
```python
from ragas.metrics.factuality import Faithfulness
faithfulness = Faithfulness()
@@ -14,8 +15,9 @@ dataset: Dataset

results = faithfulness.score(dataset)
```
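
Conceptually, the final step of that multi-step paradigm reduces to the fraction of generated statements that the context supports. A small sketch of just that aggregation (the statement extraction and per-statement verification are LLM calls, replaced here by hard-coded stand-ins):
```python
# sketch of the final step only: faithfulness = supported statements / total statements
statements = [
    "The Eiffel Tower is located in Paris.",
    "It was completed in 1889.",
    "It is the tallest structure in the world.",  # not supported by the context
]
# one verdict per statement; in the real metric these come from the LLM verification step
verdicts = [True, True, False]

faithfulness_score = sum(verdicts) / len(statements)
print(faithfulness_score)  # ~0.67
```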
### `ContextRelevancy`

2. `context_relevancy`: measures how relevant is the retrieved context to the prompt. This is done using a combination of OpenAI models and cross-encoder models. To improve the score one can try to optimize the amount of information present in the retrieved context.
This measures how relevant the retrieved context is to the prompt. This is done using a combination of OpenAI models and cross-encoder models. To improve the score, one can try to optimize the amount of information present in the retrieved context.
```python
from ragas.metrics.context_relevancy import ContextRelevancy
context_rel = ContextRelevancy(strictness=3)
@@ -28,7 +30,9 @@
results = context_rel.score(dataset)
```
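
As a rough mental model (an illustrative sketch, not the actual pipeline), the score rewards retrieved contexts in which a large share of the sentences actually bear on the question:
```python
# sketch: context relevancy as the share of retrieved sentences that matter for the question
context_sentences = [
    "The Eiffel Tower is located in Paris.",      # relevant to the question
    "It was completed in 1889.",                  # relevant
    "Paris is also famous for its cafes.",        # irrelevant
    "The Louvre receives millions of visitors.",  # irrelevant
]
# in the real metric the relevant sentences are identified by the OpenAI / cross-encoder models
relevant_sentences = context_sentences[:2]

context_relevancy_score = len(relevant_sentences) / len(context_sentences)
print(context_relevancy_score)  # 0.5
```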

3. `answer_relevancy`: measures how relevant is the generated answer to the prompt. If the generated answer is incomplete or contains redundant information the score will be low. This is quantified by working out the chance of an LLM generating the given question using the generated answer. Values range (0,1), higher the better.
### `AnswerRelevancy`

This measures how relevant the generated answer is to the prompt. If the generated answer is incomplete or contains redundant information, the score will be low. This is quantified by working out the chance of an LLM generating the given question from the generated answer. Values range in (0,1); higher is better.
```python
from ragas.metrics.answer_relevancy import AnswerRelevancy
answer_relevancy = AnswerRelevancy(model_name="t5-small")
@@ -42,7 +46,9 @@ results = answer_relevancy.score(dataset)
```
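
The mechanism can be sketched with a small seq2seq model: feed the generated answer in and measure how likely the model finds the original question. This is an illustrative sketch only, not the ragas implementation, and it assumes `transformers`, `torch` and `sentencepiece` are installed; a higher value means the question is more "expected" given the answer.
```python
# sketch: score the chance of the question given the answer with a seq2seq LM
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

answer = "You can dispute a charge by contacting your card issuer within 60 days."
question = "How do I dispute a charge on my credit card?"

inputs = tokenizer(answer, return_tensors="pt")
labels = tokenizer(question, return_tensors="pt").input_ids

with torch.no_grad():
    out = model(**inputs, labels=labels)

# lower loss means the question is more expected given the answer, i.e. more relevant
print(torch.exp(-out.loss).item())
```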


4. `Aspect Critiques`: Critiques are LLM evaluators that evaluate the your submission using the provided aspect. There are several aspects like `correctness`, `harmfulness`,etc (Check `SUPPORTED_ASPECTS` to see full list) that comes predefined with Ragas Critiques. If you wish to define your own aspect you can also do this. The `strictness` parameter is used to ensure a level of self consistency in prediction (ideal range 2-4). The output of aspect critiques is always binary indicating whether the submission adhered to the given aspect definition or not. These scores will not be considered for the final ragas_score due to it's non-continuous nature.
### `AspectCritique`

Critiques are LLM evaluators that evaluate your submission against a provided aspect. Several aspects such as `correctness`, `harmfulness`, etc. come predefined with Ragas Critiques (check `SUPPORTED_ASPECTS` for the full list), and you can also define your own. The `strictness` parameter is used to ensure a level of self-consistency in the predictions (ideal range 2-4). The output of aspect critiques is always binary, indicating whether the submission adhered to the given aspect definition or not. These scores are not considered for the final ragas_score due to their non-continuous nature. A short usage sketch follows the list below.
- List of predefined aspects:
`correctness`,`harmfulness`,`coherence`,`conciseness`,`maliciousness`
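
Below is a minimal usage sketch. The import path and constructor arguments are assumptions (check the ragas source for the exact API), and `dataset` is the same object used in the sections above.
```python
from datasets import Dataset
from ragas.metrics.critique import AspectCritique, harmfulness  # import path is an assumption

# a custom aspect, defined the same way as the predefined ones
child_safety = AspectCritique(
    name="child_safety",
    definition="Is the submission free of content that could be unsafe for children?",
    strictness=3,  # 2-4 keeps the binary verdicts self-consistent
)

dataset: Dataset  # same question/contexts/answer columns as in the sections above

results = harmfulness.score(dataset)          # predefined aspect
custom_results = child_safety.score(dataset)  # custom aspect, one binary verdict per sample
```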

@@ -76,4 +82,4 @@ LLM like GPT 3.5 struggle when it comes to scoring generated text directly. For
src="./assets/bar-graph.svg">
</h1>

Take a look at our experiments [here](/experiments/assesments/metrics_assesments.ipynb)