docs: add context_precision (#236)
shahules786 authored Oct 30, 2023
1 parent 92fb854 commit c1ed36e
Showing 3 changed files with 49 additions and 21 deletions.
30 changes: 9 additions & 21 deletions docs/concepts/metrics/context_precision.md
@@ -1,36 +1,24 @@
# Context Precision

-This metric gauges the precision of the retrieved context, calculated based on both the `question` and `contexts`. The values fall within the range of (0, 1), with higher values indicating better precision.
-
-Ideally, the retrieved context should exclusively contain essential information to address the provided query. To compute this, we initially estimate the value of $|S|$ by identifying sentences within the retrieved context that are relevant for answering the given question. The final score is determined by the following formula:
+Context Precision is a metric that evaluates whether all of the ground-truth relevant items present in the `contexts` are ranked higher or not. Ideally, all the relevant chunks must appear at the top ranks. This metric is computed using the `question` and the `contexts`, with values ranging between 0 and 1, where higher scores indicate better precision.

```{math}
:label: context_precision
-\text{context precision} = {|S| \over |\text{Total number of sentences in retrived context}|}
+\text{Context Precision@k} = {\sum {\text{precision@k}} \over \text{total number of relevant items in the top K results}}
```

-```{hint}
-Question: What is the capital of France?
-High context precision: France, in Western Europe, encompasses medieval cities, alpine villages and Mediterranean beaches. Paris, its capital, is famed for its fashion houses, classical art museums including the Louvre and monuments like the Eiffel Tower.
-Low context precision: France, in Western Europe, encompasses medieval cities, alpine villages and Mediterranean beaches. Paris, its capital, is famed for its fashion houses, classical art museums including the Louvre and monuments like the Eiffel Tower. The country is also renowned for its wines and sophisticated cuisine. Lascaux’s ancient cave drawings, Lyon’s Roman theater and the vast Palace of Versailles attest to its rich history.
-```
+```{math}
+\text{Precision@k} = {\text{true positives@k} \over (\text{true positives@k} + \text{false positives@k})}
+```
+
+Where k is the total number of chunks in `contexts`.
## Example

```{code-block} python
-:caption: Context precision using cross-encoder/nli-deberta-v3-xsmall
+:caption: Context precision
from ragas.metrics import ContextPrecision
-context_precision = ContextPrecision(
-    model_name="cross-encoder/nli-deberta-v3-xsmall"
-)
-# run init models to load the models used
-context_precision.init_model()
+context_precision = ContextPrecision()
# Dataset({
#     features: ['question','contexts'],
```
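The ranking-sensitive behaviour of Context Precision@k can be sketched in plain Python. This is a toy illustration, not the ragas implementation; `relevance` is a hypothetical list of per-chunk relevance judgements, ordered by retrieval rank:

```python
def context_precision_at_k(relevance: list[bool]) -> float:
    """Average precision@k over the ranks k where a relevant chunk appears,
    normalized by the total number of relevant items in the top K results."""
    total_relevant = sum(relevance)
    if total_relevant == 0:
        return 0.0
    score = 0.0
    hits = 0
    for k, is_relevant in enumerate(relevance, start=1):
        if is_relevant:
            hits += 1
            score += hits / k  # precision@k evaluated at each relevant rank
    return score / total_relevant

# The same two relevant chunks score higher when ranked at the top:
print(context_precision_at_k([True, True, False, False]))  # → 1.0
print(context_precision_at_k([False, False, True, True]))  # ≈ 0.42
```

Note how the score depends only on where the relevant chunks sit in the ranking, which is exactly what distinguishes this metric from a plain relevant/total ratio.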
38 changes: 38 additions & 0 deletions docs/concepts/metrics/context_relevancy.md
@@ -0,0 +1,38 @@
# Context Relevancy


This metric gauges the relevancy of the retrieved context, calculated based on both the `question` and `contexts`. The values fall within the range of (0, 1), with higher values indicating better relevancy.

Ideally, the retrieved context should exclusively contain essential information to address the provided query. To compute this, we initially estimate the value of $|S|$ by identifying sentences within the retrieved context that are relevant for answering the given question. The final score is determined by the following formula:

```{math}
:label: context_relevancy
\text{context relevancy} = {|S| \over |\text{Total number of sentences in retrieved context}|}
```

```{hint}
Question: What is the capital of France?
High context relevancy: France, in Western Europe, encompasses medieval cities, alpine villages and Mediterranean beaches. Paris, its capital, is famed for its fashion houses, classical art museums including the Louvre and monuments like the Eiffel Tower.
Low context relevancy: France, in Western Europe, encompasses medieval cities, alpine villages and Mediterranean beaches. Paris, its capital, is famed for its fashion houses, classical art museums including the Louvre and monuments like the Eiffel Tower. The country is also renowned for its wines and sophisticated cuisine. Lascaux’s ancient cave drawings, Lyon’s Roman theater and the vast Palace of Versailles attest to its rich history.
```
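A minimal sketch of the formula above, assuming a stand-in relevance judge (in ragas an LLM identifies the relevant sentences $S$; here a hypothetical keyword check takes that role purely for illustration):

```python
def context_relevancy(sentences: list[str], is_relevant) -> float:
    """|S| / total sentences, where is_relevant stands in for the LLM judgement."""
    if not sentences:
        return 0.0
    relevant = [s for s in sentences if is_relevant(s)]
    return len(relevant) / len(sentences)

context = [
    "France, in Western Europe, encompasses medieval cities and beaches.",
    "Paris, its capital, is famed for its fashion houses.",
    "The country is also renowned for its wines and sophisticated cuisine.",
    "Lascaux's ancient cave drawings attest to its rich history.",
]
# Stub judge for "What is the capital of France?": only the sentence
# mentioning the capital counts as relevant.
score = context_relevancy(context, lambda s: "capital" in s)
print(score)  # 1 relevant sentence out of 4 → 0.25
```

The extra sentences about wine and history dilute the score, which mirrors the low-relevancy example in the hint above.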


## Example

```{code-block} python
:caption: Context relevancy
from datasets import Dataset  # evaluation data is passed as a Hugging Face Dataset
from ragas.metrics import ContextRelevance
context_relevancy = ContextRelevance()
# Dataset({
# features: ['question','contexts'],
# num_rows: 25
# })
dataset: Dataset
results = context_relevancy.score(dataset)
```
2 changes: 2 additions & 0 deletions docs/concepts/metrics/index.md
@@ -13,6 +13,7 @@ Just like in any machine learning system, the performance of individual componen
- [Answer relevancy](answer_relevance.md)
- [Context recall](context_recall.md)
- [Context precision](context_precision.md)
+- [Context relevancy](context_relevancy.md)

## End-to-End Evaluation

@@ -28,6 +29,7 @@ Evaluating the end-to-end performance of a pipeline is also crucial, as it direc
faithfulness
answer_relevance
context_precision
+context_relevancy
context_recall
semantic_similarity
answer_correctness
