docs: fix _arize.md (#1643)
Co-authored-by: Jithin James <jamesjithin97@gmail.com>
suekou and jjmachan authored Nov 12, 2024
1 parent 74998f2 commit 74bb47a
Showing 2 changed files with 9 additions and 14 deletions.
22 changes: 8 additions & 14 deletions docs/howtos/integrations/_arize.md
@@ -78,26 +78,20 @@ An ideal test dataset should contain data points of high quality and diverse nature


```diff
-from ragas.testset.generator import TestsetGenerator
-from ragas.testset.evolutions import simple, reasoning, multi_context
+from ragas.testset import TestsetGenerator
 from langchain_openai import ChatOpenAI, OpenAIEmbeddings

 TEST_SIZE = 25

 # generator with openai models
-generator_llm = ChatOpenAI(model="gpt-3.5-turbo-16k")
-critic_llm = ChatOpenAI(model="gpt-4")
+generator_llm = ChatOpenAI(model="gpt-4o-mini")
+critic_llm = ChatOpenAI(model="gpt-4o")
 embeddings = OpenAIEmbeddings()

 generator = TestsetGenerator.from_langchain(generator_llm, critic_llm, embeddings)

-# set question type distribution
-distribution = {simple: 0.5, reasoning: 0.25, multi_context: 0.25}
-
 # generate testset
-testset = generator.generate_with_llamaindex_docs(
-    documents, test_size=TEST_SIZE, distributions=distribution
-)
+testset = generator.generate_with_llamaindex_docs(documents, test_size=TEST_SIZE)
 test_df = testset.to_pandas()
 test_df.head()
```
@@ -123,8 +117,8 @@ Build your query engine.


```diff
-from llama_index import VectorStoreIndex, ServiceContext
-from llama_index.embeddings import OpenAIEmbedding
+from llama_index.core import VectorStoreIndex, ServiceContext
+from llama_index.embeddings.openai import OpenAIEmbedding


 def build_query_engine(documents):
```
@@ -144,7 +138,7 @@ If you check Phoenix, you should see embedding spans from when your corpus data


```diff
-from phoenix.trace.dsl.helpers import SpanQuery
+from phoenix.trace.dsl import SpanQuery

 client = px.Client()
 corpus_df = px.Client().query_spans(
```
@@ -240,7 +234,7 @@ Ragas uses LangChain to evaluate your LLM application data. Let's instrument LangChain


```diff
-from phoenix.trace.langchain import LangChainInstrumentor
+from openinference.instrumentation.langchain import LangChainInstrumentor

 LangChainInstrumentor().instrument()
```
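Taken together, the commit replaces four stale import paths in the tutorial. As a sketch of the old-to-new mapping (the `IMPORT_MIGRATIONS` table and `migrate_import` helper are illustrative, not part of the commit; the target paths assume the ragas 0.2-era API, the `llama_index.core` package layout, and the OpenInference LangChain instrumentor):

```python
# Illustrative lookup of the deprecated import lines touched by this commit,
# mapped to the replacements the diff introduces. Helper name is hypothetical.
IMPORT_MIGRATIONS = {
    "from ragas.testset.generator import TestsetGenerator":
        "from ragas.testset import TestsetGenerator",
    "from llama_index import VectorStoreIndex, ServiceContext":
        "from llama_index.core import VectorStoreIndex, ServiceContext",
    "from llama_index.embeddings import OpenAIEmbedding":
        "from llama_index.embeddings.openai import OpenAIEmbedding",
    "from phoenix.trace.dsl.helpers import SpanQuery":
        "from phoenix.trace.dsl import SpanQuery",
    "from phoenix.trace.langchain import LangChainInstrumentor":
        "from openinference.instrumentation.langchain import LangChainInstrumentor",
}


def migrate_import(line: str) -> str:
    """Return the updated form of a known stale import; pass other lines through."""
    stripped = line.strip()
    return IMPORT_MIGRATIONS.get(stripped, stripped)
```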
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -92,6 +92,7 @@ nav:
```diff
   - Integrations:
       - howtos/integrations/index.md
       - LlamaIndex: howtos/integrations/_llamaindex.md
+      - Arize: howtos/integrations/_arize.md
       - LangGraph: howtos/integrations/_langgraph_agent_evaluation.md
   - Migrations:
       - From v0.1 to v0.2: howtos/migrations/migrate_from_v01_to_v02.md
```
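The added line follows the MkDocs `nav` convention, where each list entry maps a sidebar label to a page path relative to the `docs/` directory. A minimal sketch of the resulting structure (layout assumed from standard MkDocs configuration; paths are from this commit):

```yaml
nav:
  - Integrations:
      # label on the left, docs-relative path on the right
      - LlamaIndex: howtos/integrations/_llamaindex.md
      - Arize: howtos/integrations/_arize.md
```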
