[BUG] No easy way to change gRPC port on the client side #5955
Hey @HKozubek, thanks for reaching out! Let me take a look at this and get back to you soon.
I was able to replicate this issue, and it does not quite work how I would expect either. This is in fact quite suspicious:

phoenix/packages/phoenix-otel/src/phoenix/otel/otel.py, lines 390 to 393 in 8854a20

Any thoughts @axiomofjoy @RogerHYang? Here is a minimal reproduction:
```yaml
# docker-compose.yml
services:
  phoenix:
    image: arizephoenix/phoenix:latest
    ports:
      - 6006:6006
      - 4999:4999
    environment:
      - PHOENIX_GRPC_PORT=4999
      - PHOENIX_COLLECTOR_ENDPOINT=http://localhost:4999
```
```python
# instrument.py
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "arize-phoenix",
#     "arize-phoenix-otel",
#     "llama-index",
#     "openinference-instrumentation-llama-index",
#     "opentelemetry-exporter-otlp",
#     "opentelemetry-proto>=1.12.0",
#     "opentelemetry-sdk",
# ]
# ///
import os

from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register

# these match the env in the docker-compose.yml
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:4999"
os.environ["PHOENIX_GRPC_PORT"] = "4999"

# without specifying the endpoint, it defaults to localhost:4317 despite documentation saying otherwise
tracer_provider = register(endpoint=os.environ["PHOENIX_COLLECTOR_ENDPOINT"])
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```

You can see the tracer provider emit the configured gRPC endpoint, but with the transport configured as HTTP.
As a workaround for the time being, you can fall back to raw OTel by defining:

```python
import os

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

port = os.environ["PHOENIX_GRPC_PORT"]
endpoint = f"http://localhost:{port}"

tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
```
@HKozubek agree this is confusing, and we should be respecting the gRPC config. We actually designed it so that you can bail out of the defaults and specify a gRPC exporter: https://github.com/Arize-ai/phoenix/blob/main/packages/phoenix-otel/README.md

Something like:

```python
from phoenix.otel import TracerProvider, BatchSpanProcessor, GRPCSpanExporter

tracer_provider = TracerProvider()
batch_processor = BatchSpanProcessor(
    span_exporter=GRPCSpanExporter(endpoint="http://custom-endpoint.com:6789")
)
tracer_provider.add_span_processor(batch_processor)
```
Describe the bug
Correct me if I'm wrong, but currently there doesn't exist an easy way to change the gRPC port on the client side of the app. The collector endpoint is either treated as HTTP by default, or as gRPC, but the port is forced to be 4317.
To Reproduce
Steps to reproduce the behavior:
1. Check for the gRPC port:
   phoenix/packages/phoenix-otel/src/phoenix/otel/otel.py
   Lines 390 to 393 in 8854a20
2. This one will get the endpoint from the env variable and will default to gRPC, but will change the port from 8188 to 4317. The code overwrites the port from the env variable with the `_DEFAULT_GRPC_PORT` variable:
   phoenix/packages/phoenix-otel/src/phoenix/otel/otel.py
   Lines 415 to 416 in 8854a20
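The overwrite described in step 2 can be modeled with a small stand-in. This is an illustration of the reported behavior only, not the actual phoenix-otel code; the function `force_default_grpc_port` and its logic are hypothetical:

```python
from urllib.parse import urlparse, urlunparse

_DEFAULT_GRPC_PORT = 4317  # default named in the report

def force_default_grpc_port(endpoint: str) -> str:
    """Illustrative stand-in: discard any explicitly configured port and
    substitute the gRPC default, mirroring the behavior reported above."""
    parsed = urlparse(endpoint)
    return urlunparse(parsed._replace(netloc=f"{parsed.hostname}:{_DEFAULT_GRPC_PORT}"))

# An endpoint configured on port 8188 comes back on 4317:
print(force_default_grpc_port("http://localhost:8188"))  # http://localhost:4317
```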
Expected behavior
The port shouldn't be overwritten, or there should be an easy way to define a gRPC port using environment variables.
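A sketch of the resolution order this implies: an explicit port in the endpoint wins, then `PHOENIX_GRPC_PORT`, then the OTLP gRPC default. The function name and logic here are hypothetical, not the phoenix-otel implementation:

```python
import os
from urllib.parse import urlparse, urlunparse

_DEFAULT_GRPC_PORT = 4317

def resolve_grpc_endpoint(endpoint: str) -> str:
    """Hypothetical resolution: keep an explicit port; otherwise fall back to
    the PHOENIX_GRPC_PORT env variable; otherwise use the gRPC default."""
    parsed = urlparse(endpoint)
    if parsed.port is not None:
        return endpoint  # an explicitly configured port is never overwritten
    port = os.environ.get("PHOENIX_GRPC_PORT", str(_DEFAULT_GRPC_PORT))
    return urlunparse(parsed._replace(netloc=f"{parsed.hostname}:{port}"))

print(resolve_grpc_endpoint("http://localhost:8188"))  # http://localhost:8188
```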
Environment (please complete the following information):
Additional context
Currently the only way I found to change this behavior is to override the `_maybe_grpc_endpoint` function.
This sets the collector endpoint to http://localhost:8188 and the type to gRPC.