Has anybody managed to get autollm working with Google gemini-pro and LanceDB? Can you please share code showing how to configure the llm_model, embed_model, etc. parameters within query_engine = AutoQueryEngine.from_defaults(...)? I'm also unsure how to specify the Vertex AI project name and location.
Answered by fcakyon, Jan 6, 2024
easy as this:

```python
import os

from autollm import AutoQueryEngine

# Vertex AI credentials picked up by the gemini-pro model
os.environ["VERTEXAI_PROJECT"] = "hardy-device-38811"  # Your Project ID
os.environ["VERTEXAI_LOCATION"] = "us-central1"  # Your Location

llm_model = "gemini-pro"

query_engine = AutoQueryEngine.from_defaults(
    documents='...',
    llm_model=llm_model,
)
```