This repository has been archived by the owner on Sep 12, 2024. It is now read-only.

Does AutoLLM support HuggingFace LLMs? #96

Answered by fcakyon
0xSaurabhx asked this question in Q&A

@SeeknnDestroy Talha, appreciate the quick update regarding HF models. Is HuggingFace's TGI (Text Generation Inference) supported as a backend? Specifically, if I have an HF model hosted on a local TGI server, can I interact with it via AutoLLM? E.g.:

from autollm import AutoQueryEngine

model = "meta-llama/Llama-2-7b-chat-hf"
api_base = "http://localhost:1234"  # URL of the local TGI server

llm_params = {"model": model, "api_base": api_base, ...}

Hello @dcruiz01, the current HuggingFace examples support both local and cloud TGI deployments. Your code snippet should work :)
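
For anyone landing here later, here is a minimal sketch of the full flow. It assumes that AutoQueryEngine.from_defaults accepts llm_model and llm_api_base keyword arguments and that read_files_as_documents is the document loader, as in the project's examples at the time; the "huggingface/" model prefix follows the LiteLLM provider-routing convention that AutoLLM uses under the hood. Exact parameter names may differ between versions.

from autollm import AutoQueryEngine, read_files_as_documents

# Load documents to index (the input path is illustrative).
documents = read_files_as_documents(input_dir="data")

# AutoLLM uses LiteLLM under the hood; LiteLLM routes requests to a
# HuggingFace TGI server when the model name carries a "huggingface/"
# prefix and api_base points at the server. The keyword names below
# (llm_model, llm_api_base) are assumptions based on AutoLLM's examples
# and may differ between versions.
query_engine = AutoQueryEngine.from_defaults(
    documents=documents,
    llm_model="huggingface/meta-llama/Llama-2-7b-chat-hf",
    llm_api_base="http://localhost:1234",  # local TGI server from the question
)

response = query_engine.query("What does the indexed data say about X?")
print(response)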

Answer selected by fcakyon
Category: Q&A
This discussion was converted from issue #69 on November 03, 2023 12:44.