
🐛 bug: not work with LocalAI backend #143

Open · johnsmzr opened this issue Feb 28, 2024 · 15 comments
Labels: bug (Something isn't working)
@johnsmzr

Description

The AI plugin currently does not work with a LocalAI backend. For some reason it cannot read the response from the LocalAI API correctly.

LocalAI version:
2.9.0 (latest)

AI Plugin version:
0.6.0 (latest)

Steps to reproduce

Call the LocalAI API:

curl http://{LOCALAI}/v1/chat/completions -H "Content-Type: application/json" -d '{
     "messages": [{"role": "user", "content": "hello"}],
     "temperature": 0.9,
     "stream": true
   }'

Response:

data: {"created":1709115341,"object":"chat.completion.chunk","id":"d8073704-b273-4340-a57b-6280a7fece33","choices":[{"index":0,"finish_reason":"","delta":{"role":"assistant","content":""}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709115341,"object":"chat.completion.chunk","id":"d8073704-b273-4340-a57b-6280a7fece33","choices":[{"index":0,"finish_reason":"","delta":{"role":"","content":"H"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709115341,"object":"chat.completion.chunk","id":"d8073704-b273-4340-a57b-6280a7fece33","choices":[{"index":0,"finish_reason":"","delta":{"role":"","content":"e"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709115341,"object":"chat.completion.chunk","id":"d8073704-b273-4340-a57b-6280a7fece33","choices":[{"index":0,"finish_reason":"","delta":{"role":"","content":"l"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709115341,"object":"chat.completion.chunk","id":"d8073704-b273-4340-a57b-6280a7fece33","choices":[{"index":0,"finish_reason":"","delta":{"role":"","content":"l"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709115341,"object":"chat.completion.chunk","id":"d8073704-b273-4340-a57b-6280a7fece33","choices":[{"index":0,"finish_reason":"","delta":{"role":"","content":"o"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

...

As you can see, the response ("Hello ...") is generated in the "content" field, but the plugin does not read it.
There is no response in Mattermost, and there is also no error in the Mattermost server logs (System Console).
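
For what it's worth, the text is easy to pull out of that stream with standard tools, which suggests the stream framing itself is fine. A rough sketch (assumes GNU sed for -u, plus jq; {LOCALAI} as above):

# strip the "data: " SSE prefix, drop the [DONE] sentinel, print each delta.content
curl -sN http://{LOCALAI}/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{"messages": [{"role": "user", "content": "hello"}], "temperature": 0.9, "stream": true}' \
 | sed -un 's/^data: //p' \
 | grep -v --line-buffered '^\[DONE\]' \
 | jq -rj '.choices[0].delta.content // empty'

The choices[0].delta.content field is all a client should need to assemble the reply.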

johnsmzr added the bug label on Feb 28, 2024
@crspeller (Member)

@johnsmzr Could you post the section of the Mattermost config.json related to the AI plugin? Search for mattermost-ai; it should have a "config" section directly underneath it (not the one that just says enabled).
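For reference, something like this should print that section (assuming the default config.json layout, where plugin settings live under PluginSettings.Plugins; the file location varies by install):

jq '.PluginSettings.Plugins["mattermost-ai"]' config.json
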
What do you see in Mattermost? No response at all, or an error response?

@johnsmzr (Author)

johnsmzr commented Feb 29, 2024

@crspeller Thank you for the reply!
I cannot find config.json; do you mean plugin.json?

[screenshot]

> What do you see in Mattermost? No response at all, or an error response?

I see nothing in the Mattermost AI chat. Yes, no response at all.

[screenshot]

@johnsmzr (Author)

Update:

I think this is the config you want:

"Plugins": {
            "mattermost-ai": {
                "config": {
                    "allowPrivateChannels": true,
                    "allowedTeamIds": "",
                    "enableLLMTrace": true,
                    "enableUserRestrictions": false,
                    "imageGeneratorBackend": "gpt",
                    "llmBackend": "gpt",
                    "onlyUsersOnTeam": "",
                    "services": [
                        {
                            "apiKey": "",
                            "defaultModel": "",
                            "id": "8hoeqet22qd",
                            "name": "gpt",
                            "password": "",
                            "serviceName": "openaicompatible",
                            "tokenLimit": 0,
                            "url": "http://host.docker.internal:8080/v1",
                            "username": ""
                        }
                    ],
                    "transcriptBackend": "gpt"
                }
            },

@TheMasterFX

I had the same issue with Ollama. Then I entered a valid default model and it worked.
[screenshot]
I think LocalAI only returns the model you requested but does not load it.
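
One quick check is the OpenAI-compatible model listing, which shows what the backend will accept as a model name (assuming LocalAI exposes it, as OpenAI-compatible servers generally do):

curl http://{LOCALAI}/v1/models

If the default model configured in the plugin does not appear in that list, requests naming it would fail.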

@johnsmzr (Author)

johnsmzr commented Mar 2, 2024

@TheMasterFX
Thanks for the help. I tried it (i.e., entering a valid default model), but it does not work.

The LocalAI backend loaded the model and generated the complete answer.
(The response I posted before is only part of the whole response, because the full one is too long.)

Call the LocalAI API:

curl http://{LOCALAI}/v1/chat/completions -H "Content-Type: application/json" -d '{
     "messages": [{"role": "user", "content": "hello"}],
     "temperature": 0.9,
     "stream": true
   }'

whole response:

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"role":"assistant","content":""}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"H"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"e"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"l"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"l"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"o"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":","}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"h"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"o"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"w"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"m"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"a"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"y"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"I"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"a"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"s"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"s"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"i"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"s"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"t"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"y"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"o"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"u"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"","delta":{"content":"?"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: {"created":1709113748,"object":"chat.completion.chunk","id":"ac63af57-ba76-4c3d-a455-0269249f5e04","choices":[{"index":0,"finish_reason":"stop","delta":{"content":""}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

data: [DONE]

@Bazze

Bazze commented Mar 8, 2024

I’m seeing the exact same issue with LocalAI in my setup. Same versions as mentioned above.

@crspeller (Member)

@johnsmzr I'm not seeing the same behavior when I try it, and I don't see anything wrong with your configuration.
I have merged a few PRs to add more resilience and better errors to this code path. If you would be willing to try master again and tell me what errors you see, that might be helpful.
It would also be useful to know exactly how you are using LocalAI. Through Docker? What model? etc.

@johnsmzr (Author)

johnsmzr commented Mar 14, 2024

@crspeller
Thanks for the reply!

How I use LocalAI:

I built LocalAI as a binary and run it locally on a MacBook Pro M3.

What model?

I updated the mattermost-ai plugin to 0.6.2, which contains major fixes, and tested again.

Now there are two types of errors:

[screenshot]

Part of the LocalAI debug info:

4:25PM DBG Prompt (after templating): The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
Write a short title for the following request. Include only the title and nothing else, no quotations. Request:
how are you?
### Response:

[127.0.0.1]:62612 200 - POST /v1/chat/completions
4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"role":"assistant","content":""}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Model already loaded in memory: ggml-gpt4all-j
4:25PM DBG Model 'ggml-gpt4all-j' already loaded
4:25PM DBG Function return: I am a virtual assistant named "AI Copiplot". I am a copy of human assistants that respond automatically to users' requests on the Mattermost chat server. map[]
4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"I"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"'"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"m"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"d"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"o"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"i"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"n"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"g"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":" "}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"w"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"e"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"l"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":"l"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk: {"created":1710429813,"object":"chat.completion.chunk","id":"86948288-d2b3-427e-8f34-ab3d012126e5","model":"ggml-gpt4all-j","choices":[{"index":0,"finish_reason":"","delta":{"content":","}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

4:25PM DBG Sending chunk failed: connection closed
Error rpc error: code = Canceled desc = context canceled

Error from the Mattermost server log:

{
  "caller": "app/plugin_api.go:976",
  "level": "error",
  "msg": "LLM closed stream with no result",
  "plugin_id": "mattermost-ai",
  "timestamp": "2024-03-14 15:30:11.571 Z"
}

I hope this information is helpful.

@xiangzebo

I encountered a similar issue, but I'm using a third-party relay API.

{
  "caller": "app/plugin_api.go:1000",
  "level": "info",
  "msg": "LLM Call",
  "plugin_id": "mattermost-ai",
  "prompt": "\n--- Conversation ---\n--- User ---\nWrite a short title for the following request. Include only the title and nothing else, no quotations. Request:\nBrainstorm ideas about \n--- Tools ---\n\n--- Context ---\nTime: \nServerName: \nCompanyName: \nPromptParameters:\n",
  "timestamp": "2024-03-29 10:16:37.594 +08:00"
}

@BarakStout

Same. Can't get it to work with LocalAI.

Call:
[screenshot]

Error:
[screenshot]

@spamverdacht

Same issue with Ollama as a backend.

2024-09-14 01:57:07 {"timestamp":"2024-09-13 23:57:07.812 Z","level":"info","msg":"LLM Call","caller":"app/plugin_api.go:973","plugin_id":"mattermost-ai","prompt":"\n--- Conversation ---\n--- System ---\nYou are a helpful assistant called "Copilot" that responds on a Mattermost chat server called Mattermost owned by .\n\nCurrent time and date in the user's location is Sat, 14 Sep 2024 01:57:07 CEST\n\nThe following is the personal information of the user. This information is given with every request to you. You can use this information to taylor the request to the specific user however most of the time it will not be relavent. Only acknowledge the information when the request is directly related to the information provided. Never repeat it as written.\nThe user making the request username is 'admin'.\n--- User ---\nhallo\n--- Tools ---\n\n--- Context ---\nTime: Sat, 14 Sep 2024 01:57:07 CEST\nServerName: Mattermost\nCompanyName: \nRequestingUser: admin\nChannel: 47b77hm4i7yijcs6asfk4watge__r4msihq8ypf3jj34nnoffx9jxh\nPost: hr9jjay8ppncmgh1c89uh6rmkw\nPromptParameters:\n"}

2024-09-14 01:57:07 {"timestamp":"2024-09-13 23:57:07.815 Z","level":"error","msg":"Streaming result to post failed partway","caller":"app/plugin_api.go:976","plugin_id":"mattermost-ai","error":"error, status code: 404, message: json: cannot unmarshal number into Go value of type openai.ErrorResponse"}

@novo-github

I have the same issue here, even with the 1.0.0 version of the mattermost-ai plugin. I was able to resolve the "Sorry! An error occurred while accessing the LLM. See server logs for details." response by changing the API Endpoint from http://<ip>:11434 to http://<ip>:11434/v1 (source: https://academy.mattermost.com/courses/2638266/lectures/56972617). Now the request reaches my Ollama server, but it responds with "{"name": "LookupMattermostUser", "parameters": {"Username": "admin"}}" or sometimes "No related function found in your request that matches any of the provided functions." I will have a deeper look into the prompt and get back.
@azigler (Collaborator)

azigler commented Sep 18, 2024

Hi @novo-github, thanks for sharing your error. It looks like the request is reaching your model in Ollama but the model does not support tool usage or function calling. As a result, it's getting confused by the prompt, which uses some functions to fetch data about the user when the query is made.

At the bottom of the model's configuration panel, try setting Disable Tools to true and see if that allows the model to respond. Alternatively, you may want to use a local model that is tuned to support function calling.

[screenshot]

@novo-github

@azigler "Disable tools to true" did the trick! Works without a hitch with Ollama deployed locally.. Thank you!
Just to have it in one place

  1. Use the http://<ip>:11434/v1 format; don't forget to add /v1 at the end
  2. Leave the API Key empty
  3. Set Disable Tools to True
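
A quick way to confirm the endpoint works before wiring it into the plugin (the model name here is only an example; use one you have pulled):

curl http://<ip>:11434/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}]}'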

I see in the debug logs that when Disable Tools is set to False, the system prompt for the query is a little flawed. I don't have the logs with me now, but when I get them, I can update this comment.

@azigler (Collaborator)

azigler commented Sep 20, 2024

That's right -- thanks @novo-github! I opened a PR to make sure these instructions end up in the docs: mattermost/docs#7413
