Replies: 8 comments 1 reply
-
"LLM Studio" support already OpenAI format for communications, i hope that change should be not much complicated... |
-
Thanks for your suggestion. You can use the custom engine to leverage LLMstudio for translation purposes:

{
    "name": "LLMstudio",
    "languages": {
        "source": {
            "German": "German",
            "Japanese": "Japanese"
        },
        "target": {
            "English": "English"
        }
    },
    "request": {
        "url": "http://localhost:8000/api/engine/chat/{provider}",
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "data": {
            "api_key": "{api_key}",
            "model": "{model}",
            "chat_input": "Translate the content from <slang> to <tlang>: <text>"
        }
    },
    "response": "response['chat_output']"
}

Before using this code snippet, please replace the {provider}, {api_key}, and {model} placeholders with your actual values.
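A minimal way to sanity-check the endpoint before wiring it into the plugin is a short Python script. This is only a sketch: it assumes an LLMstudio server listening on localhost:8000 that answers with a JSON body containing a chat_output field (as the response expression above implies), and the provider, api_key, and model values shown are hypothetical placeholders.

import requests

# Hypothetical values; substitute whatever provider/model your
# LLMstudio server is configured for.
provider = "openai"
payload = {
    "api_key": "your-api-key",
    "model": "gpt-3.5-turbo",
    "chat_input": "Translate the content from German to English: Guten Morgen",
}

resp = requests.post(
    f"http://localhost:8000/api/engine/chat/{provider}",
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# The custom-engine config reads the translation from this same field.
print(resp.json()["chat_output"])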
-
Hi,

{
    "name": "LLMstudio",
    "languages": {
        "source": {
            "English": "English"
        },
        "target": {
            "Czech": "Czech"
        }
    },
    "request": {
        "url": "http://localhost:1234/v1/chat/completions",
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "data": {
            "messages": {"role": "translate", "content": "Translate the content from English to Czech: <text>"}
        }
    },
    "response": "response['chat_output']"
}

And the response from the server:
-
Could you provide the API documentation you're referring to? I can check the code snippet you are using against it.
-
Hi,

Load a model, start the server, and run this example in your terminal. Choose between streaming and non-streaming mode by setting the "stream" field.

curl http://localhost:1234/v1/chat/completions
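The curl command in that quote is cut off here. As a sketch of the full request it describes, here is an equivalent call in Python, assuming the standard OpenAI-compatible payload fields (messages, temperature, max_tokens, stream); the sample sentence is made up.

import requests

payload = {
    "messages": [
        {"role": "system", "content": "Translate the content from English to Czech."},
        {"role": "user", "content": "Good morning"},  # made-up sample text
    ],
    "temperature": 0.7,
    "max_tokens": -1,
    "stream": False,  # non-streaming: the server returns a single JSON body
}

resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
# OpenAI-compatible servers return the text under choices[0].message.content.
print(resp.json()["choices"][0]["message"]["content"])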
-
Thank you very much. Here are the logs from LM Studio:

[2024-03-09 13:15:50.796] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
-
And this version passes the translation test call in Ebook Translator:

{
    "name": "LLMstudio",
    "languages": {
        "source": {
            "English": "English"
        },
        "target": {
            "Czech": "Czech"
        }
    },
    "request": {
        "url": "http://localhost:1234/v1/chat/completions",
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "data": {
            "messages": [
                {"role": "system", "content": "Translate the content from English to Czech."},
                {"role": "user", "content": "<text>"}
            ],
            "temperature": 0.7,
            "max_tokens": -1,
            "stream": false
        }
    },
    "response": "response['choices'][0]['message']['content']"
}
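As a quick sanity check that this response expression matches what an OpenAI-compatible server actually sends back, here is a tiny Python sketch with a made-up response body (the Czech sample string is hypothetical):

# Typical shape of an OpenAI-compatible completion body. The earlier
# attempt failed partly because it read response['chat_output'], which
# is not part of this schema.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Dobré ráno"}}  # made-up sample
    ]
}

# Exactly the path used in the config's "response" field:
print(response['choices'][0]['message']['content'])  # -> Dobré ráno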
-
Yeah, the results are much better, but...
-
Hi,
Have you thought about supporting LM Studio as a local translation server? There are specialized LLMs available there. There may be other LLM frameworks as well, but LM Studio looks like the easiest: a local server is already included, so it would just be a matter of extending the API support. I tried to adapt the current configuration options, but couldn't get it to return the right answers.
The performance of the translation depends on the hardware, but on the other hand, it only costs electricity and time.