Streaming responses from OpenAI and GPT4All #221
Conversation
# Conflicts:
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/openai/OpenAIEmbeddings.kt
# Conflicts:
#	core/src/commonMain/kotlin/com/xebia/functional/xef/auto/CoreAIScope.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/models/functions/CFunction.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/openai/models.kt
#	kotlin/src/commonMain/kotlin/com/xebia/functional/xef/auto/DeserializerLLMAgent.kt
#	kotlin/src/commonMain/kotlin/com/xebia/functional/xef/auto/serialization/functions/FunctionSchema.kt
#	scala/src/main/scala/com/xebia/functional/xef/scala/auto/package.scala
… and java depends on openai module for defaults. xef core does not depend on open ai
# Conflicts:
#	core/src/commonMain/kotlin/com/xebia/functional/xef/auto/AI.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/auto/AIRuntime.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/auto/AiDsl.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/auto/CoreAIScope.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/models/chat/Message.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/models/chat/Role.kt
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/models/text/CompletionRequest.kt
#	examples/kotlin/src/main/kotlin/com/xebia/functional/xef/auto/CustomRuntime.kt
#	java/src/main/java/com/xebia/functional/xef/java/auto/AIScope.java
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/DeserializerLLMAgent.kt
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/ImageGenerationAgent.kt
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/MockAIClient.kt
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/OpenAIClient.kt
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/OpenAIEmbeddings.kt
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/OpenAIRuntime.kt
#	scala/src/main/scala/com/xebia/functional/xef/scala/auto/package.scala
… local models. Local models can be used in the AI DSL and interleaved with any model.
… block and manual component construction
# Conflicts:
#	core/src/commonMain/kotlin/com/xebia/functional/xef/llm/Chat.kt
#	examples/kotlin/src/main/kotlin/com/xebia/functional/xef/auto/gpt4all/Chat.kt
#	gpt4all-kotlin/src/jvmMain/kotlin/com/xebia/functional/gpt4all/GPT4All.kt
#	openai/src/commonMain/kotlin/com/xebia/functional/xef/auto/llm/openai/OpenAIClient.kt
@xebia-functional/team-ai
Force-pushed from a7de43b to 6c967c0
@@ -28,6 +29,9 @@ class MockOpenAIClient(
  private val chatCompletion: (ChatCompletionRequest) -> ChatCompletionResponse = {
Should we maybe split these methods into a separate interface? I see more and more that clients implement this interface only partially.
They've already been split into their own interfaces in main: `Chat`, `ChatWithFunctions`, etc. The remaining place that implements all of them like this is the `MockClient`. The `AIClient` interface in main is already unused and should be removed. I'll push a commit to this PR to remove it.
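As a rough illustration of the split described here, one capability per interface so a client only implements what it actually supports. The signatures and placeholder types below are assumptions for the sketch, not xef's actual definitions:

```kotlin
import kotlinx.coroutines.flow.Flow

// Placeholder types so the sketch is self-contained; the real ones
// live in xef's llm model packages.
data class ChatCompletionRequest(val prompt: String)
data class ChatCompletionResponse(val content: String)
data class ChatCompletionChunk(val delta: String)

// Hypothetical shape of the split: each capability in its own interface.
interface Chat {
    suspend fun createChatCompletion(request: ChatCompletionRequest): ChatCompletionResponse
    fun createChatCompletions(request: ChatCompletionRequest): Flow<ChatCompletionChunk>
}

interface ChatWithFunctions : Chat {
    suspend fun createChatCompletionWithFunctions(request: ChatCompletionRequest): ChatCompletionResponse
}

// A mock that only supports the blocking path can refuse streaming
// explicitly, mirroring what this PR does in MockOpenAIClient.
class MockClient : Chat {
    override suspend fun createChatCompletion(request: ChatCompletionRequest) =
        ChatCompletionResponse("mocked")

    override fun createChatCompletions(request: ChatCompletionRequest): Flow<ChatCompletionChunk> =
        throw NotImplementedError("streaming is not supported by the mock")
}
```

With the capabilities separated, a partial implementation no longer has to stub methods it cannot honor; it simply doesn't implement that interface.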
These changes introduce the capability of streaming chat responses. They add the following:

- A new function, `createChatCompletions`, has been added to the `Chat` interface, which allows creating chat completions based on a `ChatCompletionRequest`. This function returns a `Flow<ChatCompletionChunk>` representing the generated chat completions.
- The `Chat` interface now includes two overloaded versions of the `promptStreaming` function. These functions enable streaming by returning a `Flow<String>` that emits the generated chat responses as they become available.
- `createChatCompletions` in `GPT4All` utilizes a `Flow` and channels to enable streaming of chat completions.
- `MockOpenAIClient` has been updated to throw a `NotImplementedError` for `chatCompletions`, since it is not implemented in the mock.
- The `OpenAIClient` implementation has been updated to utilize the OpenAI API's chat completion functionality and transform the response into the corresponding domain models (`ChatCompletionChunk`, `ChatChunk`, `ChatDelta`, etc.).

Example
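A minimal, hypothetical sketch of consuming the streaming API this PR describes. The fake `promptStreaming` below stands in for the real `Chat` member (whose receiver and parameters are not shown in this conversation), so the snippet runs standalone:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

// Simulated promptStreaming: a Flow<String> that emits response chunks
// as they become available, like the PR's Chat.promptStreaming.
fun promptStreaming(question: String): Flow<String> = flow {
    for (chunk in listOf("Once ", "upon ", "a ", "time.")) {
        emit(chunk) // each chunk reaches the collector immediately
    }
}

fun main() = runBlocking {
    // Chunks are printed as soon as the Flow emits them,
    // instead of waiting for the whole completion.
    promptStreaming("Tell me a story").collect { print(it) }
    println()
}
```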
These changes enable developers to perform streaming chat completions and receive responses incrementally, enhancing the real-time interactive chat experience.
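The description says the `GPT4All` streaming uses a `Flow` and channels. A common way to get that shape is `channelFlow`, which bridges a callback-style token producer into a cold `Flow`; the sketch below illustrates the pattern with a stand-in generator, not the actual GPT4All binding API:

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.channelFlow

// Stand-in for a callback-based local-model binding that pushes
// tokens as it generates them (the real binding API differs).
fun generateTokens(prompt: String, onToken: (String) -> Unit) {
    for (token in listOf("Hello", ", ", "world")) onToken(token)
}

// Bridge the callback into a Flow through a channel, so collectors
// receive tokens as the model produces them.
fun streamTokens(prompt: String): Flow<String> = channelFlow {
    generateTokens(prompt) { token ->
        trySend(token) // push each token into the channel
    }
}
```

The channel decouples the producer (the model's callback) from the consumer (the `Flow` collector), which is what makes incremental delivery possible.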