
[Feature Request] Critic selection mechanism for models that don't support multiple responses at one time #996

Open
1 of 2 tasks
Wendong-Fan opened this issue Sep 28, 2024 · 1 comment · May be fixed by #1153 or #1269
Labels: call for contribution, enhancement (New feature or request), New Feature, P1 (Task with middle level priority)

Comments

Wendong-Fan commented Sep 28, 2024

Required prerequisites

Motivation

Raised by @coolbeevip in https://github.com/orgs/camel-ai/discussions/985

In the Role-Playing critic scenario, the user agent receives multiple response options through the n parameter of the OpenAI API, from which the critic selects one to guide the assistant agent. (If my understanding is incorrect, please feel free to point it out!)

However, this mechanism only works well with the OpenAI API; a brief investigation shows that other models do not support the n parameter. Should we ensure consistency in this behavior when using non-OpenAI models through engineering methods? (For example, by looping n times and concatenating the results.)
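The "looping n times" idea above could be sketched roughly as follows. This is a hypothetical helper, not camel's actual API: `run_backend` stands in for something like `self.model_backend.run(openai_messages)`, and the response is assumed to be a dict with an OpenAI-style `choices` list.

```python
# Hypothetical sketch: emulate the OpenAI `n` parameter for backends that
# return only a single choice per call, by looping and concatenating results.
def sample_n_responses(run_backend, messages, n):
    """Call a single-response backend `n` times and collect all choices.

    `run_backend` is a stand-in for the model backend's run method;
    the exact signature in camel may differ.
    """
    choices = []
    for _ in range(n):
        response = run_backend(messages)
        choices.extend(response["choices"])
    return choices
```

The critic could then select among the collected choices exactly as it does with a native multi-choice OpenAI response, at the cost of `n` sequential API calls.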

Solution

No response

Alternatives

No response

Additional context

No response

@Wendong-Fan Wendong-Fan added the P1 Task with middle level priority label Sep 28, 2024
coolbeevip commented Oct 31, 2024

Sorry for the late reply.

I've pinpointed the issue in chat_agent.py, here:

https://github.com/camel-ai/camel/blob/master/camel/agents/chat_agent.py#L929

When using the OpenAI-compatible API provided by Qwen, setting n=3 on the model_backend results in only one choice in the response.

A simple fix would be to verify that the number of returned choices matches the expected count n. If it doesn't, you can re-run self.model_backend.run(openai_messages) to collect more results.

Since chat_agent is a crucial component, I'm not sure if this is the best solution.
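The verify-and-re-run fix described above might look roughly like this. This is only a sketch, not the actual chat_agent.py code: `run_backend` is a hypothetical stand-in for `self.model_backend.run(openai_messages)`, and the retry cap is an assumption added to avoid looping forever on a misbehaving backend.

```python
# Hypothetical sketch of the suggested fix: after calling the backend,
# check the number of choices and re-run until `n` have been collected.
def run_with_expected_choices(run_backend, messages, n, max_retries=5):
    choices = []
    attempts = 0
    while len(choices) < n and attempts < max_retries:
        response = run_backend(messages)
        choices.extend(response["choices"])
        attempts += 1
    if len(choices) < n:
        raise RuntimeError(f"expected {n} choices, got {len(choices)}")
    # Trim in case the final call pushed us past `n`.
    return choices[:n]
```

For a backend that natively honors n this adds no extra calls, since the first response already contains n choices; for single-choice backends it degrades to the loop-and-concatenate behavior.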
