
Created Ollama Langchain agent #31

Merged: 4 commits merged into main from gabriel/ollama-agent on Mar 12, 2024

Conversation

gabrielfior (Contributor) commented on Mar 12, 2024

  • Added Ollama agent based on Langchain integration
  • Used Mistral since it supports function calling
  • Tested manually using a Jupyter notebook (results were not great when asking prediction-market-style questions; the model hallucinated)

Summary by CodeRabbit

  • New Features
    • Enhanced the language model agent with optional external model integration.
    • Introduced a new agent variant for local usage with Ollama models, offering specialized configurations.
    • Added a utility function to check the status of the Ollama service.

coderabbitai bot (Contributor) commented on Mar 12, 2024

Walkthrough

The recent update enhances the flexibility of language model initialization in the LangChainAgent class by introducing an optional llm parameter. This parameter allows for customizing the language model used by the agent. Additionally, a new class called OllamaLangChainAgent has been created to support the initialization of an Ollama model with tailored configurations for local use.

Changes

File(s) and change summary:
  • .../langchain_agent.py: Added optional llm parameter to the __init__ method for customizable language model initialization.
  • .../ollama_langchain_agent.py: Introduced the OllamaLangChainAgent class to initialize an Ollama model with specific configurations.
  • .../ollama_utils.py: Added the is_ollama_running function to check the status of an Ollama service via a GET request.
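
Taken together, the three changes could be wired up roughly as follows (a minimal usage sketch based on the file paths listed in this PR and the constructors shown in the review below; the fallback logic is illustrative and not part of the PR):

from prediction_market_agent.agents.langchain_agent import LangChainAgent
from prediction_market_agent.agents.ollama_langchain_agent import OllamaLangChainAgent
from prediction_market_agent.tools.ollama_utils import is_ollama_running

# Prefer the locally running Ollama-backed agent when the service is reachable,
# otherwise fall back to the default OpenAI-backed LangChainAgent.
agent = OllamaLangChainAgent() if is_ollama_running() else LangChainAgent()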

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

gabrielfior marked this pull request as draft on March 12, 2024, 12:32
coderabbitai bot (Contributor) left a comment

Review Status

Actionable comments generated: 2

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 79e4887 and 7e1d599.
Files selected for processing (2)
  • prediction_market_agent/agents/langchain_agent.py (1 hunks)
  • prediction_market_agent/agents/ollama_langchain_agent.py (1 hunks)

Comment on lines 6 to 10
class OllamaLangChainAgent(LangChainAgent):
    def __init__(self) -> None:
        # Make sure Ollama is running locally
        llm = Ollama(model='mistral', base_url='http://localhost:11434')  # Mistral since it supports function calling
        super().__init__(llm=llm)

The OllamaLangChainAgent class is correctly inheriting from LangChainAgent and is initialized with an Ollama instance. However, there are a few areas that could be improved for better robustness and flexibility:

  1. Error Handling: There's no error handling for the case where the Ollama service is not running locally. Consider adding a try-except block around the Ollama initialization to catch connection errors and provide a meaningful error message.
  2. Configuration Flexibility: The model and base URL for the Ollama instance are hardcoded. It might be beneficial to allow these to be passed as parameters to the __init__ method, providing more flexibility for different environments or use cases.
  3. Performance Considerations: Ensure that the local Ollama instance is adequately optimized for the expected workload. Depending on the deployment environment, it might be necessary to adjust the configuration or scale the service to handle the load efficiently.
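
For illustration, points 1 and 2 could be addressed along these lines (a sketch only, assuming the Ollama wrapper from langchain_community and the is_ollama_running helper added in this PR; the parameter defaults and the RuntimeError are choices made here, not taken from the PR):

from langchain_community.llms import Ollama

from prediction_market_agent.agents.langchain_agent import LangChainAgent
from prediction_market_agent.tools.ollama_utils import is_ollama_running


class OllamaLangChainAgent(LangChainAgent):
    def __init__(
        self,
        model: str = "mistral",
        base_url: str = "http://localhost:11434",
    ) -> None:
        # Point 1: fail fast with a clear message if the local Ollama service is unreachable.
        if not is_ollama_running(base_url):
            raise RuntimeError(
                f"Ollama does not appear to be running at {base_url}; start it locally first."
            )
        # Point 2: model and base URL are constructor parameters instead of hardcoded values.
        llm = Ollama(model=model, base_url=base_url)  # Mistral remains the default since it supports function calling
        super().__init__(llm=llm)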

Comment on lines 10 to 12
 def __init__(self, llm=None) -> None:
     keys = utils.APIKeys()
-    llm = OpenAI(openai_api_key=keys.openai_api_key)
+    llm = OpenAI(openai_api_key=keys.openai_api_key) if not llm else llm

The addition of an optional llm parameter to the LangChainAgent class is a welcome change for flexibility. However, there are a few areas that could be improved:

  1. Default Behavior: The current implementation overwrites the llm parameter with a new OpenAI instance if it's None. This behavior is not clear from the method signature. Consider explicitly checking if llm is None before initializing a new OpenAI instance to make the logic more transparent.
  2. Error Handling: There's no explicit error handling for issues that might arise during the initialization of the OpenAI instance, such as invalid API keys. Adding error handling and logging could improve the robustness of the agent initialization.
  3. Documentation: The method could benefit from documentation comments explaining the purpose of the llm parameter and the expected type. This would help future developers understand the intended use and constraints of the parameter.
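
For illustration, points 1–3 could look roughly like this (a sketch only; the OpenAI import path and the utils module are assumed to match the existing code, the error handling shown is a suggestion, and the rest of __init__ is omitted):

from langchain_community.llms import OpenAI  # import path assumed to match the existing module

from prediction_market_agent import utils  # assumed location of APIKeys, as used in the diff


class LangChainAgent:
    def __init__(self, llm=None) -> None:
        """Initialize the agent.

        Args:
            llm: Optional LangChain LLM to use. If None, a default OpenAI LLM is
                created from the configured API key.
        """
        if llm is None:  # explicit None check, so a falsy-but-valid llm is never silently replaced
            keys = utils.APIKeys()
            try:
                llm = OpenAI(openai_api_key=keys.openai_api_key)
            except Exception as e:  # e.g. a missing or malformed API key
                raise RuntimeError("Failed to initialize the default OpenAI LLM") from e
        # ... the remainder of the existing __init__ (tool and agent setup using llm) is unchanged ...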

gabrielfior marked this pull request as ready for review on March 12, 2024, 12:59
coderabbitai bot (Contributor) left a comment

Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 7e1d599 and 8f6f904.
Files selected for processing (3)
  • prediction_market_agent/agents/langchain_agent.py (1 hunks)
  • prediction_market_agent/agents/ollama_langchain_agent.py (1 hunks)
  • prediction_market_agent/tools/ollama_utils.py (1 hunks)
Files skipped from review as they are similar to previous changes (2)
  • prediction_market_agent/agents/langchain_agent.py
  • prediction_market_agent/agents/ollama_langchain_agent.py

Comment on lines +4 to +6
def is_ollama_running(base_url: str = "http://localhost:11434") -> bool:
    r = requests.get(f"{base_url}/api/tags")
    return r.status_code == 200

The is_ollama_running function correctly checks if the Ollama service is running by making a GET request to the /api/tags endpoint. However, there are a few areas that could be improved:

  1. Error Handling: Currently, the function does not handle exceptions that might occur during the request, such as network errors or timeouts. It's recommended to wrap the request in a try-except block to catch exceptions like requests.exceptions.RequestException.

  2. Performance: Making a synchronous HTTP request could block the execution if the Ollama service is slow to respond or if there are network issues. Consider using asynchronous requests or setting a timeout for the request to improve responsiveness.

  3. Security: While not directly related to the current implementation, ensure that any communication with the Ollama service, especially in production environments, is secured (e.g., using HTTPS).

Suggested improvements:

import requests

def is_ollama_running(base_url: str = "http://localhost:11434") -> bool:
    try:
        r = requests.get(f"{base_url}/api/tags", timeout=5)  # Set a reasonable timeout
        return r.status_code == 200
    except requests.exceptions.RequestException:
        return False

evangriffiths (Contributor) left a comment

First PR 🎉🎉🎉. Looks good to me 😄

gabrielfior merged commit a337485 into main on Mar 12, 2024
6 checks passed
gabrielfior deleted the gabriel/ollama-agent branch on March 12, 2024, 14:26