Commit 754c7f7: merge main

BoBer78 committed Sep 19, 2024
2 parents b3d93f8 + bbeb09e
Showing 15 changed files with 88 additions and 253 deletions.
11 changes: 4 additions & 7 deletions .env.example
@@ -5,7 +5,7 @@ NEUROAGENT_GENERATIVE__OPENAI__TOKEN=

# Important but not required
NEUROAGENT_AGENT__MODEL=
NEUROAGENT_AGENT__CHAT=

NEUROAGENT_KNOWLEDGE_GRAPH__USE_TOKEN=
NEUROAGENT_KNOWLEDGE_GRAPH__TOKEN=
NEUROAGENT_KNOWLEDGE_GRAPH__DOWNLOAD_HIERARCHY=
@@ -27,12 +27,9 @@ NEUROAGENT_TOOLS__TRACE__SEARCH_SIZE=

NEUROAGENT_TOOLS__KG_MORPHO__SEARCH_SIZE=

NEUROAGENT_GENERATIVE__LLM_TYPE= # can only be openai for now
NEUROAGENT_GENERATIVE__OPENAI__MODEL=
NEUROAGENT_GENERATIVE__OPENAI__TEMPERATURE=
NEUROAGENT_GENERATIVE__OPENAI__MAX_TOKENS=

NEUROAGENT_COHERE__TOKEN=
NEUROAGENT_OPENAI__MODEL=
NEUROAGENT_OPENAI__TEMPERATURE=
NEUROAGENT_OPENAI__MAX_TOKENS=

NEUROAGENT_LOGGING__LEVEL=
NEUROAGENT_LOGGING__EXTERNAL_PACKAGES=
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Added
- Update readme

### Removed
- Github action to create the docs.

@@ -15,3 +18,4 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Fixed
- Streaming with chat agent.
- Deleted some legacy code.
93 changes: 6 additions & 87 deletions README.md
@@ -1,93 +1,12 @@
# Agents
# Neuroagent

LLM agent made to communicate with different neuroscience-related tools. It lets you converse in a ChatGPT-like fashion to get information about brain regions, morphologies, electric traces, and the scientific literature.


## Getting started
1. [Funding and Acknowledgement](#funding-and-acknowledgement)

To make it easy for you to get started with GitLab, here's a list of recommended next steps.
## Funding and Acknowledgement

Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
The development of this software was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government’s ETH Board of the Swiss Federal Institutes of Technology.

## Add your files

- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:

```
cd existing_repo
git remote add origin https://bbpgitlab.epfl.ch/ml/agents.git
git branch -M main
git push -uf origin main
```

## Integrate with your tools

- [ ] [Set up project integrations](https://bbpgitlab.epfl.ch/ml/agents/-/settings/integrations)

## Collaborate with your team

- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)

## Test and Deploy

Use the built-in continuous integration in GitLab.

- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)

***

# Editing this README

When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.

## Suggestions for a good README

Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.

## Name
Choose a self-explaining name for your project.

## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.

## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.

## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.

## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.

## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.

## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.

## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.

## Contributing
State if you are open to contributions and what your requirements are for accepting them.

For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.

You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.

## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.

## License
For open source projects, say how it is licensed.

## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
Copyright (c) 2024 Blue Brain Project/EPFL
53 changes: 0 additions & 53 deletions src/neuroagent/agents/base_agent.py
@@ -4,61 +4,9 @@
from typing import Any, AsyncIterator

from langchain.chat_models.base import BaseChatModel
from langchain_core.messages import (
AIMessage,
ChatMessage,
FunctionMessage,
HumanMessage,
SystemMessage,
ToolMessage,
)
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
PromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.tools import BaseTool
from pydantic import BaseModel, ConfigDict

BASE_PROMPT = ChatPromptTemplate(
input_variables=["agent_scratchpad", "input"],
input_types={
"chat_history": list[
AIMessage
| HumanMessage
| ChatMessage
| SystemMessage
| FunctionMessage
| ToolMessage
],
"agent_scratchpad": list[
AIMessage
| HumanMessage
| ChatMessage
| SystemMessage
| FunctionMessage
| ToolMessage
],
},
messages=[
SystemMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=[],
template="""You are a helpful assistant helping scientists with neuro-scientific questions.
You must always specify in your answers from which brain regions the information is extracted.
Do no blindly repeat the brain region requested by the user, use the output of the tools instead.""",
)
),
MessagesPlaceholder(variable_name="chat_history", optional=True),
HumanMessagePromptTemplate(
prompt=PromptTemplate(input_variables=["input"], template="{input}")
),
MessagesPlaceholder(variable_name="agent_scratchpad"),
],
)


class AgentStep(BaseModel):
"""Class for agent decision steps."""
@@ -72,7 +20,6 @@ class AgentOutput(BaseModel):

response: str
steps: list[AgentStep]
plan: str | None = None


class BaseAgent(BaseModel, ABC):
31 changes: 4 additions & 27 deletions src/neuroagent/app/config.py
@@ -13,8 +13,7 @@
class SettingsAgent(BaseModel):
"""Agent setting."""

model: str = "simple"
chat: str = "simple"
model: Literal["simple", "multi"] = "simple"

model_config = ConfigDict(frozen=True)

@@ -84,9 +83,9 @@ class SettingsLiterature(BaseModel):
"""Literature search API settings."""

url: str
retriever_k: int = 700
retriever_k: int = 500
use_reranker: bool = True
reranker_k: int = 5
reranker_k: int = 8

model_config = ConfigDict(frozen=True)

@@ -173,23 +172,6 @@ class SettingsOpenAI(BaseModel):
model_config = ConfigDict(frozen=True)


class SettingsGenerative(BaseModel):
"""Generative QA settings."""

llm_type: Literal["fake", "openai"] = "openai"
openai: SettingsOpenAI = SettingsOpenAI()

model_config = ConfigDict(frozen=True)


class SettingsCohere(BaseModel):
"""Settings cohere reranker."""

token: Optional[SecretStr] = None

model_config = ConfigDict(frozen=True)


class SettingsLogging(BaseModel):
"""Metadata settings."""

@@ -219,8 +201,7 @@ class Settings(BaseSettings):
knowledge_graph: SettingsKnowledgeGraph
agent: SettingsAgent = SettingsAgent() # has no required
db: SettingsDB = SettingsDB() # has no required
generative: SettingsGenerative = SettingsGenerative() # has no required
cohere: SettingsCohere = SettingsCohere() # has no required
openai: SettingsOpenAI = SettingsOpenAI() # has no required
logging: SettingsLogging = SettingsLogging() # has no required
keycloak: SettingsKeycloak = SettingsKeycloak() # has no required
misc: SettingsMisc = SettingsMisc() # has no required
@@ -240,10 +221,6 @@ def check_consistency(self) -> "Settings":
model validator is run during instantiation.
"""
# generative
if self.generative.llm_type == "openai":
if self.generative.openai.token is None:
raise ValueError("OpenAI token not provided")
if not self.keycloak.password and not self.keycloak.validate_token:
if not self.knowledge_graph.use_token:
raise ValueError("if no password is provided, please use token auth.")
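The config change above flattens the settings tree: `SettingsGenerative` and `SettingsCohere` are removed and `openai: SettingsOpenAI` now hangs directly off `Settings`, which is why the `.env.example` variables were renamed from `NEUROAGENT_GENERATIVE__OPENAI__*` to `NEUROAGENT_OPENAI__*`. The diff does not show how `Settings` binds to the environment, so the sketch below is a minimal, hypothetical reconstruction: it assumes pydantic-settings with `env_prefix="NEUROAGENT_"` and `env_nested_delimiter="__"`, and the `SettingsOpenAI` field types and defaults are likewise assumptions, since that class body is collapsed in this diff.

```python
# Minimal sketch (not the project's actual config.py) of how the flattened
# settings are expected to map onto the renamed environment variables.
# Assumption: Settings uses env_prefix="NEUROAGENT_" and
# env_nested_delimiter="__"; neither line is visible in this diff.
import os

from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict


class SettingsOpenAI(BaseModel):
    """Subset of the OpenAI settings referenced in the diff (types assumed)."""

    token: SecretStr | None = None
    model: str = "gpt-4o-mini"   # hypothetical default
    temperature: float = 0.0     # hypothetical default
    max_tokens: int | None = None


class Settings(BaseSettings):
    """Top-level settings: `openai` now lives directly on Settings."""

    openai: SettingsOpenAI = SettingsOpenAI()

    model_config = SettingsConfigDict(
        env_prefix="NEUROAGENT_",
        env_nested_delimiter="__",
    )


if __name__ == "__main__":
    # NEUROAGENT_OPENAI__MODEL from .env.example resolves to settings.openai.model
    os.environ["NEUROAGENT_OPENAI__MODEL"] = "gpt-4o"
    settings = Settings()
    print(settings.openai.model)  # -> "gpt-4o"
```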
84 changes: 42 additions & 42 deletions src/neuroagent/app/dependencies.py
@@ -308,12 +308,12 @@ def get_language_model(
settings: Annotated[Settings, Depends(get_settings)],
) -> ChatOpenAI:
"""Get the language model."""
logger.info(f"OpenAI selected. Loading model {settings.generative.openai.model}.")
logger.info(f"OpenAI selected. Loading model {settings.openai.model}.")
return ChatOpenAI(
model_name=settings.generative.openai.model,
temperature=settings.generative.openai.temperature,
openai_api_key=settings.generative.openai.token.get_secret_value(), # type: ignore
max_tokens=settings.generative.openai.max_tokens,
model_name=settings.openai.model,
temperature=settings.openai.temperature,
openai_api_key=settings.openai.token.get_secret_value(), # type: ignore
max_tokens=settings.openai.max_tokens,
seed=78,
streaming=True,
)
@@ -369,43 +369,10 @@ def get_agent(
ElectrophysFeatureTool, Depends(get_electrophys_feature_tool)
],
traces_tool: Annotated[GetTracesTool, Depends(get_traces_tool)],
) -> BaseAgent | BaseMultiAgent:
"""Get the generative question answering service."""
tools = [
literature_tool,
br_resolver_tool,
morpho_tool,
morphology_feature_tool,
kg_morpho_feature_tool,
electrophys_feature_tool,
traces_tool,
]
logger.info("Load simple agent")
return SimpleAgent(llm=llm, tools=tools) # type: ignore


def get_chat_agent(
llm: Annotated[ChatOpenAI, Depends(get_language_model)],
memory: Annotated[BaseCheckpointSaver, Depends(get_agent_memory)],
literature_tool: Annotated[LiteratureSearchTool, Depends(get_literature_tool)],
br_resolver_tool: Annotated[
ResolveBrainRegionTool, Depends(get_brain_region_resolver_tool)
],
morpho_tool: Annotated[GetMorphoTool, Depends(get_morpho_tool)],
morphology_feature_tool: Annotated[
MorphologyFeatureTool, Depends(get_morphology_feature_tool)
],
kg_morpho_feature_tool: Annotated[
KGMorphoFeatureTool, Depends(get_kg_morpho_feature_tool)
],
electrophys_feature_tool: Annotated[
ElectrophysFeatureTool, Depends(get_electrophys_feature_tool)
],
traces_tool: Annotated[GetTracesTool, Depends(get_traces_tool)],
settings: Annotated[Settings, Depends(get_settings)],
) -> BaseAgent:
) -> BaseAgent | BaseMultiAgent:
"""Get the generative question answering service."""
if settings.agent.chat == "multi":
if settings.agent.model == "multi":
logger.info("Load multi-agent chat")
tools_list = [
("literature", [literature_tool]),
@@ -422,7 +389,6 @@ def get_chat_agent(
]
return SupervisorMultiAgent(llm=llm, agents=tools_list) # type: ignore
else:
logger.info("Load simple chat")
tools = [
literature_tool,
br_resolver_tool,
Expand All @@ -432,7 +398,41 @@ def get_chat_agent(
electrophys_feature_tool,
traces_tool,
]
return SimpleChatAgent(llm=llm, tools=tools, memory=memory) # type: ignore
logger.info("Load simple agent")
return SimpleAgent(llm=llm, tools=tools) # type: ignore


def get_chat_agent(
llm: Annotated[ChatOpenAI, Depends(get_language_model)],
memory: Annotated[BaseCheckpointSaver, Depends(get_agent_memory)],
literature_tool: Annotated[LiteratureSearchTool, Depends(get_literature_tool)],
br_resolver_tool: Annotated[
ResolveBrainRegionTool, Depends(get_brain_region_resolver_tool)
],
morpho_tool: Annotated[GetMorphoTool, Depends(get_morpho_tool)],
morphology_feature_tool: Annotated[
MorphologyFeatureTool, Depends(get_morphology_feature_tool)
],
kg_morpho_feature_tool: Annotated[
KGMorphoFeatureTool, Depends(get_kg_morpho_feature_tool)
],
electrophys_feature_tool: Annotated[
ElectrophysFeatureTool, Depends(get_electrophys_feature_tool)
],
traces_tool: Annotated[GetTracesTool, Depends(get_traces_tool)],
) -> BaseAgent:
"""Get the generative question answering service."""
logger.info("Load simple chat")
tools = [
literature_tool,
br_resolver_tool,
morpho_tool,
morphology_feature_tool,
kg_morpho_feature_tool,
electrophys_feature_tool,
traces_tool,
]
return SimpleChatAgent(llm=llm, tools=tools, memory=memory) # type: ignore


async def get_update_kg_hierarchy(
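After this refactor, `get_agent` owns the `settings.agent.model == "multi"` branching that previously lived in `get_chat_agent`, while `get_chat_agent` always returns a memory-backed `SimpleChatAgent`. For orientation, here is a hypothetical sketch of how a FastAPI route might consume that dependency; the route path, request model, and `arun` call are illustrative assumptions, not the project's actual API.

```python
# Hypothetical usage sketch of the get_chat_agent dependency shown in this
# diff. The route path, request schema, and agent entry point are assumed
# for illustration only.
from typing import Annotated

from fastapi import APIRouter, Depends
from pydantic import BaseModel

from neuroagent.agents.base_agent import BaseAgent
from neuroagent.app.dependencies import get_chat_agent

router = APIRouter()


class ChatRequest(BaseModel):
    """Hypothetical request body."""

    query: str


@router.post("/qa/chat")  # assumed path, for illustration
async def chat(
    request: ChatRequest,
    agent: Annotated[BaseAgent, Depends(get_chat_agent)],
):
    # FastAPI resolves the whole dependency chain (settings -> LLM -> tools
    # -> memory) before the agent reaches the endpoint.
    return await agent.arun(request.query)  # assumed async entry point
```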
(Diffs for the remaining 9 changed files did not load and are not shown.)