modify langgraph code and notes
jbcodeforce committed Sep 22, 2024
1 parent fff322c commit a7ed701
Showing 16 changed files with 510 additions and 224 deletions.
2 changes: 2 additions & 0 deletions docs/coding/haystack.md
@@ -0,0 +1,2 @@
# Haystack AI Framework

34 changes: 21 additions & 13 deletions docs/coding/langgraph.md
@@ -1,7 +1,7 @@
# LangGraph

!!!- info "Updates"
Created 04/2024 - Update 08/27/2024
Created 04/2024 - Update 09/21/2024

[LangGraph](https://python.langchain.com/docs/langgraph) is a library for building stateful, **multi-actor** LLM applications, with support for cycles in the application flow. A LangGraph graph is therefore not restricted to a DAG.

@@ -38,10 +38,13 @@ runnable = graph.compile()

1. `chatbot_func` is a function that calls an LLM. `add_node()` takes a **function or runnable** whose input is the entire current state (see the sketch after the annotations):

See [FirstGraphOnlyLLM.py](https://github.com/jbcodeforce/ML-studies/blob/master/llm-langchain/langgraph/FirstGraphOnlyLLM.py)

```python
def call_tool(state): # (1)
messages = state["messages"]
last_message = messages[-1]
#...
```

1. The State of the graph, in this case, includes a list of messages
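
A minimal sketch of this node contract, assuming `llm` is an already-configured LangChain chat model and `graph` is the `StateGraph` built above (names are illustrative):

```python
# Illustrative node: receives the whole state, returns a partial update.
# add_messages in the State annotation appends the response to the list.
def chatbot_func(state):
    response = llm.invoke(state["messages"])  # llm: any LangChain chat model
    return {"messages": [response]}

graph.add_node("chatbot", chatbot_func)
```
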
@@ -71,14 +74,16 @@ Graphs help implement Agents, as AgentExecutor is a deprecated API. They most
app = workflow.compile(checkpointer=checkpointer)
```

1. invoke the graph as part of an API, an integrated ChatBot, ...
1. Invoke the graph as part of an API or an integrated chatbot, passing a dict whose keys match the fields of the State...

Graphs such as StateGraph can naturally be composed. Creating subgraphs lets developers build things like multi-agent teams, where each team tracks its own separate state.
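
A hedged composition sketch, reusing the `State`, `StateGraph`, `START`, and `END` names from the snippets above; the team and node names are hypothetical:

```python
# A compiled subgraph is itself a runnable, so it can be mounted
# as a single node of a parent graph (shared State schema assumed).
def researcher(state: State):
    return {"messages": [("assistant", "research notes...")]}

team_builder = StateGraph(State)
team_builder.add_node("researcher", researcher)
team_builder.add_edge(START, "researcher")
team_builder.add_edge("researcher", END)

parent_builder = StateGraph(State)
parent_builder.add_node("research_team", team_builder.compile())
parent_builder.add_edge(START, "research_team")
parent_builder.add_edge("research_team", END)
parent_app = parent_builder.compile()
```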

LangGraph comes with built-in persistence, allowing developers to save the state of the graph at a given point and resume from there.
LangGraph comes with built-in persistence, allowing developers to save the state of the graph at a given point and resume from there: see [MemorySaver](https://langchain-ai.github.io/langgraph/reference/checkpoints/?h=sqlite+saver#memorysaver), [PostgresSaver](https://langchain-ai.github.io/langgraph/reference/checkpoints/?h=sqlite+saver#postgressaver), and [SqliteSaver](https://langchain-ai.github.io/langgraph/reference/checkpoints/?h=sqlite+saver#sqlitesaver).

```python
memory = SqliteSaver.from_conn_string(":memory:")
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])
```
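
A usage sketch for the checkpointer, assuming the `app` compiled above: state is keyed by `thread_id`, so reusing the same id resumes the saved conversation, and `get_state` exposes the latest checkpoint.

```python
# Same thread_id -> the graph resumes from the saved checkpoint.
config = {"configurable": {"thread_id": "session-1"}}
app.invoke({"messages": [("user", "hello")]}, config=config)

# Inspect the checkpointed state for this thread.
snapshot = app.get_state(config)
print(snapshot.values["messages"])
```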

@@ -97,8 +102,8 @@ See [other checkpointer ways to persist state](https://langchain-ai.github.io/la
memory = AsyncSqliteSaver.from_conn_string("checkpoints.sqlite")
```
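
With an async checkpointer, the graph's async entry points apply; a sketch assuming `app` was compiled with the saver above:

```python
import asyncio

async def main():
    # ainvoke is the async counterpart of invoke.
    result = await app.ainvoke(
        {"messages": [("user", "hi")]},
        config={"configurable": {"thread_id": "1"}},
    )
    print(result["messages"][-1].content)

asyncio.run(main())
```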

* See [first basic program](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/FirstGraph.py) to call Tavily tool for searching recent information about the weather in San Francisco using OpenAI LLM. (it is based on the [tutorial](https://langchain-ai.github.io/langgraph/#example)). It does not use any prompt, and the call_method function invokes OpenAI model directly.
* See [A hello world graph without any LLM](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/graph_without_llm.py) as an interesting base code to do stateful graph.
* See the [first basic program](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/FirstGraphOnlyLLM.py) or the [one with a tool](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/FirstGraphWithTool.py), which calls the Tavily tool to search for recent information about the weather in San Francisco (based on the [tutorial](https://langchain-ai.github.io/langgraph/#example)). It does not use any prompt template, and the chatbot function invokes the Anthropic model directly.



#### Invocation and chat history
@@ -108,12 +113,16 @@ LangGraph's `MessagesState` keeps an array of messages. So the input is a dic
```python
app.invoke(
{"messages": [HumanMessage(content="what is the weather in sf")]},
config={"configurable": {"thread_id": 42}}, debug=True
config={"configurable": {"thread_id": "42"}}, debug=True
)
```
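
Because state is keyed by `thread_id`, a follow-up call on the same thread sees the earlier exchange (continuing the example above):

```python
# The prior messages for thread "42" are reloaded from the checkpointer,
# so the model can resolve the follow-up question in context.
app.invoke(
    {"messages": [HumanMessage(content="what about in la?")]},
    config={"configurable": {"thread_id": "42"}}, debug=True
)
```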

Some code using chat_history:

* A simple version with tool and memory using prebuilt LangGraph constructs: [FirstGraphWithToolAndMemory.py](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/FirstGraphWithToolAndMemory.py)
* [Closed question: a node creates a closed question and an LLM then processes the outcome](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/close_question.py).

![](./diagrams/close_q.drawio.png)
@@ -155,11 +164,11 @@ Adding a "chat memory" to the graph with LangGraph's checkpointer to retain the

### Tool Calling

Graph may include `ToolNode` to call function or tool which can be called via conditions on edge. The following declaration uses the predefined langchain tool definition of TavilySearch. The `TavilySearchResults` has function name, argument schema and tool definition so the prompt sent to LLM has information about the tool like: "name": "tavily_search_results_json"
The graph must include a `ToolNode` to call the selected function or tool, reached via conditional edges. The following declaration uses the predefined LangChain tool definition for Tavily search. `TavilySearchResults` carries the function name, argument schema, and tool definition, so the prompt sent to the LLM includes tool information such as: "name": "tavily_search_results_json"

```python
from langchain_community.tools.tavily_search import TavilySearchResults
tools = [TavilySearchResults(max_results=1)]
tools = [TavilySearchResults(max_results=2)]
tool_node = ToolNode(tools)
```
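
A wiring sketch using the prebuilt `tools_condition` router; `workflow` and the "chatbot" node are assumed from the surrounding snippets:

```python
from langgraph.prebuilt import tools_condition

workflow.add_node("tools", tool_node)
# Route to "tools" when the last AI message contains tool calls, else to END.
workflow.add_conditional_edges("chatbot", tools_condition)
workflow.add_edge("tools", "chatbot")  # feed tool results back to the model
```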

@@ -214,11 +223,10 @@ Some interesting patterns from this sample:

The human in the loop can be implemented in different ways:

* Add a confirmation before invoking a tool, using the the interrupt_before the names of the tool.
* Implementing node with close questions

* Add a confirmation before invoking a tool, using `interrupt_before` with the names of the tool nodes (sketched after this list). [See human_in_loop.py](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/human_in_loop.py)
* Implement a human node before which the graph will always stop: [ask_human_graph.py](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/ask_human_graph.py)
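
A sketch of the first approach, assuming a graph with a "tools" node and the `memory` checkpointer from earlier:

```python
# Pause before the tool node so a human can confirm the call.
app = workflow.compile(checkpointer=memory, interrupt_before=["tools"])

config = {"configurable": {"thread_id": "7"}}
app.invoke({"messages": [("user", "search the weather in sf")]}, config=config)

# The run is now interrupted before "tools"; resume only on approval.
if input("Run the tool? (y/n) ") == "y":
    app.invoke(None, config=config)  # None resumes from the interrupt
```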

See [Taipy UI with a langgraph graph](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/chatbot_graph_ui.py)
See [prompt_builder_graph](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/prompt_builder_graph.py), which is also integrated with a Taipy UI in [Taipy UI with a langgraph graph](https://github.com/jbcodeforce/ML-studies/tree/master/llm-langchain/langgraph/chatbot_graph_ui.py)

## Other Code

1 change: 1 addition & 0 deletions haystack/requirements.txt
@@ -0,0 +1 @@
haystack-ai
82 changes: 0 additions & 82 deletions llm-langchain/langgraph/FirstGraph.py

This file was deleted.

36 changes: 36 additions & 0 deletions llm-langchain/langgraph/FirstGraphOnlyLLM.py
@@ -0,0 +1,36 @@
from typing import Annotated
from dotenv import load_dotenv

load_dotenv("../../.env")
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

from langchain_anthropic import ChatAnthropic

class State(TypedDict):
# Messages have the type "list". The `add_messages` function
# in the annotation defines how this state key should be updated
# (in this case, it appends messages to the list, rather than overwriting them)
messages: Annotated[list, add_messages]


def chatbot(state: State):
return {"messages": [llm.invoke(state["messages"])]}

llm = ChatAnthropic(model="claude-3-haiku-20240307")
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()

while True:
user_input = input("User: ")
if user_input.lower() in ["quit", "exit", "q"]:
print("Goodbye!")
break
for event in graph.stream({"messages": ("user", user_input)}):
for value in event.values():
print("Assistant:", value["messages"][-1].content)
123 changes: 123 additions & 0 deletions llm-langchain/langgraph/FirstGraphWithTool.py
@@ -0,0 +1,123 @@
from typing import Annotated, Literal
from dotenv import load_dotenv
import json
load_dotenv("../../.env")
from typing_extensions import TypedDict
from langchain_core.messages import ToolMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage
from langchain_anthropic import ChatAnthropic

from langchain_community.tools.tavily_search import TavilySearchResults

class State(TypedDict):
# Messages have the type "list". The `add_messages` function
# in the annotation defines how this state key should be updated
# (in this case, it appends messages to the list, rather than overwriting them)
messages: Annotated[list, add_messages]

class BasicToolNode:
"""A node that runs the tools requested in the last AIMessage."""

def __init__(self, tools: list) -> None:
self.tools_by_name = {tool.name: tool for tool in tools}

def __call__(self, inputs: dict):
"""
{'messages': [HumanMessage(content='What is athena decision system?', id='e11d2bb5bf'),
AIMessage(content=[{'id': 'toolXKcez',
'input': {'query': 'athena decision system'},
'name': 'tavily_search_results_json',
'type': 'tool_use'}
],
response_metadata=...]}
"""
if messages := inputs.get("messages", []):
message = messages[-1]
else:
raise ValueError("No message found in input")
outputs = []
for tool_call in message.tool_calls:
tool_result = self.tools_by_name[tool_call["name"]].invoke(
tool_call["args"]
)
"""
[{'url': 'https://athenadecisionsystems.github.io/athena-owl-core/',
'content': 'Athena Decision Systems is here t.......'},
{'url': 'https://athenadecisions.com/', 'content': 'At Athena Decision Systems, we want to ...'}
]
"""
outputs.append(
ToolMessage(
content=json.dumps(tool_result),
name=tool_call["name"],
tool_call_id=tool_call["id"],
)
)
return {"messages": outputs}


def chatbot(state: State):
msg = state["messages"] # [HumanMessage(content='What is athena decision system?', id='802864c...15aa1')]
rep = llm_with_tools.invoke(msg) # AIMessage(content=[{'id': 'toolu_01KhuK3Hog', 'input': {'query': 'athena decision system'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], response_metadata={'id': 'msg_01DX7uiRdkKJYdpCyNT2GmR8', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 374, 'output_tokens': 61}}, id='run-5f41e11b-a68b-4fe8-9256-7b8ba93be5ca-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'athena decision system'}, 'id': 'toolu_01KhujdUSZjwvFh1AnwK3Hog', 'type': 'tool_call'}], usage_metadata={'input_tokens': 374, 'output_tokens': 61, 'total_tokens': 435})
return {"messages": [rep]}

def route_tools(
state: State,
) -> Literal["tools", "__end__"]:
"""
Use in the conditional_edge to route to the ToolNode if the last message
has tool calls. Otherwise, route to the end.
"""
if isinstance(state, list):
ai_message = state[-1]
elif messages := state.get("messages", []):
ai_message = messages[-1]
else:
raise ValueError(f"No messages found in input state to tool_edge: {state}")
if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
return "tools"
return "__end__"



tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
tool_node = BasicToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("chatbot", END)
graph_builder.add_conditional_edges(
"chatbot",
route_tools,
# The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
# It defaults to the identity function, but if you
# want to use a node named something else apart from "tools",
# You can update the value of the dictionary to something else
# e.g., "tools": "my_tools"
{"tools": "tools", "__end__": "__end__"},
)

graph = graph_builder.compile()

def chat_with_human():
while True:
user_input = input("User: ")
if user_input.lower() in ["quit", "exit", "q"]:
print("Goodbye!")
break
for event in graph.stream({"messages": ("user", user_input)}):
for value in event.values():
if isinstance(value["messages"][-1], BaseMessage):
print("Assistant:", value["messages"][-1].content)

if __name__ == "__main__":
    rep = graph.invoke({"messages": ("user", "What is athena decision system?")})
print(rep)