Adds Action List Routine to the skill library (#299)
And fixes some bugs in the skill library.
payneio authored Jan 10, 2025
1 parent ede3e3f commit e15cdba
Showing 54 changed files with 861 additions and 538 deletions.
2 changes: 2 additions & 0 deletions assistants/prospector-assistant/uv.lock

Some generated files are not rendered by default.

14 changes: 7 additions & 7 deletions assistants/skill-assistant/assistant/skill_assistant.py
@@ -218,31 +218,31 @@ async def get_or_register_assistant(
chat_driver_config=chat_driver_config,
drive_root=assistant_drive_root,
metadata_drive_root=assistant_metadata_drive_root,
skills={
"common": CommonSkillDefinition(
skills=[
CommonSkillDefinition(
name="common",
language_model=language_model,
drive=assistant_drive.subdrive("common"),
chat_driver_config=chat_driver_config,
),
"posix": PosixSkillDefinition(
PosixSkillDefinition(
name="posix",
sandbox_dir=Path(".data") / conversation_context.id,
chat_driver_config=chat_driver_config,
mount_dir="/mnt/data",
chat_driver_config=chat_driver_config,
),
# "form_filler": FormFillerSkill(
# FormFillerSkill(
# name="form_filler",
# chat_driver_config=chat_driver_config,
# language_model=language_model,
# ),
"guided_conversation": GuidedConversationSkillDefinition(
GuidedConversationSkillDefinition(
name="guided_conversation",
language_model=language_model,
drive=assistant_drive.subdrive("guided_conversation"),
chat_driver_config=chat_driver_config,
),
},
],
)

await assistant_registry.register_assistant(assistant, SkillEventMapper(conversation_context))
5 changes: 5 additions & 0 deletions assistants/skill-assistant/uv.lock

Some generated files are not rendered by default.

20 changes: 15 additions & 5 deletions libraries/python/events/events/__init__.py
@@ -1,11 +1,21 @@
from .events import EventProtocol, TEvent, BaseEvent, InformationEvent, ErrorEvent, StatusUpdatedEvent, MessageEvent
from .events import (
BaseEvent,
ErrorEvent,
EventProtocol,
InformationEvent,
MessageEvent,
NoticeEvent,
StatusUpdatedEvent,
TEvent,
)

__all__ = [
"EventProtocol",
"TEvent",
"BaseEvent",
"InformationEvent",
"ErrorEvent",
"StatusUpdatedEvent",
"EventProtocol",
"InformationEvent",
"MessageEvent",
"NoticeEvent",
"StatusUpdatedEvent",
"TEvent",
]
@@ -1,7 +1,7 @@
from dataclasses import dataclass
from typing import Any, Callable, Union

from events import BaseEvent, ErrorEvent, MessageEvent
from events import BaseEvent, ErrorEvent, InformationEvent, MessageEvent
from openai import AsyncAzureOpenAI, AsyncOpenAI
from openai.types.chat import (
ChatCompletionMessageParam,
@@ -153,9 +153,9 @@ async def respond(
command_string = message[1:]
try:
results = await self.command_list.execute_function_string(command_string, string_response=True)
return MessageEvent(message=results)
return InformationEvent(message=results)
except Exception as e:
return ErrorEvent(message=f"Error! {e}", metadata={"error": str(e)})
return InformationEvent(message=f"Error! {e}", metadata={"error": str(e)})

# If not a command, add the message to the history.
if message is not None:
6 changes: 3 additions & 3 deletions libraries/python/openai-client/openai_client/tools.py
@@ -286,12 +286,12 @@ async def execute_function_string(self, function_string: str, string_response: b
try:
function, args, kwargs = self.parse_function_string(function_string)
except ValueError as e:
raise ValueError(f"{e}. Type: `/help` for more information.")
raise ValueError(f"{e} Type: `/help` for more information.")
if not function:
raise ValueError("Function not found in registry. Type: `/help` for more information.")
response = await function.execute(*args, **kwargs)
result = await function.execute(*args, **kwargs)
if string_response:
return to_string(response)
return to_string(result)

def parse_function_string(self, function_string: str) -> tuple[ToolFunction | None, list[Any], dict[str, Any]]:
"""Parse a function call string into a function and its arguments."""
2 changes: 2 additions & 0 deletions libraries/python/skills/notebooks/uv.lock

Some generated files are not rendered by default.

97 changes: 51 additions & 46 deletions libraries/python/skills/skill-library/README.md
@@ -5,7 +5,7 @@ does this through the concept of a "skill".

Think of a skill as a package of assistant capabilities. A skill can contain
"actions" that an assistant can perform and "routines" that are entire
procedures that an assistant can run.
procedures, made up of actions, that an assistant can run.

A demonstration [Posix skill](../skills/posix-skill/README.md) is provided that
makes these more clear. Various actions are provided in the skill that provide
@@ -18,34 +18,12 @@ cook you a meal. The chef would be skilled at actions in the kitchen (like
chopping or mixing or frying) but would also be able to perform full routines
(recipes), allowing them to make particular dishes according to your preferences.

In a way, this whole library was set up to be able to experiment with _routines_
more easily:

- This library hides a lot of the complexity of developing multi-layered
assistants by providing clearer purposeful abstractions and better defining or
disambiguating commonly confused terms. For example, we separate out a lot of
the complexity of interacting with the OpenAI Chat Completion API with the
[chat driver](../../openai-client/openai_client/chat_driver/README.md)
abstraction and we now distinguish between chat commands, chat tool functions,
and routine actions in a clear way, even though they're really all just
functions.
- Routines (formerly referred to as "Recipes") make it clear that we are
developing agents that can automate productive work collaboratively with the
user. We have several ideas here, from simply following a set of steps, to
being able to run Pythonic programs of skill actions, to much more fully managed
routine running with LLM-driven meta-cognitive execution (having the LLM
monitor progress and modify the routines as necessary).

Currently we provide one functional routine runner implementation, the
[InstructionRoutineRunner](./skill_library/routine_runners/instruction_routine_runner.py),
but will be adding several more in the upcoming weeks.

## Combining skills in the assistant

This library provides an [Assistant](./skill_library/assistant.py) class that
allows you to configure the conversational assistant (relying on our [chat
driver](../../chat-driver/README.md) library) and the skills that the
assistant should have.
driver](../../openai-client/openai_client/chat_driver/README.md) library) and
the skills that the assistant should have.

Oftentimes, a truly capable assistant will need to have many skills.
Additionally, some skills are dependent upon other skills. When you register
@@ -67,29 +45,25 @@ assistant allowing it to be exposed as an assistant in the workbench. See our
Assistant](../../../../assistants/skill-assistant/README.md)
package that does exactly this.
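
For a rough sketch of what that registration looks like, the skill-assistant change earlier in this commit builds its skill list like the following (the skill definition import paths and the surrounding configuration objects (`language_model`, `assistant_drive`, `chat_driver_config`, `conversation_context`) are assumed to be set up as in `skill_assistant.py`):

```python
from pathlib import Path

# Mirrors the skills list in the skill_assistant.py diff above; the
# CommonSkillDefinition and PosixSkillDefinition imports are elided here
# and assumed to come from their respective skill packages.
skills = [
    CommonSkillDefinition(
        name="common",
        language_model=language_model,
        drive=assistant_drive.subdrive("common"),
        chat_driver_config=chat_driver_config,
    ),
    PosixSkillDefinition(
        name="posix",
        sandbox_dir=Path(".data") / conversation_context.id,
        mount_dir="/mnt/data",
        chat_driver_config=chat_driver_config,
    ),
]
```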

In the future, individual conversations might be handled in this library as
well.
## Routines

## Context

This library uses the same [Context](../../context/README.md) library
as the [chat driver](../../chat-driver/README.md) library. This allows
you to instantiate a Context object for the assistant and have it automatically
passed into all of the assistant's actions and routines. This is especially helpful in
(1) setting the session id for all parts of the system (allowing them all to
share state in external state stores) and (2) passing an `emit` function that
all the parts can use to send events back up to the assistant for consistent
handling.

## More about Routines

### Experimentation

As mentioned above, one of the main purposes of this library is to make it
possible for an assistant to run a routine.
This whole library was set up to be able to experiment with _routines_
more easily:

We are currently investigating different kinds of routine specifications and
ways of executing them.
- This library hides a lot of the complexity of developing multi-layered
assistants by providing clearer purposeful abstractions and better defining or
disambiguating commonly confused terms. For example, we separate out a lot of
the complexity of interacting with the OpenAI Chat Completion API with the
[chat driver](../../openai-client/openai_client/chat_driver/README.md)
abstraction and we now distinguish between chat commands, chat tool functions,
and routine actions in a clear way, even though they're really all just
functions.
- Routines make it clear that we are developing agents that can automate
productive work collaboratively with the user. We have several ideas here,
from simply following a set of steps, to being able to run Pythonic programs
of skill actions, to much more fully managed routine running with LLM-driven
meta-cognitive execution (having the LLM monitor progress and modify the
routines as necessary).

Currently we provide one functional routine runner implementation, the
[InstructionRoutineRunner](./skill_library/routine_runners/instruction_routine_runner.py),
@@ -103,3 +77,34 @@ is not possible to simply instantiate a skill and run a routine within it (like
you can do with a skill's action). Routines can only be run from an
[Assistant](./skill_library/assistant.py) that has all dependent skills
registered to it.


## Run Context

This library uses the same [Context](../../context/README.md) library as the
[chat driver](../../openai-client/openai_client/chat_driver/README.md) library.
This allows you to instantiate a Context object for the assistant and have it
automatically passed into all of the assistant's actions and routines. This is
especially helpful in (1) setting the session id for all parts of the system
(allowing them all to share state in external state stores) and (2) passing an
`emit` function that all the parts can use to send events back up to the
assistant for consistent handling.
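
As a loose sketch of how that looks from inside a skill action (the `session_id` and `emit` attributes on the context are assumptions based on the description above, not confirmed signatures):

```python
from events import InformationEvent
from skill_library import RunContext


async def summarize_file(context: RunContext, path: str) -> str:
    # Assumed: the run context exposes the shared session id so external
    # state stores can key their data consistently across the system.
    cache_key = f"{context.session_id}:{path}"
    # Assumed: the run context carries the emit function described above,
    # used to send events back up to the assistant.
    context.emit(InformationEvent(message=f"Summarizing {path}..."))
    return f"(summary of {path}, cached under {cache_key})"
```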

## State

### Drives

### Assistant drive

### Routine Stack state

```python
async with context.stack_frame_state() as state:
```
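
For example, a routine step might use this to persist progress between turns (a sketch; `state` is assumed to behave like a mutable mapping that is saved back when the context manager exits):

```python
async with context.stack_frame_state() as state:
    # Assumed dict-like behavior: read what a previous step stored...
    step = state.get("step", 0)
    # ...do this step's work, then record where the routine left off.
    state["step"] = step + 1
```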



- Natural language (user understandability, generatability)
- Metacognitive runners
- Skills/routines w/ subroutines (composability)
- Facilities (run_context, storage, com)
1 change: 1 addition & 0 deletions libraries/python/skills/skill-library/pyproject.toml
@@ -15,6 +15,7 @@ dependencies = [
"pydantic-settings>=2.3.4",
"pydantic>=2.6.1",
"python-dotenv>=1.0.1",
"python-liquid>=1.12.1",
"requests>=2.32.0",
"tiktoken>=0.7.0",
]
13 changes: 0 additions & 13 deletions libraries/python/skills/skill-library/pytest.ini

This file was deleted.

@@ -3,12 +3,13 @@
from .actions import ActionCallable
from .assistant import Assistant
from .chat_driver_helpers import ChatDriverFunctions
from .routine import InstructionRoutine, ProgramRoutine, RoutineTypes, StateMachineRoutine
from .routine import ActionListRoutine, InstructionRoutine, ProgramRoutine, RoutineTypes, StateMachineRoutine
from .run_context import RunContext, RunContextProvider
from .skill import EmitterType, Skill, SkillDefinition

__all__ = [
"ActionCallable",
"ActionListRoutine",
"Assistant",
"ChatDriverFunctions",
"Context",
