Commit
work on agentic and crewai code
jbcodeforce committed Jun 1, 2024
1 parent 48c6f98 commit aecd2dc
Showing 16 changed files with 725 additions and 15 deletions.
10 changes: 8 additions & 2 deletions docs/genAI/agentic.md
@@ -23,10 +23,16 @@ Focus is becoming important as the context windows are becoming larger. With too

Too many tools add confusion for the agents: they have a hard time selecting the right tool, or distinguishing what is a tool, what is context, and what is history. Give them only the tools they need for their work.

For task definition, think in terms of process, actors and tasks. Have a clear definition for each task, with expectations and context. A task may use tools, should be able to run asynchronously, and may output in different formats such as JSON or XML, as in the sketch below.
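
A minimal sketch of such a task with CrewAI (the `researcher` agent and `search_tool` are assumed to be defined elsewhere; the parameter names follow the CrewAI `Task` API):

```python
from crewai import Task

# Hypothetical research task: clear expectation, limited tools, async execution, file output.
news_task = Task(
    description="Research the latest news about {topic} and keep the 5 most relevant items.",
    expected_output="A JSON list of 5 items, each with a title, a source URL and a one-sentence summary.",
    agent=researcher,          # the actor responsible for this task (assumed defined)
    tools=[search_tool],       # only the tool this task actually needs (assumed defined)
    async_execution=True,      # the task may run in parallel with other tasks
    output_file="news.json",   # persist the output in the requested format
)
```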

## Use cases

* Agents that plan an article, write it, and review it for better editing: [research-agent.py](https://github.com/jbcodeforce/ML-studies/blob/master/techno/crew-ai/research-agent.py)
* Support representative: the [support_crew.py](https://github.com/jbcodeforce/ML-studies/tree/master/techno/crew-ai/support_crew.py) app demonstrates two agents working together to address a customer's inquiry in the best possible way, with a form of quality assurance. It uses memory and web scraping tools.
* Customer outreach campaign: [customer_outreach.py](https://github.com/jbcodeforce/ML-studies/tree/master/techno/crew-ai/customer_outreach.py) uses tools to run Google searches, with two agents doing lead analysis.
* Crew to tailor a job application with multiple agents: [job_application.py](https://github.com/jbcodeforce/ML-studies/tree/master/techno/crew-ai/job_application.py)

## Design Patterns

## CrewAI

@@ -48,7 +54,7 @@ Agent needs the following 6 elements:

1. Focus on goals and expectations to better prompt the agent: "give me an analysis of xxxx stock". Too much content in the context window confuses the model and may lead to hallucination. Splitting the work into multiple agents may be a better solution than using a single prompt.

1. Tools are used to call external systems. A tool should be well described so the model can build the parameters for the function and assess when to call it. Too many tools add to the confusion, and a small model will have a hard time selecting among them, so consider multiple agents, each with only the tools it needs for its task (see the sketch after this list).
1. Cooperation has proved to deliver better results than a single big model. Models can take feedback from each other and can delegate tasks.
1. Guardrails help prevent models from looping over tool usage or hallucinating, and help deliver consistent results. Models work on fuzzy input and generate fuzzy output, so it is important to set guardrails to control outcomes and runtime execution.
1. Memory is important to keep better context, understand what was done so far, and apply this knowledge to future executions. Short-term memory is used during the crew's execution of a task; it is shared between agents even before task completion. Long-term memory is used after task execution and can be reused in any future task; it is stored in a database, so agents can learn from previous executions and self-improve. The last type is entity memory (person, organization, location); it is also short term and keeps information about entities extracted with NLP.
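
A minimal sketch of these ideas with CrewAI: two focused agents, each receiving only the tools it needs, and a crew configured with memory. The tool classes come from `crewai_tools` (the search tool needs a SERPER_API_KEY), and `research_task` / `writing_task` are assumed to be defined as in the task example above.

```python
from crewai import Agent, Crew
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

search_tool = SerperDevTool()       # web search (needs SERPER_API_KEY)
scrape_tool = ScrapeWebsiteTool()   # web page scraping

researcher = Agent(
    role="Market Researcher",
    goal="Collect recent, factual information about {company}",
    backstory="You gather facts and never invent them.",
    tools=[search_tool, scrape_tool],   # only the tools this agent needs
    allow_delegation=False,
    verbose=True,
)

writer = Agent(
    role="Report Writer",
    goal="Turn the research notes into a short executive brief",
    backstory="You write clearly from the material given to you.",
    tools=[],                 # no tools: keeps the model focused on writing
    allow_delegation=False,
    verbose=True,
)

# research_task and writing_task are assumed to be defined elsewhere.
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    memory=True,   # enables short-term, long-term and entity memory
    verbose=2,
)
```
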
34 changes: 29 additions & 5 deletions docs/genAI/mixtral.md
@@ -1,7 +1,31 @@
# Mistral.ai

French startup building mixture-of-experts based LLMs, with an open source offering.

* The open-weights models are Mistral 7B, Mixtral 8x7B, and Mixtral 8x22B.
* The commercial models are Mistral Small, Mistral Medium, Mistral Large, Mistral Embeddings (retrieval score of 55 on MTEB), and Codestral for code generation.

[Models description and benchmarks notes.](https://docs.mistral.ai/getting-started/models/)

Models can be fine-tuned.

| Model | Type of usage |
| --- | --- |
| **Mistral Small** | Classification, customer support, text generation |
| **Mixtral 8x22B** | Intermediate tasks that require moderate reasoning, like data extraction, summarizing a document, writing a job description, or writing product descriptions |
| **Mistral Large** | Complex tasks that require large reasoning capabilities or are highly specialized, like synthetic text generation, code generation, RAG, or agents |

Function calling is supported by Mistral Small, Mistral Large, and Mixtral 8x22B, as in the sketch below.
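
As an illustration, a sketch of function calling with the Python `mistralai` client used elsewhere in this commit. The `get_stock_price` schema is purely hypothetical, and the exact response fields may differ across client versions.

```python
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

# Hypothetical tool schema: the model only ever sees this JSON description.
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",   # illustrative name, not a real backend
        "description": "Return the latest price for a stock ticker.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string", "description": "Stock symbol, e.g. AAPL"}},
            "required": ["ticker"],
        },
    },
}]

client = MistralClient(api_key="...")  # loaded from MISTRAL_API_KEY in practice
response = client.chat(
    model="mistral-large-latest",
    messages=[ChatMessage(role="user", content="What is Apple trading at today?")],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the function
)
# The model either answers directly or returns a tool call carrying the built parameters.
message = response.choices[0].message
print(message.tool_calls or message.content)
```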

Mistral delivers a Docker image for each model, which can be run locally or deployed with [skypilot]().

---

## Mixture of Experts

MoE combines multiple models to make predictions or decisions. Each expert specializes in a specific subset of the input space and provides its own prediction. The predictions of the experts are then combined, typically using a gating network, to produce the final output.

It is useful when dealing with complex and diverse data, as each expert extracts different aspects or patterns from the data.

In language translation, an MoE may use experts specialized by language pair. The sketch below illustrates the gating idea.
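
A minimal PyTorch sketch of the dense gating idea described above (Mixtral itself uses sparse top-k routing per token, which this toy example does not implement):

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy mixture of experts: a gating network weights each expert's output."""
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)  # gating network scores each expert

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)                 # (batch, num_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)   # (batch, dim, num_experts)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)           # weighted combination

x = torch.randn(2, 16)
print(TinyMoE(dim=16)(x).shape)  # torch.Size([2, 16])
```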

1 change: 1 addition & 0 deletions docs/genAI/prompt-eng.md
@@ -8,6 +8,7 @@ This chapter includes a summary of prompt engineering practices and links to maj
* [Prompt engineering guide from (promptingguide.ai)](https://www.promptingguide.ai) which covers the theory and practical aspects of prompt engineering and how to leverage the best prompting techniques to interact and build with LLMs.
* [Wikipedia- prompt engineering](https://en.wikipedia.org/wiki/Prompt_engineering)
* [Anthropic - Claude - Prompt engineering.](https://docs.anthropic.com/claude/docs/prompt-engineering)
* [Mistral prompting capabilities.](https://docs.mistral.ai/guides/prompting_capabilities/)

This repository includes code and prompts to test on different LLMs.

20 changes: 20 additions & 0 deletions llm-langchain/mistral/mistral_basic.py
@@ -0,0 +1,20 @@
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

from dotenv import load_dotenv
import os

print("--- Welcome to a basic QA with Mistral")
load_dotenv(dotenv_path="../../.env")

api_key = os.getenv("MISTRAL_API_KEY")
model = "mistral-large-latest"

client = MistralClient(api_key=api_key)

# Single-turn chat completion against the Mistral API
chat_response = client.chat(
    model=model,
    messages=[ChatMessage(role="user", content="What is the best French cheese?")]
)

print(chat_response.choices[0].message.content)
5 changes: 0 additions & 5 deletions llm-langchain/mistral/mistral_lc.py

This file was deleted.

4 changes: 1 addition & 3 deletions llm-langchain/mistral/requirements.txt
@@ -1,3 +1 @@
langchain-mistralai
langchain-core
langchain-community
mistralai
38 changes: 38 additions & 0 deletions techno/crew-ai/a_resume.md
@@ -0,0 +1,38 @@


Jerome Boyer
Santa Clara, CA - USA
Former IBM - Distinguished Engineer
AWS Principal Solution Architect
Master in Computer Science, Nice University, France. [Linkedin]()


## Background summary

For the last ten years, as an AWS principal solution architect and an IBM distinguished engineer, I have helped customers adopt hybrid cloud, designing complex solutions around microservices, streaming and data management for AI/ML. I am currently helping customers adopt Generative AI agents combined with traditional symbolic AI to get real, actionable value from AI.
I have years of experience in business process automation and decision automation with rule engine systems. Book author and conference speaker, I am still hands-on, developing MVPs and proofs of technology.

I also contributed to multiple patents and publications on business rule models, IBM BPM and decision management integrations.

I am looking to guide customers through application modernization and cloud migration projects.

## Skills

Amazon Cloud Architecture Professional Certified
Event-driven architecture and streaming technologies with Kafka, Flink, Kafka Streams.
Cloud and hybrid technologies such as serverless (Lambda), API Gateway, Kubernetes, OpenShift, Java MicroProfile and Quarkus
AI: Classification, clustering, deep learning with PyTorch, Generative AI with prompt engineering, RAG, LangChain, LangGraph, LlamaIndex and different LLMs
Methodology: Agile dev, Lean Startup, Design Thinking, Event Storming and Domain Driven Design


## Professional experience
04/2024 - Present: Athena Decision Systems: Principal consultant for neuro-symbolic AI solution implementations.
09/2022 - 03/2024: AWS Principal Solution Architect - ISV market, supporting Data & AI ISVs on serverless, event-driven, streaming, Generative AI and multi-tenancy solutions.

10/2016 - 09/2022: Distinguished Engineer, Event-driven architecture CTO. Specialized in hybrid cloud and reactive microservices based solutions. Engaged with major IBM strategic accounts. Yearly business impact around 150 M$.

02/2009 – 09/2016: IBM Lab Services - Solution Architect for BPM solutions
Worldwide position, involved in complex solution delivery around IBM business process management and business rules management projects. Book author and conference speaker. 50+ customer engagements, 20 to 30 M$ impact per year.

12/99 - 1/09: ILOG Inc – Professional Services – Technical Director
I led the architect groups worldwide to develop best practices and highly qualified architects to support complex project delivery. Directly involved in the most complex North American consulting engagements. Transformed a 10 M$ consulting business into an 80 M$ one in 2 years.
138 changes: 138 additions & 0 deletions techno/crew-ai/customer_outreach.py
@@ -0,0 +1,138 @@
from crewai import Agent, Task, Crew
from crewai_tools import BaseTool, DirectoryReadTool, FileReadTool, SerperDevTool
import os
from dotenv import load_dotenv

load_dotenv("../../.env")
SERPER_API_KEY = os.getenv("SERPER_API_KEY")

"""
A sales_rep_agent works on high value leads while lead sales rep, nurtures those leads.
Both agents use tool callings
"""

sales_rep_agent = Agent(
    role="Sales Representative",
    goal="Identify high-value leads that match "
         "our ideal customer profile",
    backstory=(
        "As a part of the dynamic sales team at CrewAI, "
        "your mission is to scour "
        "the digital landscape for potential leads. "
        "Armed with cutting-edge tools "
        "and a strategic mindset, you analyze data, "
        "trends, and interactions to "
        "unearth opportunities that others might overlook. "
        "Your work is crucial in paving the way "
        "for meaningful engagements and driving the company's growth."
    ),
    allow_delegation=False,
    verbose=True
)

lead_sales_rep_agent = Agent(
    role="Lead Sales Representative",
    goal="Nurture leads with personalized, compelling communications",
    backstory=(
        "Within the vibrant ecosystem of CrewAI's sales department, "
        "you stand out as the bridge between potential clients "
        "and the solutions they need. "
        "By creating engaging, personalized messages, "
        "you not only inform leads about our offerings "
        "but also make them feel seen and heard. "
        "Your role is pivotal in converting interest "
        "into action, guiding leads through the journey "
        "from curiosity to commitment."
    ),
    allow_delegation=False,
    verbose=True
)

class SentimentAnalysisTool(BaseTool):
    name: str = "Sentiment Analysis Tool"
    description: str = ("Analyzes the sentiment of text "
                        "to ensure positive and engaging communication.")

    def _run(self, text: str) -> str:
        # Your custom code tool goes here
        return "positive"

sentiment_analysis_tool = SentimentAnalysisTool()
directory_read_tool = DirectoryReadTool(directory='./instructions')
file_read_tool = FileReadTool()
search_tool = SerperDevTool()

lead_profiling_task = Task(
    description=(
        "Conduct an in-depth analysis of {lead_name}, "
        "a company in the {industry} sector "
        "that recently showed interest in our solutions. "
        "Utilize all available data sources "
        "to compile a detailed profile, "
        "focusing on key decision-makers, recent business "
        "developments, and potential needs "
        "that align with our offerings. "
        "This task is crucial for tailoring "
        "our engagement strategy effectively.\n"
        "Don't make assumptions and "
        "only use information you are absolutely sure about."
    ),
    expected_output=(
        "A comprehensive report on {lead_name}, "
        "including company background, "
        "key personnel, recent milestones, and identified needs. "
        "Highlight potential areas where "
        "our solutions can provide value, "
        "and suggest personalized engagement strategies."
    ),
    tools=[directory_read_tool, file_read_tool, search_tool],
    agent=sales_rep_agent,
)

personalized_outreach_task = Task(
    description=(
        "Using the insights gathered from "
        "the lead profiling report on {lead_name}, "
        "craft a personalized outreach campaign "
        "aimed at {key_decision_maker}, "
        "the {position} of {lead_name}. "
        "The campaign should address their recent {milestone} "
        "and how our solutions can support their goals. "
        "Your communication must resonate "
        "with {lead_name}'s company culture and values, "
        "demonstrating a deep understanding of "
        "their business and needs.\n"
        "Don't make assumptions and only "
        "use information you are absolutely sure about."
    ),
    expected_output=(
        "A series of personalized email drafts "
        "tailored to {lead_name}, "
        "specifically targeting {key_decision_maker}. "
        "Each draft should include "
        "a compelling narrative that connects our solutions "
        "with their recent achievements and future goals. "
        "Ensure the tone is engaging, professional, "
        "and aligned with {lead_name}'s corporate identity."
    ),
    tools=[sentiment_analysis_tool, search_tool],
    agent=lead_sales_rep_agent,
)

crew = Crew(
    agents=[sales_rep_agent, lead_sales_rep_agent],
    tasks=[lead_profiling_task, personalized_outreach_task],
    verbose=2,
    memory=True
)

inputs = {
    "lead_name": "DeepLearningAI",
    "industry": "Online Learning Platform",
    "key_decision_maker": "Andrew Ng",
    "position": "CEO",
    "milestone": "product launch"
}

result = crew.kickoff(inputs=inputs)
print(result)