
Merge pull request #67 from unoplat/63-feat-add-support-for-ollama
feat: enable ollama cohere support and fix the naming convention for …
JayGhiya authored Jul 12, 2024
2 parents 4911324 + d341dbd commit 024c2a8
Showing 3 changed files with 36 additions and 4 deletions.
32 changes: 30 additions & 2 deletions README.md
@@ -131,10 +131,38 @@ Local workspace on your computer from https://github.com/DataStax-Examples/sprin
}
```
Configuration Note: Do not change the `download_url`, and keep `programming_language` set to `java` (only Java is supported at the moment).
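
For illustration, the two keys in question sit in the same JSON config shown above (the values below are placeholders, not settings to copy):
```
"download_url": "<keep the value that ships with the config>",
"programming_language": "java"
```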

LLM Provider Config:
- Model Providers Supported: ["openai","together","anyscale","anthropic"] For config inside llm_provider_config refer - [Dspy Model Provider Doc](https://dspy-docs.vercel.app/docs/category/remote-language-model-clients)
- Model Providers Supported: ["openai","together","anyscale","awsanthropic","cohere","ollama"]

- For config inside llm_provider_config refer - [Dspy Model Provider Doc](https://dspy-docs.vercel.app/docs/category/remote-language-model-clients)

- Use chat models; we have not tested instruct models yet.

If you are looking for credits, sign up on Together AI and get $25 to run Code Confluence on a repository of your choice. You can also use Ollama to run models locally.

Together Example:
```
"llm_provider_config": {
"together": {
"api_key": "YourApiKey",
"model": "zero-one-ai/Yi-34B-Chat"
}
```
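
OpenAI Example (a sketch by analogy with the above; the model name is only an illustration — see the DSPy doc linked above for exact parameters):
```
"llm_provider_config": {
  "openai": {
    "api_key": "YourApiKey",
    "model": "gpt-3.5-turbo"
  }
}
```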

Ollama Example:
```
"llm_provider_config": {
"ollama": {
"model": "llama3"
}
```
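
Cohere Example (also a sketch; the model name here is an assumption — check the DSPy doc linked above for the supported parameters):
```
"llm_provider_config": {
  "cohere": {
    "api_key": "YourApiKey",
    "model": "command-nightly"
  }
}
```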

Note: We have tried GPT-3.5 Turbo and it works well, as the data it produces for code understanding is precise.

Our experience with https://huggingface.co/01-ai/Yi-1.5-34B-Chat has also been great, apart from hiccups at the last level, when the codebase understanding is formed. This will also get much better: all the DSPy modules are currently uncompiled, and we will be rolling out evaluated models and results post-optimisation soon. Until then, GPT-3.5 Turbo gives decent results.

4. Run Code Confluence and check your output path. You will find a file named according to the configured output file name; it carries a precise summary of the codebase at every level: codebase, packages, classes, and functions.
```
@@ -8,6 +8,8 @@ class LLMProvider(Enum):
    COHERE = 'cohere'
    ANYSCALE = 'anyscale'
    TOGETHER = 'together'
    OLLAMA = 'ollama'
    AWSANTHROPIC = 'awsanthropic'



@@ -34,8 +34,10 @@ def init_dspy_lm(self,llm_config: dict):
            dspy.configure(lm=dspy.Together(**llm_config["together"]),experimental=True)
        case "anyscale":
            dspy.configure(lm=dspy.Anyscale(**llm_config["anyscale"]),experimental=True)
        case "anthropic":
            dspy.configure(lm=dspy.Anthropic(**llm_config["anthropic"]),experimental=True)
        case "awsanthropic":
            dspy.configure(lm=dspy.AWSAnthropic(**llm_config["awsanthropic"]),experimental=True)
        case "ollama":
            dspy.configure(lm=dspy.OllamaLocal(**llm_config["ollama"]),experimental=True)



