A tool for testing conversations and tuning parameters of local large language models served by Ollama.
If this project is helpful to you, please give it a ★Star.
Crafted with ❤︎ by CairoLee and Contributors
- Fetch the model list from Ollama's server
- Support loading parameters from the model
- Quickly re-send the same dialogue for testing after changing parameters or prompts
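Fetching the model list works against Ollama's `GET /api/tags` endpoint, which returns a JSON object with a `models` array. Below is a minimal sketch of parsing such a response; the two model entries are made-up examples, not real output:

```python
import json

# Sample body in the shape returned by Ollama's GET /api/tags endpoint.
# The model entries below are illustrative examples only.
sample_response = json.dumps({
    "models": [
        {"name": "llama3:latest", "size": 4661224676},
        {"name": "qwen2:7b", "size": 4431400000},
    ]
})

def list_model_names(raw: str) -> list[str]:
    """Extract the model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(raw)["models"]]

print(list_model_names(sample_response))  # → ['llama3:latest', 'qwen2:7b']
```

In a live setup the same parsing would apply to the body of a request sent to `http://127.0.0.1:11434/api/tags` (Ollama's default address).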
We use Poetry to manage dependencies. If you don't have Poetry installed, you can install it by running the following command on Linux, macOS, or Windows (WSL):
curl -sSL https://install.python-poetry.org | python3 -
If you are using Windows, you can install it by running the following command in PowerShell:
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | py -
If you want to install Poetry another way, refer to the official documentation.
Clone the repository and run the following command in the project root directory:
poetry install
Run the following command in the project root directory:
poetry run python main.py
Then visit http://127.0.0.1:7860 in your browser to chat with Ollama.
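Under the hood, a chat UI like this sends requests to Ollama's `POST /api/chat` endpoint, with tunable sampling parameters passed under the `options` key. A minimal sketch of such a request body (the model name and option values are example choices, not project defaults):

```python
import json

# Hedged sketch of an Ollama POST /api/chat request body.
# "llama3" and the option values are illustrative, not project defaults.
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
    # Sampling parameters the UI lets you tune are passed under "options".
    "options": {"temperature": 0.8, "top_p": 0.9},
}

body = json.dumps(payload)
print(body)
```

Changing the values under `options` and re-sending the same `messages` list is exactly the parameter-tuning workflow the UI streamlines.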