
KoboldCPP-v1.53.yr0-ROCm

koboldcpp-1.53-ROCm

Merged with @LostRuins' latest upstream update:

  • Added support for SSL. You can now import your own SSL certificate and serve KoboldCpp over HTTPS, either with --ssl [cert.pem] [key.pem] or via the GUI (see the launch example after this list). The .pem files must be unencrypted. You can generate a self-signed certificate with OpenSSL, e.g. openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -config openssl.cnf -nodes (the location of openssl.cnf varies across Linux distros; try locate openssl.cnf to find it).
  • Added support for presence penalty (an alternative repetition penalty) over the KAI API and in Lite. If presence penalty is set over the OpenAI API and rep_pen is not, rep_pen defaults to 1.0 instead of 1.1. Both penalties can be used together, though this is probably not a good idea (see the request example after this list).
  • Added fixes for the Broken Pipe error, thanks to @mahou-shoujo.
  • Added fixes for aborting ongoing connections while streaming in SillyTavern.
  • Merged upstream support for Phi models and speedups for Mixtral.
  • The default non-BLAS batch size for GGUF models has been increased from 8 to 32.
  • Merged HIPBlas fixes from @YellowRoseCx.
  • Fixed an issue with building the convert tools in 1.52.
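
For example, once a certificate pair has been generated, it can be passed at launch. A minimal sketch (the model filename is a placeholder for your own GGUF file):

    # Generate an unencrypted self-signed certificate pair (add -config openssl.cnf if your install needs it):
    openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -nodes
    # Serve KoboldCpp over HTTPS using the generated pair:
    koboldcpp_rocm.exe --model yourmodel.gguf --ssl cert.pem key.pem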
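
Similarly, a hedged sketch of setting presence penalty through the OpenAI-compatible API (this assumes KoboldCpp's /v1/completions route on the default port; the prompt and values are illustrative):

    curl http://localhost:5001/v1/completions \
      -H "Content-Type: application/json" \
      -d '{"prompt": "Once upon a time", "max_tokens": 60, "presence_penalty": 0.6}'

Since rep_pen is left unset here, it should fall back to the 1.0 default described above.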

To use it, download and run koboldcpp_rocm.exe, a one-file PyInstaller build.
If you're using NVIDIA, you can try koboldcpp.exe at LostRuins' upstream repo here.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller, also at LostRuins' repo.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once the model is loaded, you can connect at the address below (or use the full KoboldAI client):
http://localhost:5001/
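
If you'd rather script against the server than use a client, here is a minimal sketch against the KoboldAI-style generate endpoint (the route and field names are assumptions based on the standard KoboldAI API; adjust to what --help reports):

    curl http://localhost:5001/api/v1/generate \
      -H "Content-Type: application/json" \
      -d '{"prompt": "Hello, KoboldCpp.", "max_length": 50}'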

For more information, run the program from the command line with the --help flag.