v3.0.0-beta.16
Pre-release
3.0.0-beta.16 (2024-04-13)
Bug Fixes
- inspect gpu command: print device names (#198) (5ca33c7)
- inspect gpu command: print env info (#202) (d332b77)

Features
- download models using the CLI (#191) (b542b53)
- interactively select a model from CLI commands (#191) (b542b53)
- change the default log level to warn (#191) (b542b53)
- token biases (#196) (3ad4494)
Shipped with llama.cpp release b2665.
To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)