# Releases: withcatai/node-llama-cpp
## v3.2.0 (2024-10-31)

### Bug Fixes

- Electron crash with some models on macOS when not using Metal (#375) (ea12dc5)
- adapt to `llama.cpp` breaking changes (#375) (ea12dc5)
- support `rejectattr` in Jinja templates (#376) (ea12dc5)
- build warning on macOS (#377) (6405ee9)
### Features

- chat session response prefix (#375) (ea12dc5)
- improve context shift strategy (#375) (ea12dc5)
- use RAM and swap sizes in memory usage estimations (#375) (ea12dc5)
- faster building from source (#375) (ea12dc5)
- improve CPU compatibility score (#375) (ea12dc5)
- `inspect gguf` command: print a single key flag (#375) (ea12dc5)
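The chat session response prefix forces the model's reply to begin with a given string, which is useful for steering the answer format. A minimal sketch, assuming the package is installed, a GGUF model is available at `./model.gguf` (a hypothetical path), and that the feature is exposed as a `responsePrefix` option of `session.prompt()` — check the current docs for the exact option name:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "./model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// The generated response is forced to start with this prefix,
// steering the model toward the desired answer structure.
const answer = await session.prompt("List three colors.", {
    responsePrefix: "Here are three colors:\n1."
});
console.log(answer);
```

This requires a local model file and the native binding, so it is illustrative rather than directly runnable here.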
Shipped with `llama.cpp` release `b3995`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.1.1
## v3.1.0 (2024-10-05)

### Bug Fixes

### Features
Shipped with `llama.cpp` release `b3887`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.0.3 (2024-09-25)

✨ node-llama-cpp 3.0 is here! ✨ Read about the release in the blog post.

### Bug Fixes
Shipped with `llama.cpp` release `b3825`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.0.2 (2024-09-25)

### Bug Fixes
Shipped with `llama.cpp` release `b3821`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.0.1 (2024-09-24)

### Bug Fixes
Shipped with `llama.cpp` release `b3808`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.0.0 (2024-09-24)

### Features
- function calling (#139) (5fcdf9b)
- get embedding for text (#144) (4cf1fba)
- async model and context loading (#178) (315a3eb)
- token biases (#196) (3ad4494)
- automatic batching (#104) (4757af8)
- prompt completion engine (#225) (95f4645)
- model compatibility warnings (#225) (95f4645)
- Vulkan support (#171) (d161bcd)
- Windows on Arm prebuilt binary (#181) (f3b7f81)
- change the default log level to warn (#191) (b542b53)
- `pull` command (#214) (453c162)
- `inspect gpu` command (#175) (5a70576)
- `inspect gguf` command (#182) (35e6f50)
- `inspect estimate` command (#309) (4b3ad61)
- `inspect measure` command (#182) (35e6f50)
- `init` command to scaffold a new project from a template (with `node-typescript` and `electron-typescript-react` templates) (#217) (d6a0f43)
- move `download`, `build` and `clear` commands to be subcommands of a `source` command (#309) (4b3ad61)
- move `seed` option to the prompt level (#309) (4b3ad61)
- `TemplateChatWrapper`: custom history template for each message role (#309) (4b3ad61)
- Llama 3.1 support (#273) (e3e0994)
- Mistral chat wrapper (#309) (4b3ad61)
- Functionary v3 support (#309) (4b3ad61)
- Phi-3 support (#273) (e3e0994)
- extract all prebuilt binaries to external modules (#309) (4b3ad61)
- parallel function calling (#225) (95f4645)
- preload prompt (#225) (95f4645)
- `onTextChunk` option (#273) (e3e0994)
- flash attention (#264) (c2e322c)
- debug mode (#217) (d6a0f43)
- load LoRA adapters (#217) (d6a0f43)
- split gguf files support (#214) (453c162)
- `stopOnAbortSignal` and `customStopTriggers` on `LlamaChat` and `LlamaChatSession` (#214) (453c162)
- Llama 3 support (#205) (ef501f9)
- `--gpu` flag in generation CLI commands (#205) (ef501f9)
- `specialTokens` parameter on `model.detokenize` (#205) (ef501f9)
- interactively select a model from CLI commands (#191) (b542b53)
- automatically adapt to current free VRAM state (#182) (35e6f50)
- GGUF file metadata info on `LlamaModel` (#182) (35e6f50)
- use the `tokenizer.chat_template` header from the `gguf` file when available - use it to find a better specialized chat wrapper or use `JinjaTemplateChatWrapper` with it as a fallback (#182) (35e6f50)
- simplify generation CLI commands: `chat`, `complete`, `infill` (#182) (35e6f50)
- gguf parser (#168) (bcaab4f)
- use the best compute layer available by default (#175) (5a70576)
- more guardrails to prevent loading an incompatible prebuilt binary (#175) (5a70576)
- completion and infill (#164) (ede69c1)
- support configuring more options for `getLlama` when using `"lastBuild"` (#164) (ede69c1)
- get VRAM state (#161) ([46235a2](https://github.com/withc...
## v3.0.0-beta.47 (2024-09-23)

### Bug Fixes

### Features
Shipped with `llama.cpp` release `b3804`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.0.0-beta.46 (2024-09-20)

### Bug Fixes
- no thread limit when using a GPU (#322) (2204e7a)
- improve `defineChatSessionFunction` types and docs (#322) (2204e7a)
- format numbers printed in the CLI (#322) (2204e7a)
- revert `electron-builder` version used in Electron template (#323) (6c644ff)
Shipped with `llama.cpp` release `b3787`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)
## v3.0.0-beta.45 (2024-09-19)

### Bug Fixes
- improve performance of parallel evaluation from multiple contexts (#309) (4b3ad61)
- Llama 3.1 chat wrapper standard chat history (#309) (4b3ad61)
- adapt to `llama.cpp` sampling refactor (#309) (4b3ad61)
- Llama 3 Instruct function calling (#309) (4b3ad61)
- don't preload prompt in the `chat` command when using `--printTimings` or `--meter` (#309) (4b3ad61)
- more stable Jinja template matching (#309) (4b3ad61)
### Features
- `inspect estimate` command (#309) (4b3ad61)
- move `seed` option to the prompt level (#309) (4b3ad61)
- Functionary v3 support (#309) (4b3ad61)
- Mistral chat wrapper (#309) (4b3ad61)
- improve Llama 3.1 chat template detection (#309) (4b3ad61)
- change `autoDisposeSequence` default to `false` (#309) (4b3ad61)
- move `download`, `build` and `clear` commands to be subcommands of a `source` command (#309) (4b3ad61)
- simplify `TokenBias` (#309) (4b3ad61)
- better `threads` default value (#309) (4b3ad61)
- make `LlamaEmbedding` an object (#309) (4b3ad61)
- `HF_TOKEN` env var support for reading GGUF file metadata (#309) (4b3ad61)
- `TemplateChatWrapper`: custom history template for each message role (#309) (4b3ad61)
- more helpful `inspect gpu` command (#309) (4b3ad61)
- all tokenizer tokens iterator (#309) (4b3ad61)
- failed context creation automatic remedy (#309) (4b3ad61)
- abort generation support in CLI commands (#309) (4b3ad61)
- `--gpuLayers max` and `--contextSize max` flag support for `inspect estimate` command (#309) (4b3ad61)
- extract all prebuilt binaries to external modules (#309) (4b3ad61)
- updated docs (#309) (4b3ad61)
- combine model downloaders (#309) (4b3ad61)
- Electron example template: update badge, scroll anchoring, table support (#309) (4b3ad61)
Shipped with `llama.cpp` release `b3785`.

To use the latest `llama.cpp` release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)