Releases: withcatai/node-llama-cpp

v3.0.0-beta.20 (Pre-release)

3.0.0-beta.20 (2024-05-19)

Bug Fixes

  • improve binary compatibility detection on Linux (#217) (d6a0f43)

Features

  • init command to scaffold a new project from a template (with node-typescript and electron-typescript-react templates) (#217) (d6a0f43)
  • debug mode (#217) (d6a0f43)
  • load LoRA adapters (#217) (d6a0f43) (see the sketch after this list)
  • improve Electron support (#217) (d6a0f43)
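
For orientation, here is a minimal sketch of what loading one of these LoRA adapters can look like. It assumes the v3 beta API (getLlama, loadModel, createContext, LlamaChatSession); the lora option on createContext is an assumption inferred from this feature list, not a confirmed signature, so verify the exact option shape in the documentation of the beta you install. The new scaffolding command is presumably invoked as npx --no node-llama-cpp init, following the CLI convention shown below for download.

    // Minimal sketch: load a model and apply a LoRA adapter (v3 beta API).
    // ASSUMPTION: the `lora` option on createContext is inferred from this
    // release's feature list; verify the exact option shape in the docs.
    import {getLlama, LlamaChatSession} from "node-llama-cpp";

    const llama = await getLlama();
    const model = await llama.loadModel({
        modelPath: "models/model.gguf"
    });
    const context = await model.createContext({
        lora: {
            adapters: [{filePath: "adapters/my-adapter.gguf"}]
        }
    });

    const session = new LlamaChatSession({contextSequence: context.getSequence()});
    console.log(await session.prompt("Hello"));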

Shipped with llama.cpp release b2928

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v3.0.0-beta.19 (Pre-release)

3.0.0-beta.19 (2024-05-12)

Shipped with llama.cpp release b2861

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v3.0.0-beta.18 (Pre-release)

3.0.0-beta.18 (2024-05-09)

Bug Fixes

  • more efficient max context size finding algorithm (#214) (453c162)
  • make embedding-only models work correctly (#214) (453c162) (see the sketch after this list)
  • perform context shift on the correct token index on generation (#214) (453c162)
  • make context loading work for all models on Electron (#214) (453c162)
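
Since embedding-only models are called out above, here is a minimal sketch of generating an embedding. It assumes this beta exposes the same createEmbeddingContext/getEmbeddingFor API as later v3 builds; treat the exact names as an assumption and check the docs for this release.

    // Minimal sketch: produce an embedding with an embedding-only model.
    // ASSUMPTION: createEmbeddingContext()/getEmbeddingFor() match the
    // later v3 API; verify against this beta's docs.
    import {getLlama} from "node-llama-cpp";

    const llama = await getLlama();
    const model = await llama.loadModel({
        modelPath: "models/embedding-model.gguf"
    });

    const embeddingContext = await model.createEmbeddingContext();
    const embedding = await embeddingContext.getEmbeddingFor("Hello world");
    console.log(embedding.vector.length, "dimensions");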

Shipped with llama.cpp release b2834

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v2.8.10

2.8.10 (2024-04-27)

v3.0.0-beta.17 (Pre-release)

3.0.0-beta.17 (2024-04-24)

Bug Fixes

  • FunctionaryChatWrapper bugs (#205) (ef501f9)
  • function calling syntax bugs (#205) (ef501f9) (see the sketch after this list)
  • show GPU layers in the Model line in CLI commands (#205) (ef501f9)
  • refactor: rename LlamaChatWrapper to Llama2ChatWrapper
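
Given the function-calling fixes above, a minimal sketch of chat-session function calling may help. It assumes defineChatSessionFunction and the functions prompt option behave in this beta as they do in later v3 builds; verify against this release's docs.

    // Minimal sketch: function calling in a chat session (v3 beta API).
    // ASSUMPTION: defineChatSessionFunction and the `functions` prompt
    // option match later v3 builds; verify against this beta's docs.
    import {getLlama, LlamaChatSession, defineChatSessionFunction} from "node-llama-cpp";

    const llama = await getLlama();
    const model = await llama.loadModel({modelPath: "models/functionary.gguf"});
    const context = await model.createContext();
    const session = new LlamaChatSession({contextSequence: context.getSequence()});

    const functions = {
        getCurrentTime: defineChatSessionFunction({
            description: "Get the current local time",
            handler() {
                return new Date().toISOString();
            }
        })
    };

    console.log(await session.prompt("What time is it?", {functions}));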

Shipped with llama.cpp release b2717

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v3.0.0-beta.16 (Pre-release)

3.0.0-beta.16 (2024-04-13)

Shipped with llama.cpp release b2665

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v3.0.0-beta.15 (Pre-release)

3.0.0-beta.15 (2024-04-04)

Bug Fixes

Features

  • automatically adapt to current free VRAM state (#182) (35e6f50)
  • inspect gguf command (#182) (35e6f50)
  • inspect measure command (#182) (35e6f50)
  • readGgufFileInfo function (#182) (35e6f50) (see the sketch after this list)
  • GGUF file metadata info on LlamaModel (#182) (35e6f50)
  • JinjaTemplateChatWrapper (#182) (35e6f50)
  • use the tokenizer.chat_template header from the gguf file when available - use it to find a better specialized chat wrapper or use JinjaTemplateChatWrapper with it as a fallback (#182) (35e6f50)
  • simplify generation CLI commands: chat, complete, infill (#182) (35e6f50)
  • Windows on Arm prebuilt binary (#181) (f3b7f81)
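
The metadata features above can be tried directly: readGgufFileInfo reads a GGUF file's header without loading the model, and the same information backs the new inspect gguf command (presumably invoked as npx --no node-llama-cpp inspect gguf <path>, following the CLI convention below). A minimal sketch; the exact shape of the returned object isn't documented here, so log it and explore:

    // Minimal sketch: read GGUF metadata without loading the model.
    import {readGgufFileInfo} from "node-llama-cpp";

    const ggufInfo = await readGgufFileInfo("models/model.gguf");
    // Logs the parsed metadata, including the tokenizer.chat_template
    // header (when present) that chat wrapper resolution now uses.
    console.log(ggufInfo);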

Shipped with llama.cpp release b2608

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v2.8.9

2.8.9 (2024-03-21)

v3.0.0-beta.14 (Pre-release)

3.0.0-beta.14 (2024-03-16)

Bug Fixes

  • DisposedError was thrown when calling .dispose() (#178) (315a3eb)
  • adapt to breaking llama.cpp changes (#178) (315a3eb)

Features

  • async model and context loading (#178) (315a3eb) (see the sketch after this list)
  • automatically try to resolve the "Failed to detect a default CUDA architecture" CUDA compilation error (#178) (315a3eb)
  • detect cmake binary issues and suggest fixes on detection (#178) (315a3eb)
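
A minimal sketch of the async loading flow: getLlama, loadModel, and createContext are the v3 beta entry points, and the onLoadProgress callback is an assumption borrowed from later v3 documentation, so verify it exists in this beta before relying on it.

    // Minimal sketch: async model and context loading (v3 beta API).
    import {getLlama} from "node-llama-cpp";

    const llama = await getLlama();

    // ASSUMPTION: `onLoadProgress` is based on later v3 documentation;
    // verify it exists in this beta before relying on it.
    const model = await llama.loadModel({
        modelPath: "models/model.gguf",
        onLoadProgress(progress: number) {
            console.log(`Loading model: ${Math.round(progress * 100)}%`);
        }
    });

    const context = await model.createContext();
    console.log("Context size:", context.contextSize);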

Shipped with llama.cpp release b2440

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)

v3.0.0-beta.13 (Pre-release)

3.0.0-beta.13 (2024-03-03)

Shipped with llama.cpp release b2329

To use the latest llama.cpp release available, run npx --no node-llama-cpp download --release latest. (learn more)