Releases: withcatai/node-llama-cpp

v3.2.0


3.2.0 (2024-10-31)

Bug Fixes

  • Electron crash with some models on macOS when not using Metal (#375) (ea12dc5)
  • adapt to llama.cpp breaking changes (#375) (ea12dc5)
  • support rejectattr in Jinja templates (#376) (ea12dc5) (see the sketch after this list)
  • build warning on macOS (#377) (6405ee9)
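
As context for the rejectattr fix: rejectattr is a standard Jinja filter that drops sequence items whose attribute passes a test, and chat templates bundled in GGUF models sometimes use it to filter messages by role. A minimal, hypothetical template fragment (not taken from any particular model):

```ts
// Hypothetical Jinja chat-template fragment using the `rejectattr` filter
// to skip system messages when rendering the chat history; templates
// relying on this filter previously failed to render
const templateFragment =
    '{% for message in messages | rejectattr("role", "equalto", "system") %}' +
    "<|{{ message.role }}|>\n{{ message.content }}\n" +
    "{% endfor %}";
```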

Features


Shipped with llama.cpp release b3995

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.1.1


3.1.1 (2024-10-06)

Features

  • minor: reference common classes on the Llama instance (#360) (8145c94)

Shipped with llama.cpp release b3889

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.1.0


3.1.0 (2024-10-05)

Bug Fixes

Features


Shipped with llama.cpp release b3887

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.0.3


node-llama-cpp 3.0 is here! ✨

Read about the release in the blog post


3.0.3 (2024-09-25)

Bug Fixes


Shipped with llama.cpp release b3825

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.0.2


node-llama-cpp 3.0 is here! ✨

Read about the release in the blog post


3.0.2 (2024-09-25)

Bug Fixes


Shipped with llama.cpp release b3821

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.0.1


node-llama-cpp 3.0 is here! ✨

Read about the release in the blog post


3.0.1 (2024-09-24)

Bug Fixes


Shipped with llama.cpp release b3808

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.0.0


node-llama-cpp 3.0 is here! ✨

Read about the release in the blog post


3.0.0 (2024-09-24)

Features


v3.0.0-beta.47

Pre-release

3.0.0-beta.47 (2024-09-23)

Bug Fixes

Features

  • resetChatHistory function on a LlamaChatSession (#327) (ebc4e83) (see the sketch below)
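
A minimal sketch of the new resetChatHistory function, assuming a local GGUF model (the path below is a placeholder):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

await session.prompt("Please remember the number 42");

// Drop the accumulated history without recreating the session or its context
session.resetChatHistory();

// The previous exchange is no longer part of the conversation
console.log(await session.prompt("What number did I ask you to remember?"));
```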

Shipped with llama.cpp release b3804

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.0.0-beta.46

Pre-release

3.0.0-beta.46 (2024-09-20)

Bug Fixes

  • no thread limit when using a GPU (#322) (2204e7a)
  • improve defineChatSessionFunction types and docs (#322) (2204e7a) (see the sketch after this list)
  • format numbers printed in the CLI (#322) (2204e7a)
  • revert electron-builder version used in Electron template (#323) (6c644ff)
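
As a usage reference for defineChatSessionFunction, a sketch with a hypothetical add function (the model path is a placeholder); the improved types infer the handler's parameter from the params schema:

```ts
import {getLlama, LlamaChatSession, defineChatSessionFunction} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// `params` is a JSON schema describing the arguments the model should pass;
// the typed `handler` receives an object matching that schema
const functions = {
    add: defineChatSessionFunction({
        description: "Add two numbers",
        params: {
            type: "object",
            properties: {
                a: {type: "number"},
                b: {type: "number"}
            }
        },
        handler(params) {
            return params.a + params.b;
        }
    })
};

console.log(await session.prompt("What is 17 + 25?", {functions}));
```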

Shipped with llama.cpp release b3787

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.

v3.0.0-beta.45

Pre-release

3.0.0-beta.45 (2024-09-19)

Bug Fixes

  • improve performance of parallel evaluation from multiple contexts (#309) (4b3ad61) (see the sketch after this list)
  • Llama 3.1 chat wrapper standard chat history (#309) (4b3ad61)
  • adapt to llama.cpp sampling refactor (#309) (4b3ad61)
  • Llama 3 Instruct function calling (#309) (4b3ad61)
  • don't preload prompt in the chat command when using --printTimings or --meter (#309) (4b3ad61)
  • more stable Jinja template matching (#309) (4b3ad61)
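
For reference, the pattern the parallel-evaluation fix speeds up: one context holding several sequences, each evaluated independently. A sketch, assuming a local GGUF model at a placeholder path:

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // placeholder path

// Each sequence keeps its own evaluation state, so the two prompts
// below can be processed in parallel over the same context
const context = await model.createContext({sequences: 2});
const sessionA = new LlamaChatSession({contextSequence: context.getSequence()});
const sessionB = new LlamaChatSession({contextSequence: context.getSequence()});

const [colors, animals] = await Promise.all([
    sessionA.prompt("Name three colors"),
    sessionB.prompt("Name three animals")
]);
console.log(colors, animals);
```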

Features

  • inspect estimate command (#309) (4b3ad61)
  • move seed option to the prompt level (#309) (4b3ad61)
  • Functionary v3 support (#309) (4b3ad61)
  • Mistral chat wrapper (#309) (4b3ad61)
  • improve Llama 3.1 chat template detection (#309) (4b3ad61)
  • change autoDisposeSequence default to false (#309) (4b3ad61)
  • move download, build and clear commands to be subcommands of a source command (#309) (4b3ad61)
  • simplify TokenBias (#309) (4b3ad61)
  • better threads default value (#309) (4b3ad61)
  • make LlamaEmbedding an object (#309) (4b3ad61) (see the sketch after this list)
  • HF_TOKEN env var support for reading GGUF file metadata (#309) (4b3ad61)
  • TemplateChatWrapper: custom history template for each message role (#309) (4b3ad61)
  • more helpful inspect gpu command (#309) (4b3ad61)
  • all tokenizer tokens iterator (#309) (4b3ad61)
  • failed context creation automatic remedy (#309) (4b3ad61)
  • abort generation support in CLI commands (#309) (4b3ad61)
  • --gpuLayers max and --contextSize max flag support for inspect estimate command (#309) (4b3ad61)
  • extract all prebuilt binaries to external modules (#309) (4b3ad61)
  • updated docs (#309) (4b3ad61)
  • combine model downloaders (#309) (4b3ad61)
  • Electron example template: update badge, scroll anchoring, table support (#309) (4b3ad61)
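
On LlamaEmbedding becoming an object: getEmbeddingFor now resolves to a LlamaEmbedding instance whose raw values live on its vector property, rather than a bare array. A sketch, with a placeholder model path and assuming an embedding-capable model:

```ts
import {getLlama} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "model.gguf"}); // placeholder path
const embeddingContext = await model.createEmbeddingContext();

// `getEmbeddingFor` resolves to a LlamaEmbedding object; the raw
// numbers live on its `vector` property
const embedding = await embeddingContext.getEmbeddingFor("Hello world");
console.log(embedding.vector.length);
```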

Shipped with llama.cpp release b3785

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`.