Model training or embedding custom data #177

Closed Answered by giladgd
ZGltYQ asked this question in Q&A

llama.cpp doesn't support training a model at the moment; you can only use existing models converted into the GGUF format with it and this library.

You can try to engineer a prompt (however long it needs to be) that contains all the relevant instructions to achieve what you want with existing models, without training them for your specific use case.
This is much easier in most cases, but a long prompt takes up a lot of the context window, shrinking the space left for generation.
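To get a feel for that trade-off, here is a rough sketch of how a long instruction prompt eats into a fixed context window. The 4-characters-per-token ratio is a crude illustrative heuristic (not the model's real tokenizer), and all names here are hypothetical:

```typescript
// Crude heuristic: roughly 4 characters per token on average English text.
// A real tokenizer (e.g. the model's own) would give exact counts.
function estimateTokens(text: string): number {
    return Math.ceil(text.length / 4);
}

// Tokens left for the model's generated output after the prompt is loaded.
function remainingContext(prompt: string, contextSize: number): number {
    return Math.max(0, contextSize - estimateTokens(prompt));
}

// A long system prompt packed with instructions instead of fine-tuning:
const instructions = "You are a support bot for ACME. " +
    "Answer only questions about ACME products. ".repeat(50);

const left = remainingContext(instructions, 2048);
console.log(`~${estimateTokens(instructions)} prompt tokens, ~${left} tokens left for generation`);
```

With a small context size, a few thousand characters of instructions can already consume a quarter of the window, which is why shorter prompts (or a library that builds them efficiently for you) matter.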

This library handles many of these tricks for you, so you can use a model without having to engineer a long prompt yourself to achieve what you need.
It also contains some workarounds to avoid having to tell the model what you want a…
