Commit 50ba949

linkcheck: update links

casperdcl committed Oct 12, 2023
1 parent 8161a1d
Showing 3 changed files with 3 additions and 3 deletions.
fine-tuning.md (2 changes: 1 addition & 1 deletion)

@@ -10,7 +10,7 @@ Some ideas:
 - [Why You (Probably) Don't Need to Fine-tune an LLM](https://www.tidepool.so/2023/08/17/why-you-probably-dont-need-to-fine-tune-an-llm/) (instead, use few-shot prompting & retrieval-augmented generation)
 - [Fine-Tuning LLaMA-2: A Comprehensive Case Study for Tailoring Models to Unique Applications](https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications) (fine-tuning LLaMA-2 for 3 real-world use cases)
 - [Private, local, open source LLMs](https://python.langchain.com/docs/guides/local_llms)
-- [Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)](https://github.com/hiyouga/LLaMA-Efficient-Tuning)
+- [Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)](https://github.com/hiyouga/LLaMA-Factory)
 - https://dstack.ai/examples/finetuning-llama-2
 - https://github.com/h2oai, etc.
 - [The History of Open-Source LLMs: Better Base Models (part 2)](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better) (LLaMA, MPT, Falcon, LLaMA-2)
references.bib (2 changes: 1 addition & 1 deletion)

@@ -413,7 +413,7 @@ @online{cursor-llama
   title={Why {GPT-3.5} is (mostly) cheaper than {LLaMA-2}},
   author={Aman},
   year=2023,
-  url={https://www.cursor.so/blog/llama-inference}
+  url={https://cursor.sh/blog/llama-inference}
 }
 @online{vector-indexing,
   title={Vector databases: Not all indexes are created equal},
references.md (2 changes: 1 addition & 1 deletion)

@@ -32,7 +32,7 @@ Couldn't decide which chapter(s) these links are related to. They're mostly abou
 - "How I Re-implemented PyTorch for WebGPU" (`webgpu-torch`: inference & autograd lib to run NNs in browser with negligible overhead) https://praeclarum.org/2023/05/19/webgpu-torch.html
 - "LLaMA from scratch (or how to implement a paper without crying)" (misc tips, scaled-down version of LLaMA for training) https://blog.briankitano.com/llama-from-scratch
 - "Swift Transformers: Run On-Device LLMs in Apple Devices" https://huggingface.co/blog/swift-coreml-llm
-- "Why GPT-3.5-turbo is (mostly) cheaper than LLaMA-2" https://www.cursor.so/blog/llama-inference#user-content-fn-gpt4-leak
+- "Why GPT-3.5-turbo is (mostly) cheaper than LLaMA-2" https://cursor.sh/blog/llama-inference#user-content-fn-gpt4-leak
 - http://marble.onl/posts/why_host_your_own_llm.html
 - https://betterprogramming.pub/you-dont-need-hosted-llms-do-you-1160b2520526
 - "Low-code framework for building custom LLMs, neural networks, and other AI models" https://github.com/ludwig-ai/ludwig
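Judging by the commit message, these URL fixes came out of a link check over the repository's markdown and BibTeX sources. A minimal sketch of such a checker, using only the standard library, might look like the following (this is my own illustration, not the repository's actual tooling; the file names are taken from the diff above):

```python
import os
import re
import urllib.request

# Matches bare http(s) URLs as well as the URL part of markdown links
# like [text](https://...) — the closing ")" terminates the match.
URL_RE = re.compile(r"https?://[^\s)\"'<>]+")

def extract_links(text: str) -> list[str]:
    """Return every http(s) URL found in a markdown/BibTeX source."""
    return URL_RE.findall(text)

def check_link(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error (<400) status."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "linkcheck"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

if __name__ == "__main__":
    # File list assumed from the commit above.
    for path in ("fine-tuning.md", "references.bib", "references.md"):
        if not os.path.exists(path):
            continue
        with open(path) as fh:
            for url in extract_links(fh.read()):
                if not check_link(url):
                    print(f"{path}: broken link {url}")
```

Note that some servers reject HEAD requests or answer redirects oddly, so a production checker would typically fall back to GET and follow redirects before flagging a link as broken.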