Some experiments aimed at increasing LLM throughput and efficiency via Speculative Decoding.
Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings)
[NeurIPS'23] Speculative Decoding with Big Little Decoder
Verification of the effectiveness of speculative decoding on Japanese-language text.
Dynasurge: Dynamic Tree Speculation for Prompt-Specific Decoding
Reproducibility Project for [NeurIPS'23] Speculative Decoding with Big Little Decoder
Minimal C implementation of speculative decoding, based on llama2.c
A scalable and robust tree-based speculative decoding algorithm
PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation
Coupling without Communication and Drafter-Invariant Speculative Decoding
(Re)-implementation of "Prompt Lookup Decoding" by Apoorv Saxena, with extended ideas from LLMA Decoding.
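For context, prompt lookup decoding needs no draft model at all: the trailing n-gram of the current sequence is matched against earlier positions (typically the prompt), and the tokens that followed the match are proposed as the draft for the target model to verify. A minimal sketch of that lookup, with illustrative function and parameter names not taken from either repo:

```python
def prompt_lookup_draft(input_ids, ngram_size=3, num_draft_tokens=10):
    """Propose draft tokens by matching the trailing n-gram of input_ids
    against earlier positions in the same sequence (prompt + generated text)."""
    pattern = input_ids[-ngram_size:]
    # Scan backwards so the most recent earlier occurrence wins.
    for start in range(len(input_ids) - ngram_size - 1, -1, -1):
        if input_ids[start:start + ngram_size] == pattern:
            # Propose the tokens that followed the matched n-gram.
            follow = input_ids[start + ngram_size:start + ngram_size + num_draft_tokens]
            if follow:
                return follow
    return []  # no match: fall back to ordinary one-token decoding

# e.g. prompt_lookup_draft([1, 2, 3, 4, 2, 3], ngram_size=2) proposes [4, 2, 3]
```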
Implementation of speculative sampling as proposed in "Accelerating Large Language Model Decoding with Speculative Sampling" (Chen et al., 2023)
[COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding
REST: Retrieval-Based Speculative Decoding, NAACL 2024
⚡ Build your chatbot within minutes on your favorite device, apply SOTA compression techniques to LLMs, and run LLMs efficiently on Intel platforms ⚡
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024
Implementation of the paper "Fast Inference from Transformers via Speculative Decoding" (Leviathan et al., 2023)
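This paper and Chen et al.'s speculative sampling share the same draft-then-verify core: a small draft model proposes K tokens, the target model scores all K positions in a single forward pass, and each draft token is accepted with probability min(1, p_target/p_draft); the first rejected position is resampled from the normalized residual distribution, which provably leaves the target model's output distribution unchanged. A minimal NumPy sketch of the verification step, assuming the per-position probability arrays have already been computed (all names and shapes here are illustrative, not from the repo):

```python
import numpy as np

def verify_draft(draft_tokens, draft_probs, target_probs, rng=None):
    """Accept or reject K draft tokens so the result matches the target distribution.

    draft_tokens: (K,)      token ids proposed by the draft model
    draft_probs:  (K, V)    draft-model distribution at each draft position
    target_probs: (K+1, V)  target-model distributions; the extra row samples
                            a "bonus" token when every draft token is accepted
    """
    rng = rng or np.random.default_rng()
    out = []
    for i, tok in enumerate(draft_tokens):
        p, q = target_probs[i, tok], draft_probs[i, tok]
        if rng.random() < min(1.0, p / q):
            out.append(int(tok))  # accept: the target agrees with the draft often enough
        else:
            # Reject: resample from the residual max(0, p - q), renormalized.
            residual = np.maximum(target_probs[i] - draft_probs[i], 0.0)
            out.append(int(rng.choice(len(residual), p=residual / residual.sum())))
            return out  # stop at the first rejection
    # All K drafts accepted: take one extra token from the target model for free.
    out.append(int(rng.choice(target_probs.shape[1], p=target_probs[-1])))
    return out
```

Longer drafts raise the odds of an early rejection, so the draft length K trades draft-model cost against the expected number of accepted tokens per target pass.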
Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24)