Mirror Speculative Decoding: Breaking the Serial Barrier in LLM Inference Paper • 2510.13161 • Published Oct 15, 2025 • 2
M2R2: Mixture of Multi-Rate Residuals for Efficient Transformer Inference Paper • 2502.02040 • Published Feb 4, 2025 • 2
EL-Attention: Memory Efficient Lossless Attention for Generation Paper • 2105.04779 • Published May 11, 2021
Speculative Streaming: Fast LLM Inference without Auxiliary Models Paper • 2402.11131 • Published Feb 16, 2024 • 42