I've built a library for streaming PyTorch models, and I used Kokoro TTS to build a demo. The demo shows that on CPU, roughly 90% of inference time is spent in Kokoro's decoder, and it makes that decoder streamable without splitting the input text into sentences. Here is the demo: https://torchstream.koyeb.app/streaming_kokoro_tts
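For anyone curious what streaming a decoder without sentence splitting can look like, here is a minimal sketch of the general chunk-with-overlap idea: decode the feature sequence in chunks, re-decode a few frames of context on each side, and keep only the valid center so the concatenated output matches a full decode. Everything here is hypothetical for illustration — `decoder` is a stand-in Conv1d stack, and `stream_decode` / `RF` are made-up names, not my library's API or Kokoro's actual decoder:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in decoder: two k=3 convolutions, total receptive field = 5 frames,
# so each output frame depends on at most 2 input frames on either side.
decoder = nn.Sequential(
    nn.Conv1d(8, 8, kernel_size=3, padding=1),
    nn.Conv1d(8, 1, kernel_size=3, padding=1),
).eval()

RF = 2  # frames of left/right context to re-decode around each chunk

@torch.no_grad()
def stream_decode(features, chunk=16):
    """Decode `features` of shape (1, C, T) chunk by chunk, keeping only
    the frames far enough from the chunk borders to be padding-free."""
    T = features.shape[-1]
    out = []
    for start in range(0, T, chunk):
        end = min(start + chunk, T)
        lo, hi = max(0, start - RF), min(T, end + RF)  # chunk + context
        y = decoder(features[..., lo:hi])
        out.append(y[..., start - lo : (start - lo) + (end - start)])
    return torch.cat(out, dim=-1)

features = torch.randn(1, 8, 100)
with torch.no_grad():
    full = decoder(features)
chunks = stream_decode(features)
print(torch.allclose(full, chunks, atol=1e-5))
```

Because the kept frames in each chunk never see the zero padding introduced at the chunk borders, the streamed output is numerically identical to decoding the whole sequence at once, while each chunk can be emitted as soon as it is ready.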