ZUNA: Flexible EEG Superresolution with Position-Aware Diffusion Autoencoders
Abstract
We present ZUNA, a 380M-parameter masked diffusion autoencoder trained to perform masked channel infilling and superresolution for arbitrary electrode counts and positions in EEG signals. The ZUNA architecture tokenizes multichannel EEG into short temporal windows and injects spatiotemporal structure via a 4D rotary positional encoding over (x, y, z, t), enabling inference on arbitrary channel subsets and positions. We train ZUNA on an aggregated, harmonized corpus of 208 public datasets totaling approximately 2 million channel-hours, using a combined reconstruction and heavy channel-dropout objective. We show that ZUNA substantially improves over the ubiquitous spherical-spline interpolation, with the gap widening at higher dropout rates. Crucially, unlike other deep learning methods in this space, ZUNA's performance generalizes across datasets and channel positions, allowing it to be applied directly to novel datasets and problems. Despite its generative capabilities, ZUNA remains computationally practical for deployment. We release Apache-2.0 weights and an MNE-compatible preprocessing/inference stack to encourage reproducible comparisons and downstream use in EEG analysis pipelines.
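The abstract does not specify how the 4D rotary encoding is factorized, but a common way to extend rotary positional encoding to multiple coordinates is to partition each token's feature dimension into one block per coordinate and apply a standard 1D rotary rotation within each block. The NumPy sketch below illustrates this scheme under that assumption; function names, the block partition, and the frequency base are illustrative, not ZUNA's actual implementation.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Standard 1D rotary embedding: rotate feature pairs of x by pos-scaled angles."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)        # geometric frequency schedule
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]            # pair up features for 2D rotation
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def rope_4d(x, coords):
    """Hypothetical 4D rotary encoding: split features into 4 blocks,
    rotate each block by one coordinate of (x, y, z, t).

    x:      (d,) token feature vector, d divisible by 8
    coords: electrode position (x, y, z) plus window time t
    """
    block = x.shape[-1] // 4
    out = [rope_rotate(x[i * block:(i + 1) * block], c)
           for i, c in enumerate(coords)]
    return np.concatenate(out, axis=-1)
```

Because each block is a pure rotation, attention scores between two tokens depend only on their coordinate differences, which is what lets the model attend over arbitrary electrode subsets and positions at inference time.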