# Learnable Telegraph Diffusion for Image Denoising
This repository contains stage-wise trained denoising models for additive Gaussian noise removal, together with a TNRD-style baseline and a classical PDE baseline.
The results below summarize the latest complete overnight sweep:
- Log directory: `logs/overnight_20260408_013608`
- Evaluated datasets: Set12, BSD68
- Noise levels: sigma = 15, 25, 50, 75
- Stage count: 5
Note: the earlier run in `logs/overnight_20260408_013234` hit a scheduler bug and is excluded from the summary below.
## Main Takeaways
- The strongest model in this sweep is the **Finetuned TNRD baseline**, on both Set12 and BSD68 at every tested noise level.
- For both MLP and RBF parameterizations, the **No-wave** variant outperformed the **Telegraph** variant throughout this sweep.
- The **RBF** parameterization consistently outperformed the corresponding **MLP** parameterization.
- End-to-end fine-tuning improved every model family over its stage-wise checkpoint.
- All learned models clearly outperformed the classical PDE baseline at the tested noise levels.
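For context, the noise levels above are standard deviations of additive Gaussian noise on the 0–255 intensity scale, as is conventional for Set12/BSD68 benchmarks. A minimal sketch of generating the noisy evaluation inputs (the helper name is ours, not part of this repository):

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add i.i.d. Gaussian noise with standard deviation `sigma`.

    `img` is expected on the 0-255 scale; the result is clipped back
    into that range, matching common denoising benchmark practice.
    """
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 255.0)

# One noisy realization per tested noise level.
clean = np.full((32, 32), 128.0)
noisy_inputs = {s: add_gaussian_noise(clean, s) for s in (15, 25, 50, 75)}
```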
## Best Results
| Dataset | Sigma | Best model | PSNR (dB) |
|---|---|---|---|
| BSD68 | 15 | Finetuned TNRD baseline | 30.90 |
| BSD68 | 25 | Finetuned TNRD baseline | 28.36 |
| BSD68 | 50 | Finetuned TNRD baseline | 25.43 |
| BSD68 | 75 | Finetuned TNRD baseline | 23.91 |
| Set12 | 15 | Finetuned TNRD baseline | 31.85 |
| Set12 | 25 | Finetuned TNRD baseline | 29.33 |
| Set12 | 50 | Finetuned TNRD baseline | 26.05 |
| Set12 | 75 | Finetuned TNRD baseline | 24.18 |
## Plots

*Base Models* and *Finetuned Models* comparison plots are generated into `plots/` by `plot_experiment_results.py`.
## BSD68 Results
### Base Models
| Method (PSNR, dB) | sigma = 15 | sigma = 25 | sigma = 50 | sigma = 75 |
|---|---|---|---|---|
| MLP Telegraph | 28.31 | 25.06 | 22.86 | 20.88 |
| MLP No-wave | 29.08 | 26.87 | 23.87 | 21.71 |
| RBF Telegraph | 27.97 | 25.52 | 22.70 | 20.89 |
| RBF No-wave | 30.46 | 27.86 | 24.47 | 22.30 |
| TNRD baseline | 30.41 | 27.85 | 24.58 | 22.42 |
### Finetuned Models
| Method (PSNR, dB) | sigma = 15 | sigma = 25 | sigma = 50 | sigma = 75 |
|---|---|---|---|---|
| Finetuned MLP Telegraph | 29.90 | 27.30 | 24.42 | 22.40 |
| Finetuned MLP No-wave | 29.88 | 27.61 | 24.60 | 22.93 |
| Finetuned RBF Telegraph | 30.56 | 27.70 | 24.65 | 23.26 |
| Finetuned RBF No-wave | 30.79 | 28.30 | 25.23 | 23.74 |
| Finetuned TNRD baseline | 30.90 | 28.36 | 25.43 | 23.91 |
## Set12 Results
### Base Models
| Method (PSNR, dB) | sigma = 15 | sigma = 25 | sigma = 50 | sigma = 75 |
|---|---|---|---|---|
| MLP Telegraph | 29.19 | 25.92 | 23.32 | 20.99 |
| MLP No-wave | 29.59 | 27.47 | 24.54 | 22.28 |
| RBF Telegraph | 29.19 | 26.50 | 23.32 | 21.36 |
| RBF No-wave | 31.45 | 28.97 | 25.43 | 23.12 |
| TNRD baseline | 31.43 | 28.94 | 25.54 | 23.20 |
### Finetuned Models
| Method (PSNR, dB) | sigma = 15 | sigma = 25 | sigma = 50 | sigma = 75 |
|---|---|---|---|---|
| Finetuned MLP Telegraph | 30.63 | 28.03 | 24.76 | 22.38 |
| Finetuned MLP No-wave | 30.64 | 28.40 | 25.10 | 23.12 |
| Finetuned RBF Telegraph | 31.51 | 28.66 | 25.23 | 23.51 |
| Finetuned RBF No-wave | 31.74 | 29.23 | 25.92 | 24.05 |
| Finetuned TNRD baseline | 31.85 | 29.33 | 26.05 | 24.18 |
## Classical PDE Baseline
The latest overnight run evaluated the classical PDE baseline only at sigma = 15, 50, and 75; sigma = 25 was not run, hence the dash in the table below.
| Dataset (PSNR, dB) | sigma = 15 | sigma = 25 | sigma = 50 | sigma = 75 |
|---|---|---|---|---|
| BSD68 | 26.00 | - | 19.56 | 15.08 |
| Set12 | 26.73 | - | 19.55 | 14.98 |
## Notes
- The learned models were trained stage-wise first, then optionally fine-tuned end-to-end on the same noise level.
- Fine-tuned checkpoints and base checkpoints were both evaluated using the same sigma-specific setup.
- `evaluate_checkpoints.py` produced the main result tables used here; `plot_experiment_results.py` generated the plots in `plots/`.
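All numbers in the tables above are PSNR values. A minimal sketch of the standard PSNR computation on the 0–255 scale (this mirrors the usual definition, not the exact code in `evaluate_checkpoints.py`):

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a clean and a denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: PSNR of a noisy image against the clean original.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(clean + rng.normal(0, 25, size=clean.shape), 0, 255)
print(f"{psnr(clean, noisy):.2f} dB")
```

A denoiser that raises this value above the noisy-input PSNR is improving the image; the benchmark tables report exactly this metric averaged over each dataset.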