[Animation: Denoising Progression]

[Animation: Denoising Transition]
What I Built
Investigated stable diffusion models as a drop-in replacement for normalizing flows in BayesFlow, a framework for simulation-based Bayesian inference. Implemented a U-Net backbone with attention mechanisms and custom diffusion training loops in PyTorch, adapting image-generation architectures to learn posterior distributions conditioned on observed data.
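The core training objective can be sketched in a few lines. This is a toy NumPy illustration, not the thesis code: the linear noise schedule values are illustrative, and the `predict_eps` callable stands in for the conditional U-Net that receives the noisy parameters, the timestep, and the observed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (illustrative values, not the thesis configuration)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def noisy_sample(theta0, t, eps):
    """Closed-form forward process q(theta_t | theta_0)."""
    return np.sqrt(alphas_bar[t]) * theta0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def training_loss(predict_eps, theta0, x_obs):
    """One epsilon-prediction training step, conditioned on observed data x_obs."""
    t = rng.integers(0, T)                     # random diffusion step
    eps = rng.standard_normal(theta0.shape)    # noise target
    theta_t = noisy_sample(theta0, t, eps)
    eps_hat = predict_eps(theta_t, t, x_obs)   # network call; stands in for the U-Net
    return np.mean((eps - eps_hat) ** 2)       # simple MSE on the noise

# Exercise the code path with a dummy predictor that always outputs zeros
theta0 = rng.standard_normal(4)
x_obs = rng.standard_normal(8)
loss = training_loss(lambda th, t, x: np.zeros_like(th), theta0, x_obs)
```

In the real setup, `theta0` comes from the simulator's prior-predictive draws and `predict_eps` is the trained network; minimizing this loss over many (parameter, data) pairs is what amortizes posterior inference.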
What I Learned
Diffusion models handle multimodality naturally. Traditional normalizing flows struggle with multi-peaked posteriors because they map a unimodal base distribution through a deterministic, invertible transform. Diffusion's iterative refinement sidesteps this: the reverse process can converge to different modes depending on the noise realization. Building the forward and reverse processes from scratch gave me a deep understanding of noise scheduling, the reparameterization trick applied at each step, and why classifier-free guidance works for conditional generation.
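The mode-switching behavior is easy to demonstrate on a toy problem. In this sketch (my illustration, not the thesis code) the target "posterior" is a two-component Gaussian mixture at ±M, for which the optimal noise predictor is known in closed form and stands in for a trained network; running the DDPM ancestral reverse chain from different noise seeds then lands samples in both modes.

```python
import numpy as np

T = 200
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

M, S2 = 3.0, 0.1  # mixture modes at +-M, per-component variance S2

def eps_star(x, t):
    """Exact noise prediction for theta0 ~ 0.5 N(-M,S2) + 0.5 N(+M,S2).
    The marginal at step t is again a two-component Gaussian mixture,
    and eps = -sqrt(1 - abar_t) * score of that marginal."""
    abar = alphas_bar[t]
    var = abar * S2 + (1.0 - abar)
    mus = np.array([-M, M]) * np.sqrt(abar)
    logw = -0.5 * (x - mus) ** 2 / var      # unnormalized log responsibilities
    w = np.exp(logw - logw.max())
    w /= w.sum()
    score = np.sum(w * (mus - x)) / var
    return -np.sqrt(1.0 - abar) * score

def sample(seed):
    """Ancestral (DDPM) reverse chain from pure noise. Each step applies the
    reparameterized update x_{t-1} = mean + sigma_t * z with fresh noise z,
    which is what lets different seeds commit to different modes."""
    g = np.random.default_rng(seed)
    x = g.standard_normal()
    for t in range(T - 1, -1, -1):
        e = eps_star(x, t)
        mean = (x - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * e) / np.sqrt(alphas[t])
        x = mean + (np.sqrt(betas[t]) * g.standard_normal() if t > 0 else 0.0)
    return x

samples = np.array([sample(s) for s in range(200)])
```

Early in the reverse chain the two modes are barely separated, so the injected noise decides which basin a trajectory falls into; a flow's deterministic transport has no equivalent branching mechanism.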
Research Questions
- How do diffusion models compare to normalizing flows for posterior approximation quality?
- Can stable diffusion capture multimodal posteriors more effectively?
- What are the computational trade-offs between sampling quality and inference speed?
Project
🔗 GitHub Repository | 📚 BayesFlow Framework
Tech Stack: PyTorch, U-Net with Attention, Custom Diffusion Loops, BayesFlow | Institution: Technical University of Dortmund
Citation
@online{prasanna_koppolu,
  author = {Prasanna Koppolu, Bhanu},
  title = {Stable {Diffusion} for {BayesFlow}},
  url = {https://bhanuprasanna2001.github.io/projects/thesis_bayesflow.html},
  langid = {en}
}