Overview

Why This Workshop Now

Deep generative models now power image synthesis, video generation, language modeling, and scientific discovery, but their most important capabilities are still poorly understood.

Diffusion models, flow-based models, and autoregressive models have delivered impressive empirical gains. At the same time, major open questions remain around reliability, interpretability, privacy, and scientific use. This workshop creates a focused venue for theory, empirical analysis, and domain-driven applications to meet.

The program is designed around a practical scientific question: when a model appears capable, is it reproducing training data, capturing genuine distributional structure, or performing a stronger form of compositional reasoning that transfers beyond what it has seen?

Memorization

Understand how over-parameterized DGMs retain, reproduce, or expose training data, and what that means for privacy, robustness, and trustworthy deployment.

Generalization

Characterize when generated samples reflect learned structure rather than template matching, and how model size, data complexity, and training dynamics shape that boundary.

Reasoning

Evaluate whether DGMs can support compositional, causal, or structured inference that matters for multi-step generation and scientific workflows.

Topics of Interest

The workshop welcomes work on foundational, empirical, and application-driven questions in deep generative models.

This workshop aims to bring together researchers working on the foundations of diffusion models, flow-based models, autoregressive models, and related generative learning frameworks. We welcome work that sharpens our understanding of what DGMs learn, how they behave under scale, and how they can be evaluated reliably.

The topics below reflect the research directions prioritized for the workshop and are intended to guide the scope of submissions and discussion.

Memorization and Generalization

Empirical and theoretical studies of memorization, generalization, regime transitions, and the roles of capacity, data complexity, and scaling.

Reasoning and Compositionality

Mechanisms for compositional, causal, or structured inference, including in-context learning, chain-of-thought, and multi-step generation.

Optimization and Inductive Bias

The roles of learning dynamics, architecture, and implicit regularization in shaping memorization, generalization, and reasoning behavior.

Evaluation and Benchmarking

Metrics and diagnostic frameworks for distinguishing memorization from genuine generalization, plus robustness, privacy, and extrapolation benchmarks.

Scientific Discovery with DGMs

Applications in scientific machine learning, healthcare, protein design, and molecular discovery where interpretability and reasoning matter as much as raw sample quality.

Call for Papers

This workshop is non-archival, and accepted papers will not appear in official proceedings. Accepted submissions will be posted on OpenReview, and authors remain free to submit and publish their work elsewhere in the future. Accepted papers will be presented as talks or posters during the workshop, and a best-paper award will recognize outstanding contributions to the field.

Important Dates

  • Paper Submission Deadline: 11:59 PM AoE on May 8, 2026
  • Notification of Acceptance: May 15, 2026
  • Camera-Ready Deadline: June 5, 2026
  • Workshop Date: TBD (either July 10 or July 11)

Submission Instructions

  • Submit via OpenReview.
  • Contributed papers are expected to align with the workshop scope described in Topics of Interest.

Formatting Instructions

  • Please prepare submissions using the workshop Submission Style Template.
  • The workshop Camera Ready Style Template will be posted soon.
  • Papers should be a maximum of 8 pages, excluding references and appendices.
  • Submission is double-blind, and authors must anonymize their manuscripts.

Schedule

Workshop Schedule

This workshop combines invited talks, contributed oral presentations, poster sessions, a panel discussion, and closing awards. Note that this schedule is subject to change.

Morning

  • 8:20-8:30 Opening remarks
  • 8:30-9:00 Invited talk 1
  • 9:00-9:30 Invited talk 2
  • 9:30-10:30 Poster session and break
  • 10:30-11:00 Invited talk 3
  • 11:00-11:30 Invited talk 4
  • 11:30-11:45 Oral presentation 1
  • 11:45-12:00 Oral presentation 2

Afternoon

  • 12:00-1:30 Lunch
  • 1:30-2:00 Invited talk 5
  • 2:00-2:30 Invited talk 6
  • 2:30-3:30 Poster session and break
  • 3:30-3:45 Oral presentation 3
  • 3:45-4:00 Oral presentation 4
  • 4:00-4:50 Panel discussion
  • 4:50-5:00 Awards and closing

Speakers

Confirmed Invited Speakers (Alphabetical Order)

All invited speakers listed here are confirmed. The speaker slate spans theory, reasoning, optimization, and scientific applications of deep generative models across multiple continents and career stages.

Ge Liu

Assistant Professor, University of Illinois Urbana-Champaign

Yi Ma

Chair Professor, University of Hong Kong

Organizers

Organizing Team (Alphabetical Order)

The team combines expertise across diffusion models, deep learning theory, optimization, sampling, and applications.

Wei Huang

Research Scientist, RIKEN Center for Advanced Intelligence Project

Qing Qu

Assistant Professor, University of Michigan

Molei Tao

Professor, Georgia Institute of Technology

Peng Wang

Assistant Professor, University of Macau

Renyuan Xu

Assistant Professor, Stanford University

Student Organizers

Student Organizers (Alphabetical Order)

Justin Lee

Ph.D. Student, University of Michigan

Xiao Li

Postdoctoral Researcher, University of Hong Kong

Awards

The workshop schedule includes an awards-and-closing slot at the end of the day. Details of the recognition for outstanding submissions and presentations will be shared here.

TBD

Contact

Workshop Information

For questions about participation, contributions, or logistics, contact the workshop leads below.

Sponsors

TBD