Schedule

Room 101-B, April 26, 2026

See also the full schedule on the ICLR website.

Time (BRT) Event
9:00 - 9:30 Invited Talk 1 - Jeremy Cohen (Flatiron Institute) : How does gradient descent work?
9:30 - 10:00 Invited Talk 2 - Julia Kempe (NYU & Meta) : Some Insights into LLM Reasoning
10:00 - 10:30 Invited Talk 3 - David Bau (Northeastern University) : Reading Science Back Out of AI
10:30 - 11:00 Coffee Break
11:00 - 11:15 Contributed Talk 1 - Minhak Song (KAIST) : Zeroth-Order Optimization at the Edge of Stability
11:15 - 11:30 Contributed Talk 2 - Jingwen Liu (Columbia University) : Less Data, Faster Training: sampling bias from a small dataset can speed up training
11:30 - 11:45 Contributed Talk 3 - Bruno Loureiro (CNRS) : Optimal scaling laws in learning hierarchical multi-index models
11:45 - 12:30 Poster Session 1
12:30 - 13:30 Lunch Break
13:30 - 14:00 Invited Talk 4 - Richard Baraniuk (Rice University & OpenStax) : The science of self-consuming loops in AI
14:00 - 14:30 Invited Talk 5 - Matthieu Wyart (Johns Hopkins University & EPFL) : Deriving Neural Scaling Laws from the statistics of natural language
14:30 - 15:00 Coffee Break
15:00 - 16:00 Panel Discussion: Jeremy Cohen, Julia Kempe, David Bau, Richard Baraniuk, Matthieu Wyart
16:00 - 16:15 Challenge Winners Announcement
16:15 - 17:00 Poster Session 2