Title: AI Meets Philosophy of Science
Abstract: Why do simple learning rules yield AI systems that generalize far beyond their training data? I argue that this reflects an abundance of learnable structure in nature, and that this abundance motivates Nomic Liberalism, a conception of laws developed from Minimal Primitivism (Chen and Goldstein 2022). On this view, laws can be simple, predictive, and representation-relative, and they can exist at many scales and in domains far beyond fundamental physics. Simplicity functions as a nomic razor: an epistemic guide to discovering laws rather than a general guide to truth (Chen 2025).
I show how puzzling phenomena in machine learning, such as double descent, scaling laws, and the emergence of broad, general capabilities from simple objectives like next-token prediction, can be understood as learning systems discovering such liberal laws, often in domains traditionally thought to lack lawful structure and in coordinate systems quite unlike familiar human concepts. AI success thus provides concrete evidence for this expanded conception of lawhood. Yet abundance has limits. Recent results in quantum foundations establish in-principle constraints on learning: in high-dimensional quantum systems, nearly all quantum states are observationally indistinguishable, a limit no learning algorithm can overcome (Chen and Tumulka 2025). A satisfactory theory of induction must therefore explain both why learning works so well and why it must sometimes fail.
As usual, we'll meet in the Maloney Seminar Room, Social Science Building 224, from 3 to 5 p.m. Those unable to attend in person can join virtually via this Zoom link.