Causal Dynamics Learning
Ph.D. Dissertation
Reinforcement learning offers a versatile paradigm for developing autonomous decision-making agents, but many current algorithms still require large amounts of data and generalize poorly. One central difficulty is that correlation-based learning tends to entangle every observed state factor with the agent's actions, which inflates sample complexity and leaves learned policies vulnerable to spurious correlations.
This dissertation studies how causal reasoning can improve the sample efficiency and generalization of RL algorithms. Through the lens of causality, an agent can reason about which actions and state factors affect future states, and which factors determine task success. Such causal structure supports more accurate dynamics and reward models, more compact state abstractions, strategic exploration, reusable skill discovery, and structured representations of low-level observations.
The thesis contributes methods for learning minimal causal state abstractions, designing intrinsic rewards from local causal dependencies, discovering reusable skills that generate meaningful factor interactions, and extracting structured state and action representations when high-level factors are not directly available. Together, these contributions help agents infer the causes and consequences of their actions, generalize to unseen states, and learn new tasks with limited data.
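To make the recurring architecture concrete, the sketch below shows a factored dynamics model in which each next-state factor is predicted from a gated subset of the current state factors and the action; the learned gates approximate each factor's causal parents, and the factors they retain form a task-independent state abstraction. This is an illustrative sketch under stated assumptions, not the dissertation's implementation: the class, dimensions, and the L1 sparsity prior standing in for dependency discovery are all hypothetical.

```python
# Illustrative sketch of a factored causal dynamics model; all names and
# hyperparameters here are hypothetical, not the dissertation's code.
import torch
import torch.nn as nn

class FactoredDynamics(nn.Module):
    """Predict each next-state factor from a gated subset of
    (state factors, action); the gates approximate causal parents."""

    def __init__(self, n_factors, factor_dim, action_dim, hidden=64):
        super().__init__()
        # One logit per (child factor, parent input); the last column gates the action.
        self.mask_logits = nn.Parameter(torch.zeros(n_factors, n_factors + 1))
        in_dim = n_factors * factor_dim + action_dim
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, factor_dim))
            for _ in range(n_factors)
        )

    def forward(self, factors, action):
        # factors: (batch, n_factors, factor_dim); action: (batch, action_dim)
        batch = factors.shape[0]
        preds = []
        for i, net in enumerate(self.nets):
            gate = torch.sigmoid(self.mask_logits[i])         # soft dependency mask
            gated_state = factors * gate[:-1].view(1, -1, 1)  # gate each parent factor
            gated_action = action * gate[-1]                  # gate the action input
            x = torch.cat([gated_state.reshape(batch, -1), gated_action], dim=-1)
            preds.append(net(x))
        return torch.stack(preds, dim=1)                      # (batch, n_factors, factor_dim)

# Training pairs a prediction loss with a sparsity prior on the mask, so that
# only genuine causal parents survive; the retained factors define the abstraction.
model = FactoredDynamics(n_factors=4, factor_dim=3, action_dim=2)
s, a, s_next = torch.randn(8, 4, 3), torch.randn(8, 2), torch.randn(8, 4, 3)
loss = (model(s, a) - s_next).pow(2).mean() \
       + 1e-3 * torch.sigmoid(model.mask_logits).sum()
loss.backward()
```

The actual methods typically identify dependencies with conditional-independence or conditional mutual information estimates rather than a simple L1 prior, and a detected spike in local dependence between the agent and an object can double as an intrinsic exploration reward, in the spirit of the exploration work listed below. These contributions are developed in the following works: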
Task-independent state abstractions from causal dynamics models.
Minimal and reusable causal state abstractions for reinforcement learning.
Exploration via local dependencies induced by agent-object interactions.
Unsupervised skill discovery guided by factor interactions.
Structured world models with object-centric representations.
Action-grouped representations for learning structured latent dynamics.
For questions or comments about this dissertation, please contact zizhao.wang@utexas.edu.