AI Seminar: When Can You Pretend Data Are IID (Even If They’re Not), and What to Do When You Can’t

Event Speaker
Karthika Mohan
Assistant professor of computer science, Oregon State University
Event Type
Artificial Intelligence
Date
Event Location
Zoom and LINC 302
Event Description

It’s tempting to think of data samples as neatly organized, each one sitting by itself like a cupcake in a bakery display: perfectly spaced and unaffected by the rest. But in the real world, especially in scenarios where people are involved, data samples influence and affect each other. A student’s performance might depend on their study group, a shopper’s decision could be shaped by a friend’s review, and one person getting vaccinated might protect their entire household. Ignoring these interactions when working with such data may lead us to wildly wrong conclusions.

In this talk, we explore what happens when standard causal inference tools that are designed under the assumption that data are independent and identically distributed (IID) are applied to messy, interconnected data. We dive into settings where treatments spill over between individuals and where outcomes become entangled. Using causal graphs, we develop a systematic framework to detect and quantify this interaction-induced bias. We identify conditions under which IID-based methods still work surprisingly well, and discuss techniques to correct for bias when they don’t. Whether you're a theorist, a practitioner, or just curious about how cause and effect work in our hyper-connected world, this talk offers a fresh perspective on doing causal inference when data just won't sit still!
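The vaccinated-household example above can be made concrete with a small simulation. This sketch is purely illustrative and not taken from the talk: the household structure, effect sizes, and Bernoulli design are all assumptions. It shows one way treatments can "spill over": a naive IID-style difference in means recovers only the direct effect of treatment, while the policy-relevant "treat everyone vs. treat no one" effect also includes the spillover from housemates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative assumptions, not from the talk):
# households of two people, where treating one person also benefits the other.
n_households = 50_000
tau, gamma = 1.0, 0.5  # assumed direct effect and housemate spillover

t = rng.integers(0, 2, size=(n_households, 2))   # Bernoulli(0.5) treatment
partner = t[:, ::-1]                             # housemate's treatment status
y = tau * t + gamma * partner + rng.normal(0.0, 1.0, size=t.shape)

t_flat, y_flat = t.ravel(), y.ravel()

# A naive "IID" difference in means only recovers the direct effect tau,
# because under random assignment both groups have the same share of
# treated housemates ...
naive = y_flat[t_flat == 1].mean() - y_flat[t_flat == 0].mean()

# ... whereas the effect of treating everyone vs. no one is tau + gamma.
total = tau + gamma
print(f"naive diff-in-means: {naive:.3f}; everyone-vs-no-one effect: {total:.1f}")
```

Under this design the naive estimate is not wrong for the direct effect, but it silently answers a different question than the one a policymaker deciding on universal treatment would ask; interference methods of the kind the talk discusses are about diagnosing and closing exactly this kind of gap.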

Speaker Biography

Karthika Mohan is an assistant professor of computer science in the School of EECS at Oregon State University. Before joining Oregon State, she was a postdoctoral scholar in the Computer Science department at the University of California, Berkeley, mentored by Stuart Russell. Mohan received her Ph.D. in computer science (artificial intelligence) from the University of California, Los Angeles (UCLA), where she was advised by Judea Pearl. Her research is interdisciplinary, and her areas of interest include causal inference, graphical models, and AI safety. She received the 2017 Google Outstanding Graduate Research Award, a UCLA Commencement Award. She currently serves on the editorial board of the Journal of Causal Inference.