AI4OPT Tutorial Lectures: Sanjay Shakkottai

Dates: Monday, March 13 through Friday, March 17, from 10:00 AM to 12:00 PM (noon) each day.

Location: Varies by lecture; see the 'Schedule' section below.

Live stream link: https://gatech.zoom.us/j/99381428980

Causal Inference Course

Speaker: Sanjay Shakkottai

Moving away from decision-making based on observed correlations in data, causal inference develops the mathematical foundations for reasoning about the direction of implication, that is, cause and effect, underlying observed dependencies in data. These foundations lead to tools and techniques for building better models and making better decisions in emerging data-driven systems. This short course covers the motivation, mathematical foundations, and machine learning algorithms for causal reasoning.

Schedule

  1. Mon, Mar 13: Lecture 1, 10 am – noon, Skiles 006 (Coffee and snacks provided)
  2. Tue, Mar 14: Lecture 2, 10 am – noon, Groseclose 119 (Lunch provided)
  3. Wed, Mar 15: Lecture 3, 10 am – noon, Love Manufacturing Building 184 (Coffee and snacks provided)
  4. Thu, Mar 16: Lecture 4, 10 am – noon, Groseclose 119 (Lunch provided)
  5. Fri, Mar 17: Lecture 5, 10 am – noon, Love Manufacturing Building 184 (Coffee and snacks provided)

Topics

  1. Overview 
    • Motivation, Examples, Interventions
  2. Independence, Conditional Independence and D-Separation
    • Conditional Independence (CI)
    • Directed Acyclic Graphs (DAGs)
    • D-separation Properties
    • Global Markov Property
  3. Mathematical Formalism
    • Structural Causal Model (SCM)
    • Graphical Representation
  4. Interventions Overview
    • Observational vs interventional SCM
    • ‘Do’ Operation With SCM (see the SCM sketch after this list)
    • Types Of Interventions
    • Alternate representations of ‘do’
    • Total Causal Effect
  5. Interventions Calculus
    • Computing the intervention distribution using the observational distribution
      • Truncated factorization theorem
      • Average Causal Effect (ACE)
      • Kidney stone example (Simpson’s paradox; see the adjustment sketch after this list)
    • Adjustment
      • Definition of confounding
      • Valid adjustment set
      • Invariant conditionals
      • Adjustment theorem (parental adjustment, backdoor criterion)
    • Do-calculus
      • General rules for deriving the intervention distribution from the observational distribution (generalizing the adjustment theorem)
      • Front door theorem
  6. Learning Causal Models
    • Learning with infinite samples
      • Learning up to Markov equivalence (CPDAG)
      • Faithfulness
    • Algorithms for structure learning
      • PC Algorithm for the CPDAG
      • ICA algorithm for LiNGAM
  7. Hidden Variables (Latent confounders)
    • Instrumental variables and the 2SLS method (see the 2SLS sketch after this list)
  8. Conditional Independence (CI) Testing
    • Hardness of CI testing
    • Partial correlation coefficient (see the CI-testing sketch after this list)
    • Kernel based methods
    • Conditional randomization
    • Classifier based testing
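
To make the ‘do’ operation in Topics 3 and 4 concrete, here is a minimal Python sketch of a two-variable SCM with a hidden confounder. The structural equations and coefficients are invented purely for illustration; the point is that a hard intervention replaces a structural assignment, while conditioning does not.

```python
# Toy SCM with a hidden confounder U; a hard 'do' intervention is implemented
# by replacing one structural assignment. Equations/coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, do_x=None):
    """SCM:  U := N_u,   X := U + N_x,   Y := 2*X + 3*U + N_y.
    Passing do_x overrides the structural assignment for X (the 'do' operation)."""
    u = rng.normal(size=n)
    x = u + rng.normal(size=n) if do_x is None else np.full(n, float(do_x))
    y = 2.0 * x + 3.0 * u + rng.normal(size=n)
    return x, y

n = 1_000_000
x_obs, y_obs = sample_scm(n)
_, y_do = sample_scm(n, do_x=1.0)

# Conditioning is not intervening: E[Y | X near 1] picks up the confounder U
# (about 3.5 here), while E[Y | do(X=1)] gives the total causal effect (about 2.0).
near_one = np.abs(x_obs - 1.0) < 0.05
print(f"E[Y | X near 1] = {y_obs[near_one].mean():.2f}")
print(f"E[Y | do(X=1)]  = {y_do.mean():.2f}")
```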
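
The kidney-stone example in Topic 5 can be worked out numerically with the backdoor-adjustment formula P(Y=1 | do(T=t)) = sum_z P(Y=1 | T=t, Z=z) P(Z=z), a special case of the truncated factorization theorem. The counts below are the ones commonly quoted for this example; treat them as illustrative rather than authoritative.

```python
# Backdoor adjustment on kidney-stone style data (Simpson's paradox).
# Z = stone size (confounder), T = treatment, Y = recovery.

counts = {
    # (treatment, stone size): (recoveries, patients)
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}
n_total = sum(n for _, n in counts.values())

def p_recover_obs(t):
    """Naive observational estimate P(Y=1 | T=t)."""
    rec = sum(r for (tt, _), (r, _) in counts.items() if tt == t)
    pat = sum(n for (tt, _), (_, n) in counts.items() if tt == t)
    return rec / pat

def p_recover_do(t):
    """Interventional estimate P(Y=1 | do(T=t)) by adjusting for stone size Z:
    sum over z of P(Y=1 | T=t, Z=z) * P(Z=z)."""
    total = 0.0
    for z in ("small", "large"):
        r, n = counts[(t, z)]
        p_z = sum(nn for (_, zz), (_, nn) in counts.items() if zz == z) / n_total
        total += (r / n) * p_z
    return total

for t in ("A", "B"):
    print(t, "observational:", round(p_recover_obs(t), 3),
          "  interventional:", round(p_recover_do(t), 3))

# Observationally B looks better (~0.83 vs ~0.78); after adjusting for stone
# size the ordering flips (A ~0.83 vs B ~0.78): Simpson's paradox. The average
# causal effect is ACE = P(Y=1 | do(A)) - P(Y=1 | do(B)).
```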
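
For Topic 7, here is a minimal two-stage least squares (2SLS) sketch on simulated data, in which a hidden confounder biases ordinary least squares but a valid instrument recovers the causal coefficient. The data-generating process and every coefficient are made up for illustration.

```python
# Instrumental variables via two-stage least squares (2SLS) on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)                         # hidden confounder
z = rng.normal(size=n)                         # instrument: affects X, not Y directly
x = 1.0 * z + 2.0 * u + rng.normal(size=n)     # treatment
y = 3.0 * x + 5.0 * u + rng.normal(size=n)     # outcome; true causal effect of X is 3

# Naive OLS slope of Y on X is biased by the hidden confounder U.
ols = np.polyfit(x, y, 1)[0]

# Stage 1: regress X on Z.  Stage 2: regress Y on the fitted values X_hat.
b1, b0 = np.polyfit(z, x, 1)
x_hat = b1 * z + b0
tsls = np.polyfit(x_hat, y, 1)[0]

print(f"OLS estimate : {ols:.2f}  (biased, about 4.7)")
print(f"2SLS estimate: {tsls:.2f}  (close to the true effect 3.0)")
```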
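
For Topic 8, a sketch of conditional-independence testing with the partial correlation coefficient, using data simulated from a chain X → Z → Y, so X and Y are marginally dependent but independent given Z. The simulation, the regression-residual construction, and the 1.96 threshold shown are standard but used here only for illustration.

```python
# Partial-correlation test of X independent of Y given Z, on a simulated chain X -> Z -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)
y = 0.8 * z + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c from each."""
    res_a = a - np.polyval(np.polyfit(c, a, 1), c)
    res_b = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(res_a, res_b)[0, 1]

rho_xy = np.corrcoef(x, y)[0, 1]      # marginal correlation: clearly nonzero
rho_xy_z = partial_corr(x, y, z)      # partial correlation given Z: near zero

# Fisher z-statistic, sqrt(n - |S| - 3) * arctanh(rho) with |S| = 1 conditioning variable;
# |stat| > 1.96 would reject conditional independence at roughly the 5% level.
stat = np.sqrt(n - 1 - 3) * np.arctanh(rho_xy_z)
print(f"corr(X, Y) = {rho_xy:.2f}   parcorr(X, Y | Z) = {rho_xy_z:.3f}   z = {stat:.2f}")
```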

Bio: Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is a professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin and holds the Cockrell Family Chair in Engineering #15. He received the NSF CAREER award (2004) and was elected an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021 and is currently the Editor-in-Chief of IEEE/ACM Transactions on Networking. Shakkottai’s research interests lie at the intersection of algorithms for resource allocation, statistical learning, and networks, with applications to wireless communication networks and online platforms.
