NSF Causal Foundations for Decision Making and Learning



October 1st, 2023


Principal Investigators
Abstract

Artificial intelligence (AI) has become ubiquitous in our daily lives, and the importance of decision-making as a scientific challenge has increased dramatically. Decisions that were once made by humans are increasingly delegated to automated systems or made with their assistance. However, despite substantial recent progress, the current generation of AI technology lacks explainability, robustness, and adaptability, which hinders trust in AI. There is a growing recognition that robust decision-making requires an understanding of the often complex and dynamic causal mechanisms underlying the environment, while most current formalisms in AI lack explicit treatment of causal mechanisms. This project brings together the power of causal modeling and AI decision-making and learning to produce AI systems that rely on less data, can better justify and explain their decisions to people, react better to new circumstances, and are consequently safer and more trustworthy. The project produces new foundations (principles, methods, and tools) for causal decision-making systems. It enriches the traditional AI formalisms with causal ingredients for more efficient, robust, generalizable, and explainable decision-making, with the potential to fundamentally transform the AI decision-making field. The theory will be evaluated through real-world use cases in robotics and public health. The researchers will make extensive educational efforts and develop training content with a focus on mentorship and broadening the participation of underrepresented groups. The team will engage in knowledge transfer activities, including authoring an introductory book on causal decision-making and organizing events to discuss AI and decision-making topics.

This project integrates the framework of structural causal models with the leading approaches for decision-making in AI, including model-based planning with Markov decision processes and their extensions, reinforcement learning, and graphical models such as influence diagrams. The outcomes revolutionize traditional AI decision-making with causal modeling toward developing more efficient, robust, generalizable, and explainable decision-making systems. In three thrusts, the project develops new foundations (i.e., principles, theory, and algorithms) and provides a common unified framework for causality-empowered decision-making that generalizes the leading decision-making approaches. Thrust 1 studies essential aspects of causal decision-making to guarantee that the decisions of autonomous agents and decision-support systems are robust, sample-efficient, and precise. These goals are realized by developing methods for causality-integrated online and offline policy learning, interventional planning, imitation learning, curriculum learning, knowledge transfer, and adaptation. Thrust 2 studies additional aspects of causal decision-making that are especially important for decision-support systems where humans are in the loop, including how to exploit causality for constructing explanations, decide when to involve humans, and endow the systems with competence awareness and the ability to make fair decisions that align with the values of their users. Thrust 3 enhances the scalability of the resulting tools and their ability to reason efficiently, trade off both between multiple objectives and between explainability and decision quality, and learn a causal model of the world. Together, these thrusts will contribute to a new generation of powerful AI tools for developing autonomous agents and decision-support systems.
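To give a flavor of the structural-causal-model formalism the project builds on, the toy sketch below (hypothetical variables, not the project's code) shows why observational and interventional quantities can differ: with an unobserved confounder U driving both treatment X and outcome Y, P(Y=1 | X=1) and P(Y=1 | do(X=1)) diverge. The model here is X := U and Y := X AND U, chosen purely for illustration.

```python
from fractions import Fraction

def scm(u, x=None):
    """Evaluate the toy SCM for a given value of the unobserved confounder U.

    If x is given, it overrides the structural assignment for X,
    modeling the do(X = x) intervention (the edge U -> X is cut).
    """
    X = u if x is None else x  # structural assignment X := U, unless intervened
    Y = X and u                # structural assignment Y := X AND U
    return X, Y

# Observational P(Y=1 | X=1): let X follow its mechanism, condition on X=1.
num = den = Fraction(0)
for u in (0, 1):               # U ~ Bernoulli(1/2), enumerated exactly
    p_u = Fraction(1, 2)
    X, Y = scm(u)
    if X == 1:
        den += p_u
        if Y == 1:
            num += p_u
p_obs = num / den

# Interventional P(Y=1 | do(X=1)): force X=1, average over U.
p_do = sum(Fraction(1, 2) for u in (0, 1) if scm(u, x=1)[1] == 1)

print(p_obs)  # 1: conditioning on X=1 also selects U=1
print(p_do)   # 1/2: intervening breaks the X-U dependence
```

The gap between `p_obs` and `p_do` is exactly the kind of confounding effect that purely associational decision-making formalisms cannot distinguish, and that the project's causal machinery is designed to handle.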

Research Team
    PhD Students
  • Anna Raichev (UC Irvine)
  • Bobak Pezeshki (UC Irvine)
  • Kyungmin Kim (UC Irvine)
  • Nicholas Cohen (UC Irvine)
  • Saaduddin Mahmud (UMass Amherst)
  • Abhinav Bhati (UMass Amherst)
  • Arvind Raghavan (Columbia University)
  • Mingxuan Li (Columbia University)
  • Aurghya Maiti (Columbia University)
  • Adiba Ejaz (Columbia University)
  • Scott Mueller
Grant Information
Award Abstract # 2321786
CISE: Large: Causal Foundations for Decision Making and Learning
NSF Org:
CNS
Division Of Computer and Network Systems
Principal Investigators:
Elias Bareinboim
Rina Dechter
Shlomo Zilberstein
Sven Koenig
Jin Tian
NSF Program(s):
CISE Core: Large Projects
For more information see the NSF award page.
Research

2024

[C003] PDF
Anna K. Raichev, Jin Tian, Rina Dechter, Alexander Ihler. "Estimating Causal Effects from Learned Causal Bayesian Networks".
To appear in Proceedings of the 27th European Conference on Artificial Intelligence (ECAI 2024).

[C002] PDF
Yuta Kawakami, Manabu Kuroki, Jin Tian. "Identification and Estimation of Conditional Average Partial Causal Effects via Instrumental Variable".
In Proceedings of the 40th Conference on Uncertainty in Artificial Intelligence (UAI 2024).

[C001] PDF
Bobak Pezeshki, Kalev Kask, Alexander Ihler, Rina Dechter. "Value-Based Abstraction Functions for Abstraction Sampling".
In Proceedings of the 40th Conference on Uncertainty in Artificial Intelligence (UAI 2024).

For questions please contact us at nsf-causal-dm@gmail.com