Welcome!

CALM workshop

The First Workshop on Causality and Large Models (C♥️LM) will be co-located with NeurIPS 2024 in Vancouver and held in December 2024.

About the workshop

The remarkable capabilities and accessibility of recent large models, also known as “foundation models,” have sparked significant interest and excitement in the research community and beyond. In particular, large pre-trained generative models have demonstrated remarkable competence in understanding and generating human-like text, images, and more, despite being trained on largely unstructured data with relatively simple self-supervised learning objectives. This raises the question: (A) Why do such large models work so well?

Their impressive performance, sometimes even exceeding that of human experts, across a wide variety of benchmarks, together with the incorporation of multiple modalities such as images, text, and audio, makes it tempting to regard these large models as versatile decision-making systems. However, the increased adoption of these models is not without challenges. The growing size and complexity of these “black box” models raise concerns about their trustworthiness and reliability. This is especially pertinent in high-stakes domains, such as healthcare and policy-making, where decisions have significant real-world impact. Consequently, we must consider: (B) Under what circumstances can we trust these large models? (C) How can we improve the reliability and trustworthiness of current models? And (D) How can we make large models more robust?

Enter causality, a principled framework for predicting a system’s behavior under interventions and reasoning over counterfactual scenarios. In high-risk applications, where performance guarantees beyond the training distribution are desirable, causal inference is critical. Moreover, causal models explain a system’s behavior by elucidating the causal relationships among its components. This opens up substantial potential for using causality to address key questions about large models, such as (A), (B), and (C). By leveraging causal inference, we hope to tackle these questions rigorously and enhance our understanding of these powerful models, as well as their reliability and trustworthiness.

Our workshop will explore the many exciting synergies between causality and large models. Specifically, we identify four main directions to cover in our workshop:

  1. Causality in large models: Assessing the causal knowledge captured by large models and their causal reasoning abilities.
  2. Causality for large models: Applying ideas from causality to augment and improve large models.
  3. Causality with large models: Leveraging large models to improve causal inference and discovery.
  4. Causality of large models: Exploiting the rich framework of causal inference to understand and interpret the impressive capabilities of large models.

Important Dates

Contact us at calmworkshop2024@gmail.com