Elias Bareinboim (@eliasbareinboim)'s Twitter Profile
Elias Bareinboim

@eliasbareinboim

Professor of Causal Inference, Machine Learning, and Artificial Intelligence. Director, CausalAI Lab @ Columbia University.

ID: 107367723

Link: http://causalai.net · Joined: 22-01-2010 09:53:39

2.2K Tweets

13.13K Followers

567 Following

Yonghan Jung (@yonghanjung)'s Twitter Profile Photo

Excited to share that I’m on the academic job market! I’ve been fortunate to work with Elias Bareinboim on causal inference, developing causal effect estimators using modern ML methods. Published in ICML, NeurIPS, AAAI, & more. Details: yonghanjung.me

Elias Bareinboim (@eliasbareinboim)'s Twitter Profile Photo

The true generative model is Nature -- a collection of causal mechanisms. Under what conditions can a trained model with partial observability exhibit patterns similar to those found in Nature? 
We explored this question with Bengio, Xia, and Lee in a NeurIPS-21 paper:
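
For intuition, here is a minimal sketch of the neural-causal-model idea behind the paper, assuming PyTorch; the graph, architecture, and variable names below are illustrative assumptions, not the paper's construction. Each causal mechanism is a small network driven by exogenous noise, and an intervention is a forward pass with one mechanism overridden:

```python
import torch
import torch.nn as nn

# Hypothetical SCM over Z -> X -> Y with a latent confounder U_xy
# affecting both X and Y. Each mechanism is a tiny MLP; exogenous
# noise is sampled, never learned. Values are soft (in [0, 1]) for simplicity.

class Mechanism(nn.Module):
    def __init__(self, n_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, *parents):
        return torch.sigmoid(self.net(torch.cat(parents, dim=-1)))

class NCM(nn.Module):
    def __init__(self):
        super().__init__()
        self.f_z = Mechanism(1)   # Z := f_z(U_z)
        self.f_x = Mechanism(3)   # X := f_x(Z, U_xy, U_x)
        self.f_y = Mechanism(3)   # Y := f_y(X, U_xy, U_y)

    def forward(self, n, do_x=None):
        u = torch.rand(n, 4)      # exogenous noise: U_z, U_xy, U_x, U_y
        z = self.f_z(u[:, 0:1])
        if do_x is None:
            x = self.f_x(z, u[:, 1:2], u[:, 2:3])
        else:                     # intervention do(X = do_x): override the mechanism
            x = torch.full((n, 1), float(do_x))
        y = self.f_y(x, u[:, 1:2], u[:, 3:4])
        return z, x, y

model = NCM()
_, _, y_obs = model(10_000)            # observational samples
_, _, y_do = model(10_000, do_x=1.0)   # samples under do(X = 1)
print(y_obs.mean().item(), y_do.mean().item())
```

Training would fit these mechanisms to match the observational distribution; the paper's question is when such a partially observed proxy is licensed to answer the interventional forward pass above.
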
Judea Pearl (@yudapearl)'s Twitter Profile Photo

In a panel last Thursday, I suggested that, in order to assess scale-independent limitations of LLMs, we feed them toy examples that require knowledge of data-fusion theory and see if/when they fail. Dwarkesh Patel's question jolted me to realize that we do not need to resort…

Judea Pearl (@yudapearl)'s Twitter Profile Photo

How to speed up the process, this is the question. Should we let statisticians move naturally to modern CI, or jolt them to hurry, thus risking making them more defensive and stubborn?

David Lagnado (@david_lagnado)'s Twitter Profile Photo

Excited to advertise a postdoc to work with me and an excellent team at BR-UK applying causal modelling to behavioural research - please contact me for more info 😻

Elias Bareinboim (@eliasbareinboim)'s Twitter Profile Photo

I understand that CI, from the 1970s until around 2010, was mostly focused on the challenge of moving from OBS to EXP worlds and controlling for confounding in this sense. However, it's an oversimplification to think of CI as solely about observational studies, as the…

Elias Bareinboim (@eliasbareinboim)'s Twitter Profile Photo

Hi Nando de Freitas, one surprising result from CI in the last decade is that counterfactuals (Level 3 of Pearl's Hierarchy) can be used for decision-making and can lead to dominant strategies over essentially any available RL strategy (Level 2), as discussed in Sec. 7 (p. 114) in…

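As a hedged illustration of the Level-2 vs Level-3 gap, here is a toy simulation in the spirit of counterfactual randomization; the payout numbers and setup are invented for the example and are not taken from the referenced text. An unobserved confounder U drives both the agent's natural intent and the arms' payouts, so estimating the intent-conditioned quantity E[Y_a | I = i] rather than only E[Y | do(a)] yields a strictly better policy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-armed bandit with an unobserved confounder U that drives
# both the agent's natural intent I and the arms' payout rates.
# Payout table p(win | arm, U); numbers are illustrative only.
P = np.array([[0.2, 0.6],    # arm 0 under U=0, U=1
              [0.7, 0.3]])   # arm 1 under U=0, U=1

N = 100_000
u = rng.integers(0, 2, size=N)        # unobserved confounder
intent = u                            # natural inclination I = f(U)
arms = rng.integers(0, 2, size=N)     # randomize the arm actually pulled
wins = rng.random(N) < P[arms, u]     # realized payouts

# Level-2 agent: estimate E[Y | do(arm)] and always play the best fixed arm.
exp_value = [wins[arms == a].mean() for a in (0, 1)]          # ~[0.40, 0.50]

# Level-3 agent: condition on its own intent (a proxy for U) and
# estimate the counterfactual-style quantity E[Y_a | I = i].
ctf_value = np.array([[wins[(intent == i) & (arms == a)].mean()
                       for a in (0, 1)] for i in (0, 1)])     # [[.2,.7],[.6,.3]]

print("best fixed arm:", max(exp_value))                      # ~0.50
# Final average assumes P(I=0) = P(I=1) = 0.5, as in this simulation.
print("intent-aware policy:", ctf_value.max(axis=1).mean())   # ~0.65
```

Under these numbers the best fixed arm earns about 0.50 per round while the intent-aware policy earns about 0.65, even though both agents face the same arms; that gap is exactly the Layer-2 vs Layer-3 distinction the tweet points to.
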
Judea Pearl (@yudapearl)'s Twitter Profile Photo

That's a good point! I wonder if the RL community (e.g., Richard Sutton) is aware of the Ladder of Causation (described here: causalai.net/r60.pdf), and whether it sees the interplay between causal knowledge and decision-making. The application is obvious: to move from…

Judea Pearl (@yudapearl)'s Twitter Profile Photo

Could causal reasoning be the next step toward building more robust, generalizable, and interpretable RL agents? To find out, you may wish to participate in the Causal Reinforcement Learning (CausalRL) Workshop, which will be held on August 5th, 2025, as part of the…

Elias Bareinboim (@eliasbareinboim)'s Twitter Profile Photo

Orthogonal to Frank Harrell’s initial note: CBN is a Layer-2 model that lets us answer interventional (Layer 2) queries using Layer-2 calculus (the do-calculus); see the 2nd green row in the attached table.

One recent result: we can now more precisely match the query, graph, and…

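As a concrete illustration of answering a Layer-2 query with Layer-1 data plus a Layer-2 model, here is a minimal simulation of the backdoor adjustment, the simplest corollary of the do-calculus; the graph and all numbers are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy CBN: Z -> X -> Y and Z -> Y. Z satisfies the backdoor criterion
# for the effect of X on Y, so the Layer-2 query P(y | do(x)) reduces
# to the Layer-1 expression  sum_z P(y | x, z) P(z).
n = 500_000
z = rng.integers(0, 2, size=n)
x = (rng.random(n) < np.where(z == 1, 0.8, 0.3)).astype(int)
y = (rng.random(n) < 0.2 + 0.3 * x + 0.4 * z).astype(int)

def p_y1_given(xv, zv):
    m = (x == xv) & (z == zv)
    return y[m].mean()

# Naive observational quantity P(y=1 | x=1), confounded by Z...
naive = y[x == 1].mean()
# ...vs the backdoor adjustment, which recovers P(y=1 | do(x=1)).
adjusted = sum(p_y1_given(1, zv) * (z == zv).mean() for zv in (0, 1))
print(naive, adjusted)
```

The naive conditional comes out near 0.79 because Z confounds the relationship, while the adjusted estimate recovers the true interventional value of 0.2 + 0.3 + 0.4 · P(z=1) = 0.70.
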
UAI 2025 (@uncertaintyinai)'s Twitter Profile Photo

next is Elias Bareinboim (@eliasbareinboim) from Columbia University, who will discuss in his keynote talk the recent

✨ progress toward building causally intelligent AI systems ✨

full abstract 👉 auai.org/uai2025/keynot…

Qi Wang (@qiwang067)'s Twitter Profile Photo

🚀 Excited to announce our workshop “Embodied World Models for Decision Making” at #NeurIPS2025! 🎉

Keynote speakers, panelists, and content are now live! Check out:
👉 embodied-world-models.github.io
#WorldModels #RL #NeurIPS #NeurIPS2025 #neuripsworkshop #workshop
Elias Bareinboim (@eliasbareinboim)'s Twitter Profile Photo

One question I’ve received a few times and would like to clarify about this work (causalai.net/r115.pdf) is: why do we need identification and the ctf-calculus? Isn’t the do-calculus enough? The answer to the first question is that identification is essential: estimating a…
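
One way to see why identification must come before estimation (a toy sketch, not from the paper): two structural models can agree exactly on the observational distribution yet disagree on an interventional query, so no estimator, however flexible, can recover the query from data alone:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
u = rng.integers(0, 2, size=n)    # latent confounder

# Model A: X := U,  Y := X XOR U  -> observationally, Y is always 0
# Model B: X := U,  Y := 0        -> observationally identical to A
x_a, y_a = u, u ^ u
x_b, y_b = u, np.zeros(n, dtype=int)
assert x_a.mean() == x_b.mean() and y_a.mean() == y_b.mean()

# Under do(X=1), the two models disagree about P(Y=1):
y_a_do = 1 ^ u                    # Model A: Y = 1 XOR U -> P(Y=1) = 0.5
y_b_do = np.zeros(n, dtype=int)   # Model B: Y = 0       -> P(Y=1) = 0.0
print(y_a_do.mean(), y_b_do.mean())
```

P(Y=1 | do(X=1)) is 0.5 under Model A and 0 under Model B even though both fit the data perfectly, so the query is not identifiable without further assumptions; deciding when it is identifiable is exactly the job of the do- and ctf-calculus.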