David Debot (@debot_david)'s Twitter Profile
David Debot

@debot_david

PhD student @KU_Leuven in neurosymbolic AI and concept-based learning
daviddebot.github.io

ID: 3065660391

Joined: 01-03-2015 15:55:14

22 Tweets

39 Followers

121 Following

Pietro Barbiero (@pietro_barbiero)'s Twitter Profile Photo

Relational and interpretable deep learning? CBMs aren't built for relational problems, and GNNs aren't interpretable... We propose Relational Concept Bottleneck Models: concept-based interpretability + GNN accuracy! 🚀 #NeurIPS2024 👉neurips.cc/virtual/2024/p…

Pietro Barbiero (@pietro_barbiero)'s Twitter Profile Photo

🚦 Ever wondered how a deep learning model could fail a driving exam? Buckle up and read the first chapter of our series on concept-based interpretable #DeepLearning in #PyTorch 👉 link.medium.com/E9pGSM82YQb

Gabriele Venturato (@gabventurato)'s Twitter Profile Photo

🔥 Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic Markov Models (NeSy-MMs) next week at #AAAI2025! 🎉 📜 Paper: arxiv.org/pdf/2412.13023 💻 Code: github.com/ML-KULeuven/ne… 🧵⬇️

Jaron Maene @ICLR (@jjcmoon)'s Twitter Profile Photo

We all know backpropagation can calculate gradients, but it can do much more than that! Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.

Rishi Hazra @ ICLR 🇸🇬 (@rishihazra95)'s Twitter Profile Photo

1/n LLMs w/ T-CoT steps can solve reasoning problems in P/Poly. But they are not learning to reason - mimicking form, not logic. Are LRMs like DeepSeek R1 different? We formalize this in our paper: Have LLMs Learned to Reason? A Characterization via 3-SAT Phase Transition 🧵👇

Jaron Maene @ICLR (@jjcmoon)'s Twitter Profile Photo

We developed a library to make logical reasoning embarrassingly parallel on the GPU. For those at ICLR 🇸🇬: you can get the juicy details tomorrow (poster #414 at 15:00). Hope to see you there!

Lennert De Smet (@lennertds_)'s Twitter Profile Photo

Just under 10 days left to submit your latest endeavours in ⚡#tractable⚡ probabilistic models❗ Join us at TPM (auai.org) at #UAI2025 and show how to build #neurosymbolic / #probabilistic AI that is both fast and trustworthy!