Konrad Rieck 🌈 (@mlsec)'s Twitter Profile
Konrad Rieck 🌈

@mlsec

Machine Learning and Security,
Professor of Computer Science at TU Berlin,
On Bluesky: @rieck.mlsec.org

ID: 100451745

Website: https://mlsec.org · Joined: 30-12-2009 09:04:37

6.6K Tweets

2.2K Followers

365 Following

Konrad Rieck 🌈 (@mlsec):

Great to be at SaTML Conference with several members of my team from BIFOLD and TU Berlin. We are having a blast with exciting discussions and talks on trustworthy AI! #SaTML25

SaTML Conference (@satml_conf):

🚨 SaTML is searching for its 2026 home! Interested in becoming General Chair and hosting the conference in your city or institution? We’d love to hear from you. Place a bid here: 👉 forms.gle/kbxtwZddpcLDPU…

Niklas Risse (@niklas2484):

Function-level vulnerability detection is dead.

Excited to share that our paper "Top Score on the Wrong Exam: On Benchmarking in Machine Learning for Vulnerability Detection" got into ISSTA 2025.

We show that function-level vulnerability detection is fundamentally flawed — and
Marcel Böhme👨‍🔬 (@mboehme_):

Our paper "Top Score on the Wrong Exam" will be presented at #ISSTA25 🐣 in Trondheim!

📝mpi-softsec.github.io/papers/ISSTA25…
🧑‍💻github.com/niklasrisse/To…

// <a href="/niklas2484/">Niklas Risse</a> <a href="/fuzzjing/">Jing Liu (fuzzing.bsky.social)</a>.
ELLIS (@ellisforeurope):

📢 ELLIS at ICLR 2025 - We’d like to congratulate the ELLIS community for their impressive contributions to this year’s ICLR, with a standout presence from our ELLIS PhD students!

Find out who contributed: ellis.eu/contributions-…
Zion Leonahenahe Basque (@mahal0z):

I'm proud to announce that Ati Priya Bajaj and I have created the Workshop on Software Understanding and Reverse Engineering (SURE), which will be co-located with CCS 2025. sure-workshop.org Please follow our workshop account ACM SURE Workshop and RT for visibility :).

Maura Pintor (@maurapintor):

🚀 Exciting Announcement! 🚀

Get ready for the 18th ACM Workshop on Artificial Intelligence and Security (AISec 2025)!

📍Co-located: <a href="/acm_ccs/">ACM CCS 2025</a>
🗓️ Deadline: June 20th, 2025
🌐 Website: aisec.cc

w/ <a href="/ruoxijia/">Ruoxi Jia</a> and Matthew Jagielski
Brendan Dolan-Gavitt (@moyix):

The code for building and updating the ARVO dataset, a collection of over 5000 memory safety vulnerabilities in open source software, is now open source! Link in reply :)

Konrad Rieck 🌈 (@mlsec):

We're excited to announce the Call for Papers for SaTML 2026, the premier conference on secure and trustworthy machine learning, <a href="/satml_conf/">SaTML Conference</a>.

We seek papers on secure, private, and fair learning algorithms and systems.

👉 satml.org/call-for-paper…
⏰ Deadline: Sept 24
Marcel Böhme👨‍🔬 (@mboehme_):

Can we statistically estimate how likely an LLM-generated program is correct w/o knowing what is a correct program for that task?

Sounds impossible, but it's actually really simple. In fact, our oracle-less eval can reliably substitute for a pass@1-based eval.

arxiv.org/abs/2507.00057
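For context on the contrast drawn above: pass@1 requires an oracle (test cases) to label each generated program correct or incorrect, and has a standard unbiased estimator. The oracle-less side is sketched below only as a hypothetical self-consistency proxy (plurality agreement among sampled programs' outputs) for illustration; this is an assumption, not the method from the linked paper. The names `pass_at_k` and `agreement_proxy` are ours.

```python
from collections import Counter

def pass_at_k(n: int, c: int, k: int) -> float:
    """Standard unbiased pass@k estimator: probability that at least
    one of k samples, drawn without replacement from n generations of
    which c pass the oracle's tests, is correct."""
    if n - c < k:
        return 1.0  # fewer failures than draws: a correct sample is guaranteed
    prob_all_fail = 1.0
    for i in range(k):
        prob_all_fail *= (n - c - i) / (n - i)
    return 1.0 - prob_all_fail

def agreement_proxy(outputs: list[str]) -> float:
    """Hypothetical oracle-less signal (illustrative only, NOT the
    paper's method): the fraction of sampled programs whose output on
    a shared input matches the plurality output."""
    counts = Counter(outputs)
    return counts.most_common(1)[0][1] / len(outputs)

# pass@1 needs ground truth: here 4 of 10 generations pass the tests.
oracle_score = pass_at_k(10, 4, 1)

# The proxy needs no ground truth, only the programs' observed outputs:
# 3 of 5 sampled programs agree on "42".
proxy_score = agreement_proxy(["42", "42", "41", "42", "40"])
```

The appeal of an oracle-less estimate is exactly that `agreement_proxy` consumes only model outputs, while `pass_at_k` presupposes a trusted test suite for every task.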
Konrad Rieck 🌈 (@mlsec):

We’re happy to announce the Call for Competitions for the <a href="/satml_conf/">SaTML Conference</a>!

The competition track has been a highlight of SaTML, featuring exciting topics and strong participation. If you’d like to host a competition for SaTML 2026, visit:

👉 satml.org/call-for-compe…
⏰ Deadline: Aug 6
Arthur Spirling (@arthur_spirling):

Deeply concerning to see scholars cut and paste chatGPT suggested citations into their papers without checking whether they actually exist. Back in my day we were taught the correct way: cut and paste citations from *other papers* without checking whether they actually exist.

Arvind Narayanan (@random_walker):

Some aspects of AI discourse seem to come from a different planet, oblivious to basic realities on Earth. AI for science is one such area. In this new essay, Sayash Kapoor and I argue that visions of accelerating science through AI should be considered unserious if they don't confront