explAInable.NL (@explainablenl)'s Twitter Profile
explAInable.NL

@explainablenl

Tweeting about Explainable AI in the Netherlands. Account run by @wzuidema (amsterdam.explainable.nl) and others.

explainable.nl

ID: 1181276757032407042

Joined: 07-10-2019 18:35:14

259 Tweets

435 Followers

224 Following

Grzegorz Chrupała 🇪🇺🇺🇦 (@gchrupala)'s Twitter Profile Photo

This paper analyzing discrete representations in models of spoken language, with @bertrand_higy, Lieke Gelderloos, and Afra Alishahi, will appear at #BlackboxNLP #EMNLP2021: arxiv.org/abs/2105.05582

Yonatan Belinkov (@boknilev)'s Twitter Profile Photo

@wzuidema Sara Hooker Ana Marasović Hot take from @wzuidema: progress in probing classifiers will not come from sophisticated probing techniques but from the hard work of forming better hypotheses.

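For context, here is a minimal sketch of what a probing classifier is: a simple classifier trained on frozen model representations to test whether some property is decodable from them. The shapes and the fake part-of-speech labels below are hypothetical stand-ins, not from any paper discussed here.

```python
# Minimal probing-classifier sketch (illustrative only). A "probe" is a
# simple classifier trained on frozen representations to test a hypothesis:
# is property X (here, a fake POS tag) decodable from the hidden states?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1000 frozen 768-dim hidden states plus labels.
reps = rng.normal(size=(1000, 768))
labels = rng.integers(0, 5, size=1000)

probe = LogisticRegression(max_iter=1000).fit(reps[:800], labels[:800])
print("probe accuracy:", probe.score(reps[800:], labels[800:]))
# The hot take above: the hard part is choosing the hypothesis (which
# property, which layer, which controls), not making the probe fancier.
```
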
Dr. Badr M. Abdullah 🇾🇪 (@badr_nlp)'s Twitter Profile Photo

Grzegorz Chrupała 🇪🇺🇺🇦 A paper at ICASSP 2020 proposed probing by "audification" of the hidden representations in an ASR model: the authors learn a speech synthesizer on top of the ASR representations. They have a nice video of their work here: youtu.be/6gtn7H-pWr8
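
A rough sketch of the general idea (my own simplification with hypothetical layer sizes, not the paper's architecture): freeze the ASR encoder and train a small decoder that maps its hidden states back to waveform frames, so the representations can be listened to.

```python
# "Audification" sketch in PyTorch: learn a decoder from frozen ASR hidden
# states back to waveform frames. Sizes and architecture are assumptions.
import torch
import torch.nn as nn

hidden_dim, frame_len = 256, 320  # hypothetical dimensions

decoder = nn.Sequential(
    nn.Linear(hidden_dim, 512),
    nn.ReLU(),
    nn.Linear(512, frame_len),  # one waveform frame per hidden state
)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def train_step(hidden_states, target_frames):
    # hidden_states: (batch, T, hidden_dim) from the frozen ASR encoder
    # target_frames: (batch, T, frame_len) time-aligned waveform chunks
    recon = decoder(hidden_states)
    loss = nn.functional.mse_loss(recon, target_frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```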

explAInable.NL (@explainablenl)'s Twitter Profile Photo

Interesting blog post on SHAP / feature attribution using Shapley values, by researchers from the Dutch medical AI company Pacmed.
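
As a reminder of what the attribution computes, here is a brute-force exact Shapley sketch (illustrative; not the blog post's code or the SHAP library, which uses far more efficient approximations): each feature's value is its average marginal contribution over all subsets of the other features, with "missing" features replaced by a baseline.

```python
# Exact Shapley-value feature attribution for a toy model (illustrative).
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    n = len(x)
    def value(subset):
        # Features outside `subset` are replaced by their baseline value.
        masked = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(masked)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Toy usage: for a linear model with a zero baseline, the Shapley values
# recover each term's contribution exactly.
model = lambda v: 3 * v[0] + 2 * v[1] + v[2]
print(shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0]))
# -> [3.0, 2.0, 1.0]
```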

explAInable.NL (@explainablenl)'s Twitter Profile Photo

A team of researchers from Amsterdam and Rome proposes CF-GNNExplainer, an explainability method for the popular graph neural networks (GNNs). The method iteratively removes edges from the graph and returns the minimal perturbation that leads to a change in prediction.
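
A naive greedy sketch of that idea (my illustration; CF-GNNExplainer itself learns a perturbation mask by gradient-based optimization rather than this edge-by-edge loop). `predict` is a hypothetical stand-in for running the trained GNN on the perturbed graph.

```python
# Counterfactual-by-edge-removal sketch (illustrative, not the paper's code).
def counterfactual_edges(predict, edges, node):
    """Greedily drop edges until the prediction for `node` flips; return
    the removed edges, i.e. the perturbation that changed the prediction."""
    original = predict(edges, node)
    kept, removed = list(edges), []
    for edge in list(kept):  # iterate over a copy while mutating `kept`
        kept.remove(edge)
        removed.append(edge)
        if predict(kept, node) != original:
            return removed  # a (not necessarily minimal) counterfactual
    return None  # greedy removal never changed the prediction

# Hypothetical usage: predict(edge_list, node) -> class label for `node`.
```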

explAInable.NL (@explainablenl)'s Twitter Profile Photo

"Transparency and explainability pertain to the technical domain ... leaving the ethics and epistemology of AI largely disconnected. In this talk, Russo will focus on how to remedy this problem and introduce an epistemology for glass box AI that can explicitly incorporate values"

NL4XAI (@nl4xai)'s Twitter Profile Photo

📢#MSCAJobAlert Last days to apply for the PhD student position in #AI within the NL4XAI Marie Skłodowska-Curie Actions project at CiTIUS, ES. Join us and work on the following topic: From Grey-box Models to Explainable Models. ⌛️Deadline 31/03/2022. Apply👉nl4xai.eu/open_position/… Horizon 2020

Fernando P. Santos (@fernandopsantos)'s Twitter Profile Photo

Interested in Explainable AI and Finance? Check out this opportunity for a Tenure Track Assistant Professor position at the Informatics Institute, University of Amsterdam! Deadline extended to 3 April 2022.

DIGIT 🇪🇺 (@eu_digit)'s Twitter Profile Photo

📢Call for contributions to help identify Europe's most critical #OpenSourceSoftware! We urge all national, regional, and local public administrations across all 27 EU member states to participate! Learn more👉europa.eu/!HXxQqp #FOSSEPS #ThinkOpen

Marco Virgolin 🇺🇦 (@marcovirgolin)'s Twitter Profile Photo

Also happy that our work "On genetic programming representations and fitness functions for interpretable dimensionality reduction" made it to GECCO 2022! Preprint: arxiv.org/abs/2203.00528 A short explanation 👇 1/8

Stanford HAI (@stanfordhai)'s Twitter Profile Photo

This year's Spring Conference focuses on foundation models, accountable AI, and embodied AI. HAI Associate Director and event co-host Christopher Manning explains these key areas and why you should not miss this event: stanford.io/3IxnjdH

Marco Virgolin 🇺🇦 (@marcovirgolin)'s Twitter Profile Photo

Symbolic regression (SR) is the problem of finding an accurate model of the data in the form of a (hopefully elegant) mathematical expression. SR has long been thought to be hard and is traditionally attempted using evolutionary algorithms. This raises the question: is SR NP-hard? 1/2
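
To make the problem concrete, here is a toy SR setup (my illustration: plain random search over small expression trees, standing in for the genetic programming that real SR systems use):

```python
# Toy symbolic regression via random search over expression trees
# (illustrative only; real SR uses GP with crossover and mutation).
import random

OPS = [("+", lambda a, b: a + b),
       ("-", lambda a, b: a - b),
       ("*", lambda a, b: a * b)]

def random_expr(depth=0):
    # Terminals: the variable x or a random constant.
    if depth > 2 or random.random() < 0.3:
        return ("x",) if random.random() < 0.5 else (random.uniform(-2, 2),)
    name, fn = random.choice(OPS)
    return (name, fn, random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, x):
    if len(expr) == 1:
        return x if expr[0] == "x" else expr[0]
    _, fn, left, right = expr
    return fn(evaluate(left, x), evaluate(right, x))

# Target: recover y = x^2 + x from samples.
xs = [i / 10 for i in range(-20, 21)]
ys = [x * x + x for x in xs]

best, best_err = None, float("inf")
for _ in range(20000):
    expr = random_expr()
    err = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))
    if err < best_err:
        best, best_err = expr, err
print("best squared error:", best_err)
```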

explAInable.NL (@explainablenl)'s Twitter Profile Photo

Exciting results from StanfordNLP (with D'Oosterlinck from Gent) on Causal Proxies: using symbolic surrogate models for interpreting deep learning, and testing for causality using counterfactual interventions.
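
The flavor of "testing for causality using counterfactual interventions", in toy form (my illustration, not the paper's method): run a model twice, patch an intermediate value from the second run into the first, and check whether the output changes the way the causal hypothesis predicts.

```python
# Toy interchange-intervention sketch (illustrative only).
def model(x, y, patched_sum=None):
    # `s` is the intermediate variable we hypothesize mediates the output.
    s = (x + y) if patched_sum is None else patched_sum
    return s * 2

base, source = (1, 2), (10, 20)
source_sum = source[0] + source[1]  # intermediate value from the source run

print(model(*base))                          # 6: unpatched base run
print(model(*base, patched_sum=source_sum))  # 60: output tracks the patched
# value, supporting the claim that `s` causally mediates the prediction.
```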