Dr. Ulrike Kuhl (@drulrikekuhl)'s Twitter Profile
Dr. Ulrike Kuhl

@drulrikekuhl

Deactivated account as of 01/2025.
PostDoc @HammerLabML, pondering cognition, explainability, and machine learning. she/her

ID: 913777455013466113

Link: https://ukuhl.github.io · Joined: 29-09-2017 14:48:17

1.1K Tweets

451 Followers

647 Following

Hammer Lab ML (inactive) (@hammerlabml)

🎉Exciting News! Our own André Artelt & his Water-Futures ERC colleagues just won the Best Poster 🏆 at the '#AI for Critical Infrastructure Workshop' IJCAIconf, presenting the EPyT-Flow Toolbox!🧰 📜➡️arxiv.org/abs/2406.02078 🧑‍💻➡️@GitHub: github.com/WaterFutures/E… Congratulations!

Tim Miller (@tmiller_uq)

Elisabeth's team shows that alterfactual explanations, which show the attributes that are irrelevant to a decision, help people to understand models.
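(A minimal sketch of the alterfactual idea only, not the team's method: perturb one feature at a time and keep the changes that leave the prediction untouched. The classifier and dataset below are placeholders.)

# Sketch of the alterfactual idea: find feature changes that do NOT flip the prediction.
# Model and data are illustrative placeholders, not the method from the presented paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                        # instance to explain
base_pred = clf.predict([x])[0]

irrelevant = []
for j in range(X.shape[1]):
    x_alt = x.copy()
    x_alt[j] = np.percentile(X[:, j], 90)       # push the feature to an extreme value
    if clf.predict([x_alt])[0] == base_pred:
        irrelevant.append(j)                    # changing this feature alone leaves the decision intact

print("Features irrelevant to this decision (alterfactual-style):", irrelevant)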

Tim Miller (@tmiller_uq)

Deliang Wang from HKU investigating whether explainability affects trust and satisfaction for teachers USING AI to give personalised feedback on dialogic teaching

Tim Miller (@tmiller_uq)

Mengisti Girmay presents results from a study on software providers' and dairy farmers' perspectives on explainability formats. The two groups have different views.

Tim Miller (@tmiller_uq)

Min Lee from SMU presents a gradient-based time series explanation of video data. Gradient information is used to identify important frames, and local information is highlighted within image frames.
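(A generic gradient-saliency sketch for video, not the presented method; the toy classifier and tensor shapes are assumptions.)

# Gradient saliency over video frames: rank frames by gradient magnitude,
# and highlight local regions within each frame. Placeholder model and shapes.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 3 * 32 * 32, 10))  # toy video classifier
video = torch.rand(1, 8, 3, 32, 32, requires_grad=True)              # batch x frames x C x H x W

score = model(video)[0].max()                      # score of the predicted class
score.backward()

grads = video.grad[0]                              # frames x C x H x W
frame_importance = grads.abs().sum(dim=(1, 2, 3))  # which frames matter most
pixel_saliency = grads.abs().sum(dim=1)            # where within each frame (frames x H x W)

print("Most important frame:", int(frame_importance.argmax()))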

Tim Miller (@tmiller_uq)

Yasunobu Nohara proposes a theory for feature importance in SHAP with a focus on trading off the number of main features vs fidelity. Results show that using L2 norm of SHAP values is often better than L1 norm, but the difference is not large.
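(My reading of that trade-off, as an illustrative sketch rather than the paper's exact procedure: aggregate per-feature SHAP values across samples with an L1 (mean absolute) or L2 (root-mean-square) norm and compare the resulting top-k rankings. Dataset, model, and k are placeholders.)

# Compare L1- vs L2-aggregated SHAP feature rankings on a toy regression task.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

phi = shap.TreeExplainer(model).shap_values(X)                  # samples x features

l1_rank = np.argsort(np.abs(phi).mean(axis=0))[::-1]            # mean |phi|   (L1-style aggregation)
l2_rank = np.argsort(np.sqrt((phi ** 2).mean(axis=0)))[::-1]    # RMS of phi   (L2-style aggregation)

k = 3
print("Top features (L1):", l1_rank[:k])
print("Top features (L2):", l2_rank[:k])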

Tim Miller (@tmiller_uq)

Laurent El Ghaoui showing that Implicit Models generalise better to out-of-distribution data than standard deep learning models, even with no special hand tuning.
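(For context, a minimal sketch of what an implicit model is, not El Ghaoui's specific architecture: the hidden state is defined as the fixed point of an equation rather than the output of a stack of explicit layers.)

# Minimal implicit (fixed-point) layer: z = tanh(W z + U x), solved by iteration.
import torch

torch.manual_seed(0)
d_in, d_hidden = 4, 8
U = torch.randn(d_hidden, d_in) * 0.5
W = torch.randn(d_hidden, d_hidden) * 0.1   # small norm so the fixed point exists

def implicit_layer(x, n_iter=50):
    z = torch.zeros(d_hidden)
    for _ in range(n_iter):                 # Picard iteration towards the fixed point
        z = torch.tanh(W @ z + U @ x)
    return z

print(implicit_layer(torch.randn(d_in)))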

Tim Miller (@tmiller_uq)

Lijie Hu discusses using concept bottlenecks applied to GNNs. LLMs are used to extract concepts for graphs, which are fed into a concept bottleneck. Nice to see research helping to overcome the expensive step of concept labelling. Good results!
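(A bare-bones concept-bottleneck head as illustration only; the GNN encoder and LLM-extracted concept labels are assumed to exist upstream, and all sizes are made up. The point is that the final label is computed only from the interpretable concepts.)

# Minimal concept-bottleneck head: graph embedding -> concept scores -> label.
import torch
import torch.nn as nn

n_concepts, emb_dim, n_classes = 12, 64, 3

concept_head = nn.Linear(emb_dim, n_concepts)    # predicts LLM-derived concepts from the graph embedding
label_head = nn.Linear(n_concepts, n_classes)    # final decision uses concepts only -> interpretable

graph_embedding = torch.randn(1, emb_dim)        # stand-in for a pooled GNN output
concepts = torch.sigmoid(concept_head(graph_embedding))
logits = label_head(concepts)
print(concepts.round(), logits)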

Tim Miller (@tmiller_uq)

Yong-Min Shin shows us how to use GNN explanations to prune GNNs to make them more efficient. Promising but somewhat puzzling results on the link between fidelity and the pruned graph.
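(Roughly the recipe, in a hedged sketch with stand-in tensors: score edges with an explainer, keep the highest-scoring ones, and run the GNN on the pruned graph. The edge scores below are random placeholders for explainer output.)

# Explanation-guided edge pruning on a toy graph.
import torch

edge_index = torch.randint(0, 100, (2, 500))   # toy graph: 100 nodes, 500 edges
edge_importance = torch.rand(500)              # placeholder for explainer-derived edge scores

keep_ratio = 0.3
k = int(keep_ratio * edge_index.size(1))
keep = edge_importance.topk(k).indices         # indices of the most "explanatory" edges

pruned_edge_index = edge_index[:, keep]        # feed this sparser graph back into the GNN
print(edge_index.shape, "->", pruned_edge_index.shape)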

Tim Miller (@tmiller_uq)

Fábio Neves-Moreira on learning symbolic expressions for vehicle routing using genetic programming for policy search. Promising early results.

Tim Miller (@tmiller_uq)

Neo Christopher Chung shows us CDAM for finding attention masks relative to downstream heads such as specific classes or latent concepts.
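(A generic sketch in that spirit, not the exact CDAM computation: condition a gradient-times-activation relevance score on a chosen downstream class head; the tiny encoder and head below are placeholders.)

# Token relevance conditioned on a chosen downstream head (class or concept).
import torch
import torch.nn as nn

n_tokens, dim, n_classes = 16, 32, 5
encoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU())    # stand-in for a ViT block
class_head = nn.Linear(dim, n_classes)

tokens = torch.randn(n_tokens, dim, requires_grad=True)
feats = encoder(tokens)
target_score = class_head(feats.mean(dim=0))[2]            # pick a downstream head / class of interest

grads, = torch.autograd.grad(target_score, tokens)
token_relevance = (grads * tokens).sum(dim=-1)             # gradient x activation, one score per token
print(token_relevance)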

Tim Miller (@tmiller_uq)

Some high-level thoughts: it is clear that the field has matured a lot in the last 5 years. Much more focus on evaluation. Much less focus on perturbation-based feature attribution. Counterfactuals still having success in studies. Several studies using co-design approaches.

Hammer Lab ML (inactive) (@hammerlabml)

🎉 Exciting news! The proceedings from the ✨DataNinja sAIOnARA 2024 Conference✨ have been published, featuring extended abstracts of the outstanding posters presented. Check out the incredible research from our #AI community in #NRW! 👉 biecoll.ub.uni-bielefeld.de/index.php/data… mkw.nrw

Dr. Ulrike Kuhl (@drulrikekuhl)

As the editor of this issue, I can confirm that the abstracts featured in this collection truly highlight the innovative research being done in #NRW in the field of #TrustworthyAI!

Tim Miller (@tmiller_uq)

I'm hiring! I'm looking for a postdoctoral research fellow to work on human-centred decision support at the University of Queensland (women-only applicants). This is with my colleagues Ida Asadi Someh and Tapani Rinta-Kahila uqtmiller.github.io/recruitment/ 1/2