Zifeng Ding (@zifengding6)'s Twitter Profile
Zifeng Ding

@zifengding6

Postdoc @cambridgenlp @Cambridge_Uni
Visiting Research Scientist @CamelAIOrg

ID: 1593720090981277697

Link: https://zifengding.github.io/ · Joined: 18-11-2022 21:37:52

6 Tweets

11 Followers

17 Following

Michael Schlichtkrull (@michael_sejr)'s Twitter Profile Photo

The AVeriTeC automated #factchecking shared task paper is now online. Congratulations again to the winners and thanks to all our amazing participants! To hear more, swing by FEVERworkshop at #EMNLP in Miami on the 15th.

📝: arxiv.org/abs/2410.23850
🌐: fever.ai/task.html
Xingyue Huang (@hxyscott)'s Twitter Profile Photo

Knowledge Graph Foundation Models (KGFMs) are at the frontier of graph learning - but we didn’t have a principled understanding of what we can (or can’t) do with them. Now we do! 💡🚀
🧵
with Pablo Barcelo, İsmail İlkan Ceylan, Michael Bronstein, Michael Galkin, Juan Reutter, Miguel
Yuan He (@lawhy_x)'s Twitter Profile Photo

🚨 New paper: "Supposedly Equivalent Facts That Aren't? Entity Frequency in Pre-training Induces Asymmetry in LLMs"

Insight: LLMs treat equivalent facts differently due to bias from pre-training data.

🔗 Arxiv: arxiv.org/abs/2503.22362

#NLP #LLMs #AI