Nicola Cancedda (@nicola_cancedda)'s Twitter Profile
Nicola Cancedda

@nicola_cancedda

Research Scientist Manager @MetaAI

ID: 19602444

Joined: 27-01-2009 17:15:11

43 Tweets

417 Followers

156 Following

Roberta Raileanu (@robertarail)'s Twitter Profile Photo

Our FAIR London team is hiring multiple Research Scientists to work on tool-augmented LLMs! Apply here if interested: metacareers.com/jobs/180220254… metacareers.com/jobs/276673558…

Nicola Cancedda (@nicola_cancedda)'s Twitter Profile Photo

So many applications need to index text content. Releasing BELA, the first end-to-end entity linking model that works in 97 languages!

arxiv.org/abs/2306.08896
github.com/facebookresear…

<a href="/KassnerNora/">Nora Kassner</a> <a href="/movb/">Mike</a> <a href="/kash_pop/">Kashyap Popat</a> <a href="/louismrt/">Louis Martin</a> <a href="/frdreyer/">Frédéric Dreyer</a>
Alex Havrilla (@dahoas1)'s Twitter Profile Photo

New paper alert🚨🚨🚨
How can we bootstrap the reasoning refinement capabilities of LLMs using synthetic data? Introducing "GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements". Applied to GSM8K, we improve a strong RL-finetuned Llama-2 13B by 12%
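
One way to read the "when, where, and how" split in the title (a schematic sketch under that reading, not the paper's code): a global verifier decides when a draft needs refinement, a step-level critic decides where, and a refiner rewrites either locally from that step or globally from scratch. All three helpers below are hypothetical placeholders.

```python
def needs_refinement(question: str, draft: list[str]) -> bool:
    """'When': global check of whether the draft solution looks wrong (hypothetical verifier)."""
    return True  # pretend every draft needs refinement, for illustration

def first_faulty_step(question: str, draft: list[str]) -> int | None:
    """'Where': index of the first step judged faulty, or None if unsure (hypothetical step-level critic)."""
    return 1 if len(draft) > 1 else None

def refine(question: str, draft: list[str], from_step: int | None) -> list[str]:
    """'How': local refinement keeps the steps before the faulty one; global refinement rewrites from scratch."""
    kept = draft[:from_step] if from_step is not None else []
    return kept + ["<revised reasoning>", "<revised final answer>"]

def refine_if_needed(question: str, draft: list[str]) -> list[str]:
    if not needs_refinement(question, draft):   # when to refine
        return draft
    step = first_faulty_step(question, draft)   # where to refine
    return refine(question, draft, step)        # how to refine (local vs. global)
```
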
Nicola Cancedda (@nicola_cancedda)'s Twitter Profile Photo

Rainbow Teaming adapts successful methods from Reinforcement Learning to automatically create adversarial prompts that are both effective and diverse. Finetuning on the resulting prompts in turn greatly increases the model's robustness to adversarial attacks. Check the paper out!
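
Very roughly, the kind of search described above can be pictured as keeping one best adversarial prompt per behaviour cell and repeatedly mutating and re-scoring candidates, so that the final set is diverse as well as effective. This is only a schematic sketch, not the paper's algorithm; `mutate_prompt` and `attack_success` are hypothetical stand-ins for LLM calls.

```python
import random

def mutate_prompt(prompt: str, category: str, style: str) -> str:
    """Hypothetical attacker-LLM call: rewrite `prompt` toward a risk category and attack style."""
    return f"[{category}/{style}] {prompt}"

def attack_success(prompt: str) -> float:
    """Hypothetical judge: how effectively `prompt` elicits unsafe behaviour from the target model, in [0, 1]."""
    return random.random()

def diverse_prompt_search(seed_prompts, categories, styles, iterations=1000):
    # Archive holding the single most effective prompt found so far for each
    # (risk category, attack style) cell; diversity comes from covering all cells.
    archive: dict[tuple[str, str], tuple[str, float]] = {}
    for _ in range(iterations):
        cat, sty = random.choice(categories), random.choice(styles)
        parent = archive.get((cat, sty), (random.choice(seed_prompts), 0.0))[0]
        child = mutate_prompt(parent, cat, sty)
        score = attack_success(child)
        # Replace the cell occupant only if the mutated prompt attacks more effectively.
        if score > archive.get((cat, sty), ("", -1.0))[1]:
            archive[(cat, sty)] = (child, score)
    # The archive is then a set of adversarial prompts that is both effective and diverse.
    return archive
```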

Jason Weston (@jaseweston)'s Twitter Profile Photo

🚨New paper: Source2Synth🚨
- Generates synthetic examples grounded in real data 
- Curation step makes data high quality based on answerability
- Improves performance on two challenging domains: Multi-hop QA and tool use (SQL for tabular QA)
arxiv.org/abs/2409.08239
🧵(1/4)
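
The generate-then-curate recipe in the thread might look roughly like the sketch below: synthesize examples grounded in real sources, then keep only those that pass an answerability check. `generate_example_from_source` and `model_answers_correctly` are hypothetical placeholders for the LLM calls, not the released code.

```python
from dataclasses import dataclass

@dataclass
class SyntheticExample:
    question: str
    answer: str
    source: str  # the real document or table the example is grounded in

def generate_example_from_source(source: str) -> SyntheticExample:
    """Hypothetical LLM call: write a question/answer pair grounded in `source` (placeholder output)."""
    return SyntheticExample(question=f"According to the source, {source[:40]}...?",
                            answer=source[:80],
                            source=source)

def model_answers_correctly(example: SyntheticExample) -> bool:
    """Answerability check (placeholder): can a model, given only the source, recover the stored answer?"""
    return example.answer in example.source

def build_dataset(sources: list[str], n_per_source: int = 3) -> list[SyntheticExample]:
    dataset = []
    for src in sources:
        for _ in range(n_per_source):
            candidate = generate_example_from_source(src)
            # Curation step: keep only answerable examples, discarding noisy generations.
            if model_answers_correctly(candidate):
                dataset.append(candidate)
    return dataset
```
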
Diego Garcia-Olano (@dgolano)'s Twitter Profile Photo

Hi all, I'll be hosting a Research Scientist intern for 2025 with a focus on exploring LLM safety alignment and understanding, potentially using explainability methods. If that sounds of interest, apply or reach out! metacareers.com/jobs/901069411…

Virginie Do (@gini_do)'s Twitter Profile Photo

I am at #ICLR and honored to present this work on Saturday afternoon at the poster session. Thanks Lei Yu, Karen Hambardzumyan, and Nicola Cancedda for this wonderful collaboration! I am also happy to chat about Llama / agents / safety 👋

Sonia (@soniajoseph_)'s Twitter Profile Photo

Our paper Prisma: An Open Source Toolkit for Mechanistic Interpretability in Vision and Video received an Oral at the Mechanistic Interpretability for Vision Workshop at CVPR 2025! 🎉

We’ll be in Nashville next week. Come say hi 👋

<a href="/CVPR/">#CVPR2025</a>  <a href="/miv_cvpr2025/">Mechanistic Interpretability for Vision @ CVPR2025</a>