Ricardo Rei (@ricardorei7) 's Twitter Profile
Ricardo Rei

@ricardorei7

Research Scientist @unbabel | @istecnico

ID: 1147142313044561920

Link: https://ricardorei.github.io · Joined: 05-07-2019 13:56:55

190 Tweets

344 Followers

313 Following

Wenda Xu (@wendaxu2) 's Twitter Profile Photo

I am excited to attend ACL 2024 to present our paper "Pride and Prejudice: LLM Amplifies Self-Bias in Self-Refinement" (we received scores of 5, 4.5, and 4, and a meta-review of 5 🤲). I am looking for full-time industry positions. Here is a list of my first-author publications

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

Super proud of José Maria Pombal's work on Tower! He has just started his PhD on MT and MT evaluation with me and Andre Martins! Give him a follow for updates on what's to come!

Nuno M. Guerreiro (@nunonmg) 's Twitter Profile Photo

The good news continues 🥳 We are delighted that Tower has been selected for an oral spotlight presentation at the Conference on Language Modeling (top 2% of submissions)! Check out our paper here: openreview.net/pdf?id=EHPns3h…

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

We are releasing the first 🇪🇺 LLMs! New models are coming (9B and 22B), as well as strong instruct models! Stay tuned!

Slator (@slatornews) 's Twitter Profile Photo

Researchers from industry and academia have released EuroLLM, the first #LLM that supports all #EU languages 🇪🇺 and several additional languages 🌍 and excels 🚀 at machine #translation. slator.com/first-large-la… Pedro Martins Manuel Faysse Andre Martins Ricardo Rei

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

Nuno M. Guerreiro and I will be giving a keynote at WMT this Friday at 14:00 on why research on MT is still fascinating in the era of LLMs! Don't miss it!

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

In the morning, we will also present our work on Tower v2, which ranked first in 8/11 language pairs in the WMT 2024 General shared task! "Tower-v2: Unbabel-IST 2024 Submission for the General MT Shared Task", WMT, Nov 15, 11:00-12:00

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

Super excited to announce the launch of Widn.ai! 🎉 This results from several years of work in Translation Evaluation and the application of that knowledge to LLM research, leading to the creation of Tower. Now, Tower is accessible and ready for anyone!

Unbabel (@unbabel) 's Twitter Profile Photo

💻 Thank you Slator for inviting Unbabel to speak on the current state of AI in translation.

Key takeaways:
🔶 LLMs will continue to outperform NMT
🔶 LLM research will become evaluation-centric
🔶 MT 2.0 will focus more on personalization and transcreation

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

Extremely happy with the release of EuroLLM 9B and with its results! This model is truly multilingual, supporting 35 languages and covering all European Union official languages! Follow the 🧵 below for more details 👇👇

Pedro Martins (@pedrohenmartins) 's Twitter Profile Photo

EuroLLM-9B is now ranking as the best LLM of its size (tied with Gemma) on the European LLM Leaderboard! Check it here: huggingface.co/spaces/openGPT…

Andre Martins (@andre_t_martins) 's Twitter Profile Photo

Good to see the European Commission promoting OS LLMs in Europe. However, (1) "OpenEuroLLM" appropriates a name (#EuroLLM) that already exists, and (2) it is certainly *not* the "first family of open-source LLMs covering all EU languages" 🧵

Dan Deutsch (@_danieldeutsch) 's Twitter Profile Photo

🚨 New machine translation dataset alert! 🚨 We expanded the language coverage of WMT24 from 9 to 55 en->xx language pairs by collecting new reference translations for 46 languages in a dataset called WMT24++ Paper: arxiv.org/abs/2502.12404… Data: huggingface.co/datasets/googl…

Nicolas Boizard (@n1colais) 's Twitter Profile Photo

🇪🇺 One month after the AI Action Summit 2025 in Paris, I am thrilled to announce EuroBERT, a family of multilingual encoders exhibiting the strongest multilingual performance for tasks such as retrieval, classification, and regression across 15 languages, mathematics, and code. ⬇️ 1/6

Duarte Alves (@duartemralves) 's Twitter Profile Photo

🧵 (3/7) 🌐 EuroBERT is open-source:
👉 Models (210M, 610M, 2.1B params)
👉 Training snapshots
👉 Full training framework
Explore here: huggingface.co/EuroBERT
Code coming soon! github.com/Nicolas-BZRD/E…

Ricardo Rei (@ricardorei7) 's Twitter Profile Photo

Happy to share our new work: ZSB, a framework for creating benchmarks for any task and automatically evaluating any LLM. We show that it correlates highly with human evaluation on some of the most-used benchmarks, like Chatbot Arena

José Maria Pombal (@zmprcp) 's Twitter Profile Photo

We just released M-Prometheus, a suite of strong open multilingual LLM judges at 3B, 7B, and 14B parameters! Check out the models and training data on Huggingface: huggingface.co/collections/Un… and our paper: arxiv.org/abs/2504.04953
