Alexandra Chronopoulou (@alexandraxron) 's Twitter Profile
Alexandra Chronopoulou

@alexandraxron

Research in post-training of LLMs @GoogleDeepMind Gemini | Prev. @LMU_Muenchen @allen_ai @amazonscience @ecentua

ID: 39305535

Link: https://alexandra-chron.github.io/ | Joined: 11-05-2009 18:17:22

309 Tweets

1.1K Followers

411 Following

Alexandra Chronopoulou (@alexandraxron) 's Twitter Profile Photo

I am in Miami 🌴 attending #EMNLP2024! Tomorrow I will be presenting my Google DeepMind internship paper at the MRL workshop at 11:00am (poster session) and as an oral at 2:30pm! If you are still around and interested in multilingual model merging, stop by!🎉

Alexandra Chronopoulou (@alexandraxron) 's Twitter Profile Photo

Super excited - we received ✨✨best paper award ✨✨ for our paper at the Multilingual Representation Learning workshop #EMNLP2024!!! This was a fun internship project thanks to Sebastian Ruder, Priyanka Agrawal, Xinyi Wang (Cindy), Jonas Pfeiffer, and Joshua Maynez!

Nikos Aletras (@nikaletras) 's Twitter Profile Photo

We're looking for a #PhD student to work on multimodal LLMs. This is a fully-funded scholarship (including stipend), open to home and international candidates. Deadline: 29/1/2025 Please spread the word! #nlproc

Tu Vu (@tuvllms) 's Twitter Profile Photo

📢✨ I am recruiting 1-2 PhD students at Virginia Tech this cycle. If you are interested in efficient model development (including model merging, parameter-efficient fine-tuning & transfer learning), instruction tuning, advanced reasoning, LLMs-as-judges, etc., please apply!!

Raj Dabre (@prajdabre1) 's Twitter Profile Photo

Paper #2: Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization Link: aclanthology.org/2024.mrl-1.7/ Ever wondered how LLM weight arithmetic can enable models to handle tasks in new languages zero-shot? The authors have a solution.

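The paper above composes a "language vector" and a "task vector" in weight space to target a task in a language with no labeled data. Below is a minimal sketch of that arithmetic, assuming weights are plain NumPy dicts; the function names and scaling coefficients are hypothetical, and the actual paper applies the arithmetic to parameter-efficient layers rather than full model weights.

```python
import numpy as np

def task_vector(finetuned, base):
    # The difference between fine-tuned and base weights, per parameter tensor.
    return {k: finetuned[k] - base[k] for k in base}

def merge_for_zero_shot(base, lang_ft, task_ft, alpha=1.0, beta=1.0):
    # Compose a "language vector" (from fine-tuning on unlabeled text in the
    # target language) with a "task vector" (from task fine-tuning in a source
    # language) so the merged model handles the task in the new language
    # zero-shot. alpha/beta scale each vector's contribution.
    lang_v = task_vector(lang_ft, base)
    task_v = task_vector(task_ft, base)
    return {k: base[k] + alpha * lang_v[k] + beta * task_v[k] for k in base}
```

Usage: given a pretrained checkpoint, a checkpoint adapted to Greek text, and a checkpoint fine-tuned on English summarization, `merge_for_zero_shot(base, greek_ft, summ_ft)` yields weights aimed at Greek summarization without any Greek summarization data.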
Alexandra Chronopoulou (@alexandraxron) 's Twitter Profile Photo

We are organizing Repl4NLP 2025 along with Freda Shi, Giorgos Vernikos, Vaibhav Adlakha, Xiang Lorraine Li, and Bodhisattwa Majumder. The workshop will be co-located with NAACL 2025 in Albuquerque, New Mexico and we plan to have a great panel of speakers. Consider submitting your coolest work!

Kosmas Marinakis (@kos_marinakis) 's Twitter Profile Photo

A response to the vested interests, the thugs, and everyone who tried to silence me. youtu.be/l3RNgT5msX4

SIGREP (@sigrep_acl) 's Twitter Profile Photo

Disappointed with #ICLR or #NAACL reviews? Consider submitting your work at #Repl4NLP NAACL HLT 2025 , whether it's full papers, extended abstracts, or cross-submissions. 🔥 Details on submissions 👉 sites.google.com/view/repl4nlp2… ⏰ Deadline January 30

AUEB (@aueb) 's Twitter Profile Photo

🤖 Ion Androutsopoulos, Professor of Artificial Intelligence (AI) in the Department of Informatics at AUEB, talks about Artificial Intelligence in a conversation with Giorgos Psaltis on SKAI. 🎧 Listen to the conversation: aueb-cast.simplecast.com/episodes/artif… #aueb #ai #artificialintelligence #talk

Alisa Liu (@alisawuffles) 's Twitter Profile Photo

We created SuperBPE🚀, a *superword* tokenizer that includes tokens spanning multiple words. When pretraining at 8B scale, SuperBPE models consistently outperform the BPE baseline on 30 downstream tasks (+8% MMLU), while also being 27% more efficient at inference time.🧵

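The key idea in the SuperBPE announcement above is removing the whitespace pre-tokenization boundary so that BPE merges can span multiple words. A toy sketch of that idea, assuming a plain greedy pair-merge loop; this is an illustration of cross-whitespace merging only, not the paper's actual training recipe, and `train_superbpe` is a hypothetical name.

```python
from collections import Counter

def train_superbpe(corpus, num_merges):
    # Start from characters. Unlike standard BPE, we do NOT pre-split the
    # corpus on whitespace, so merges may cross word boundaries and produce
    # "superword" tokens like "by the way".
    tokens = list(corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent token pairs and greedily merge the most frequent one.
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                merged.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return merges, tokens
```

On a corpus with frequent multi-word phrases, the learned vocabulary quickly contains tokens with internal spaces, which is what shortens sequences and drives the inference-efficiency gain claimed in the tweet.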
lmarena.ai (formerly lmsys.org) (@lmarena_ai) 's Twitter Profile Photo

BREAKING: Gemini 2.5 Pro is now #1 on the Arena leaderboard - the largest score jump ever (+40 pts vs Grok-3/GPT-4.5)! 🏆 Tested under codename "nebula"🌌, Gemini 2.5 Pro ranked #1🥇 across ALL categories and UNIQUELY #1 in Math, Creative Writing, Instruction Following, Longer

Salman (@salman1422571) 's Twitter Profile Photo

🚨 Excited to share our new paper on 𝕏-Teaming! 🤖 Multi-agent system for multi-turn jailbreaking 🔍 96.2% attack success against Claude 3.7 (immune to single-turn attacks!) 💥 Up to 98.1% attack success on leading models 🛡️ Released 30K safety dataset 🧵below #AI #LLMSafety

Freda Shi (@fredahshi) 's Twitter Profile Photo

Hey #NAACL2025 friends! You are all invited to join us at the RepL4NLP workshop with an amazing lineup of speakers & panelists: Ana Marasović, Najoung Kim 🫠, and Akari Asai (starting TODAY 9:30am Ballroom A, floor 2) and posters (Hall 3, floor 1)!

Tu Vu (@tuvllms) 's Twitter Profile Photo

Excited to share that our paper on model merging at scale has been accepted to Transactions on Machine Learning Research (TMLR). Huge congrats to my intern Prateek Yadav and our awesome co-authors Jonathan Lai, Alexandra Chronopoulou, Manaal Faruqui, Mohit Bansal, and Tsendsuren 🎉!!
