Matan Eyal (@mataneyal1)'s Twitter Profile

Matan Eyal
@mataneyal1

ID: 2954613560
Joined: 01-01-2015 09:42:04

111 Tweets
199 Followers
379 Following

Alon Jacovi (@alon_jacovi):

Hi, I gathered a ton of XAI papers (>5k) and wrote up an empirical trends report. check it out here!!

arXiv: arxiv.org/abs/2301.05433
Blog: medium.com/@alonjacovi/tr…
Data: github.com/alonjacovi/XAI…
roeeaharoni (@roeeaharoni):

Finally, in "Multilingual Sequence-to-Sequence Models for Hebrew NLP" (arxiv.org/pdf/2212.09682…) we show that multilingual models like mT5 can go a long way for Hebrew when modeled right, with lots of strong results. With <a href="/mataneyal1/">Matan Eyal</a> <a href="/hila_noga/">Hila Noga هيله نوغا</a>  <a href="/rtsarfaty/">Reut Tsarfaty</a> and Idan Szpektor
(((ل()(ل() 'yoav))))👾 (@yoavgo):

"semantic embeddings" are becoming increasingly popular, but "semantics" is really ill-defined. sometimes you want to search for text given a description of its content. current embedders suck at this. in this work we introduce a new embedder.
<a href="/ravfogel/">Shauli Ravfogel</a> <a href="/valentina__py/">Valentina Pyatkin ➡️ ICML</a> <a href="/AvshalomM/">Avshalom Manevich</a>
Uri Shaham (@uri_shaham):

Happy to share ZeroSCROLLS, a zero-shot benchmark for long text understanding!

📜Paper arxiv.org/pdf/2305.14196…
📜 Leaderboard zero.scrolls-benchmark.com
📜 Data (inputs only) huggingface.co/datasets/tau/z…

<a href="/maorivg/">Maor Ivgi</a> <a href="/AviaEfrat/">Avia Efrat</a> <a href="/JonathanBerant/">Jonathan Berant</a> <a href="/omerlevy_/">Omer Levy</a> 

1/5
AK (@_akhaliq):

Multilingual Instruction Tuning With Just a Pinch of Multilinguality

paper page: huggingface.co/papers/2401.01…

As instruction-tuned large language models (LLMs) gain global adoption, their ability to follow instructions in multiple languages becomes increasingly crucial. One
Matan Eyal (@mataneyal1):

Coming into this project we had a lot of open questions about instruction tuning (IT) in multilingual settings:

Can LLMs transfer cross-lingually in IT❓
How many multilingual examples should we curate❓
How many languages should we curate❓

And so on... Great work from Uri giving great insights on these!

omer goldman (@omernlp):

so what’s up with tokenization? why and how does it work? welcome to a 🚨new paper🚨 thread!

"Unpacking Tokenization: Evaluating Text Compression and its Correlation with Model Performance"

w/ Avi Caciularu, Matan Eyal, Kris Cao, idan szpektor and Reut Tsarfaty

arxiv.org/abs/2403.06265

1/🧵

Zorik Gekhman (@zorikgekhman):

Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? New preprint!📣

- LLMs struggle to integrate new factual knowledge through fine-tuning
- As the model eventually learns new knowledge, it becomes more prone to hallucinations😵‍💫

📜arxiv.org/pdf/2405.05904

🧵1/12👇

Johan Ferret (@johanferret):

Glad to announce that these cool papers were accepted at #ICLR2025!! 💫

BOND: Aligning LLMs with Best-of-N Distillation
arxiv.org/abs/2407.14622

Diversity-Rewarded Classifier-Free Guidance Distillation
arxiv.org/abs/2410.06084

🧵

Matan Eyal (@mataneyal1):

So proud to have been part of the team bringing Gemma 3 to life, working on its multilingual capabilities! 🌎 It's amazing to see the impact of everyone's hard work. #Gemma3 #Multilingual

Ai2 (@allen_ai):

Meet Ai2 Paper Finder, an LLM-powered literature search system.

Searching for relevant work is a multi-step process that requires iteration. Paper Finder mimics this workflow — and helps researchers find more papers than ever 🔍
Google AI (@googleai):

Introducing ECLeKTic, a new benchmark for Evaluating Cross-Lingual Knowledge Transfer in LLMs. It uses a closed-book QA task, where models must rely on internal knowledge to answer questions based on information captured only in a single language.

More → goo.gle/3Y5TqvZ