Xinyi Wang (Cindy) (@cindyxinyiwang)'s Twitter Profile
Xinyi Wang (Cindy)

@cindyxinyiwang

Research Scientist at Google DeepMind

previously: PhD candidate at Language Technologies Institute at CMU

ID: 778246363175907328

Link: http://cindyxinyiwang.github.io/
Joined: 20-09-2016 14:56:06

58 Tweets

1.1K Followers

311 Following

Paul Michel (@pmichelx)'s Twitter Profile Photo

My lab École normale supérieure | PSL is recruiting interns!
In particular I'm looking for people to explore two topics:
- Domain Discovery for Blind Distributional Robustness
- Curriculum design for Emergent Communication

More info (and other topics):  cognitive-ml.fr/docs/callForIn…
Graham Neubig (@gneubig)'s Twitter Profile Photo

GPT-4 has been out for 72 hours, and it could change the world! Here are some amazing and important things it *can't* do (yet) ⬇️

Xinyi Wang (Cindy) (@cindyxinyiwang)'s Twitter Profile Photo

While ChatGPT and Bard represent amazing NLP progress in English, under-represented languages often lack the data and evaluation needed to benefit from these advances. Our benchmark focusing on under-represented languages is the first step towards closing the gap.

Benjamin Muller (@ben_mlr)'s Twitter Profile Photo

Despite the fantastic progress we've seen recently in cross-lingual modeling, the best systems still make a lot of factual errors. To address this, here is our work on
🚨 Evaluating and Modeling Attribution for Cross-Lingual Question Answering 🚨

#1 Attribution Evaluation: Our

Jonas Pfeiffer (@pfeiffjo)'s Twitter Profile Photo

We propose 𝗺𝗺𝗧𝟱, a modular multilingual seq2seq model.

Our modular design and training regime solves source-language hallucinations, resulting in massive performance gains in cross-lingual transfer scenarios.

📄 arxiv.org/abs/2305.14224
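The tweet does not spell out the architecture, so what follows is only a rough sketch of the general idea behind language-specific modules inside a shared seq2seq backbone. The class names, bottleneck size, and route-by-language-ID interface are hypothetical and not the authors' mmT5 implementation; the sketch assumes PyTorch.

```python
# Rough sketch (hypothetical names, NOT the authors' mmT5 code): a shared
# transformer block augmented with small language-specific adapter modules
# that can be swapped at inference time for cross-lingual transfer.
import torch
import torch.nn as nn


class LanguageAdapter(nn.Module):
    """Small residual bottleneck module trained for a single language."""

    def __init__(self, d_model: int, d_bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(torch.relu(self.down(hidden)))


class ModularLayer(nn.Module):
    """Shared feed-forward block followed by a per-language adapter."""

    def __init__(self, d_model: int, languages: list[str]):
        super().__init__()
        self.shared_ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        # One adapter per language; only the selected one is active per batch.
        self.adapters = nn.ModuleDict(
            {lang: LanguageAdapter(d_model) for lang in languages}
        )

    def forward(self, hidden: torch.Tensor, lang: str) -> torch.Tensor:
        return self.adapters[lang](self.shared_ffn(hidden))


# Toy usage: route through the English adapter during training, then switch
# to the target-language adapter for zero-shot cross-lingual generation.
layer = ModularLayer(d_model=512, languages=["en", "sw", "de"])
x = torch.randn(2, 16, 512)  # (batch, sequence length, d_model)
print(layer(x, lang="en").shape, layer(x, lang="sw").shape)
```

Swapping in the target-language module at decoding time, while the shared weights stay fixed, is one way such a modular design can discourage generating in the source language.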
AK (@_akhaliq)'s Twitter Profile Photo

FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning

paper page: huggingface.co/papers/2309.04…

Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with their own
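For reference, the two paradigms named in the tweet can be contrasted concretely. The snippet below is a hedged illustration of in-context learning versus full fine-tuning in general, not of the FIAT method itself; it assumes a Hugging Face style causal LM, with "gpt2" as a stand-in checkpoint.

```python
# Hedged illustration of the two paradigms mentioned above (NOT FIAT itself):
# in-context learning updates no weights, full fine-tuning updates all of them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1) In-context learning: the task is specified entirely in the prompt.
prompt = (
    "Translate English to French.\n"
    "cat -> chat\n"
    "dog -> chien\n"
    "bird ->"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=3)
print(tokenizer.decode(generated[0]))

# 2) Full fine-tuning: every parameter is updated by gradient descent.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tokenizer("bird -> oiseau", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```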
Alexandra Chronopoulou (@alexandraxron)'s Twitter Profile Photo

Can we achieve effective zero-shot summarization by combining language & task information from existing PEFT adapters with weight arithmetic?

Our paper, accepted at the MRL workshop #EMNLP2024, explores this question! A 🧵:
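Weight arithmetic over PEFT parameters can be made concrete with a short sketch. The combination rule, the scaling coefficient `lam`, and the adapter format below are assumptions for illustration, not necessarily the paper's exact recipe: the idea is to keep the task knowledge of an English summarization adapter while swapping English language knowledge for target-language knowledge.

```python
# Hypothetical sketch of weight arithmetic over PEFT adapter parameters
# (assumed combination rule, not necessarily the paper's exact recipe).
from typing import Dict

import torch

AdapterWeights = Dict[str, torch.Tensor]  # parameter name -> tensor


def combine_adapters(
    task_en: AdapterWeights,   # summarization adapter trained on English data
    lang_src: AdapterWeights,  # language adapter for English (unlabeled text)
    lang_tgt: AdapterWeights,  # language adapter for the target language
    lam: float = 1.0,          # hypothetical scaling coefficient
) -> AdapterWeights:
    """Task-vector style composition: keep the task, swap the language."""
    return {
        name: task_en[name] + lam * (lang_tgt[name] - lang_src[name])
        for name in task_en
    }


# Toy usage with random tensors standing in for real adapter checkpoints.
names = ["down.weight", "up.weight"]


def fake() -> AdapterWeights:
    return {n: torch.randn(8, 8) for n in names}


zero_shot_adapter = combine_adapters(fake(), fake(), fake(), lam=0.8)
print({n: tuple(w.shape) for n, w in zero_shot_adapter.items()})
```

The appeal of this kind of composition is that language adapters only need unlabeled text in each language, so a target-language summarizer can be assembled without labeled summarization data in that language.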