I-Hung Hsu (@ihung_hsu)'s Twitter Profile
I-Hung Hsu

@ihung_hsu

Research Scientist @Google; CS PhD from @USC in NLP; working on making machines reliable, intelligent, and user-friendly tools that are accessible to all people.

ID: 829514565931069441

Link: https://ihungalexhsu.github.io · Joined: 09-02-2017 02:17:38

109 Tweets

451 Followers

325 Following

Kuan-Hao Huang (@kuanhaoh_)'s Twitter Profile Photo

I am thrilled to share that I will join the Department of Computer Science and Engineering at Texas A&M University as an Assistant Professor in Fall 2024. Many thanks to my advisors, colleagues, and friends for their support and help. I'm really excited about the new journey at College Station!

AK (@_akhaliq)'s Twitter Profile Photo

New Hugging Face Daily Papers feature

If you have at least one indexed paper on Hugging Face, you can now directly submit papers to HF daily papers

try it here: huggingface.co/papers
Kuan-Hao Huang (@kuanhaoh_)'s Twitter Profile Photo

I'll miss #NAACL2024 next week due to my visa status. However, feel free to talk to Tanmay Parekh about our latest work on event extraction and cross-lingual transfer! I'm also seeking students to join my group at TAMU. Interested? Fill out this Google form! forms.gle/Cw4mn47AWNgkmc…

Chen-Yu Lee (@chl260)'s Twitter Profile Photo

If you're struggling with the credibility and grounding of your LLM's generated text, our new approach, CaLM: Contrasting Large and Small Language Models to Verify Grounded Generation (ACL '24), might be the solution you've been looking for. 💡 Paper: arxiv.org/abs/2406.05365 1/n
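
As a rough illustration of what "contrasting large and small language models to verify grounded generation" could look like in practice (a sketch, not the paper's actual procedure or API): have a small LM read only the cited sources and check whether it independently reproduces the large LM's answer. Here `small_lm` and `agree` are hypothetical callables.

```python
# A minimal sketch of one way to contrast a large and a small LM to verify
# grounding, per the CaLM title. `small_lm` and `agree` are hypothetical
# stand-ins for a small LM and an answer-consistency check; they are not
# the paper's actual API.

def verify_grounded_answer(question, large_answer, cited_sources,
                           small_lm, agree):
    """Accept the large LM's answer only if a small LM, reading nothing but
    the cited sources, arrives at a consistent answer."""
    prompt = (
        "Answer using ONLY these sources.\n\n"
        f"Sources:\n{cited_sources}\n\nQuestion: {question}\nAnswer:"
    )
    small_answer = small_lm(prompt)
    # Agreement suggests the citations actually support the answer;
    # disagreement flags the answer for revision or re-generation.
    return large_answer if agree(large_answer, small_answer) else None
```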

Cheng-Yu Hsieh (@cydhsieh)'s Twitter Profile Photo

Why are LLMs lost in the middle❓
💡LLMs exhibit a U-shaped positional attention bias that dominates their generation behavior (they often draw on the leading/ending contexts in their responses)
🚀By modeling and removing this bias, we hugely improve LLMs' RAG performance!
📜: arxiv.org/abs/2406.16008
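
As a rough illustration of the "model the bias, then remove it" idea (a sketch, not the paper's exact method): probe how much attention the same document receives at each slot in the context, treat the resulting U-shaped curve as the positional bias, and subtract it from observed per-document attention. `attention_to_doc` below is a hypothetical model hook.

```python
import numpy as np

# A rough sketch of calibrating away a U-shaped positional attention bias.
# `attention_to_doc(docs, pos)` is a hypothetical hook returning the
# attention mass the model places on the document at index `pos`.

def estimate_positional_bias(attention_to_doc, probe_doc, filler_docs):
    """Place the same probe document at every slot and record the attention
    it receives; the resulting curve is the position-dependent bias."""
    n_slots = len(filler_docs) + 1
    bias = np.zeros(n_slots)
    for pos in range(n_slots):
        docs = filler_docs[:pos] + [probe_doc] + filler_docs[pos:]
        bias[pos] = attention_to_doc(docs, pos)
    return bias - bias.mean()  # center so calibration is zero-sum

def debiased_relevance(observed_attention, bias):
    """Subtract the positional bias so retrieved documents are ranked by
    content, not by where they happen to sit in the prompt."""
    return np.asarray(observed_attention) - bias
```
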
I-Hung Hsu (@ihung_hsu)'s Twitter Profile Photo

Unfortunately, I cannot attend #acl24 this year. However, you can still dive into my paper's intro video and poster on Underline: underline.io/events/466/pos… (available for #ACL2024NLP attendees).

Zihan Xue (@zihanxuezx)'s Twitter Profile Photo

Excited to share our work on 8/14 at #ACL2024NLP Poster Session 4!

Ever heard of Event Linking? It's a task that grounds events to a knowledge base (KB) for better event understanding, e.g., for RAG. Our work, an Argument-Aware Approach to Event Linking, tackles two main challenges of the task.
Lucas Bandarkar (@lucasbandarkar)'s Twitter Profile Photo

We presented Belebele at ACL 2024 this week! (Thx to Davis Liang and Satya Narayan Shukla) A year on from its release, it's been really cool to see the diversity of research projects that have used it. The field is in dire need of more multilingual benchmarks!

Violet Peng (@violetnpeng)'s Twitter Profile Photo

Thrilled, grateful, and humbled to have won 3 outstanding paper awards at #EMNLP2024!!! Not even in my wildest dreams. Immense thanks to my amazing students and collaborators! 
All three works are on evaluating LLMs' abilities in creative narrative generation. 🧵👇
I-Hung Hsu (@ihung_hsu)'s Twitter Profile Photo

Our team (Google Cloud AI Research: research.google/teams/cloud-ai/) is seeking PhD student researchers/interns to work on LLM-related topics (agent, reasoning, RAG, data synthesis, etc.). If interested, please fill in this form: forms.gle/Cia2WGY94zTkpP…. 
Thank you and plz help RT!
Shao-Hua Sun (@shaohua0116)'s Twitter Profile Photo

#NeurIPS2024 Xinyun Chen (Google DeepMind), Levi Lelis (U of Alberta), and I will be giving a tutorial on programmatic reinforcement learning and code generation on Dec 10 (Tue) 1:30-4 PM @ East Exhibition Hall A. Come say hi!

Manling Li (@manlingli_)'s Twitter Profile Photo

[Long Tweet Ahead] Faculty Interview Tips & Common Questions:
🧘‍♀️0. Firstly, do not be nervous
- Almost everything can be prepared in advance :)
- Be grateful for everyone's time.
- Think of it as an opportunity to share your research with others -- exciting, right?
- Technical

Yihe Deng (@yihe__deng)'s Twitter Profile Photo

🚀Excited to share our latest work: OpenVLThinker, an exploration into enhancing vision-language models with R1 reasoning capabilities. 

By iterative integration of SFT and RL, we enabled LVLMs to exhibit robust R1 reasoning behavior. As a result, OpenVLThinker achieves a 70.2%
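
A schematic of the iterative SFT-then-RL loop the tweet alludes to (a sketch under assumed helpers, not OpenVLThinker's actual training code): each round fine-tunes on the previous round's verified reasoning traces, sharpens the model with RL, then regenerates traces for the next round. All helpers passed in are hypothetical stand-ins.

```python
# A schematic of an iterative SFT + RL loop in the spirit of the tweet.
# `sft`, `rl_finetune`, `generate_traces`, and `keep_verified` are
# hypothetical stand-ins, not OpenVLThinker's actual training code.

def iterative_sft_rl(model, seed_traces, prompts,
                     sft, rl_finetune, generate_traces, keep_verified,
                     n_rounds=3):
    """Alternate supervised fine-tuning on reasoning traces with RL, feeding
    each round's verified self-generated traces into the next round's SFT."""
    traces = seed_traces
    for _ in range(n_rounds):
        model = sft(model, traces)           # imitate current best traces
        model = rl_finetune(model, prompts)  # reinforce reasoning with rewards
        traces = keep_verified(generate_traces(model, prompts))
    return model
```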