Kexin Wang (@kexinwang2049)'s Twitter Profile
Kexin Wang

@kexinwang2049

A doctoral researcher at UKP @UKPLab @TUDarmstadt.

ID: 1304177764493295617

Link: https://github.com/kwang2049 | Joined: 10-09-2020 21:59:51

93 Tweets

309 Followers

121 Following

Daniel van Strien (@vanstriendaniel)'s Twitter Profile Photo

New blog post experimenting with using semantic search to find potential machine learning models for fine-tuning on the Hugging Face model hub. danielvanstrien.xyz/huggingface/hu…

Nils Reimers (@nils_reimers)'s Twitter Profile Photo

🔒 Semantic search based on dense vector spaces can significantly improve search results, but generalizes badly to new domains & new languages ☹️ 🔍 In this talk, I will give an outlook on the next generation of algorithms we will be using to improve search results.
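The dense-vector search idea behind this talk can be sketched in a few lines: encode query and documents into vectors, then rank by cosine similarity. The bag-of-words encoder below is only a dependency-free stand-in for a trained sentence encoder, and all names here are invented for illustration:

```python
import math

def embed(text, vocab):
    # Toy bag-of-words encoder, L2-normalised; a real dense retriever
    # would use a trained neural sentence encoder instead.
    vec = [0.0] * len(vocab)
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def search(query, docs, top_k=2):
    # Rank documents by cosine similarity (dot product of unit vectors).
    vocab = {t: i for i, t in
             enumerate(sorted({w for d in docs for w in d.lower().split()}))}
    q = embed(query, vocab)
    ranked = sorted(docs,
                    key=lambda d: sum(a * b for a, b in zip(q, embed(d, vocab))),
                    reverse=True)
    return ranked[:top_k]

docs = ["how to train a neural network",
        "best pasta recipes",
        "fine-tuning neural models for search"]
print(search("neural network training", docs, top_k=1))
```

The toy encoder even illustrates the generalization problem the talk mentions: the unseen word "training" gets no representation at all, which is the lexical analogue of a dense model failing on a new domain.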

Nils Reimers (@nils_reimers)'s Twitter Profile Photo

Join today at 7pm CEST (UTC+2) for a webinar on how to improve search using neural networks:
- Overview of neural architectures for search
- Pros and cons
- Distillation across architectures

Nandan Thakur (@beirmug)'s Twitter Profile Photo

I'm super happy to share that I have joined Google Research this fall as a PhD research intern in California to work on multilingual search w/ Daniel Cer and Jianmo Ni! 😍🥳 It's my first time in the US. If you are in the Bay Area, let's meet! 🇺🇸🍻 #google #internship #NLP #IR

Charles Pierse (@cdpierse)'s Twitter Profile Photo

This latest release of Transformers Interpret (v0.8.1) brings some features I'm really excited about, particularly the new pairwise classification explainer. (1/8) github.com/cdpierse/trans…

Nafise Sadat Moosavi (@nafisesadat)'s Twitter Profile Photo

Activation functions reduce the topological complexity of data. The best activation function may differ across models and across layers, yet most Transformer models use GELU. What if the model learned optimized activation functions during training? Led by Haishuo, with Ji-Ung Lee and Iryna Gurevych.
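The idea of a learnable activation function can be illustrated with a single trainable parameter: a leaky-ReLU whose negative-side slope is fit by gradient descent (finite-difference gradients keep this dependency-free). This is only a toy sketch of the general idea, not the method of the paper:

```python
def act(x, alpha):
    # Parameterised activation: the negative-side slope alpha is learnable.
    return x if x > 0 else alpha * x

def loss(alpha, data):
    # Mean squared error of the activation's output against the targets.
    return sum((act(x, alpha) - y) ** 2 for x, y in data) / len(data)

# Targets behave like a leaky-ReLU with slope 0.3; start far away at 1.0.
data = [(v, v if v > 0 else 0.3 * v) for v in [k / 10 for k in range(-10, 11)]]

alpha, lr, eps = 1.0, 0.5, 1e-5
for _ in range(200):
    # Central finite-difference estimate of dL/dalpha, then a descent step.
    grad = (loss(alpha + eps, data) - loss(alpha - eps, data)) / (2 * eps)
    alpha -= lr * grad

print(round(alpha, 3))  # converges to the target slope 0.3
```

In a real network each layer could hold such parameters and they would be trained jointly with the weights by backpropagation.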

Sebastian Hofstätter (@s_hofstaetter)'s Twitter Profile Photo

I am super proud to share the work of my Google AI internship 🎉 FiD-Light is an efficient retrieval-augmented generation model, considerably advancing state-of-the-art effectiveness on six KILT tasks 🙌 w/ Jiecao Chen, Karthik Raman, and Hamed Zamani 📄 arxiv.org/abs/2209.14290

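A core efficiency idea in FiD-Light, as described in the paper, is to hand the decoder only a compressed prefix of each retrieved passage's encoded representations rather than every token vector. A rough, dependency-free sketch of that compression step, with invented names and token strings standing in for encoder states:

```python
def encode(passage):
    # Stand-in "encoder": one representation (here just the token string)
    # per token; a real model would produce one vector per token.
    return passage.split()

def compress_for_decoder(passages, keep=3):
    # FiD-style fusion, but keep only the first `keep` representations per
    # retrieved passage before concatenating them for the decoder, which
    # shrinks the decoder's input roughly by a factor of len/keep.
    fused = []
    for p in passages:
        fused.extend(encode(p)[:keep])
    return fused

passages = ["the eiffel tower is in paris france",
            "paris is the capital of france"]
print(compress_for_decoder(passages, keep=3))
```

The fused input is far shorter than concatenating full passages, which is where the model's efficiency gain over plain FiD comes from.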
Nils Reimers (@nils_reimers)'s Twitter Profile Photo

MTEB - Massive Text Embedding Benchmark 🧨 Text embeddings are useful for many applications 💻, but their evaluation is still often done rather poorly, on trivial datasets 🙁. MTEB is here to change that. We collected 58 datasets across 8 tasks and evaluated many public models.

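The benchmark's recipe, one embedding model run over many tasks and summarised by an averaged score, can be sketched dependency-free. The character-bigram "encoder" and the tiny task data below are invented stand-ins; MTEB itself runs this kind of loop over 58 real datasets and 8 task types:

```python
import math

def embed(text):
    # Stand-in encoder: character-bigram counts, L2-normalised.
    vec = {}
    for i in range(len(text) - 1):
        bg = text[i:i + 2].lower()
        vec[bg] = vec.get(bg, 0) + 1
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {k: v / norm for k, v in vec.items()}

def cos(a, b):
    return sum(v * b.get(k, 0.0) for k, v in a.items())

def accuracy_at_1(pairs):
    # Retrieval-style task: each query must rank its own gold doc first.
    docs = [d for _, d in pairs]
    hits = sum(max(docs, key=lambda d: cos(embed(q), embed(d))) == gold
               for q, gold in pairs)
    return hits / len(pairs)

tasks = {  # two toy "datasets"; MTEB runs the same loop over 58 real ones
    "duplicate-question": [("how to sort a list", "sorting a list quickly"),
                           ("install python on mac", "installing python for macos")],
    "title-to-abstract": [("dense retrieval survey", "a survey on dense retrieval methods"),
                          ("graph neural nets", "methods for learning on graph neural networks")],
}
scores = {name: accuracy_at_1(pairs) for name, pairs in tasks.items()}
print(scores, sum(scores.values()) / len(scores))
```

The point of the benchmark is the breadth of the loop, not the metric: a model that looks strong on one trivial dataset can still average poorly across many tasks.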
Akari Asai (@akariasai)'s Twitter Profile Photo

New paper 🚨 arxiv.org/abs/2211.09260 Can we train a single search system that satisfies our diverse information needs? We present 𝕋𝔸ℝ𝕋 🥧, the first multi-task instruction-following retriever, trained on 𝔹𝔼ℝℝ𝕀 🫐, a collection of 40 retrieval tasks with instructions! 1/N

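The instruction-following trick can be illustrated minimally: prepend the task instruction to the query before encoding, so the same query retrieves different documents under different intents. The bag-of-words encoder, documents, and instructions below are invented stand-ins for TART's trained model:

```python
import math

def embed(text):
    # Toy bag-of-words encoder (unit-normalised); TART uses a trained encoder.
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {k: v / norm for k, v in vec.items()}

def retrieve(instruction, query, docs):
    # Key idea: condition retrieval on the task by prepending the instruction.
    q = embed(instruction + " " + query)
    return max(docs, key=lambda d: sum(v * embed(d).get(k, 0.0)
                                       for k, v in q.items()))

docs = ["python code def sort list",
        "explanation bubble sort works by swapping adjacent elements"]
print(retrieve("find python code", "bubble sort", docs))
print(retrieve("find an explanation", "bubble sort", docs))
```

Same query, two instructions, two different top results: that is the behaviour the 40 instruction-annotated tasks in 𝔹𝔼ℝℝ𝕀 are meant to teach a real retriever.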
Jingfeng Yang (@jingfengy)'s Twitter Profile Photo

#ChatGPT and #GPT3 are hot. But let's be practical when we want to reproduce GPT-3 or use it in our applications: why did all of the public reproductions of GPT-3 fail? For which tasks should we use GPT-3.5/ChatGPT? I tried to answer these in a new blog: jingfengyang.github.io/gpt

Nils Reimers (@nils_reimers)'s Twitter Profile Photo

🇺🇳 Wikipedia int8 & binary Embeddings 🇺🇳 int8 & binary embeddings are amazing: 💰 Up to 100x lower cost 🚀 Up to 30x faster 🏆 State-of-the-art search quality Wikipedia embedded in 300+ languages: huggingface.co/datasets/Coher…

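The savings quoted here come from quantisation: float32 (4 bytes per dimension) down to int8 (1 byte, 4x smaller) or a single sign bit per dimension (32x smaller), searched by dot product or Hamming distance. A dependency-free sketch of both schemes; the symmetric scaling recipe is a common one, not necessarily the exact one used for that dataset:

```python
def to_int8(vec):
    # Symmetric scalar quantisation: map floats into [-127, 127],
    # keeping the scale so values can be approximately recovered.
    scale = (max(abs(v) for v in vec) or 1.0) / 127.0
    return [round(v / scale) for v in vec], scale

def to_binary(vec):
    # One sign bit per dimension, packed into a single Python int.
    bits = 0
    for v in vec:
        bits = (bits << 1) | (v > 0)
    return bits

def hamming(a, b):
    # Binary "distance": the number of differing bits.
    return bin(a ^ b).count("1")

vec = [0.12, -0.40, 0.33, -0.05]
q8, scale = to_int8(vec)
print(q8, [round(v * scale, 2) for v in q8])  # codes and dequantised values
print(to_binary(vec))                         # 0b1010 -> 10
```

Dequantising the int8 codes recovers the original values to two decimals here, which is why int8 search quality stays close to float32.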
Nils Reimers (@nils_reimers)'s Twitter Profile Photo

π’πžπ¦πšπ§π­π’πœ π’πžπšπ«πœπ‘ 𝐨𝐧 𝟏𝟎𝟎𝐌 𝐝𝐨𝐜𝐬 - 𝐖𝐒𝐭𝐑 𝟏𝟎𝟎𝐌𝐁 𝐨𝐟 𝐌𝐞𝐦𝐨𝐫𝐲 GPU-poor and Memory-poor, and not having 500GB of memory to embed & index 100M docs? Still want to participate at TREC-RAG 2024? Introducing πƒπ’π¬π€π•πžπœπ­π¨π«πˆπ§ππžπ±

π’πžπ¦πšπ§π­π’πœ π’πžπšπ«πœπ‘ 𝐨𝐧 𝟏𝟎𝟎𝐌 𝐝𝐨𝐜𝐬 - 𝐖𝐒𝐭𝐑 𝟏𝟎𝟎𝐌𝐁 𝐨𝐟 𝐌𝐞𝐦𝐨𝐫𝐲

GPU-poor and Memory-poor, and not having 500GB of memory to embed &amp; index 100M docs?

Still want to participate at TREC-RAG 2024?

Introducing πƒπ’π¬π€π•πžπœπ­π¨π«πˆπ§ππžπ±
Kexin Wang (@kexinwang2049)'s Twitter Profile Photo

I am attending ACL 2024 in Bangkok 🇹🇭 to present my paper DAPR huggingface.co/datasets/UKPLa… (Mon 14:00, poster). Looking forward to meeting you! #ACL2024NLP