Shengyao Zhuang (@shengyaozhuang) 's Twitter Profile
Shengyao Zhuang

@shengyaozhuang

Postdoctoral researcher at @CSIRO @ehealthresearch.
Former PhD student @IELabGroup, The University of Queensland.
Working on information retrieval, NLP.

ID: 1162690737516863494

Link: https://arvinzhuang.github.io/ | Joined: 17-08-2019 11:41:00

157 Tweets

257 Followers

275 Following

Wang shuai (Dylan) (@dylan_wangs) 's Twitter Profile Photo

Sharing our paper "ReSLLM: Large Language Models are Strong Resource Selectors for Federated Search", with Shengyao Zhuang, Bevan Koopman and Guido Zuccon, which has been accepted to TheWebConference 2025 as a short paper. Preprint: arxiv.org/abs/2401.17645 GitHub: github.com/ielab/SLAT-RsL…

Manuel Faysse (@manuelfaysse) 's Twitter Profile Photo

Beyond the nice realization that the ~continuous~ input space of document embeddings is easier to run adversarial attacks on than pure text, this paper compares single- and multi-vector approaches in terms of robustness, which I had never seen in an ML privacy setting before!
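
To illustrate the point about continuous inputs (this is a minimal sketch of a generic gradient-based perturbation, not the attack from the paper; the embedding dimension and all names are assumptions), a document embedding can be nudged directly toward a target query embedding by gradient steps, whereas attacking raw text would require a discrete search over tokens:

# Illustrative sketch only: gradient ascent on a continuous document embedding
# to increase its similarity to a target query embedding.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim = 768  # assumed embedding dimension
doc_emb = torch.randn(dim, requires_grad=True)   # hypothetical document embedding
query_emb = torch.randn(dim)                     # hypothetical target query embedding

optimizer = torch.optim.SGD([doc_emb], lr=0.1)
for step in range(100):
    optimizer.zero_grad()
    # Maximize cosine similarity to the target (minimize its negative).
    loss = -F.cosine_similarity(doc_emb, query_emb, dim=0)
    loss.backward()
    optimizer.step()

print(f"final cosine similarity: {F.cosine_similarity(doc_emb, query_emb, dim=0).item():.3f}")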

Xueguang Ma (@xueguang_ma) 's Twitter Profile Photo

It's always exciting to learn something new! With Shengyao Zhuang, we explore how recent RL techniques can help make reranking more explainable and better at handling queries that require complex reasoning.

tomaarsen (@tomaarsen) 's Twitter Profile Photo

I've just ported the excellent monoELECTRA-{base, large} reranker models from Ferdinand Schlatt & the research network Webis Group to Sentence Transformers! These models were introduced in the Rank-DistiLLM paper, and distilled from LLMs like RankZephyr and RankGPT4. Details in 🧵
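
For context, a reranker ported to Sentence Transformers is typically used through its CrossEncoder class; the sketch below shows that usage pattern, with the model id and example texts assumed rather than taken from the tweet:

# Hedged usage sketch of a Sentence Transformers cross-encoder reranker.
from sentence_transformers import CrossEncoder

model = CrossEncoder("webis/monoelectra-base")  # hypothetical model id
query = "what is federated search?"
docs = [
    "Federated search queries multiple search engines and merges the results.",
    "A recipe for banana bread with walnuts.",
]
# Score each (query, document) pair; higher scores mean more relevant.
scores = model.predict([(query, d) for d in docs])
print(scores)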

ielab (@ielabgroup) 's Twitter Profile Photo

🎉 Excited to share the list of our ielab papers accepted at #SIGIR2025 ACM SIGIR! Topics include efficiency in IR, domain-specific IR (SysRev, podcast), screenshot retrievers, hallucination detection, and more. See you in Padua 🇮🇹! Details coming soon, stay tuned!

Guido Zuccon (@guidozuc) 's Twitter Profile Photo

Yesterday I presented our #ECIR2025 paper "An Investigation of Prompt Variations for Zero-shot LLM-based Rankers", with Shuoqi Sun, Shengyao Zhuang and Wang shuai (Dylan). Yes, it was in a church! Slides at ielab.io/files/guido-20… and paper at arxiv.org/abs/2406.14117

Sumit (@_reachsumit) 's Twitter Profile Photo

Tevatron 2.0: Unified Document Retrieval Toolkit across Scale, Language, and Modality. Xueguang Ma et al. introduce a unified pipeline enabling researchers to explore retriever models across different scales, languages, and modalities. 📝arxiv.org/abs/2505.02466 👨🏽‍💻github.com/texttron/tevat…

Shengyao Zhuang (@shengyaozhuang) 's Twitter Profile Photo

One embedding model for all modalities and across different languages! We will demo the model training pipeline at #SIGIR2025. Our OmniEmbed-v0.1 also demonstrates very strong performance on the MAGMaR multimodal retrieval shared task: eval.ai/web/challenges…