abhijitanand (@abhijit_ai)'s Twitter Profile
abhijitanand

@abhijit_ai

IR Researcher @ L3S Research Center

ID: 1511614695672885248

Joined: 06-04-2022 08:00:24

23 Tweets

15 Followers

37 Following

Avishek Anand (@run4avi)'s Twitter Profile Photo

Three papers from my group in ICTIR and SIGIR .. 1/ with Yumeng Wang and Lijun Lyu, investigating the brittleness of neural rankers .. fun fact: recurring adversarial words like “acceptable” demote relevant documents .. #ictir2022 L3S Research Center @L3S_Research_Center@wisskomm TU Delft
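
As a rough illustration of this kind of brittleness test (not the paper's actual setup), one can score a query-document pair with a public cross-encoder before and after appending a candidate adversarial word; the model name and probe word below are assumptions for the sketch.

```python
# Sketch: measure how appending a single candidate adversarial word changes a
# neural ranker's relevance score. The model and the probe word "acceptable"
# are illustrative assumptions, not the exact setup from the ICTIR paper.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "what causes rain"
doc = ("Rain forms when water vapour condenses into droplets "
       "that grow heavy enough to fall to the ground.")

# Score the original document and a copy with the probe word appended.
original_score, perturbed_score = model.predict([
    (query, doc),
    (query, doc + " acceptable"),
])
print(f"original:  {original_score:.4f}")
print(f"perturbed: {perturbed_score:.4f}")
print(f"shift:     {perturbed_score - original_score:+.4f}")
```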

Avishek Anand (@run4avi)'s Twitter Profile Photo

2/ wanna train neural re-rankers but only have small training data? With abhijitanand, Jurek Leonhardt and Koustav Rudra, we propose supervised contrastive losses combined with data augmentation methods to train cross-encoders. Paper: arxiv.org/abs/2207.03153
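
A minimal sketch of a supervised contrastive loss over query-document pair embeddings, the general ingredient the tweet refers to; this is a generic SupCon-style formulation with assumed batch conventions, not necessarily the exact loss from arxiv.org/abs/2207.03153.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (B, d) pooled [CLS] vectors of query-document pairs from one
    query group, where augmented copies of a document share its relevance label;
    labels: (B,) integer labels defining which in-batch examples are positives."""
    z = F.normalize(embeddings, dim=1)
    sim = (z @ z.t()) / temperature                      # (B, B) cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    # Average log-probability over each anchor's in-batch positives.
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts.clamp(min=1)
    has_pos = pos_counts > 0                             # anchors without positives are skipped
    return per_anchor[has_pos].mean() if has_pos.any() else embeddings.new_zeros(())
```

In practice this term would presumably be added to the usual ranking loss on the cross-encoder logits, e.g. total_loss = rank_loss + lambda * scl_loss, with the augmented document copies supplying the extra positives.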

Avishek Anand (@run4avi)'s Twitter Profile Photo

2/ we find that simple data augmentation schemes improve performance on a wide variety of small datasets. Interestingly, our data augmentation actually only works with ranking supervised contrastive losses (SCL)

Avishek Anand (@run4avi)'s Twitter Profile Photo

I am organising a summer school for Explainable AI. We have a session on explainable IR as well 😀. Register if you want a fun summer school with amazing talks and socials. Link: xaiss.eu. If you’re interested, I am attending #sigir2022

Sole Pera (@drch0le)'s Twitter Profile Photo

Today Avishek Anand shares a bit of the history of #InformationRetrieval with TU Delft Web Science & Engineering students -- and I get to visit his class and take a stroll down IR memory lane 😉 #aGoodDayAtTheOffice

Jonas Wallat (@jonaswallat)'s Twitter Profile Photo

Excited to be at #ECIR2023 to present our paper "Probing BERT for Ranking Abilities" with Fabian Beringer, abhijitanand and Avishek Anand! Paper: link.springer.com/chapter/10.100…

Jonas Wallat (@jonaswallat)'s Twitter Profile Photo

When probing BERT rankers for ranking abilities - such as the ability to estimate BM25 scores - we find these abilities to be best captured at intermediate layers
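
A hedged sketch of such layer-wise probing: fit a small linear probe on each layer's [CLS] representation to predict BM25 scores and compare layers. The base model, probe, and metric below are illustrative assumptions, not the paper's exact protocol.

```python
# Layer-wise probing sketch: which layer's [CLS] vector best predicts BM25?
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
encoder.eval()

def layer_probe_scores(pairs, bm25_scores):
    """pairs: list of (query, document) strings; bm25_scores: matching list of
    floats, e.g. computed with rank_bm25 over your own collection."""
    enc = tokenizer([q for q, _ in pairs], [d for _, d in pairs],
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden_states = encoder(**enc).hidden_states   # embeddings + one entry per layer
    results = []
    for layer, h in enumerate(hidden_states):
        cls = h[:, 0, :].numpy()                       # [CLS] vector per query-document pair
        r2 = cross_val_score(Ridge(alpha=1.0), cls, bm25_scores,
                             cv=5, scoring="r2").mean()
        results.append((layer, r2))
    return results   # higher R^2 suggests the layer encodes BM25 better
```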

Jonas Wallat (@jonaswallat)'s Twitter Profile Photo

In a second step, we show that knowing where an ability is best encoded can be used to train better ranking models. To do so, we devise an MTL setup with the ranking objective on the last layer, while varying the layer at which the ability (e.g., BM25) is supervised
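
A hedged sketch of that MTL idea, assuming a BERT-style encoder: the ranking head sits on the final layer and an auxiliary BM25 regression head is attached to an intermediate layer (the layer index and loss weight below are placeholder choices, not values from the paper).

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskRanker(nn.Module):
    def __init__(self, model_name="bert-base-uncased", aux_layer=6, aux_weight=0.5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        hidden = self.encoder.config.hidden_size
        self.rank_head = nn.Linear(hidden, 1)   # relevance score from the last layer
        self.bm25_head = nn.Linear(hidden, 1)   # auxiliary BM25 regression head
        self.aux_layer = aux_layer              # intermediate layer found via probing
        self.aux_weight = aux_weight            # weight of the auxiliary loss

    def forward(self, input_ids, attention_mask, labels=None, bm25_targets=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        rank_logits = self.rank_head(out.last_hidden_state[:, 0]).squeeze(-1)
        bm25_pred = self.bm25_head(out.hidden_states[self.aux_layer][:, 0]).squeeze(-1)
        loss = None
        if labels is not None and bm25_targets is not None:
            loss = nn.functional.binary_cross_entropy_with_logits(rank_logits, labels.float())
            loss = loss + self.aux_weight * nn.functional.mse_loss(bm25_pred, bm25_targets)
        return {"loss": loss, "rank_logits": rank_logits, "bm25_pred": bm25_pred}
```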

Vinay Setty (@vinaysetty)'s Twitter Profile Photo

If you want to work on XAI for LLMs and fact-checking at the IAI group, Universitetet i Stavanger, consider applying for this position or please forward it to someone who may be interested. jobbnorge.no/en/available-j…

Sumit (@_reachsumit)'s Twitter Profile Photo

Context Aware Query Rewriting for Text Rankers using LLM

Proposes context-aware query rewriting with LLMs during training to improve ranking, avoiding expensive LLM inference during query processing.

📝arxiv.org/abs/2308.16753
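
A hedged sketch of the idea, under the assumption that rewriting happens offline during training-data construction: an LLM (here a hypothetical `rewrite_with_llm` placeholder) rewrites each training query using its document as context, the rewrites become extra training pairs, and the deployed cross-encoder never calls the LLM at query time.

```python
from sentence_transformers import CrossEncoder, InputExample
from torch.utils.data import DataLoader

def rewrite_with_llm(query: str, context_doc: str) -> str:
    """Hypothetical placeholder: call an LLM of your choice to rewrite `query`
    using `context_doc` as context (e.g. expand it with disambiguating terms)."""
    raise NotImplementedError

def build_training_examples(triples):
    """triples: iterable of (query, document, relevance_label)."""
    examples = []
    for query, doc, label in triples:
        examples.append(InputExample(texts=[query, doc], label=float(label)))
        # Augment only at training time; inference uses the original query.
        rewritten = rewrite_with_llm(query, doc)
        examples.append(InputExample(texts=[rewritten, doc], label=float(label)))
    return examples

# Fine-tune a standard cross-encoder on the augmented pairs, e.g.:
# model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", num_labels=1)
# loader = DataLoader(build_training_examples(train_triples), shuffle=True, batch_size=16)
# model.fit(train_dataloader=loader, epochs=1)
```
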
Sole Pera (@drch0le)'s Twitter Profile Photo

Reminder! #DIR2023 is almost upon us 😉 Interested in presenting your published work, emerging research directions, & even resources of interest to the #IR community during special sessions? Submit the contribution form by October 14. More details: dir2023.github.io/DIR2023/

Avishek Anand (@run4avi)'s Twitter Profile Photo

Amazing value for money ;-). Join us in Delft on NOVEMBER 27 for #dir2023, register soon #tudelft_ai. And thanks #sigir #siks for the generous support