Freda Shi (@fredahshi)'s Twitter Profile
Freda Shi

@fredahshi

Assistant Professor @UWCheritonCS @VectorInst, @CIFAR_News AI Chair, @ReviewAcl Co-CTO | Excited about "grounding" in any form

ID: 807572127884845056

Link: https://cs.uwaterloo.ca/~fhs | Joined: 10-12-2016 13:06:14

450 Tweets

2.2K Followers

793 Following

Dan Roy (@roydanroy)'s Twitter Profile Photo

Dear PhD students now regretting taking offers at US schools: If you turned down PhD offers in Canada, but want to rethink that, email the professors who were trying to recruit you. They might be able to pull some strings. Your sane neighbor to the north, Canada

CLS (@chengleisi)'s Twitter Profile Photo

This year, there have been various pieces of evidence that AI agents are starting to be able to conduct scientific research and produce papers end-to-end, at a level where some of these generated papers were already accepted by top-tier conferences/workshops. Intology’s

Alper Canberk (@alpercanbe)'s Twitter Profile Photo

how much does the last layer of a VLM retain the original image? i trained a linear probe on the output features of several CLIP/SigLIP models on image reconstruction, and found that *only* with SigLIP, if you multiply the input by 10-100, pixels get reconstructed perfectly??

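The probing recipe described in the tweet can be sketched end-to-end. This is a hypothetical stand-in, not the author's code: a random linear projection replaces the frozen SigLIP/CLIP vision encoder, so only the probe-fitting step (ordinary least squares from frozen features back to pixels) reflects the actual technique.

```python
import numpy as np

rng = np.random.default_rng(0)

n_images, feat_dim, n_pixels = 512, 768, 32 * 32 * 3

# Toy "images" in [0, 1]. In the real setup these would be a dataset of
# natural images, and the features would come from the last layer of a
# CLIP/SigLIP vision tower rather than a random matrix.
pixels = rng.uniform(0.0, 1.0, size=(n_images, n_pixels))
encoder = rng.normal(size=(n_pixels, feat_dim))  # stand-in for the frozen VLM
features = pixels @ encoder                      # frozen output features

# Linear probe: least-squares map from features back to raw pixels.
# The tweet's scaling trick would multiply `features` by 10-100 here.
probe, *_ = np.linalg.lstsq(features, pixels, rcond=None)

# Reconstruction quality on the training set (a held-out split would be
# used in practice to measure how much the features retain the image).
recon = features @ probe
mse = float(np.mean((recon - pixels) ** 2))
```

With more probe parameters than training images, the training-set fit is near-exact; the interesting measurement in the tweet is how well this transfers to held-out images across encoder families.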
Hokin Deng (@denghokin)'s Twitter Profile Photo

#ICML #cognition #GrowAI We spent 2 years carefully curating every single experiment (e.g., object permanence, the A-not-B task, the visual cliff task) in this dataset (total: 1,503 classic experiments spanning 12 core cognitive concepts), and another year getting 230 MLLMs evaluated

Martin Ziqiao Ma (@ziqiao_ma)'s Twitter Profile Photo

Our study on pragmatic generation is accepted to #COLM2025! Missed the first COLM last year (no suitable ongoing project at the time😅). Heard it’s a great place to connect with LM folks, excited to finally join for round two.

Han Shao (@hanshao16)'s Twitter Profile Photo

Otw to Vancouver for #ICML25! Shuo Xie and I will be presenting our joint work with Kunhe Yang: “Should Decision-Makers Reveal Classifiers in Online Strategic Classification?” We ask: can hiding the model help reduce errors caused by human strategic manipulation?

Sebastian Schuster (@sebschu)'s Twitter Profile Photo

The Austrian Academy of Sciences is offering a pretty generous package to researchers in the US who would like to come to Austria for a postdoc. stipendien.oeaw.ac.at/en/fellowships…. Please email me if you're interested in applying for this by Jul 25 🧑‍🔬

Freda Shi (@fredahshi)'s Twitter Profile Photo

I also forgot to mention that we're organizing a workshop particularly focused on space, a highly interesting and interdisciplinary field, with an amazing lineup of speakers (see poster!). You are all invited to submit & participate!

Yulu Qin (@yulu_qin)'s Twitter Profile Photo

Does vision training change how language is represented and used in meaningful ways?🤔 The answer is a nuanced yes! Comparing VLM-LM minimal pairs, we find that while the taxonomic organization of the lexicon is similar, VLMs are better at _deploying_ this knowledge. [1/9]

ACL 2025 (@aclmeeting)'s Twitter Profile Photo

🛬The City Airport Train (CAT) 🚆 offers the fastest connection from Vienna Airport to the city. Purchase your tickets online in advance and get 20% off with code ACL2025. 👉 cityairporttrain.com/en/