Arna Ghosh (@arna_ghosh)'s Twitter Profile
Arna Ghosh

@arna_ghosh

PhD student @Mila_Quebec & @McGillU, Vanier scholar • 🧠+🤖 grad student • Ex-@RealityLabs @AIatMeta • Believer in Bio-inspired AI • Comedy+Cricket enthusiast

ID: 1055730611317628928

Joined: 26-10-2018 07:59:27

1.1K Tweets

1.1K Followers

901 Following

Arna Ghosh (@arna_ghosh)'s Twitter Profile Photo

Super cool paper! It formalizes a lot of ideas I have been mulling over for the past year, and connects so many historical ideas neatly. Definitely worth a read if you are working on or interested in mechanistic interp and neural representations.

Somnath Basu Roy Chowdhury (@somnathbrc)'s Twitter Profile Photo

๐‡๐จ๐ฐ ๐œ๐š๐ง ๐ฐ๐ž ๐ฉ๐ž๐ซ๐Ÿ๐ž๐œ๐ญ๐ฅ๐ฒ ๐ž๐ซ๐š๐ฌ๐ž ๐œ๐จ๐ง๐œ๐ž๐ฉ๐ญ๐ฌ ๐Ÿ๐ซ๐จ๐ฆ ๐‹๐‹๐Œ๐ฌ? Our method, Perfect Erasure Functions (PEF), erases concepts from LLM representations w/o parameter estimation, achieving pareto optimal erasure-utility tradeoff w/ guarantees. #AISTATS2025 ๐Ÿงต

๐‡๐จ๐ฐ ๐œ๐š๐ง ๐ฐ๐ž ๐ฉ๐ž๐ซ๐Ÿ๐ž๐œ๐ญ๐ฅ๐ฒ ๐ž๐ซ๐š๐ฌ๐ž ๐œ๐จ๐ง๐œ๐ž๐ฉ๐ญ๐ฌ ๐Ÿ๐ซ๐จ๐ฆ ๐‹๐‹๐Œ๐ฌ?

Our method, Perfect Erasure Functions (PEF), erases concepts from LLM representations w/o parameter estimation, achieving pareto optimal erasure-utility tradeoff w/ guarantees. #AISTATS2025 ๐Ÿงต
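To make the erasure-utility tension concrete, here is a minimal sketch of linear concept erasure by projecting out a single concept direction. This is an illustration of the general setup only, not the paper's PEF method; the representations X, the binary concept labels z, and the mean-difference direction are all assumptions for the example.

import numpy as np

# Minimal illustration of linear concept erasure (NOT the PEF method itself):
# remove the direction separating the two concept groups from the
# representations, so both groups have identical means in the erased space.
# X: (n, d) representations, z: (n,) binary concept labels -- both hypothetical.
def erase_concept_direction(X: np.ndarray, z: np.ndarray) -> np.ndarray:
    mu_diff = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    u = mu_diff / np.linalg.norm(mu_diff)        # unit concept direction
    P = np.eye(X.shape[1]) - np.outer(u, u)      # projector onto its complement
    return X @ P                                 # erased representations

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
z = rng.integers(0, 2, size=200)
X_erased = erase_concept_direction(X, z)

Removing directions like this also deletes useful information, which is exactly the erasure-utility tradeoff the tweet says PEF handles with guarantees and without parameter estimation.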
Koustuv Sinha (@koustuvsinha)'s Twitter Profile Photo

Training VLMs using SSL encoders? With scale you can get better performance than CLIP! Check out our latest work led by our amazing David Fan & Peter Tong!

Somnath Basu Roy Chowdhury (@somnathbrc)'s Twitter Profile Photo

๐‡๐จ๐ฐ ๐œ๐š๐ง ๐ฐ๐ž ๐ฉ๐ž๐ซ๐Ÿ๐ž๐œ๐ญ๐ฅ๐ฒ ๐ฎ๐ง๐ฅ๐ž๐š๐ซ๐ง ๐๐š๐ญ๐š ๐Ÿ๐ซ๐จ๐ฆ ๐‹๐‹๐Œ๐ฌ ๐ฐ๐ก๐ข๐ฅ๐ž ๐ฉ๐ซ๐จ๐ฏ๐ข๐๐ข๐ง๐  ๐ ๐ฎ๐š๐ซ๐š๐ง๐ญ๐ž๐ž๐ฌ? We present SยณT, a scalable unlearning framework that guarantees data deletion from LLMs by leveraging parameter-efficient fine-tuning. #ICLR2025 ๐Ÿงต

๐‡๐จ๐ฐ ๐œ๐š๐ง ๐ฐ๐ž ๐ฉ๐ž๐ซ๐Ÿ๐ž๐œ๐ญ๐ฅ๐ฒ ๐ฎ๐ง๐ฅ๐ž๐š๐ซ๐ง ๐๐š๐ญ๐š ๐Ÿ๐ซ๐จ๐ฆ ๐‹๐‹๐Œ๐ฌ ๐ฐ๐ก๐ข๐ฅ๐ž ๐ฉ๐ซ๐จ๐ฏ๐ข๐๐ข๐ง๐  ๐ ๐ฎ๐š๐ซ๐š๐ง๐ญ๐ž๐ž๐ฌ?

We present SยณT, a scalable unlearning framework that guarantees data deletion from LLMs by leveraging parameter-efficient fine-tuning. #ICLR2025 ๐Ÿงต
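As a rough intuition for how parameter-efficient fine-tuning can make deletion tractable (a simplified sketch, not the S³T algorithm itself; the sharding scheme and the placeholder training routine below are assumptions), one can keep the base model frozen, train one lightweight adapter per data shard, and retrain only the affected adapter when an example must be deleted:

from dataclasses import dataclass, field

# Simplified sketch of deletion via per-shard parameter-efficient adapters
# (illustration only, not the S3T algorithm): the base model stays frozen,
# each data shard gets its own small adapter, and deleting an example means
# retraining only the adapter of the shard that contained it.
@dataclass
class Adapter:
    shard_id: int
    trained_on: set = field(default_factory=set)

def train_adapter(shard_id: int, examples: set) -> Adapter:
    # placeholder for parameter-efficient fine-tuning (e.g., LoRA) on one shard
    return Adapter(shard_id, set(examples))

shards = {0: {"doc_a", "doc_b"}, 1: {"doc_c", "doc_d"}, 2: {"doc_e"}}
adapters = {i: train_adapter(i, ex) for i, ex in shards.items()}

def delete(example: str) -> None:
    for i, ex in shards.items():
        if example in ex:
            ex.discard(example)
            adapters[i] = train_adapter(i, ex)  # only this shard's adapter is retrained

delete("doc_c")
assert "doc_c" not in adapters[1].trained_on

The paper's actual guarantees and training schedule are in the S³T framework itself; this only shows why localizing each example's influence to a small trainable module makes deletion cheap.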
Richard Song @ ICLR 2025 (@xingyousong)'s Twitter Profile Photo

I've done way better jobs as an AC than what we got for ICML… esp. in borderline cases, I personally spend hours reading the papers and reviews in detail, overriding mediocre or wrong reviews if necessary. Papers are the lifeline of early-stage scientists, and I wouldn't dare

Michalis Kassinopoulos (@kassinopoulos)'s Twitter Profile Photo

📢 [Job Alert] We're hiring an MRI Analyst for the BBRC Neuroimaging Facility team in Barcelona! 🧠 Interested in neuroimaging and brain research? Apply now 👉 lnkd.in/dXrCxqpP Questions? Feel free to DM me! 💼 Fundación Pasqual Maragall

Jinay (@jinaycodes)'s Twitter Profile Photo

Introducing soarXiv ✈️, the most beautiful way to explore human knowledge. Take any paper's URL and replace arxiv with soarxiv (shown in video) to teleport to its place in the universe. I've embedded all 2.8M papers up until April 2025. Try it at: soarxiv dot org
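The URL swap the tweet describes is scriptable in a line; a tiny hypothetical helper, with the target URL format assumed from the tweet rather than from soarxiv.org's documentation:

# Hypothetical helper mirroring the tweet's instruction: swap "arxiv" for
# "soarxiv" in a paper URL (resulting URL format assumed from the tweet).
def to_soarxiv(arxiv_url: str) -> str:
    return arxiv_url.replace("arxiv", "soarxiv", 1)

print(to_soarxiv("https://arxiv.org/abs/XXXX.YYYYY"))  # -> https://soarxiv.org/abs/XXXX.YYYYY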

Arnab (@arnabmondal96)'s Twitter Profile Photo

I am excited to share our #CVPR2025 paper on combining two topics close to my heart: equivariance with canonicalization and state space models. In this work, we provide a recipe to make VisionMamba equivariant/invariant by canonicalizing the patch token representations before

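As a generic illustration of invariance via canonicalization (not the paper's actual recipe for VisionMamba; the rotation group, the toy scoring function, and operating on raw images rather than patch token representations are all assumptions here), the idea is to score each transformed copy of the input and map it to a canonical pose before running a non-equivariant backbone:

import torch

# Generic canonicalization sketch: score all four 90-degree rotations of each
# (square) image, keep the highest-scoring "canonical" rotation, and feed that
# to the backbone. With a well-behaved learned scorer, the downstream model
# becomes invariant to 90-degree rotations of the input (up to ties).
def canonicalize_c4(x: torch.Tensor, score_fn) -> torch.Tensor:
    # x: (B, C, H, W) with H == W; score_fn maps a batch of images to (B,) scores.
    rotations = [torch.rot90(x, k, dims=(-2, -1)) for k in range(4)]
    scores = torch.stack([score_fn(r) for r in rotations], dim=1)  # (B, 4)
    best = scores.argmax(dim=1)                                    # (B,)
    return torch.stack([rotations[k][i] for i, k in enumerate(best.tolist())])

score_fn = lambda imgs: imgs.mean(dim=(1, 2, 3))  # toy stand-in for a learned scorer
x = torch.randn(2, 3, 8, 8)
x_canon = canonicalize_c4(x, score_fn)            # pass x_canon to the backbone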
Nanda H Krishna (@nandahkrishna)'s Twitter Profile Photo

New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵1/7

Johannes Oswald (@oswaldjoh)'s Twitter Profile Photo

Super happy and proud to share our novel scalable RNN model - the MesaNet! This work builds upon beautiful ideas of locally optimal test-time training (TTT), and combines ideas of in-context learning, test-time training and mesa-optimization.

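For intuition about the "locally optimal test-time training" the tweet refers to, here is a hedged sketch of a mesa-optimization-style layer: at each step it outputs the prediction of a ridge regression fit, in closed form, to all past key-value pairs. This is an illustration of the concept only, not the MesaNet architecture; the shapes, the ridge penalty, and the explicit per-step loop are assumptions.

import numpy as np

# Sketch of a mesa-optimization-style recurrence (illustration, not MesaNet):
# at step t, solve the ridge regression  min_W ||K_<t W - V_<t||^2 + lam ||W||^2
# over all past (key, value) pairs and apply the locally optimal W to query_t.
def mesa_layer(keys: np.ndarray, values: np.ndarray, queries: np.ndarray,
               lam: float = 1.0) -> np.ndarray:
    T, d = keys.shape
    outputs = np.zeros((T, values.shape[1]))
    for t in range(1, T):                          # step 0 has no past pairs
        K, V = keys[:t], values[:t]
        W = np.linalg.solve(K.T @ K + lam * np.eye(d), K.T @ V)
        outputs[t] = queries[t] @ W
    return outputs

rng = np.random.default_rng(0)
out = mesa_layer(rng.normal(size=(16, 8)),         # keys
                 rng.normal(size=(16, 4)),         # values
                 rng.normal(size=(16, 8)))         # queries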