Alekh Karkada Ashok (@alekhka) 's Twitter Profile
Alekh Karkada Ashok

@alekhka

Computational Neuroscience, Machine Learning | PhD Student in Serre Lab, Brown University. Prev: RVCE | He/Him |

ID: 109503768

http://alekhka.github.io · Joined 29-01-2010 07:54:05

208 Tweets

175 Followers

1.1K Following

Drew Linsley (@drewlinsley) 's Twitter Profile Photo

With co-authors Thomas Serre, Lakshmi Govindarajan, Rex, and Alekh Karkada Ashok (an exceptionally talented RA who is applying for grad school this cycle, in case you have any interest in recruiting PhD students who are brilliant-beyond-belief). Check out the work and get in touch!

Drew Linsley (@drewlinsley) 's Twitter Profile Photo

Imagine trying to track a single fly in a swarm. How would you do it? Can machine vision do it? Does it need to? Introducing a new computational challenge + solution: Tracking without re-recognition in humans and machines. arxiv.org/abs/2105.13351

For Audrey (@for_audrey_grey) 's Twitter Profile Photo

Shamelessly asking for your likes and retweets to help me get the word out for Audrey, who can't yet do it herself. She is in need of a part of someone's liver 🙏

Drew Linsley (@drewlinsley) 's Twitter Profile Photo

Excited to present this work next week at NeurIPS 2021! In it, we ask how humans are able to track objects when they look identical, whether neural networks can do the same, and why this matters in the real world. papers.nips.cc/paper/2021/has… #NeurIPS2021

LoreGoetschalckx (@l_goetschalckx) 's Twitter Profile Photo

#VSS2022 I'll be presenting my work titled "Toward modeling visual routines of object segmentation with biologically inspired recurrent vision models" at 6.30 PM today, Talk Room 1. Come check it out :)

Roman Feiman (@romanfeiman) 's Twitter Profile Photo

Every year I read a lot of grad school applications from accomplished people that don't give me the info I'm looking for. It feels like a major hidden curriculum thing. So here's (my opinion on) how to write a great Statement of Purpose/Research for a PhD program. 🧵 1/

Drew Linsley (@drewlinsley) 's Twitter Profile Photo

Check out our new paper, to appear at NeurIPS. We show that DNNs are becoming progressively *less* aligned with human perception as their ImageNet accuracy increases. Ignore the elections, Elon, and FTX for a moment: this is important! serre-lab.github.io/Harmonization/

Julien Colin (@juliencolin_) 's Twitter Profile Photo

[1/5] Much of the progress on attribution methods has been driven by theoretical metrics, without much consideration for human end-users. Our #NeurIPS22 paper investigates whether that progress has translated into explanations that are more useful in real-world scenarios: serre-lab.github.io/Meta-predictor

Drew Linsley (@drewlinsley) 's Twitter Profile Photo

Cem Anil Cool! You should also check out our paper (Alekh Karkada Ashok Lakshmi Govindarajan Thomas Serre) from NeurIPS 2020: arxiv.org/abs/2005.11362 Similar insights with recurrent backprop: more processing time helps systematic generalization on a line-tracing task and extends to panoptic seg on MS-COCO.

Thomas Fel (@napoolar) 's Twitter Profile Photo

Hi 👋! Interested in explainability or planning to use XAI in your future projects? Then you might be interested in this 🧵, let me introduce you to Xplique! github.com/deel-ai/xplique with Thomas Serre, Remi Cadene, Mathieu Chalvidal, Julien Colin, Louis Béthune, Paul Novello, ANITI Toulouse

Victor Boutin (@victorboutin) 's Twitter Profile Photo

Our article "Diffusion Models as Artist: Are we Closing the Gap between Humans and Machine" (arxiv.org/abs/2301.11722) has been accepted at #icml2023 and selected as an oral 🎉🎊! 🤖 = 👨🏻‍🎨 ?? (1/5) 🧵

Drew Linsley (@drewlinsley) 's Twitter Profile Photo

The sensitivity of DNNs to adversarial attacks has long been thought to be an "Achilles heel" that will ultimately make them unsafe in real-world applications. Has this sensitivity changed as DNNs have scaled up and rivaled or beaten human-level accuracy? serre-lab.github.io/Adversarial-Al…

LoreGoetschalckx (@l_goetschalckx) 's Twitter Profile Photo

✨NEW PREPRINT✨ Visual cognition in the brain is dynamic. Time to consider time in models! We present a novel human-like reaction time metric computed from stable recurrent vision models and study *temporal* human-model alignment. Read on… 🤓 arxiv.org/abs/2306.11582 1/n

Thomas Fel (@napoolar) 's Twitter Profile Photo

📅 Last month, we presented EVA at #CVPR2023, the first attribution method using Formal Methods! We leverage recent advances in formal methods to propagate bounds through a neural network, exploring a potentially infinite number of perturbations. 🧵

Thomas Fel (@napoolar) 's Twitter Profile Photo

👋 Explain big vision models with CRAFT 🪄🐰 A method that *automatically* extracts the most important concepts for your favorite pre-trained vision model. e.g., we automatically discover the most important concepts on a ResNet50 for rabbits: eyes, ears, fur. 🧶

Serre Lab @ Brown (@serre_lab) 's Twitter Profile Photo

🎉 Come join us at some exciting events! TOMORROW: Pawtucket Brain Fair on March 9 (Hope Artiste Village, 1005 Main Street, 10AM-2PM): facebook.com/events/7503784… Brown Brain Fair on March 16 (Brown Engineering Research Center, 10AM-2PM)! facebook.com/events/2568766…

Michael Lepori (@michael_lepori) 's Twitter Profile Photo

I'll be presenting this work at the Conference on Language Modeling on Monday afternoon, and I'll be hanging out in Philly until Wednesday! Feel free to reach out if you want to chat about mechanistic interpretability and/or cognitive science.

Thomas Serre (@tserre) 's Twitter Profile Photo

Our work on mental simulation in monkeys and RNNs just got published! This was a fantastic collaboration between the Sheinberg lab and Theresa on the experimental side and our group, with Alekh Karkada Ashok leading the computational work. Funded in part by a Zimmerman Innovation
