Kelsey Han
@ckelseyhan
PhD student @JHUCogSci
ID: 1126334060039557120
https://kelseyhan-jhu.github.io/ 09-05-2019 03:52:21
64 Tweets
82 Followers
143 Following
New publication in Nature Communications on modeling brain representations of object context using the co-occurrence statistics of vision and language. Work with The Epstein Lab. Here's what we found... nature.com/articles/s4146…
Memory is not like a camera; it often fills in extra visual details. In a new paper at the Association for Psychological Science, we (Shreya Wadhwa & Mick Bonner) use "tilt-shift" to explore what causes distortions in scene memory—and find that spatial scale plays a crucial role! psyarxiv.com/hy3qs
Why are deep neural nets (DNNs) so good at modeling the brain? Our new paper with Mick Bonner reveals a striking geometric explanation for DNN models of visual cortex: performance scales with the latent dimensionality of a DNN's natural image manifold. biorxiv.org/cgi/content/sh…
New paper out in PLOS Comp Biol. The best deep neural network models of visual cortex do not reduce representations to low-dimensional manifolds—instead, they benefit from high-dimensionality. Led by a fantastic student, Eric Elmoznino. journals.plos.org/ploscompbiol/a…
Standing room only at #phiVis! So encouraging to see this much interest in the intersection between philosophy and vision science. #VSS2024 VSS Meeting
Why do varied DNN designs yield equally good models of human vision? Our preprint with Mick Bonner shows that diverse DNNs represent images with a shared set of latent dimensions, and these shared dimensions turn out to also be the most brain-aligned. arxiv.org/abs/2408.12804