Solim LeGris (@solimlegris) 's Twitter Profile
Solim LeGris

@solimlegris

phd student at nyu. figuring out how to make computers behave like humans in the relevant ways.

ID: 829775316

Link: http://solimlegris.com · Joined: 17-09-2012 20:44:36

55 Tweets

152 Followers

279 Following

Solim LeGris (@solimlegris) 's Twitter Profile Photo

After completing the Cognitive Science program at McGill University with First Class Honours, I am thrilled to announce that I will be joining the Human and Machine Learning Lab as well as the Computation and Cognition Lab at NYU as a PhD student in the f…lnkd.in/dz-Pg_K8

Jeffrey Bowers (@jeffrey_bowers) 's Twitter Profile Photo

It is widely claimed that DNNs are the “best” models of human vision. Our article entitled “Deep Problems with Neural Network Models of Human Vision” challenges this view and proposes a way forward. Thread: 1/25 psyarxiv.com/5zf4s/

François Chollet (@fchollet) 's Twitter Profile Photo

Much of our lives is shaped by the people we choose to hang out with. And that's why you should read books. Good ones. Easiest way to spend quality time with interesting people of your choice.

François Chollet (@fchollet) 's Twitter Profile Photo

I could never get used to the fact that English uses 1-indexing when referring to floors in a building. I much prefer 0-indexing, like in French or in Python.
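The indexing difference the tweet jokes about is easy to see in code; a minimal Python sketch (the floor names and the conversion helper are illustrative, not from the tweet):

```python
# Python lists are 0-indexed, like French floor numbering:
# street level (rez-de-chaussée) is floor 0.
floors = ["ground floor", "first floor", "second floor"]

assert floors[0] == "ground floor"  # French/Python: floor 0 at street level

# English floor naming is 1-indexed (street level = "first floor"),
# so converting to a list index means subtracting 1.
def english_to_index(english_floor_number: int) -> int:
    """Map an English (1-indexed) floor number to a 0-based list index."""
    return english_floor_number - 1

assert floors[english_to_index(1)] == "ground floor"
```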

Brenden Lake (@lakebrenden) 's Twitter Profile Photo

Today (now!), Solim LeGris explains how we predict insight using behavioural data in a physical reasoning game. Check out poster #CogSci2024 P3-LL-867 at 1pm today! Learn about the project here: exps.gureckislab.org/e/blue-giganti…

ARC Prize (@arcprize) 's Twitter Profile Photo

One inspiration for ARC-AGI solutions is the psychology of how humans solve novel tasks. A new study by todd gureckis, Brenden Lake, Solim LeGris, and Wai Keen Vong @ NYU explores human performance on ARC, finding that 98.7% of the public tasks are solvable by at least one MTurker.

todd gureckis (@todd_gureckis) 's Twitter Profile Photo

We have a new preprint led by Solim LeGris and Wai Keen Vong (w/ Brenden Lake) looking at how people solve the Abstraction and Reasoning Corpus (ARC) benchmark. arxiv.org/abs/2409.01374 ARC is one of the most challenging and longest-standing AGI benchmarks and the focus of a large $$ prize, ARC Prize 🧵
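For readers unfamiliar with the benchmark: ARC tasks are distributed as JSON objects containing a few train input/output grid pairs plus test pairs, where each grid is a 2D array of integers 0-9, and a solver must induce the transformation from the train pairs. A minimal sketch (the toy task and recoloring rule below are hypothetical, not from the paper):

```python
# A toy ARC-style task: grids are lists of lists of ints in 0-9.
# Hypothetical transformation for illustration: recolor 1 -> 2.
task = {
    "train": [
        {"input": [[1, 0], [0, 1]], "output": [[2, 0], [0, 2]]},
        {"input": [[0, 1]], "output": [[0, 2]]},
    ],
    "test": [
        {"input": [[1, 1]], "output": [[2, 2]]},
    ],
}

def apply_rule(grid):
    """Candidate rule induced from the train pairs: map color 1 to 2."""
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

# A solver is scored on whether its prediction matches the test output.
for pair in task["train"] + task["test"]:
    assert apply_rule(pair["input"]) == pair["output"]
print("rule fits all pairs")
```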

tyler bonnen (@tylerraye) 's Twitter Profile Photo

do large-scale vision models represent the 3D structure of objects? excited to share our benchmark: multiview object consistency in humans and image models (MOCHI) with Stephanie Fu Yutong Bai Thomas O'Connell @_yonifriedman Nancy Kanwisher @NancyKanwisher@mas.to Josh Tenenbaum and Alexei Efros 1/👀

NYU Center for Data Science (@nyudatascience) 's Twitter Profile Photo

Humans still outperform AI on visual reasoning tasks, according to a new study by Solim LeGris, Wai Keen Vong, Brenden Lake, and todd gureckis. Despite advances, the top AI models are still significantly worse than humans on the ARC Prize's ARC benchmark. nyudatascience.medium.com/human-intellig…

Nikhil Bhattasali (@nxbhattasali) 's Twitter Profile Photo

Can neuro-inspired ANN architectures be useful for motor control in quadruped robots? We translate neural circuits in the limbs and spinal cord of mammals into an ANN architecture controlling quadruped locomotion. w/ Venkatesh, Lerrel Pinto, @neurograce

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

The YouTube video I want to watch is any highly rated, 1hr long, information dense lecture on anything esoteric and the algorithm just doesn’t get it. It’s too content-driven and too narrow-minded

Ulyana Piterbarg (@ulyanapiterbarg) 's Twitter Profile Photo

There are infinitely many ways to write a program. In our new work, we show that training autoregressive LMs to synthesize programs with sequences of edits improves the trade-off between zero-shot generation quality and inference-time compute. (1/8) w/ Rob Fergus Lerrel Pinto

Brenden Lake (@lakebrenden) 's Twitter Profile Photo

Classic challenges for neural nets focus on human strengths that are model weaknesses. With Kazuki Irie, we discuss 4 cases of neural nets overcoming weaknesses with practice (meta-learning). A framework for bringing machine and human intelligence closer? arxiv.org/abs/2410.10596

Felix Hill (@felixhill84) 's Twitter Profile Photo

Of those who made the critical contributions on the path from the first programmable computers to today's LLMs, ~half are psychologists: Rosenblatt (perceptron); Rumelhart, McClelland, Hinton (backprop, deeper nets); Elman (RNN, autoregressive training)

Brenden Lake (@lakebrenden) 's Twitter Profile Photo

Solim LeGris and Wai Keen Vong estimated that average human performance on ARC is about 64% correct (on the public eval set). This would make o3 clearly better than the average human. Notably, almost all tasks were solvable by at least one person. arxiv.org/abs/2409.01374
