haley keglovits (@haleykegl)'s Twitter Profile
haley keglovits

@haleykegl

no longer using this platform, sorry if I ignore you // phd student @brownclps studying cog control @badrelab // previously @cal @ccnlab // go bears // she/her

ID: 2239734584

Website: http://haleyk.github.io · Joined: 10-12-2013 20:34:35

323 Tweets · 194 Followers · 323 Following

Brown Dog Lab (@browndoglab):

🔍Recent publication highlight: our grad student, Madeline Pelgrim, and PI, Dr. Daphna Buchsbaum, recently published in the Proceedings of the Cognitive Science Society! They analyze what dogs see & pay attention to based on eye-tracking data from a walk outdoors! 👀🌳 escholarship.org/uc/item/769022…

Apoorva Bhandari (@apaxon):

Dozens of studies show that lPFC neurons are highly flexible, coding whatever task is being performed. How lPFC accommodates tasks with different structures at the population level remains unsettled.

Apoorva Bhandari (@apaxon):

One account, popularized by Mattia Rigotti & Stefano Fusi, is that lPFC non-linearly mixes inputs, projecting them onto a high-dim, task-agnostic manifold from which any task mapping can be read out without any representation learning.

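A minimal sketch of the intuition behind this account (an illustration, not the paper's model): randomly mixing two binary inputs through a non-linearity yields a high-dimensional code from which even an XOR mapping becomes linearly readable.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # two binary inputs
y = np.array([0, 1, 1, 0])                                   # XOR labels, not linearly separable in X

# random non-linear mixing into a high-dimensional, task-agnostic code
W = rng.normal(size=(2, 200))
H = np.tanh(X @ W + rng.normal(size=200))

readout = LogisticRegression().fit(H, y)                     # simple linear readout
print(readout.score(H, y))                                   # ~1.0: XOR becomes separable in the mixed code
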
Apoorva Bhandari (@apaxon):

Another account is that lPFC flexibility is a consequence of representation learning: lPFC simply learns specialized representations tailored to each task, coding them in orthogonal subspaces.

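One way to cash out "orthogonal subspaces" (a hedged sketch with placeholder data, not necessarily the study's analysis) is to estimate each task's subspace from condition-averaged patterns and measure the principal angles between the two subspaces; angles near 90 degrees indicate near-orthogonal coding.

import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
task_a = rng.normal(size=(8, 500))   # conditions x voxels, task A (placeholder data)
task_b = rng.normal(size=(8, 500))   # conditions x voxels, task B (placeholder data)

def top_pcs(patterns, k=3):
    # orthonormal basis for the top-k principal components of the condition patterns
    centered = patterns - patterns.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                  # voxels x k

angles = np.degrees(subspace_angles(top_pcs(task_a), top_pcs(task_b)))
print(angles)                        # angles near 90 indicate near-orthogonal subspaces
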
Apoorva Bhandari (@apaxon):

To really tell these two accounts apart, one needs to characterize lPFC representations across two very different tasks in the same subjects. We focused on characterizing the content, separability & generalizability of lPFC representations as people performed two different categorization tasks.

Apoorva Bhandari (@apaxon):

One task used a hierarchical categorization rule for mapping inputs to outputs. The other used a flat, XOR structure. We yoked & counterbalanced inputs and outputs across the two tasks, focusing the comparison on the structure of the input-output mappings.

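To make the two structures concrete, here is a hypothetical version of the two input-output mappings (the feature names and rules are assumptions for illustration, not the actual stimuli): the hierarchical rule uses a context feature to select which lower-level feature drives the response, while the flat rule is an XOR over the same two features.

def hierarchical_rule(context, feature_a, feature_b):
    # higher-level context selects which lower-level feature determines the response
    relevant = feature_a if context == 0 else feature_b
    return "left" if relevant == 0 else "right"

def flat_xor_rule(feature_a, feature_b):
    # no context: the response depends on the non-linear conjunction (XOR) of both features
    return "left" if feature_a == feature_b else "right"

print(hierarchical_rule(0, 1, 0), flat_xor_rule(1, 0))  # "right right"
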
Apoorva Bhandari (@apaxon):

As we have previously shown, lPFC representations are hard to study with fMRI, with poor pattern reliability and small effects. haley keglovits tackled this head-on with deep sampling, heroically collecting 200+ minutes of fMRI data on each task from 20 subjects.

Apoorva Bhandari (@apaxon):

With decoding analyses, we found that lPFC coded diverse task-relevant information across both task structures. By contrast, primary auditory cortex obligatorily coded only auditory information, whether or not it was task-relevant.

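A minimal sketch of the kind of cross-validated decoding analysis this refers to, with placeholder pattern data and a generic linear classifier (the actual ROIs, estimator, and cross-validation scheme are assumptions):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
patterns = rng.normal(size=(160, 500))   # trials x voxels from one ROI (placeholder data)
labels = rng.integers(0, 2, size=160)    # a task variable to decode (placeholder labels)

acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5)
print(acc.mean())                        # compare against chance (0.5)
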
Apoorva Bhandari (@apaxon):

Across both tasks, inputs were encoded on manifolds of intermediate dimensionality, with at least some non-linear mixing of inputs. These representations did not differ in their overall separability, or degree of non-linear mixing.

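One common summary of manifold dimensionality is the participation ratio of the eigenvalue spectrum of condition-averaged patterns; this is a hedged illustration of that idea, not necessarily the estimator used in the paper.

import numpy as np

def participation_ratio(condition_patterns):
    # condition_patterns: conditions x voxels, averaged over trials
    centered = condition_patterns - condition_patterns.mean(axis=0)
    # eigenvalue spectrum across conditions, via the smaller Gram matrix
    eigvals = np.clip(np.linalg.eigvalsh(centered @ centered.T), 0.0, None)
    # ranges from 1 (one dominant axis) up to the rank of the centered patterns
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()
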
Apoorva Bhandari (@apaxon):

Nevertheless, lPFC representational geometry for each task was highly tailored to its structure. In each task, clustering created subspaces arranged along a global axis: context subspaces in the hierarchy task, and response-category subspaces in the flat task.

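As a rough illustration of what "clusters along a global axis" means (assumed array layout and placeholder labels, not the paper's procedure): take the difference between the two cluster means as the global axis and look at each condition's projection onto it; two well-separated groups of projections correspond to two clustered subspaces.

import numpy as np

def project_onto_global_axis(condition_patterns, cluster_labels):
    # condition_patterns: conditions x voxels; cluster_labels: 0/1 per condition
    # (context in the hierarchy task, response category in the flat task)
    mean_0 = condition_patterns[cluster_labels == 0].mean(axis=0)
    mean_1 = condition_patterns[cluster_labels == 1].mean(axis=0)
    axis = (mean_1 - mean_0) / np.linalg.norm(mean_1 - mean_0)
    return condition_patterns @ axis   # one scalar per condition; separation = clustering
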
Apoorva Bhandari (@apaxon):

Using a series of decoding analyses, we comprehensively worked out the detailed local structure within each cluster in both tasks.

Apoorva Bhandari (@apaxon):

In the hierarchy task, the global axis abstractly encoded the higher-level context, while low-dimensional, context-specific local geometries compressed context-irrelevant information & abstractly encoded the context-relevant response category.

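One standard way to test for this kind of abstract coding is cross-condition generalization of a decoder: train a context decoder on some stimulus conditions and test it on held-out ones. This is a hedged sketch with assumed variable names, not the paper's exact procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def context_generalization(patterns, context, stimulus, train_stimuli, test_stimuli):
    # patterns: trials x voxels; context/stimulus: labels per trial (placeholders)
    train = np.isin(stimulus, train_stimuli)
    test = np.isin(stimulus, test_stimuli)
    clf = LogisticRegression(max_iter=1000).fit(patterns[train], context[train])
    # high accuracy on stimuli never seen at training = abstract context axis
    return clf.score(patterns[test], context[test])
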
Apoorva Bhandari (@apaxon):

On the other hand, in the flat task, a global axis encoded the response-relevant, XOR categories abstractly. Category-specific local geometries were high-dimensional, retaining stimulus information that was not strictly required for readout.

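A corresponding hedged check (again with assumed variable names) for "retains stimulus information not required for readout": restrict to trials from a single response category and ask whether stimulus identity is still decodable above chance within that cluster.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def within_category_stimulus_decoding(patterns, category, stimulus, which_category):
    # patterns: trials x voxels; category/stimulus: labels per trial (placeholders)
    mask = category == which_category
    clf = LogisticRegression(max_iter=1000)
    # above-chance accuracy means the local geometry keeps stimulus detail
    return cross_val_score(clf, patterns[mask], stimulus[mask], cv=5).mean()
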
Apoorva Bhandari (@apaxon):

Therefore, at least in highly trained subjects, lPFC learned task-tailored representations that recapitulated the structure of the task, showing that lPFC representations are shaped by representation learning.

Apoorva Bhandari (@apaxon):

However, there were clues in the data suggesting that lPFC may have started with a task-agnostic, high-dim representation, with learning-driven dimensionality reduction reshaping it to fit the task structure.

Apoorva Bhandari (@apaxon):

The flat task showed local high-dim structure and orthogonality across clusters that were unrelated to the structure of the task. These may have been vestiges of an expressive, task-agnostic representation. Such a process has been observed in monkey lPFC biorxiv.org/content/10.110…

Apoorva Bhandari (@apaxon):

Collectively, studying representations of two different task structures in the same subjects revealed generalizable principles by which lPFC tailors representations to different tasks.

haley keglovits (@haleykegl):

Apoorva's comparison is right, and he really carried this ring across the finish line! 💍 I feel so lucky that this project was my first major work in grad school, and I learned so much from him.