Andrew Sedler (@arsedle)'s Twitter Profile
Andrew Sedler

@arsedle

Postdoc at @EmoryUniversity with @chethan
ML PhD '23 from @GeorgiaTech

ID: 915765923499462657

Website: https://arsedler9.github.io/
Joined: 05-10-2017 02:29:45

78 Tweets

174 Followers

643 Following

Chethan Pandarinath (@chethan)'s Twitter Profile Photo

The Neural Latents Benchmark '21 competition deadline is tomorrow! We've seen some really strong submissions from AE Studio, the Guillaume Hennequin lab, the Mark Churchland lab, and others! πŸ“ˆπŸ§  Get yours in!!! πŸ†πŸ₯‡πŸ₯ˆπŸ₯‰ x.com/chethan/status…

Nature Methods (@naturemethods)'s Twitter Profile Photo

AutoLFADS from the Pandarinath lab models neural population activity via a deep learning-based approach with automated hyperparameter optimization. Chethan Pandarinath, M. Reza Keshtkaran, Andrew Sedler, Mehrdad Jazayeri, Kathleen Cullen nature.com/articles/s4159…

Chethan Pandarinath (@chethan)'s Twitter Profile Photo

Excited about this new line of work in my lab, led by Andrew Sedler w/ Chris Versteeg, to probe the relationship between expressivity and interpretability in models of neural population dynamics. arxiv.org/abs/2212.03771

Chethan Pandarinath (@chethan)'s Twitter Profile Photo

A wise mentor once helped me understand that this job β€œis all about the people.” I am so proud that these two amazing people are the first PhD graduates from my lab. Congratulations, Feng Zhu and Andrew Sedler!

Chris Versteeg (@chris_versteeg)'s Twitter Profile Photo

Ever wondered whether the dynamics learned by LFADS-like models could help us understand neural computation? Chethan Pandarinath, Andrew Sedler, Jonathan McCart, and I developed ODIN to robustly recover latent dynamical features through the power of injectivity! πŸ“œ 1/ arxiv.org/abs/2309.06402…

Chethan Pandarinath (@chethan)'s Twitter Profile Photo

Excited about making models of neural dynamics more interpretable. An ongoing, multi-year project w/ efforts led by Andrew Sedler and then Chris Versteeg. Grateful for convos w/ David Sussillo, Matt Golub, the Brody Lab, Tim Kim, and our friend Krishna Shenoy. To be continued...

Joel Ye (@_joelye)'s Twitter Profile Photo

Neural Data Transformer 2 (NDT2), preprint + accepted to NeurIPS 23! A study on Transformer pretraining on neuronal spiking activity across multiple sessions, subjects, and experimental tasks! With Jen Collinger, Leila Wehbe, and Robert Gaunt! 1/7

Mehdi Azabou (@mehdiazabou)'s Twitter Profile Photo

Is a universal brain decoder possible? Can we train a decoding system that easily transfers to new individuals/tasks? Check out our #NeurIPS2023 paper where we show that it’s possible to transfer from a large pretrained model to achieve SOTA 🧠! Link: poyo-brain.github.io 🧡

Chethan Pandarinath (@chethan)'s Twitter Profile Photo

So fortunate to lead an amazingly talented lab. These folks are π’„π’“π’‚π’π’Œπ’Šπ’π’ˆ 𝒐𝒖𝒕 great science. 🧡 of must-see posters, which span NeuroML / AI, clinical BCI (+BrainGate Team), population dynamics, 2P Ca imaging, spinal pattern generation, EMG… πŸ§ πŸ”¬πŸ‘©β€πŸ”¬πŸ‘¨β€πŸ”¬πŸ“ˆ #SfN23 ⬇️

Judd Rosenblatt β€” d/acc (@juddrosenblatt)'s Twitter Profile Photo

I originally started my company to build maximally agency-increasing BCI without profit motive, and that's worked decently well so far. BUT AGI timelines are shortening, so we are pivoting to work on neglected approaches to alignment

Jonathan McCart (@jonathandmccart)'s Twitter Profile Photo

Modern neuroscientists routinely record the complex, goal-oriented, and time-varying activity of thousands of neurons. Can we find representations of neural activity that 1) are human-interpretable and 2) enable the generation of neural activity for unrecorded behavioral