Manuel Brenner (@brenner_manuel)'s Twitter Profile
Manuel Brenner

@brenner_manuel

Theoretical neuroscience @durstewitzlab with applications to psychiatry. Medium writer @ manuel-brenner.medium.com. Science podcaster @ anchor.fm/acit-science

ID: 1204774424739766274

Joined: 11-12-2019 14:47:14

96 Tweets

246 Followers

228 Following

AI Coffee Break with Letitia (@aicoffeebreak)'s Twitter Profile Photo

Thanks Manuel Brenner for your knowledgeable questions and for having me on your great podcast! 🎙️ 🎧 Listen on Spotify: open.spotify.com/episode/2sUYtw… 🎧 or Apple Podcasts: podcasts.apple.com/gb/podcast/35-…

Towards Data Science (@tdatascience)'s Twitter Profile Photo

"SGD, along with its derivative optimizers, forms the core of many self-learning algorithms." Manuel Brenner walks us through the inner workings of stochastic gradient descent. buff.ly/3DX1epb
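The update rule at the heart of the article can be sketched in a few lines. This toy least-squares example (all settings illustrative, not taken from the article) adjusts the weights using the gradient of one randomly chosen sample at a time:

```python
import numpy as np

# Minimal stochastic gradient descent on least-squares regression.
# All names and settings here are illustrative, not from the linked article.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))               # 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)                             # parameters to learn
lr = 0.05                                   # learning rate
for epoch in range(100):
    for i in rng.permutation(len(X)):       # one random sample per update
        grad = (X[i] @ w - y[i]) * X[i]     # gradient of 0.5*(x·w - y)^2
        w -= lr * grad                      # SGD step

print(np.round(w, 2))                       # close to true_w
```

Derivative optimizers (momentum, Adam, etc.) modify only the last line, reusing the same per-sample gradient.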

Towards Data Science (@tdatascience)'s Twitter Profile Photo

In this article, Manuel Brenner dives into the fundamental mechanisms of several classes of generative models, shedding light on their inner workings and exploring their origins in and connections to neuroscience and cognition. buff.ly/3LSDiYx

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Our Perspective on reconstructing computational system dynamics from neural data finally out in Nature Rev Neurosci! nature.com/articles/s4158… We survey generative models that can be trained on time series to mimic the behavior of the neural substrate. #AI #neuroscience #DynamicalSystems

IMMERSE Project 🧠📱 (@immerse_project)'s Twitter Profile Photo

⚠️ New PhD blog alert ⚠️ Dive into the future of mental health care! Explore the incredible potential of 𝐀.𝐈. in predicting trends and uncovering hidden patterns. Curious? Check out this month's blog by Manuel Brenner immerse-project.eu/home-en/blog/o…

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

How to analyze computational & dynamical mechanisms of RNNs? 1/2 Our #NeurIPS2023 spotlight on a highly efficient algorithm for locating all fixed points, cycles, and bifurcation manifolds in RNNs: arxiv.org/abs/2310.17561 NeurIPS Conference By brilliant Zahra Monfared, Lukas Eisenmann, Nic Göring
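The search exploits a property of piecewise-linear RNNs: within each ReLU on/off region the map is linear, so candidate fixed points solve a linear system. A brute-force sketch of that idea (it enumerates all regions, which is exponential in the number of units; the paper's algorithm is far more efficient, and the example network here is made up):

```python
import itertools
import numpy as np

# For a piecewise-linear RNN z_{t+1} = A z + W relu(z) + b, each ReLU
# on/off pattern D (a 0/1 diagonal matrix) defines a linear region.
# Candidate fixed points solve (I - A - W D) z = b; a candidate counts
# only if it actually lies in the region that produced it.
def fixed_points(A, W, b):
    n = len(b)
    pts = []
    for pattern in itertools.product([0.0, 1.0], repeat=n):
        D = np.diag(pattern)
        M = np.eye(n) - A - W @ D
        if abs(np.linalg.det(M)) < 1e-12:
            continue                         # degenerate region, skip
        z = np.linalg.solve(M, b)
        if np.allclose((z > 0).astype(float), pattern):  # consistency check
            pts.append(z)
    return pts

A = np.diag([0.5, 0.5])                      # toy 2-unit network
W = np.array([[0.0, -1.0], [1.0, 0.0]])
b = np.array([1.0, -1.0])
for z in fixed_points(A, W, b):
    print(np.round(z, 3))                    # prints [1.2 0.4]
```

Only one of the four candidate regions yields a self-consistent solution here; cycles and bifurcation manifolds require the more sophisticated machinery of the paper.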

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

How to reconstruct #DynamicalSystems from many different data modalities observed simultaneously? Here we introduce a novel generative modeling framework for this, based on control-theoretic ideas for efficiently guiding the training process: arxiv.org/abs/2212.07892 #AI #ML (1/4)

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Surprisingly, under certain conditions our framework can reconstruct chaotic attractors from *symbolic* time series alone. This gives hope that it may be possible to infer dynamical systems just from behavioral class labels or language. (3/4)
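To make concrete what "symbolic" means here, a toy illustration (an assumed setup, not the paper's actual coding scheme): a continuous trajectory is reduced to a discrete symbol sequence by thresholding, discarding all amplitude information.

```python
import numpy as np

# Symbolic coding of a continuous time series: keep only whether each
# value lies above or below the median. Purely illustrative.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=20))         # some continuous trajectory
symbols = (x > np.median(x)).astype(int)   # binary symbol sequence
print(symbols)
```

Reconstructing an attractor from such a sequence means recovering continuous dynamics from data that retains only this coarse ordering structure.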

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Can we learn from time series data a dynamical systems model that *generalizes* to unobserved dynamical regimes (basins of attraction), like a good scientific theory should? Out-of-domain generalization in #DynamicalSystems reconstruction: arxiv.org/abs/2402.18377 #AI #ML (1/3)

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Cool, 3 papers accepted at #icml2024: 1) Out-of-Domain Generalization in Dynamical Systems Reconstruction (prelim. vers.: arxiv.org/abs/2402.18377) 2) Optimal Recurrent Network Topologies for Dynamical Systems Reconstruction (details to follow) ...

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

3) Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics (prelim. vers.: arxiv.org/abs/2212.07892) Fantastic teamwork, as usual, by Niclas Göring, Florian Hess, Manuel Brenner, Zahra Monfared, Jürgen Hemmer, Georgia Koppe

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Weight pruning by size is a standard #ML #AI technique to produce sparse models, but in our ICML Conference paper arxiv.org/abs/2406.04934 we find it doesn’t work for learning #DynamicalSystems! Instead, via geometry-based pruning we find *network topology* is far more important! (1/5)
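The standard baseline referred to here, magnitude pruning, zeros out the smallest-magnitude fraction of a weight matrix. A minimal sketch (names and sizes illustrative):

```python
import numpy as np

# Magnitude-based weight pruning: remove the smallest |w| entries.
def magnitude_prune(W, sparsity):
    k = int(sparsity * W.size)                 # number of weights to drop
    thresh = np.sort(np.abs(W).ravel())[k - 1] if k > 0 else -np.inf
    mask = np.abs(W) > thresh                  # keep only larger weights
    return W * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))
Wp, mask = magnitude_prune(W, 0.5)             # prune half the weights
print(mask.mean())                             # fraction of weights kept
```

Geometry-based pruning, as studied in the paper, instead decides which connections to keep based on network topology rather than weight size alone.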

IMMERSE Project 🧠📱 (@immerse_project)'s Twitter Profile Photo

⚠️New paper out in Psychological Medicine⚠️ The experience sampling methodology as a digital clinical tool for more person-centered mental health care: an implementation research agenda. Read it here doi.org/10.1017/S00332…

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Just wanted to stop by & say: We have 2 new accepted #NeurIPS2024 papers: 1) Manuel Brenner, Hemmer, Zahra Monfared, DD: Almost-Linear RNNs Yield Highly Interpretable Symbolic Codes in Dynamical Systems Reconstruction --> *this takes DSR to a new level!*, details to follow

Georgia Koppe (@georgiakoppe)'s Twitter Profile Photo

Creating digital twins of social interaction behavior with #AI! Our study shows how generative models can predict interactions from limited data, revealing hidden dynamics. Together with Manuel Brenner DurstewitzLab. Explore: osf.io/preprints/psya… #DigitalTwin #SocialBehavior

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

This is awesome - using neural flow operators trained by multimodal teacher forcing to produce generative dynamics models of human behavior in social contexts ... social strategies as attractors in state space!
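Teacher forcing, the training device mentioned here, can be sketched generically (this is a bare single-variable illustration, not the multimodal scheme of the paper; all names are made up): during training the model's latent state is periodically reset from the observed data, which keeps rollouts and gradients stable on long or chaotic sequences.

```python
import numpy as np

# Generic teacher forcing: every `force_every` steps, replace the model's
# latent state with a value derived from the observation (here an identity
# observation map is assumed for simplicity).
def forced_rollout(step, z0, data, force_every=5):
    z, traj = z0, []
    for t, x in enumerate(data):
        z = x if t % force_every == 0 else z   # teacher-forced reset
        z = step(z)                            # one model step
        traj.append(z)
    return np.array(traj)

step = lambda z: 0.9 * z + 0.1                 # toy linear "model"
data = np.linspace(0.0, 1.0, 10)               # observed sequence
traj = forced_rollout(step, 0.0, data)
print(traj.shape)                              # (10,)
```

Between resets the model runs freely on its own predictions, so the forcing interval trades off gradient stability against teaching the model to sustain its own dynamics.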

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Interested in interpretable #AI foundation models for #DynamicalSystems reconstruction? In a new paper we move in this direction, training common latent DSR models with system-specific features on data from multiple different dynamic regimes and DS: arxiv.org/pdf/2410.04814 1/4

DurstewitzLab (@durstewitzlab)'s Twitter Profile Photo

Symbolic dynamics bridges from #DynamicalSystems to computation/ #AI! In our #NeurIPS2024 (NeurIPS Conference) paper we present a new network architecture, Almost-Linear RNNs, that finds the most parsimonious piecewise-linear representations of DS from data: arxiv.org/abs/2410.14240 1/4

Towards Data Science (@tdatascience)'s Twitter Profile Photo

The field of ML has evolved to balance these 2 forces: leveraging the simplicity of linear models where possible, while incorporating nonlinearity to handle the complexity of the world. Read more from Manuel Brenner. #LLM #MachineLearning towardsdatascience.com/a-guide-to-lin…