Charlotte Frenkel (@c_frenkel)'s Twitter Profile
Charlotte Frenkel

@c_frenkel

Assistant professor at TU Delft.
Neuromorphic engineer, chip designer, machine learner and music composer.

ID: 1051807426633175040

Website: http://cogsys.tudelft.nl · Joined: 15-10-2018 12:10:06

257 Tweets

1.1K Followers

479 Following

Richard Naud (@neuronaud)'s Twitter Profile Photo

Job alert! The Naud lab is looking for a postdoc in neuroAI to work on a project close to neuromorphic computing. See the job post here: neurodynamic.uottawa.ca/opportunities…

Jamie Knight (@neworderofjamie)'s Twitter Profile Photo

Several fully-funded PhD positions are available in the new AI centre of excellence at Sussex. Anyone interested in a project involving neuromorphic engineering or spiking neural networks, please get in touch! GeNN Team SussexNeuro @SussexEASy sussex.ac.uk/study/fees-fun…

Selina La Barbera (@selinitter)'s Twitter Profile Photo

.Melika Payvand @YigitDemirag Filippo_Moro et al. propose on-chip in-memory #spike #routing using #memristors, optimized for small-world graphs, offering orders-of-magnitude reduction in routing events compared to current approaches. Now out 👉 Nature Communications nature.com/articles/s4146…

NICE Conference (@nice_workshop)'s Twitter Profile Photo

We've extended the deadline for submissions to NICE this year until January 22, 2024! One extra week for you all to get your papers in to join us in sunny San Diego this April!

Jens E. Pedersen - @jegp@mastodon.social (@jensegholm)'s Twitter Profile Photo

Our talk about the #Neuromorphic Intermediate Representation (NIR) for the wonderful Open Neuromorphic is online! We'll show you how NIR works and demo it on GPUs, SynSense and #SpiNNaker2 hardware. See you on February 5th 6PM CET 🚀 open-neuromorphic.org/workshops/neur…

The Bioelectronics Section in Delft (@be_tudelft)'s Twitter Profile Photo

Muratore's team on fire! 🔥 Dante Muratore. Check these out! A low-power accelerator for closed-loop neuromodulation ieeexplore.ieee.org/document/10399… Spike sorting in the presence of stimulation artifacts: a dynamical control systems approach iopscience.iop.org/article/10.108…

Damien Querlioz (@damienquerlioz)'s Twitter Profile Photo

Our self-powered memristor neural network! It harvests energy with a tiny solar cell, and adjusts its accuracy depending on available energy! Very proud of this one. IM2NP UMR 7334, C2N Centre de Nanosciences et de Nanotechnologies, CEA-Leti @IPVF_institute Nature Communications 🌞 1/2

Nicolas Zucchet (@nicolaszucchet)'s Twitter Profile Photo

How do modern RNNs/SSMs such as Mamba perform on in-context learning tasks? How do they relate to attention-based models like Transformers? We find that modern RNNs can implement attention and that they leverage it to solve ICL tasks in an attention-based manner! (1/6)

Brad Aimone (@jbimaknee)'s Twitter Profile Photo

Great venue, great lineup of speakers, and a great week planned in San Diego in just over a month! Make your plans now!

NICE Conference (@nice_workshop)'s Twitter Profile Photo

Coming - or thinking of coming - to San Diego for an amazing neuromorphic conference on the beach? But your amazing neural computing results just came in last week? NICE is still accepting late-breaking abstracts! easychair.org/conferences/?c… We look forward to seeing you!

NICE Conference (@nice_workshop)'s Twitter Profile Photo

Check out the NICE 2024 agenda! You too can stroll up the beach from your hotel and watch these amazing talks! A few discounted rooms left and a few days left to register! flagship.kip.uni-heidelberg.de/jss/HBPm?mI=25…

Charlotte Frenkel (@c_frenkel)'s Twitter Profile Photo

📢 Wondering how the neocortex works, how it is related to modern machine learning algorithms, and how this insight can be used to fuel next-gen neuromorphic hardware? Have a look at this PhD opening in my team: tudelft.nl/over-tu-delft/… Position open until filled, apply early!

Selina La Barbera (@selinitter)'s Twitter Profile Photo

.Filippo_Moro Tristan Yigit Melika Payvand et al. present a #Spiking #NeuralNetwork #hardware with #dendritic architecture based on #memristive #devices for low-power signal processing with reduced memory footprint. Now out 👉 Nature Communications nature.com/articles/s4146…

Melika Payvand (@melikapayvand)'s Twitter Profile Photo

Check out "DenRAM", the first implementation of delay-based dendritic architecture using #RRAM+CMOS technology, which we show is a natural match for efficient temporal signal processing. Nature Communications. paper: nature.com/articles/s4146… code: github.com/EIS-Hub/DenRAM Thread 🧵

Friedemann Zenke (@hisspikeness)'s Twitter Profile Photo

1/6 Surrogate gradients (SGs) are empirically successful at training spiking neural networks (SNNs). But why do they work so well, and what is their theoretical basis? In our new preprint led by Julia Gygax, we provide the answers: arxiv.org/abs/2404.14964
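For readers new to the idea: surrogate gradients sidestep the fact that a spike is a non-differentiable step function by using a smooth stand-in for its derivative during backpropagation. A minimal NumPy sketch of the concept (not the preprint's code; the `beta` steepness value and the "fast sigmoid" surrogate are illustrative choices, following the commonly used SuperSpike form):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: Heaviside step — emits a spike when the membrane
    potential crosses threshold. Its true derivative is zero almost
    everywhere, so gradients cannot flow through it directly."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass stand-in: the smooth 'fast sigmoid' derivative
    1 / (1 + beta * |v - threshold|)**2, which is largest at the
    threshold and decays away from it, letting gradients flow."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

# Near the threshold the surrogate is large; far away it vanishes,
# approximating sensitivity of the spike decision to the potential.
v = np.array([0.2, 0.9, 1.0, 1.5])
spikes = spike(v)           # hard 0/1 outputs used in the forward pass
grads = surrogate_grad(v)   # smooth values used in the backward pass
```

In frameworks like PyTorch this pairing is typically wired up with a custom autograd function: the forward pass keeps the hard spike, the backward pass substitutes the surrogate.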

Dan Goodman (@neuralreckoning)'s Twitter Profile Photo

SPIKING NEURAL NETWORKS! If you love them, join us at SNUFA24. Free, online workshop, Nov 5-6 (2-6pm CET). Usually ~700 participants. Invited speakers: Chiara Bartolozzi, David Kappel, Anna Levina, Christian Machens. Posters + 8 contributed talks selected by participant vote.

Friedemann Zenke (@hisspikeness)'s Twitter Profile Photo

We're hiring! Come build models of how the brain learns and simulates a world model. We have several openings at PhD and postdoc levels, including a collab with Georg Keller lab on designing regulatory elements to target distinct neuronal cell types. zenkelab.org/jobs

Jens E. Pedersen - @jegp@mastodon.social (@jensegholm)'s Twitter Profile Photo

#Neuromorphic #computing just got more accessible! Our work on a Neuromorphic Intermediate Representation (NIR) is out in Springer Nature Communications. We demonstrate interoperability with 11 platforms. And more to come! nature.com/articles/s4146… A thread 🧵 1/5

Johannes Oswald (@oswaldjoh)'s Twitter Profile Photo

Super happy and proud to share our novel scalable RNN model - the MesaNet! This work builds upon beautiful ideas of locally optimal test-time training (TTT), and combines ideas of in-context learning, test-time training and mesa-optimization.
