Amir Daraie (@neuro_amir)'s Twitter Profile
Amir Daraie

@neuro_amir

Neuroengineer; Ph.D. student @JHUBME @LabSarma 🧠🇺🇸 photography 📸 prev. @DondersInst, @DreslerLab 🧪🇳🇱

ID: 1199035110085079041

Website: http://adaraie.com · Joined: 25-11-2019 18:40:39

1.1K Tweets

389 Followers

1.1K Following

Michael Okun (@michaelokun)'s Twitter Profile Photo

Deep Brain Stimulation (DBS) for Depression: Can you name the relevant symptom markers, neuroimaging, electrophysiology, symptoms, sensors, network based targeting and evoked potentials? If NO, then you should read this comprehensive and fresh new review in molecularpsychiatry by

ZivLab (@zivwilliamslab)'s Twitter Profile Photo

Our new study is out today in nature! We employed Neuropixels recordings to understand how speech is produced by single neurons in the human brain. nature.com/articles/s4158…

Spotify Research (@spotifyresearch)'s Twitter Profile Photo

Delighted to share that our paper "Personalized Audiobook Recommendations at Spotify Through Graph Neural Networks" has been accepted to #TheWebConf2024! We introduce 2T-HGNN, a scalable recommendation system powering personalized audiobook recommendations on Spotify.

Yasser Khan (@khanlabusc)'s Twitter Profile Photo

The first paper from my group. Yay! Published today in Nature Electronics, we review the literature and present a step-by-step guide for designing ingestible electronics. Link: rdcu.be/dyfg6 USC Viterbi School USC

Aran Nayebi (@aran_nayebi)'s Twitter Profile Photo

Temporarily escaped the winter storm in Boston to give a talk on "Goal-Driven Models of Physical Understanding" at Stanford University's NeuroAI course (CS375). Slides here: anayebi.github.io/files/slides/A… Always good to be back in my intellectual home! ☀️

Mahdad Jafarzadeh (@m_jafarzadeh_es)'s Twitter Profile Photo

🧠 Delve into citizen neuroscience, unraveling brain mysteries in its natural habitat! Our paper explores sleep & beyond. Intrigued? Dive into discussions on projects, open hardware/software, challenges, opportunities, and future directions in doi.org/10.1111/ejn.16….

Khaled Saab (@khaledsaab11)'s Twitter Profile Photo

My final PhD chapter on improving seizure detection with hazyresearch and Daniel Rubin QILab was just published npj Digital Medicine. TL;DR We found that scaling two dimensions of model supervision: (1) coverage of training data and (2) granularity of class labels– has a large impact on

Atoosa Chegini (@atoosachegini)'s Twitter Profile Photo

🔔Take a look at our work on CLIP Inversion and the insights it unveils about its training data. 🗨️ We sure need to curate data more carefully when we train big models on web-scale data. Link to the paper: huggingface.co/papers/2403.02… Link to the code: github.com/hamidkazemi22/…

SarmaLab (@labsarma)'s Twitter Profile Photo

We gathered with our lab members to celebrate the success of all presentations and posters, marking a collective achievement.

Phillip Isola (@phillip_isola)'s Twitter Profile Photo

Our computer vision textbook is released! Foundations of Computer Vision with Antonio Torralba and Bill Freeman mitpress.mit.edu/9780262048972/… It’s been in the works for >10 years. Covers everything from linear filters and camera optics to diffusion models and radiance fields. 1/4

Yasser Khan (@khanlabusc)'s Twitter Profile Photo

Our #CHI2024 paper is now online -- the development of wearable technology has mostly focused on technical aspects, often ignoring what users actually want. This work introduces a plan to balance technical design with user preferences, making wearables much easier and more

Mehrdad Jazayeri (@mjaz_jazlab)'s Twitter Profile Photo

Dear friends and colleagues -- My lab has a postdoc opening for an exciting new project. We are looking for someone with top software engineering skills. I'd appreciate it if you could RT and distribute widely. Job posting below: dropbox.com/scl/fi/r9m2544…

Erinn M Grigsby (@erinngrigsby)'s Twitter Profile Photo

🚨New #preprints alert!🚨 Excited to share our work on #DeepBrainStimulation of the motor thalamus to enhance speech & swallowing. This continues our research led by Lilly Tang & me, with guidance from Elvira Pirondini & Jorge Gonzalez-Martinez. 🧠1/8 medrxiv.org/content/10.110…

Afshine Amidi (@afshinea)'s Twitter Profile Photo

This Spring, my twin brother Shervine Amidi and I will be teaching a new class at Stanford called "Transformers & Large Language Models" (CME 295). The goal of this class is to understand where LLMs come from, how they are trained, and where they are most used. We will also explore

Afshine Amidi (@afshinea)'s Twitter Profile Photo

Announcing the VIP Cheatsheet for Stanford's CME 295 Transformers & Large Language Models class.

Topics include:
- Transformers: self-attention, architecture, variants, optimization techniques (sparse attention, low-rank attention, flash attention)
- LLMs: prompting, finetuning
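
Since the first topic in the cheatsheet is self-attention, here is a minimal sketch of single-head scaled dot-product self-attention in NumPy. It illustrates the general technique only, under assumed shapes and variable names; it is not taken from the CME 295 materials, and the toy data is made up for the usage example.

# Minimal single-head scaled dot-product self-attention (illustrative sketch,
# not from the CME 295 cheatsheet). x has shape (seq_len, d_model).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, W_q, W_k, W_v):
    """x: (seq_len, d_model); W_q, W_k, W_v: (d_model, d_k) projection matrices."""
    Q, K, V = x @ W_q, x @ W_k, x @ W_v       # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) similarities, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)        # each row is a distribution over all tokens
    return weights @ V                        # weighted sum of values, shape (seq_len, d_k)

# Tiny usage example with random toy data
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(x, W_q, W_k, W_v).shape)  # -> (4, 8)

The sqrt(d_k) scaling keeps the dot-product scores from growing with the key dimension, so the softmax stays in a usable range; the sparse, low-rank, and flash attention variants listed above change how these same scores are computed or materialized, not the underlying formula.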