@spatial (@spatialneuron)'s Twitter Profile
@spatial

@spatialneuron

iykyk

ID: 1127765607053692932

Joined: 13-05-2019 02:40:48

1.1K Tweets

195 Followers

306 Following

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)

LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models

"We demonstrate the effectiveness of VRPO by applying it to LLaDA, and the resulting model, LLaDA 1.5, outperforms its SFT-only predecessor consistently and significantly across mathematical
WiFi Money Guy (@wifimoneyguy)

I guess people really liked this tweet. 

I forgot to mention that I have an AI agent that can build an entire health VSL in a couple of minutes. 

It's a complicated workflow and I am putting it in as part of PumpAds. 

What's that? 

Check it out here: waitlist.pumpads.ai
Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)

How much do language models memorize?

"We formally separate memorization into two components: unintended memorization, the information a model contains about a specific dataset, and generalization, the information a model contains about the true data-generation process. When we
Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)

Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning

"We apply Sparse Autoencoders (SAEs) and ITDA to a large text-to-image diffusion model, Flux 1, and consider the interpretability of embeddings of both by introducing a visual automated interpretation
Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)

it's kinda wild that AGI is now a completely normal thing for people to take seriously, one that's even talked about in mainstream media/movies/etc. Just 3-5 years ago it was only taken seriously by rationalists and researchers in a few labs. The Overton window has shifted greatly

Rohan Paul (@rohanpaul_ai)

Beautiful paper, collab between AI at Meta, Google DeepMind, NVIDIA AI Developer

Language models keep personal facts in a measurable amount of “storage”. This study shows how to count that storage—and when models swap memorization for real learning. 

📡 The Question

Can we separate
hardmaru (@hardmaru)

AI that can improve itself: A deep dive into self-improving AI and the Darwin-Gödel Machine.

richardcsuwandi.github.io/blog/2025/dgm/

Excellent blog post by Richard C. Suwandi reviewing the Darwin Gödel Machine (DGM) and future implications.
Stuart Hameroff (@stuarthameroff)

Sir Demis Hassabis is quite wrong that ‘Penrose and Hameroff haven’t found functional quantum effects in the brain’. At least our friends have. 1) Anirban Bandyopadhyay’s group has found coherent megahertz, gigahertz and terahertz oscillations in microtubules which are temperature

Grady Booch (@grady_booch)

Large Language Models are architecturally incapable of reasoning or understanding. Gary Marcus and I told everyone this, consistently, relentlessly, and from their very start. machinelearning.apple.com/research/illus…

Stuart Hameroff (@stuarthameroff)

Ilya Sutskever is wrong on this. The brain is not a digital computer and not really a computer at all, more like a quantum orchestra. Biology is based on organic carbon which supports quantum processes and self-similar dynamics in hertz, kilohertz, megahertz, gigahertz and

Stuart Hameroff (@stuarthameroff)

Ultrasound brain therapy reverses cortical atrophy, apparently by causing disassembled microtubules to repolymerize and regrow the brain. pubmed.ncbi.nlm.nih.gov/33681449/ ultradao.to/study

Grady Booch (@grady_booch)

Sentience is an exquisite consequence of the laws of physics, and the embodied mind of organisms shaped by millions of years of evolution is the remarkable product of billions upon billions of failed experiments and dead ends and strange loops. It is a mark of profound hubris