David Manheim (@davidmanheim)'s Twitter Profile
David Manheim

@davidmanheim

Lecturer @TechnionLive, founder @alter_org_il, emeritus @superforecaster, PhD @PardeeRAND

Optimistic on AI, pessimistic on humanity managing the risks well.

ID: 19606850

Link: https://about.me/davidmanheim · Joined: 27-01-2009 18:31:39

56.56K Tweets

8.8K Followers

1.1K Following

David Manheim (@davidmanheim):

Séb Krier Unfortunately, it seems like a lot of the AI-denialism is self-justification of prior positions. It was reasonable to be very wrong about the usefulness of language models in the wake of GPT-2, or even GPT-3. The fact that these systems function on some level, to some extent, as

David Manheim (@davidmanheim):

The idea that we can have safe systems without addressing ethics seems difficult, but I agree this focus on security is better than undermining or disbanding this work - even though the approach accelerates race dynamics that continue to undermine the safety of AI systems.

Rob Wiblin (@robertwiblin):

I think when the general public enters the AI regulation debate in a big way the tech industry will miss dealing with alignment people politely pushing SB1047 etc. Will be... messy.

Harlan Stewart (@humanharlan):

It is difficult to make the world understand the danger posed by ASI. Much of that difficulty is because of human psychology, but some of the difficulty is manufactured by the AI industry, which uses Tobacco industry playbook tactics to prevent interference

David Manheim (@davidmanheim):

Embodied cognition is plausible philosophically, but doesn't deliver empirically for AI. Its proponents are coherent, well-informed, and yet consistently surprised by what actually works.

David Manheim (@davidmanheim):

Looks like India is running the next AI Impact (formerly Safety) Summit, and it seems likely to happen around February 2026.

David Manheim (@davidmanheim):

I asked different LLMs what it would look like if they won the AI race. Their answer shouldn't surprise you - any such race will be won by nobody.

TESCREAL is bullshit. (But so is this account.) (@tescrealist):

No one reads newspapers. Today, the collective epistemology of the tech-illuminati is based on what ChatGPT summaries of single-essay microdomains and some anonymous comments on privately circulated Google docs tell their private chats to believe.

Kat Woods ⏸️ 🔶 (@kat__woods):

"My mother doesn't love me. It's just programmed into her by evolution to help humans survive and reproduce" Sounds silly when you say that, doesn't it? It sounds just as silly when people say "I don't believe what AIs say. It's just programmed into them by their training."

Eliezer Yudkowsky ⏹️ (@esyudkowsky):

Apples of course do grow on vines, as anyone should remember who went to an apple orchard to pluck apples as a kid. My childhood home had apple vines growing up the rear fence. Mom always said that feeding them to the hens would produce sweeter chicken milk.

David Manheim (@davidmanheim):

Think about how you would have reacted if given information like this, given the little you actually know, and the trusted source of the information. And remember: reality doesn't have any rules requiring only presenting puzzles you can correctly solve.

Tao Burga (@taoburr):

AI competition discourse suffers from the illusion that *nations* are building AI systems (like they built nuclear weapons). Not really, or not yet. Because this time the real players are private actors, we should expect: - USG to have low(er) situational awareness - National