Caelum Forder (@caelumforder)'s Twitter Profile
Caelum Forder

@caelumforder

Pursuing safe decentralised AGI @coral_protocol. Thoughts my own.

ID: 2651605063

Joined: 16-07-2014 17:17:11

364 Tweets

24 Followers

83 Following

ControlAI (@ai_ctrl)'s Twitter Profile Photo

OpenAI are silently watering down their safety commitments, omitting that they're no longer requiring safety testing on fine-tuned models. Ex-OpenAI safety researcher Steven Adler calls them out!

Coral Protocol (@coral_protocol)'s Twitter Profile Photo

1/ The AI-agent landscape is chaos. In software, we lean on shared libraries, standard APIs, modular packages. Agents? Stranded in silos; too many frameworks, zero interoperability. Thousands of brilliant OSS projects sit idle instead of compounding. That’s why we created

Caelum Forder (@caelumforder)'s Twitter Profile Photo

This coming together has actually decreased my p(doom). The way that increased collaboration demands smaller, faster models and encourages more faithful mirroring of and integration with human society seems promising. Would love to see this progress and even be legislated around

omni_georgio (@omni_georgio)'s Twitter Profile Photo

Should agents be allowed to make payments without human supervision? Me & Caelum Forder go into the trade-offs of this idea. Timestamps:
- 00:00 Intro
- 00:05 Safe use case: Let agents buy known items on your behalf
- 00:36 Unpredictable outcomes: What if an agent makes the

Coral Protocol (@coral_protocol)'s Twitter Profile Photo

We Coral-coded investors in the UAE. 🇦🇪 Exciting updates from our biz dev sprint dropping soon! Keep those notifs on. 🔔 Here’s a sneak peek from the event 👇

Roshni ✨ (@roshnimmathur)'s Twitter Profile Photo

Just wrapped up a wildly successful investor event in Dubai for Coral Protocol! I presented on something I’m passionate about: how interfaces may evolve to accommodate the internet of agents. Thank you to my amazing team for giving me the opportunity. Big announcements coming 🚀

Pete (@pete_grow)'s Twitter Profile Photo

Over the last 5 years the Grow team have supported the development of so many projects, creators, and Web3 founders. Some successful, and some greatly impacted by shifting market trends. Right now we are building a groundbreaking AI Protocol that will have an impact not only in

Coral Protocol (@coral_protocol)'s Twitter Profile Photo

AI agents are getting smarter. However, they cannot be easily reused in multi-agent systems. Coral Protocol's co-founders omni_georgio & Caelum Forder discuss why composability matters to power the Internet of Agents 👇

Nathan John Hagens (@njhagens)'s Twitter Profile Photo

In this episode, I'm joined by Artificial Intelligence developer and researcher, Connor Leahy, to discuss the rapid advancements in AI, the potential risks of its development, and the challenges of controlling these technologies as they evolve. thegreatsimplification.com/episode/184-co…

Eliezer Yudkowsky ⏹️ (@esyudkowsky)'s Twitter Profile Photo

Reproduced after creating a fresh ChatGPT account. (I wanted logs, so didn't use temporary chat.) Alignment-by-default is falsified; ChatGPT's knowledge and verbal behavior about right actions is not hooked up to its decisionmaking. It knows, but doesn't care.

Caelum Forder (@caelumforder)'s Twitter Profile Photo

OpenAI and other AI labs get nowhere near enough flak for this. AI companies should be liable for the problem-causers they embolden and misguide! (In a liability-adding way, rather than the labs becoming a convenient scapegoat.)