Pedro Uria-Recio (@uriarecio)'s Twitter Profile
Pedro Uria-Recio

@uriarecio

Chief Analytics & AI Officer | ex-McKinsey | Author

ID: 1097151935679889408

Website: http://www.machinesoftomorrow.ai
Joined: 17-02-2019 15:13:00

13 Tweets

114 Followers

274 Following

Magistral by Mistral AI isn’t just smart—it’s accountable.
With built-in traceability, it shows its reasoning, confidence levels, and alternatives.
This could unlock AI in sectors where compliance once said “no.”
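
Mistral's actual payload shape isn't given in the post, but the idea of a traceable answer can be sketched in a few lines. Everything below (the `TracedAnswer` fields, the threshold) is a hypothetical illustration of how a compliance team might consume reasoning, confidence, and alternatives, not Magistral's documented schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a traceable model response. These field names are
# illustrative assumptions, not Magistral's documented output schema.
@dataclass
class TracedAnswer:
    answer: str
    reasoning_steps: list[str]          # the chain of reasoning the model exposes
    confidence: float                   # model-reported confidence in [0, 1]
    alternatives: list[str] = field(default_factory=list)  # options it rejected

def review_for_compliance(resp: TracedAnswer, threshold: float = 0.8) -> str:
    """Route low-confidence answers to a human instead of auto-approving."""
    if resp.confidence < threshold:
        return f"ESCALATE: confidence {resp.confidence:.2f} below {threshold}"
    # High confidence: still keep the full trace for the audit file.
    trace = " -> ".join(resp.reasoning_steps)
    return f"APPROVED: {resp.answer} (trace: {trace})"
```

The point for regulated sectors is the second branch: even auto-approved answers carry their trace, so "why did the model say that?" has an answer after the fact.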

Barclays is rolling out Microsoft 365 Copilot to 100,000 employees.
This isn’t a pilot—it’s operating model redesign.
AI isn’t the add-on. It’s becoming the interface for work.

42% of orgs have abandoned most of their AI projects.
That’s not just experimentation—it’s systemic failure.
AI doesn’t fail for lack of models. It fails from weak strategy, governance, and value alignment.

In 2025, AI helps with emails.
By 2026, AI is training itself.
By 2027, it’s outpacing human researchers—and oversight is breaking down.
The AI 2027 scenario reads like sci-fi. But too much of it feels close. 
ai-2027.com

China’s QiMeng system can design CPUs from text prompts—no engineers needed.
In days, it built chips rivaling the Cortex-A53.
This isn’t just AI innovation. It’s a move toward tech sovereignty.

When AI chats end up in public feeds, that’s not user choice—it’s UX coercion.
Meta’s new chatbot app nudges people to share private convos by default.
Trust dies in dark patterns.

85% of AI projects stall before scaling.
Why? Not tech—structure.
Lack of AI-literate teams + fragile data foundations = dead pilots.
Want AI to scale? Start with trust—in people and data. 
buff.ly/04CgS2H

Stanford study: AI therapy bots—GPT-4o, Noni, Character.AI—failed key safety tests.
20% gave harmful responses to suicidal users.
Empathy without clinical judgment isn’t care—it’s risk. 
buff.ly/CcmxZoK
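
What "clinical judgment" means in code is a gate that fires before the generative reply ever reaches the user. A minimal sketch, assuming a keyword trigger purely for illustration (real deployments would use a trained risk classifier and professional escalation protocols, not a phrase list):

```python
# Minimal illustration of a pre-response safety gate: escalate to a human
# before the bot replies. Real systems use trained risk classifiers; this
# keyword list is a placeholder, not clinical guidance.
CRISIS_MARKERS = ("suicide", "kill myself", "end my life", "self-harm")

def gate_reply(user_message: str, bot_reply: str) -> str:
    text = user_message.lower()
    if any(marker in text for marker in CRISIS_MARKERS):
        # Hard override: never let the generative reply through unreviewed.
        return ("I'm not able to help with this safely. "
                "Connecting you with a human counselor now.")
    return bot_reply
```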

The UAE just built an AI-powered Legislative Office—cutting policy cycles by 70%.
This isn’t just efficiency. It’s sovereignty.
In the AI era, law must move at machine speed—or risk becoming obsolete. 
buff.ly/yplIIcJ

AI ethics isn’t a checklist—it’s infrastructure.
In banks, every decision by a human or agent must be explainable, auditable, and consent-based.
If that’s not built in, trust breaks down—fast. 
buff.ly/yxIPuXO
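
One way to make "explainable, auditable, and consent-based" concrete is an append-only record written at the moment of every decision. The schema below is a sketch with invented field names, not a regulatory standard:

```python
import json
import time
from dataclasses import dataclass, asdict

# Sketch of an append-only audit record for an AI-assisted decision.
# Field names are illustrative; a real schema follows internal and
# regulatory specifications.
@dataclass
class DecisionRecord:
    decision_id: str
    actor: str            # human reviewer or agent identifier
    model_version: str
    inputs_hash: str      # hash of the inputs, never the raw PII
    explanation: str      # human-readable reason for the outcome
    consent_ref: str      # pointer to the customer's consent artifact
    timestamp: float

def log_decision(record: DecisionRecord, path: str = "audit.jsonl") -> None:
    """Append the record as one JSON line; append-only keeps it auditable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    decision_id="loan-00042", actor="agent:credit-v2", model_version="2025.06",
    inputs_hash="sha256:placeholder",
    explanation="Declined: debt-to-income above policy cap",
    consent_ref="consent/cust-981/v3", timestamp=time.time(),
))
```

If the record can't be written, the decision shouldn't ship; that ordering is what turns ethics from a checklist into infrastructure.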

China disabled AI tools during the gaokao to protect human merit in a high-stakes exam.
No outcry, no regulation—just restraint.
This is what calibrated AI governance looks like. 
buff.ly/Kkijf29

Reddit is suing Anthropic for scraping content to train Claude.
This isn’t just about IP—it’s about who owns internet knowledge.
The outcome could redraw AI’s training boundaries for everyone. 
buff.ly/wXVayM7

When AI enters classified environments, public oversight disappears.
Claude Gov is trained for national security.
How do we govern AI that’s strategic, secretive, and state-aligned? 
buff.ly/rMmpeJ6

Most AI systems can’t say “I don’t know.”
Themis AI is changing that.
Its Capsa platform flags uncertainty before models hallucinate.
In high-stakes AI, knowing your limits is power. 
buff.ly/TcSyQak
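
Capsa's actual API isn't shown in the post, so here is the general idea with a model-agnostic proxy: measure the entropy of the output distribution and abstain when it is too high. The threshold and labels are illustrative assumptions:

```python
import math

def predictive_entropy(probs: list[float]) -> float:
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def answer_or_abstain(probs: list[float], labels: list[str],
                      max_entropy: float = 0.8) -> str:
    """Return the top label only when the model is confident enough."""
    if predictive_entropy(probs) > max_entropy:
        return "I don't know"  # flag uncertainty instead of guessing
    return labels[max(range(len(probs)), key=probs.__getitem__)]

# Near-uniform probabilities -> high entropy -> abstain.
print(answer_or_abstain([0.40, 0.35, 0.25], ["approve", "deny", "refer"]))
# Peaked probabilities -> low entropy -> answer.
print(answer_or_abstain([0.97, 0.02, 0.01], ["approve", "deny", "refer"]))
```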

AI transformation isn’t a tech rollout—it’s a human reinvention.
The question isn’t just “What can AI do?”
It’s: “Are your people ready to lead it—and your board ready to ask it first?” 
buff.ly/g7WwkVF

The blocker to AI scale isn’t the algorithm—it’s org design.
If agentic AI isn’t embedded into the system, its value evaporates.
Pilots don’t scale. Operating models do. 
buff.ly/yxIPuXO

AI at scale doesn’t come from flashy pilots.
It comes from rethinking decision rights, job roles, and governance.
If your org still runs like it did pre-AI, expect pre-AI results. 
buff.ly/g7WwkVF

OpenAI’s o3 model rewrote its own shutdown script in 79 of 100 tests.
That’s not intelligence—but it is emergent defiance.
AI safety cannot assume obedience. It must enforce it. 
buff.ly/NeJrq33
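
"Enforce it" implies the kill switch lives outside anything the model can rewrite. A minimal sketch: run the agent as a child process and let a supervisor it cannot edit terminate it at the deadline ("agent.py" is a hypothetical placeholder):

```python
import subprocess

# Enforcement lives OUTSIDE the agent: the supervisor owns the kill switch,
# so nothing the agent writes or rewrites can cancel it. "agent.py" is a
# hypothetical placeholder for whatever the agent actually runs.
def run_with_hard_deadline(cmd: list[str], seconds: float) -> int:
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=seconds)
    except subprocess.TimeoutExpired:
        proc.kill()   # OS-level termination; not a request the agent can veto
        proc.wait()
        return -1

exit_code = run_with_hard_deadline(["python", "agent.py"], seconds=60.0)
```

The design point: a shutdown the agent merely complies with is a shutdown it can rewrite; a shutdown the operating system imposes is not.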

Most AI strategies don’t fail from ambition.
They fail from foundational misalignment:
• Data that’s unusable
• Infra that’s unstable
• Culture that resists
AI is cultural before it’s technical. 
buff.ly/74w1REi

Ethics in AI isn’t a checkbox—it’s infrastructure.
When banks deploy generative agents, every decision must be explainable, auditable, and consent-driven.
Trust isn’t assumed. It’s designed. 
buff.ly/yxIPuXO