Extravert AI that contemplates itself

Weeknotes 379 - An AI that explains itself drives better than one that stays silent. Building trust with predictive relations. And the latest notions from the news of last week.

Extravert AI that contemplates itself
An interpretation by Midjourney

Dear reader!

There was no newsletter last week. I said farewell to my stepfather that day, at the end of a rollercoaster of a month of hospitalization.

Now I am happy to focus again on the outside world. And I have been thinking about slightly updating the newsletter’s categories of notions from the news. I want to stress more the aspects of physical AI and civic societies, and the design for relations, which is, for me, core. So I came up with:

  • Human-AI relations
  • Physical AI
  • Tech in civic societies

It also resembles the framing of Cities of Things, which I increasingly like to use as my lens on the world.

Week 379: Extravert AI that contemplates itself

In the meantime, the news is ruled by the geopolitical situation. To be cynical: it drives the development of autonomous warfare, like drones, and it is testing edge cases of AI-driven decision making. And of course there are the discussions on the politics of AI, in Anthropic’s fight with the DoD in the States, and OpenAI’s branding clusterfuck.

Two more mainstream podcasts (in Dutch) discussed topics related to this newsletter. De Volkskrant’s podcast covered physical AI in China (less present than you would think), and NRC’s Zo Simpel is het Niet looked at agents through an economic lens: is AI a machine (asset, capital) or an employee (capital or labour), and how will this play out in economic models?

This week’s triggered thought

An AI that talks to itself drives better than one that stays silent. That's the finding from recent NVIDIA research on self-driving systems: vehicles with AI that reasons out loud—narrating its perception, intentions, and decisions—outperform those that process internally. The extroverted AI beats the introverted.

This landed for me in a week already thick with discourse about AI alignment, AI character, even AI that manipulates or blackmails to reach its goals. We're grappling with questions about what kind of entities we're building and how we should relate to them. But the NVIDIA finding cuts through the abstraction. It suggests something practical: transparency isn't just ethically preferable—it performs better.

Why? The answer lies in trust, and specifically in what I've been calling predictive relations. Back in 2018, as part of my PhD exploration, I developed a framework for understanding how we build relationships with intelligent things. The core tension I identified was: there's a gap between what a device does in the world and our mental model of why it does it. With simple tools, this gap is small—we understand the lever, the wheel. But with AI-driven contemporary things, the gap widens dramatically. The system acts on knowledge we don't have access to.

Consider a Tesla on autopilot suddenly braking on an empty highway. Seconds later, a collision unfolds ahead—the car predicted it, the driver didn't. The system worked perfectly. But in that moment before the accident became visible, the driver experienced something unsettling: the machine knew something they didn't, and acted on it without explanation. This is predictive knowledge doing exactly what it should—and still creating alienation. The gap between action and understanding is where trust lives or dies.

My framework proposed that predictive relations are shaped in the mental model—the internal representation we hold of how a system works and what it might do next. This mental model needs predictive power of its own: we need to anticipate the thing's behavior to feel in control of the relationship. When the system's predictions outpace our own—when it acts on data from networked sources, learned patterns, or contextual cues we can't perceive—we lose our footing.

This is where the reasoning-out-loud AI becomes significant. By externalizing its process—"I see a vehicle ahead, it's braking erratically, I'm reducing speed as precaution"—the system feeds our mental model. It closes the gap. We can anticipate because we can follow the reasoning. The prediction becomes shared rather than opaque.
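As a minimal sketch of this design pattern in Python (all names and thresholds here are hypothetical, not from the NVIDIA work): every decision returns not just an action but also a human-readable narration of the reasoning, so the system's output can feed the observer's mental model.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    lead_vehicle_distance_m: float  # distance to the vehicle ahead, in meters
    lead_vehicle_braking: bool      # whether that vehicle appears to be braking

@dataclass
class Decision:
    action: str      # what the system will do
    narration: str   # legible trace of why it will do it

def decide(obs: Observation) -> Decision:
    """Pick an action and externalize the reasoning behind it."""
    if obs.lead_vehicle_braking and obs.lead_vehicle_distance_m < 50:
        return Decision(
            action="reduce_speed",
            narration=(
                f"I see a vehicle {obs.lead_vehicle_distance_m:.0f} m ahead, "
                "it is braking, so I am reducing speed as a precaution."
            ),
        )
    return Decision(
        action="maintain_speed",
        narration="The road ahead is clear; I am maintaining speed.",
    )

d = decide(Observation(lead_vehicle_distance_m=40, lead_vehicle_braking=True))
print(d.action)     # reduce_speed
print(d.narration)
```

The point of the sketch is the interface, not the driving logic: the narration is a first-class output, so a sudden braking action never arrives without its explanation.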

This matters enormously as AI becomes more physical. Robots in our homes, autonomous vehicles on our streets, drones in our airspace—these aren't just algorithms in the cloud. They're embodied agents sharing our spaces, making decisions that affect us directly. The alignment debates are important, but they often stay abstract. The practical question for designers is: how do we build things that people can trust?

The answer isn't just about making AI aligned—it's about making AI legible. Building in ways to "get in touch" with the reasoning, as I wrote years ago. Not dumbing down the intelligence, but surfacing it. The bicycle for the mind, as Steve Jobs once called the computer, only works if we can see the pedals—if we understand, even roughly, why the thing is doing what it's doing.

We are in the early days of learning to live with intelligent things. The NVIDIA finding is a small data point, but it suggests a principle: the path to trust runs through transparency. Not perfect transparency—that may be impossible with systems this complex—but enough to keep the human in the loop of understanding. Enough to maintain the predictive relation.


About me

I'm an independent researcher through co-design, curator, and “critical creative”, working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.

Currently working on: Cities of Things, ThingsCon, Civic Protocol Economies.


Notions from last week’s news

In general AI news: ChatGPT-5.4 is back on track.

Vibe Check: GPT-5.4—OpenAI Is Back
GPT-5.4 is fast, opinionated, and good enough to tempt our Opus loyalist

In the meantime, there is a continuous unfolding of the Anthropic vs government saga:

The Pentagon formally labels Anthropic a supply-chain risk
Pete Hegseth had been threatening to punish the AI company for not loosening its acceptable use policy. Now, it’s official.
What does the US military’s feud with Anthropic mean for AI used in war?
Tech policy professor who served in US air force explains how a feud between an AI startup and the US military illuminates ethical fault lines

Meta, not surprisingly, is being dubious in its processing of footage.

Meta’s AI glasses reportedly send sensitive footage to human reviewers in Kenya
Meta claims media stays on the smart glasses — unless you share it with the company.

Human-AI relationships

Maybe the trope of the week: how agents are now recruiting humans.

AI Agents Are Recruiting Humans To Observe The Offline World | NOEMA
Agents need us — as sensors, as verifiers, as bearers of liability — in ways we have barely begun to account for.

Is superintelligence already here? Is it all about definition?

Superintelligence is already here, today
It’s going to revolutionize science. It also might take control of this planet.

Hinting at future artificial brains, starting with a fruit fly, more algorithmic than intelligent?

Neither Artificial Nor Intelligent (132)
Courtesy comes with a price tag.

A practical guide for OpenClaw from Every.

OpenClaw: Setting Up Your First Personal AI Agent
Demos, workflows, and hard-won lessons from building agents that run 24/7

China leads the humanoid development, at least from an outside view. Is there a bubble?

China leads the humanoid robot race — but the U.S. still has a shot
Scale alone won’t determine the rivalry that hinges on software, partnerships, and Tesla’s path to mass deployment.

Are we still missing the third element of artificial human intelligence: continuous learning? A model by Kevin Kelly.

Three Modes of Cognition
Intelligence is not elemental. Neither is artificial intelligence. Both are complex compounds composed of more primitive cognitive elements, some of which we are only now discovering. We don’t yet have a periodic table of cognition (see my post The Periodic …)

We might need a new category for computing: lazy AI. Making people lazy, that is.

Big Google Home update lets Gemini describe live camera feeds
Live Search lets Gemini be your eyes.

“Creative work is about to look like programming”. I am not sure. I think that programming is starting to look like art direction.

Creative Work Is About to Look a Lot More Like Programming
Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

Just thinking, it is a small step from the functional use of AI in chats for moderation to becoming part of the community. And embed prediction markets too?

Roblox is censoring chats with AI
No more pound signs.

Why are organisms more than machines?

Why organisms are more than machines
Sixty years ago, a little-known philosopher challenged how science understands life. His perspective is finding new relevance in the age of artificial intelligence.

Physical AI

Humanoids in factories. In solid German plants.

BMW to put humanoid robots on production line at German plant
Group joins Tesla and other carmakers as industry turns to AI-powered robots to cut labour and manufacturing costs

And glasses to wear your digital life.

Barely-there AR glasses go big on going light
Intelligent electronics brand Vizo is currently presenting its new project on Kickstarter: the Z1 Pro AR glasses. Made of ultra-lightweight resin and tipping the scales at just 63 grams (2.2 oz), they’re one of the lightest sets of AR glasses on the market.

Hardware is hard. It is a cliché but true. And also Congress.

Robots are happening at Mobile World Congress, too. Smartphone companies are believers. Robot phones still need a clear form factor, though.

The 6G, modular, robot phones of the future
On The Vergecast: Phones, phone straps, and phone games.

Handheld AI, that even feels like a category of its own. Made in India.

Open-source AI hardware could weaken Big Tech’s grip on AI
A new device unveiled in India shows how AI systems can run locally, support diverse languages, and reduce dependence on proprietary models.

Physical AI brings self-learning systems to autonomous manufacturing. Makes sense, good to see.

A sign that robotics is becoming more mundane.

Making AI visible and tangible.

Modeling Language with Plaster
💡Nerd Rating 3.75/5: This post is about a late-19th-century debate in academic mathematics, but it’s plainly written and relevant to LLMs. What is a model, anyway? Doing math used to involve touch. We used plaster and wooden models, weird cubist-looking sculptural objects you turn over in your hands

Tech in (civic) societies

Some crazy buildings to be expected here in Rotterdam.

rotterdam’s next landmark could be one of these radical proposals by MVRDV or heatherwick
five finalist designs are unveiled for the ‘shift landmark’, an ambitious project for a growing waterfront district in rotterdam.

A great history lesson on HyperCard.

HyperCard Changed Everything
This video traces the history of Apple’s HyperCard from Vannevar Bush’s idea of the Memex to the Mother of All Demos to the Xerox PARC Alto to Bill Atkinson

The role of prediction markets in our society is under scrutiny. Will it become part of the commons? Is a civic-driven option of prediction markets possible?

Prediction markets are playing a dangerous game
Kalshi and Polymarket are cosplaying as the news

Weekly paper to check

The Artificial Intelligence of Things. I am not totally into this framing, but it might offer a kind of understandable notion.

The integration of the Internet of Things (IoT) and modern Artificial Intelligence (AI) has given rise to a new paradigm known as the Artificial Intelligence of Things (AIoT). In this survey, we provide a systematic and comprehensive review of AIoT research.

Shakhrul Iman Siam, Hyunho Ahn, Li Liu, Samiul Alam, Hui Shen, Zhichao Cao, Ness Shroff, Bhaskar Krishnamachari, Mani Srivastava, and Mi Zhang. 2025. Artificial Intelligence of Things: A Survey. ACM Trans. Sen. Netw. 21, 1, Article 9 (January 2025), 75 pages. https://doi.org/10.1145/3690639

What’s up for the coming week?

Continuing the explorative research into the state of cities of things, doing a couple of interviews. And discussing the civic protocol economies. I am also happy to be invited to give a guest lecture to the students of the master Health by Design at Avans UAS. And I will check the Robodam event for updates.

Have a great week!