AI's stochastic nature and our interface confusion

Weeknotes 357 - A short newsletter announcing a vacation break, with some short thoughts from the last weeks.

Interpretation by Midjourney

This is a short newsletter to explain why you will not receive a normal edition today or in the next two weeks, and why you also missed one last week.

To start with the latter: we organized the Civic Protocol Economies Design Charrette last week, Monday to Wednesday, and including preparations, it was hard to complete the newsletter. I had started to look back at the Apple event, as a follow-up to newsletter 356 (Building the RealOS through hybrid intelligent layers), and also had a draft for the triggered thought. I'd like to share both here today.

The design charrette was very fulfilling, with a great group of people, inspiring speakers, and food for future thinking and research (through design) activities. One of the participants, Viktor Bedö, made a lovely LinkedIn post reporting on it; check it out here. Also read the report by Julia Barashkova.

The last day of the charrette about to start.

The reason I will not be posting for the coming weeks is a three-week vacation. I am now on a train from Oslo to Bergen, where I will embark on a ship for almost two weeks. I will be back in time for the Society 5.0 Festival, where I will host a workshop on 16 October, sharing, among other things, some of my impressions of the design charrette.

As hybrid realities merge physical and digital spaces, we face a critical question: who controls these new environments? Current AI development prioritises individual convenience and corporate profit over collective well-being. This workshop explores how communities can reclaim agency in designing human-AI collaborations that serve the common good.

And of course also on time for the next ThingsCon Salon on 29 October. Tessa and Mike will organise a great workshop on maintaining good intentions in the smart city.

Finally, we opened early-bird registration for TH/NGS 2025 (12 December), after sharpening the theme: RESIZE < REMIX < REGEN.

Looking back at the Apple event

I've been thinking about Kevin Roose's take in the Hard Fork Podcast on the recent iPhone event. What struck me was his observation about how Apple might be foreshadowing more wearable devices in the coming years. The design choice to place all the computing power of the new iPhone Air on that plateau seems intentional - almost as if they're preparing us for a future where the intelligence component becomes separable. Several others repeated this take.

Looking at it another way, perhaps they created this larger plateau specifically to establish this modular concept. The intelligent elements could become something like a mobile puck - the heart, engine, and mind of the phone - while the rest is essentially battery, screen, and I/O devices. It's reminiscent of the Fairphone philosophy but with Apple's execution.

If they continue down this path of modular design rather than maximizing space optimization, we might indeed see a separate intelligence unit next year that connects wirelessly to various accessories: headphones, glasses, or other wearables, and the foldable screen. This aligns with what has seemed like a possible path for years: the thinking component becomes the true device, while everything else serves as an accessory.

Week 357: AI's stochastic nature and our interface confusion

Another thought that's been triggered by a recent AI Report podcast discussion: the often misunderstood stochastic nature of AI predictions is not only causing hallucinations but potentially enabling a new computing paradigm that is more generic and human-like in its thinking, versus the traditional strict, database-driven approach.

What's particularly interesting is how these different paradigms are being mixed in our interfaces. Google Search, for example, now combines AI's "generic thinking" capabilities with the traditional precision-focused search results. This hybrid approach is causing confusion as our tools oscillate between providing specific answers and operating with creative uncertainty.

The way GPT-5 routes queries to different specialized models based on the question type is essentially pre-loading this concept into our systems. While OpenAI is primarily focused on the generic, creative aspects (while trying to limit hallucinations), the real strength may lie in this triage capability between different types of thinking.
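To make that triage idea a bit more tangible, here is a minimal sketch in Python. The cue words, model names, and temperatures are made up for illustration; this is not how GPT-5 or Google Search actually route requests.

```python
from dataclasses import dataclass

# Hypothetical triage between a "precise" and a "creative" path.
# Model names and the classification rule are illustrative only.

@dataclass
class Route:
    model: str          # which backend handles the query
    temperature: float  # low = precise lookup, high = creative generation

FACTUAL_CUES = ("who", "when", "how many", "define", "price of")

def triage(query: str) -> Route:
    """Pick a backend based on a crude guess at the question type."""
    q = query.lower()
    if any(cue in q for cue in FACTUAL_CUES):
        # Factual lookups go to a retrieval-grounded, low-temperature path.
        return Route(model="retrieval-grounded-model", temperature=0.1)
    # Open-ended prompts go to a generative, higher-temperature path.
    return Route(model="creative-generative-model", temperature=0.9)

print(triage("When was the first iPhone released?"))
print(triage("Imagine a city designed around shared AI companions."))
```

Even in this toy form, the interesting part is not the classifier itself but the fact that two very different kinds of "thinking" sit behind one interface.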

As users become accustomed to tools that dynamically choose between different models for different tasks, we're being educated about this hybrid world. We're learning that we need to combine different tools for comprehensive understanding, and that some friction in the system actually helps us understand its limitations and capabilities.

The challenge ahead isn't just improving performance in either hallucination reduction or creative thinking - individual scores in these areas may already be sufficient. What we really need is a trustworthy self-reflection system within AI, allowing it to recognize when it's hallucinating. Like someone with mental health challenges who struggles to recognize their own hallucinations, AI needs that meta-awareness to evaluate its own outputs.

Perhaps the solution involves different types of AI conducting peer reviews on each other's work, creating a system of checks and balances that mirrors how humans collaborate to verify information and catch errors.
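As a thought sketch of how such peer review could look in code (hypothetical function names, canned responses standing in for real API calls), a reviewer model could simply be asked to flag claims in another model's draft:

```python
# Hypothetical peer-review loop: one model drafts an answer, a second model
# flags claims it cannot verify. call_model is a stand-in for whatever client
# you would actually use; the canned replies just make the sketch runnable.

def call_model(name: str, prompt: str) -> str:
    """Stand-in for a real model API call; returns canned text for this demo."""
    if name == "generator":
        return "The Eiffel Tower was completed in 1889 and is 330 metres tall."
    return "OK"  # in this demo the reviewer finds nothing to flag

def answer_with_review(question: str) -> dict:
    draft = call_model("generator", question)
    review = call_model(
        "reviewer",
        "List any claims in the answer below that are unsupported or likely "
        f"hallucinated, or reply 'OK' if none:\n\n{draft}",
    )
    return {
        "draft": draft,
        "review": review,
        "flagged": review.strip().upper() != "OK",  # crude trust signal
    }

print(answer_with_review("How tall is the Eiffel Tower?"))
```

The point is less the mechanics than the separation of roles: the generator is free to think creatively, while the reviewer only has to judge whether the result should be trusted.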

Notions from the news of the last (two) weeks

When I return, I will review the main news events of the past weeks, as well as those that occur during my vacation.

Like the new Meta Glasses. They mark an in-between phase of physical AI, subtitling the real world, and show how this “agentic AI for the physical world” is becoming a key step in our human-AI-things relations (the current theme of Cities of Things). The Glasses are also another try in that specific product category of wearables, an archetype of smart wearables, part of the quest to find the use case for these devices. I think this might still not be the right one, but we can learn a lot from it about what we need, and what we don't. Ross Dawson invited me to the Human+AI podcast, where we spoke about the friction needed for valuable relations with AI, and more: you can find (and listen to) the episode via humansplus.ai.

Enjoy your weeks and see you in October!

PS: these are some of the links I captured, shared here without the usual contextual thoughts:

THE HOT AIR FACTORY
what’s the cost of a thought?
Claude introduces memory for teams at work
Claude now remembers your team’s projects and preferences across conversations. Memory helps maintain context for complex work, with project-specific boundaries and full user control over what’s remembered.
The City is (still) a battlesuit for surviving the future.
Just watched Sir Norman Foster present at the World Design Congress in London, on cities and urbanism as a defence against climate change. This excellent image visualises household carbon footprint…
How Tim Cook sold out Steve Jobs - Anil Dash
A blog about making culture. Since 1999.
Peak bubble
It’s hard to see how this won’t end badly
What I think about when I think about Claude Code
Posted on Friday 12 Sep 2025. 1,405 words, 14 links. By Matt Webb.
Harvard paper reveals cultural bias in AI responses (Ross Denton on LinkedIn)
Cracking chart from Harvard University’s “Which Humans?” paper. The further someone’s cultural distance from the US, the less correlated GPT responses are to their cultural values. I’ve heard pitches from AI companies suggesting that AI moderation/analysis is a good cost cutting option to make research in “non-priority” markets feasible. But looking at this, there’s some real risks that this would miss important local insights and flattening out international insights. The world is much bigger and more diverse than we think, and over-relying on cognitive tools designed in the US could end up making it feel a lot smaller. More thoughts to follow...
SpaceX buys $17 billion worth of satellite spectrum to beef up Starlink broadband service
SpaceX promises a step change in performance for cell phone users around the world.
Gemini in Chrome: AI assistance, directly in your browser
Get helpful AI assistance from Gemini in Chrome. Understand more, work faster, and effortlessly find new ideas on any web page.
The AI vibe shift: From doom to realism
Existential anxiety surrounding AI is giving way to more realistic concerns about its potential impact on the workforce and beyond.
AI medical tools found to downplay symptoms of women, ethnic minorities
Bias-reflecting LLMs lead to inferior medical advice for female, Black, and Asian patients.
OpenAI might be developing a smart speaker, glasses, voice recorder, and a pin
It may launch a wearable after all.
Understanding Right to Explanation and Automated Decision-Making in Europe’s GDPR and AI Act | TechPolicy.Press
The GDPR and AI Act must ensure interpretable AI, clear explanations, and protect human agency to avoid discrimination, writes Peter Douglas.
Considering the Risks of AI-Enabled ‘Smart Glasses’ in Livestreamed Violence | TechPolicy.Press
Tech companies have a duty to invest in the necessary technologies, policy, and content moderation tools to minimize risks, writes Jordyn Abrams.
When AI Writes Code, Who Secures It?
In early 2024, a striking deepfake fraud case in Hong Kong brought the vulnerabilities of AI-driven deception into sharp relief. A finance employee was duped
MIT software tool turns everyday objects into animated, eye-catching displays
MIT’s FabObscura system helps users design and print barrier-grid animations without electronic components. From zigzags to circular patterns, the software turns unique concepts into printable scanimations, helping users create dynamic packaging, toys, signage, and decor.
US scientists achieve robot swarm control inspired by birds and fish
US scientists unveil a robot swarm breakthrough inspired by birds and schooling fish, offering promise for rescue and medical robotics.
Ruth Millikan: Unicepts — The Brains Blog
Ruth Millikan, University of Connecticut. Proposed in Beyond Concepts: Unicepts, Language and Natural Information (Millikan 2017, Oxford UP) is that many of the roles traditionally thought …
MCP in Practice
Mapping Power, Concentration, and Usage in the Emerging AI Developer Ecosystem
Introducing Monologue: Effortless Voice Dictation
Type at the speed of talk—included in your Every subscription
Apple introduces AirPods Pro 3 with live translation feature
Plus upgrades to active noise cancellation and battery life.
New iPhones use Apple N1 wireless chip—and we’ll probably start seeing it everywhere
Not Apple’s first custom Wi-Fi and Bluetooth chip, but the first in an iPhone.
AI doomerism isn’t new. Meet the original alarmist: Norbert Wiener
Decades before Geoffrey Hinton and Eliezer Yudkowsky raised alarms, the computer scientist warned AI could steal jobs and outsmart humans.
Seven-Eleven Begins Trial of Robots for Stocking, Floor Cleaning; Company Expects New Machines to Cut Employee Workload by About 30%
Seven-Eleven Japan Co. on Tuesday began testing the use of several types of robots to handle tasks such as restocking shelves with drinks and cleaning windows and floors at a store in Tokyo.
ai weiwei’s installation in ukraine unveils proportioned spheres & dyed camouflage uniforms
ai weiwei reveals the installation, three perfectly proportioned spheres and camouflage uniforms painted white, in kyiv, ukraine.