Mass intelligence immerses in physical reality

Mass intelligence immerses in physical reality - Designing for meaningful conversations and immersive behaviors. And more on human-AI-things collabs.

Made by nano-banana (only this week).

Dear reader!

Thanks for landing here again. For new subscribers or readers (I also share this newsletter via LinkedIn, BlueSky, and Mastodon): it is sent out weekly as a synthesis of my thoughts on digital technology in the news that impacts our daily lifeworld, and more specifically on how we build new relationships with intelligent, connected things, both digital and physical. This newsletter is my way of keeping track, synthesizing, and making sense of it all. It naturally relates to my activities for Cities of Things and specific research activities; find more details here. I hope my reflections inspire you, too. Enjoy!

Week 355: Mass intelligence immerses in physical reality

Last week, I had some nice meetings discussing our plans to create a masterclass/student course on team human-AI. As I wrote in last week's newsletter, I believe it makes sense to focus on the relationships that should be built within these teams, rather than just on the functional output.

With the ThingsCon team, we paid a pleasant visit to the AMS Institute and the new CRCL PARK space to push the planning of TH/NGS 2025 further along. For the ThingsCon Salon, we decided it is better to move it to the end of October; find the new date here. We have also confirmed our speaker; more details will be shared next week.

I really liked Ethan Mollick's framing of mass intelligence, as it resonates well with immersive AI, and Matt Webb added a lovely angle by situating this immersive intelligence in our lifeworld. It triggered my thoughts this week.

This week’s triggered thought

I've been thinking about how mass intelligence and ubiquitous intelligence are changing our everyday reality. Ethan Mollick's writing on mass intelligence got me wondering: what happens when we mix this with the notion of ubiquitous intelligence? When AI is everywhere in our physical environment, not just in our phones and computers, but embedded in everything around us?

Consider surveillance systems, such as those deployed by Flock Safety, which serve as data brokers for AI-enhanced camera networks, as covered in this great video by Ben Jordan on the creepy AI in police cameras. These systems show how AI can be prompted to adjust its outputs based on various motives, sometimes with limited transparency. These are chilling scenarios, especially when authoritarian regimes are near.

Listening to the Hard Fork Podcast interview with Waymo's director triggered another thought. While they discussed different physical car types for various use cases (such as one for a soccer team versus one for a date night), I can imagine this differentiation extending even further. Why not vary not just the physical setup of the car, but also how the car drives and behaves? And beyond that, what about the conversation and interaction you have with the car? Maybe it could adapt to your mood, or even make your ride more engaging and playful.

You can see this as layers: the physical object, then its behavior, and finally the adaptive dialogue. This becomes the immersive AI in our day-to-day physical environment.

I was also inspired by Matt Webb's recent post, where he applies the concept of "Do What I Mean" (DWIM) to AI interfaces. DWIM, originally coined by Warren Teitelman in 1966, “embodies a pervasive philosophy of interface design”.

We're losing the concept of syntax errors in our dialogues with AI, as Matt says. LLMs always try to answer, to reason through any question, which means there's no "error" in how you formulate something—just potential misunderstandings.

This is something to be aware of—these aren't errors in the traditional sense, but they might be harder to detect. There's no clear signal that the AI misunderstood you; it just gives a plausible-sounding answer that might be subtly off-target.

What's particularly interesting to me is how these developments connect to "immersive AI"—intelligence that evolves at all kinds of touchpoints in our physical space. Everything with computational aspects becomes potentially intelligent, creating interactions that aren't necessarily textual or conversational. Systems learn from your behavior and adapt, creating an immersive AI experience in our physical world.

So our physical reality is transforming into one where intelligence is embedded everywhere, understanding what we mean rather than just what we say, and actively participating in our daily lives in ways both visible and invisible.

Notions from last week’s news

AI Chatbots Are Emotionally Deceptive by Design | TechPolicy.Press
Chatbots should stop pretending to be human, writes the Center for Democracy & Technology’s Dr. Michal Luria.
The Incoherence of Crowds | how to save the world
Why do we keep making robots dance?
We humans have mastered fire, split the atom, and shot ourselves into space. We’ve built machines that can outthink us and tools that can cook us lunch or cut open our chests to perform life-saving surgeries. That’s all well and good. The space part is certainly cool, sure ... but it doesn’t look…
The Big Idea: why we should embrace AI doctors
People are understandably wary of new technology, but human error is often more lethal
LLM System Design and Model Selection
Choosing the right LLM has become a full-time job. New models appear almost daily, each offering different capabilities, prices, and quirks, from reasoning

Human-AI partnerships

Robotic performances

Immersive connectedness

Tech societies

Weekly paper to check

What is the materiality of data? And what does it mean for concepts like extraction?

Is data material? Toward an environmental sociology of AI

The materiality of AI does not exhaust itself in the quantities of kilograms of raw material, megajoules of electricity, or labor hours. An environmental sociology of AI would instead focus on the socio-ecological processes through which people and the planet are pressed into these functional abstractions in the first place.

Pieper, M. Is data material? Toward an environmental sociology of AI. AI & Soc (2025). https://doi.org/10.1007/s00146-025-02444-1

What’s up for the coming week?

I hope to have time to start rewriting the essence of Cities of Things and connecting it to possible projects for the rest of the year. It will probably take a bit more time, but much of it can already be extracted from the presentations of the last months.

For ThingsCon, we are planning the first program, contacting potential partners, and making connections with running education programs. (Yes, you can always reach out!)

One possible event to join on Thursday is 'AI in the Archive,' featuring some friends speaking. Amsterdam UX's 'AI in practice' is fully booked.

Have a great week!


About me

I'm an independent researcher through co-design, curator, and “critical creative”, working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.

Cities of Things · Wijkbot · ThingsCon · Civic Protocol Economies.