Mass intelligence immerses in physical reality
Mass intelligence immerses in physical reality - Designing for meaningful conversations and immersive behaviors. And more on human-AI-things collabs.

Dear reader!
Thanks for landing here again. For new subscribers or readers (I also share this newsletter via LinkedIn, BlueSky, and Mastodon): it is sent out weekly as a synthesis of my thoughts on digital technology as presented in the news, how it impacts our daily lifeworld, and, more specifically, how we build new relationships with intelligent, connected things, both digital and physical. This newsletter is my way of keeping track and making sense of it all. Of course, it relates to my activities for Cities of Things and specific research activities. Find more details here. I hope that my reflections inspire you, too. Enjoy!
Week 355: Mass intelligence immerses in physical reality
Last week, I had some nice meetings discussing our endeavors to create a masterclass/student course on team human-AI. As I wrote last week in this newsletter, I believe it makes sense to focus on the relationships that should be built within these teams, rather than just concentrating on the functional output.
We had a pleasant visit to the AMS Institute and the new CRCL PARK space with the ThingsCon team, to push the planning of TH/NGS 2025 further along. For the ThingsCon Salon, we decided it is better to plan it at the end of October; find the new date here. We have also confirmed our speaker; more details will be shared next week.
I really liked Ethan Mollick's framing of mass intelligence, as it resonates well with immersive AI, and Matt Webb added a lovely angle by grounding this immersive intelligence in our lifeworld. It triggered my thoughts this week.
This week’s triggered thought
I've been thinking about how mass intelligence and ubiquitous intelligence are changing our everyday reality. Ethan Mollick's writing on mass intelligence got me wondering: what happens when we mix this with the notion of ubiquitous intelligence? When AI is everywhere in our physical environment, not just in our phones and computers, but embedded in everything around us?
Consider surveillance systems, such as those deployed by Flock Safety, which serve as data brokers for AI-enhanced camera networks, as covered in this great video by Ben Jordan on the creepy AI in police cameras. These systems illustrate how AI can be prompted to adjust outputs based on various motives, sometimes with limited transparency. The scenarios become chilling when authoritarian regimes are near.
Listening to the Hard Fork Podcast interview with Waymo's director triggered another thought. While they discussed different physical car types for various use cases (such as one for a soccer team versus one for a date night), I can imagine this differentiation extending even further. Why not vary not just the physical setup of the car, but also how the car drives and behaves? And beyond that, what about the conversation and interaction you have with the car? Maybe it could adapt to your mood, or even make your ride more engaging and playful.
You can see this as layers: the physical object, then its behavior, and finally the adaptive dialogue. This becomes the immersive AI in our day-to-day physical environment.
I was also inspired by Matt Webb's recent post, where he applies the concept of "Do What I Mean" (DWIM) to AI interfaces. DWIM, originally coined by Warren Teitelman in 1966, “embodies a pervasive philosophy of interface design”.
We're losing the concept of syntax errors in our dialogues with AI, as Matt says. LLMs always try to answer, to reason through any question, which means there's no "error" in how you formulate something—just potential misunderstandings.
This is something to be aware of—these aren't errors in the traditional sense, but they might be harder to detect. There's no clear signal that the AI misunderstood you; it just gives a plausible-sounding answer that might be subtly off-target.
What's particularly interesting to me is how these developments connect to "immersive AI"—intelligence that evolves at all kinds of touchpoints in our physical space. Everything with computational aspects becomes potentially intelligent, creating interactions that aren't necessarily textual or conversational. Systems learn from your behavior and adapt, creating an immersive AI experience in our physical world.
So our physical reality is transforming into one where intelligence is embedded everywhere, understanding what we mean rather than just what we say, and actively participating in our daily lives in ways both visible and invisible.
Notions from last week’s news
Human-AI partnerships
- The weekly post on the importance of context, also in working with agents.
- The emotional bonds chatbots can build, and the consequences. The companies behind them should take action.
- Spoken chats become more real, and more real-time: GPT-realtime.
- Next to introducing a new concept (see above), Ethan Mollick also does a good job of introducing Nano Banana. Or Gemini 2.5 Flash Image.
- In case you skipped the column above, the article by Matt Webb on the destination for AI interfaces and the Do What I Mean concept is worth a read.
- If remixing with the right vision and intention is also creativity, AI can be creative. In collaboration with a human.
- Anthropic is releasing a browser, or at least a parasitic one.
Robotic performances
- Robotic toys can become useful vehicles in outer space.
- Rethinking sensors that let robots feel humans' presence.
- What does it mean if seniors become too attached to AI robots?
- Chatbots can be manipulated through flattery. Peak humanizing?
- The evolution of robot dance. Will it be the next TikTok hit after BLACKPINK's 'Jump'?
Immersive connectedness
- This makes sense as a strategy for Microsoft. Copilot is the entrance into AI companions for a lot of people, so why not in other environments besides your office suite?
- The popularity of immersive experiences draws new attention to famous artists who engage with our senses.
- Immersive AI emerges through its ubiquitous presence in the things we use. An example: tools for the AI doctors.
Tech societies
- New rituals, new words. Clanker.
- Every era has its own technology frameworks for building digital services. Now, LLMs have become a defining building block.
- And they become part of sovereignty.
- The wisdom of crowds was once a promise; it is no longer so certain. Crowds can become misinformation-spreading beasts. The Incoherence of Crowds.
- Security challenges of deep integration of AI via MCP.
- The AI bubble remains the talk of the town. Is it real, and when will it pop? Is it purely financial or structural? Or just marketing to filter out smaller parties?
- The uncertain future of AI and jobs.
- Batteries in all forms for climate strategies.
- Microsoft is introducing its first in-house AI model.
- You have cat lovers and cat lovers.
Weekly paper to check
What is the materiality of data? And what does it mean for concepts like extraction? "Is data material? Toward an environmental sociology of AI"
The materiality of AI does not exhaust itself in the quantities of kilograms of raw material, megajoules of electricity, or labor hours. An environmental sociology of AI would instead focus on the socio-ecological processes through which people and the planet are pressed into these functional abstractions in the first place.
Pieper, M. Is data material? Toward an environmental sociology of AI. AI & Soc (2025). https://doi.org/10.1007/s00146-025-02444-1
What’s up for the coming week?
I hope to have time to start rewriting the essence of Cities of Things and connecting it to possible projects for the rest of the year. It will probably take a bit more time, but a lot can already be extracted from the presentations of the last few months.
For ThingsCon, we are planning the first program, contacting potential partners, and making connections with running education programs. (Yes, you can always reach out!)
One possible event to join on Thursday is 'AI in the Archive,' featuring some friends speaking. Amsterdam UX's 'AI in practice' is fully booked.
Have a great week!
About me
I'm an independent researcher through co-design, a curator, and a "critical creative", working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.
Cities of Things, Wijkbot, ThingsCon, Civic Protocol Economies.