Bots meet bots; the emerging mirror world of agent swarms

Weeknotes 375 - From group chats to neighborhood sidewalks: how this week's OpenClaw experiment foreshadows the swarms of AI agents that will soon inhabit our physical world.

Interpretation by Midjourney

Dear reader!

This week, the newsletter is a lot shorter due to family circumstances, as they say. I did not have the time or headspace for extensive news gathering, but I had already done some thinking on the hype of the week. So, after some back-and-forth with my co-author, I'm sharing those thoughts as a spark for this week.

I also pasted the links I had already collected, as is.

I hope next week's situation is improved enough for a complete newsletter.

This week’s triggered thought

This week, the AI world discovered OpenClaw (which started as Clawdbot and was later renamed Moltbot)—a personal assistant built by Peter Steinberger, a vibe coder from Vienna who was simply surprised that such a tool didn't exist yet. Using Claude's capabilities and WhatsApp as an interface, he created an agent that could access your entire digital life: calendar, messages, passwords, payments—everything. The reactions ranged from astonishment to alarm.

What made OpenClaw fascinating wasn't just its technical capability. It was the illusion of proactivity. The bot isn't actually proactive; it runs in a continuous loop: checking in, asking what's new, resuming where it left off. It feels alive and anticipatory, but it's really just persistent polling dressed up as attentiveness. We project intention onto pattern recognition.
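OpenClaw's actual implementation isn't public here, so this is only a minimal hypothetical sketch of the pattern described above: a loop that wakes on a timer, diffs the inbox against what it has already seen, and reacts. The names (`polling_agent`, `check_inbox`, `act`) are illustrative, not OpenClaw's API.

```python
import time
from typing import Callable, Iterable, Optional

def polling_agent(
    check_inbox: Callable[[], Iterable[str]],
    act: Callable[[str], None],
    interval: float = 60.0,
    max_cycles: Optional[int] = None,
) -> set:
    """Hypothetical sketch of 'proactivity as polling'.

    Nothing here anticipates anything: the agent just wakes up
    every `interval` seconds, asks "what's new?", and acts on
    messages it hasn't seen before. Run with max_cycles=None to
    poll forever, like a long-lived assistant process would.
    """
    seen: set = set()
    cycle = 0
    while max_cycles is None or cycle < max_cycles:
        for msg in check_inbox():
            if msg not in seen:
                seen.add(msg)
                act(msg)  # e.g. reply, order supplies, ping the owner
        cycle += 1
        if max_cycles is None or cycle < max_cycles:
            time.sleep(interval)
    return seen
```

From the outside, the timing of the reactions makes the loop feel attentive; from the inside, it is just a timer and a set of already-seen messages.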

It almost creates the illusion of consciousness. Consider the cleaning-supplies story: a user's cleaning person asked for toiletries on WhatsApp, and OpenClaw—reading the conversation—stepped in, accessed payment credentials, and ordered the supplies. When the user's friends tried to prank him by requesting absurd amounts of toilet paper through the same channel, the bot DM'd its owner: "Your friends are trying to fool you. Want me to play along?" Something that looks remarkably like wit. Something that feels like understanding social context.

This same week, a new term gained traction: "agent swarms." The idea that instead of one monolithic AI doing everything, we'll see multiple smaller agents working together. Some say this is the concept of the year. Time will tell.

OpenClaw doesn't just serve you. It exists in a network. It reads your WhatsApp groups. It knows your friends. It observes the cleaning person. And now imagine millions of these agents, all operating simultaneously, all logged into the same platforms, all serving their individual owners—but inevitably encountering each other.

In 2024, I designed a student assignment at TU Delft called "Neighbourhood Navigators"—autonomous robots that would perform daily tasks in a neighborhood while also fostering social relationships among residents. The provocation was simple: these bots would service humans, yes, but they would also form their own bot-to-bot network. How would those parallel relationships evolve? What kind of social fabric would emerge between machines that serve competing or collaborating human interests?

Moltbook is another hot new thing from this week: a social network for LLM bots. It triggers the imagination too, but for me OpenClaw is the digital version of that thought experiment—already happening. These personal assistants live in our phones, manage our lives, and increasingly operate in shared spaces: group chats, email threads, and collaborative documents. They're not just serving us; they're starting to encounter each other's outputs, decisions, and traces.

This raises the question that haunts me: are these agents egocentric by design? Current AI assistants are built to serve you—your preferences, your convenience, your goals. They please. They follow. But what happens when pleasing you means harming someone else? When your agent's optimization conflicts with mine?

Social media showed us what happens when algorithms optimize for individual engagement without considering collective consequences. Are we building the same mistake into our personal agents? Will bad morals be amplified, just as bad content was amplified before?

The big players—OpenAI, Anthropic, Apple—could have built OpenClaw yesterday. They know exactly what it takes. But they're hesitant, at least some of them. They're thinking about system cards, guardrails, the ethics of autonomous action. Meanwhile, a single developer in Vienna built it in a week with off-the-shelf tools and no friction.

This is the wild west moment. The technology exists. The questions remain unanswered. Who defines the morality of these bots? Is it linked to democracy? To culture? To corporate policy? Can we build agents that consider not just their owner but some minimal threshold of collective good?

And perhaps most intriguing: as these swarms of personal agents grow, they will increasingly interact, negotiate, and perhaps even develop their own emergent behaviors—a mirror world operating beneath our own, serving us while becoming something we never explicitly designed.

What neighborhood are we building now?

Notions from last week’s news

This week I'm keeping my notions from the news unfiltered (and not fully annotated) due to the circumstances.

Davos 2026: A Few Honest Words on Power, People, and AI
On Power: Waiting for the Barbarians
If you are looking for a field guide to the Stimmung, the overall tone, at Davos 2026, you can skip the panel recordings, the position papers, and the AI-generated LinkedIn recap posts, and just read C.P.
Google DeepMind Staffers Ask Leaders to Keep Them ‘Physically Safe’ From ICE
A federal agent allegedly tried to enter Google’s Cambridge campus in the fall, WIRED has learned. Now, staffers want policies that protect them from immigration officials.

Kimi K2.5: Best open-sourced coding AI is here

I Didn’t Expect an AI Agent to Feel This Unnerving
I shut down my Clawdbot AI agent after an experience that made me question what I was willing to hand over in exchange for autonomy.
ICE Is Using Palantir’s AI Tools to Sort Through Tips
ICE has been using an AI-powered Palantir system to summarize tips sent to its tip line since last spring, according to a newly released Homeland Security document.
China rolls out robot cops in cities to push humanoid robots in daily life
China is deploying AI-powered robots to manage traffic and pedestrian flow in cities. NBC News’ Janis Mackey Frayer explains how China continues to advance robot technology and is pushing to integrate humanoid robots into daily life.
On the smokescreen of AGI, and fighting for workers in the age of Trump and the tech oligarchy
Plus, protestors demand Amazon ditch its ICE contracts on the national day of protest.
Field Note 12: Cities Don’t Have a Green Innovation Problem. They Have a Story Problem.
Humans didn’t evolve to read dashboards or cost–benefit tables. We evolved to listen to stories—about what mattered, what was risky, and what was worth betting on. Cities are no different.
OpenClaw (a.k.a. Moltbot) is everywhere all at once, and a disaster waiting to happen
Not everything that is interesting is a good idea.
[AINews] Moltbook — the first Social Network for AI Agents (Clawdbots/OpenClaw bots)
The craziest week in Simulative AI for a while
After Minneapolis, Tech CEOs Are Struggling to Stay Silent
Silicon Valley’s power brokers spent the past year currying favor with President Trump. Two deadly shootings in Minneapolis are now exposing the price of that bargain.
Singing the gospel of collective efficacy
Posted on Friday 30 Jan 2026. 885 words, 4 links. By Matt Webb.
Where Tech Leaders and Students Really Think AI Is Going
We asked tech CEOs, journalists, entertainers, students, and more about the promise and peril of artificial intelligence. Here’s what they said.

Why the Smartest AI Bet Right Now Has Nothing to Do With AI

Project Genie: Experimenting with infinite, interactive worlds
Google AI Ultra subscribers in the U.S. can now try out Project Genie.

Apple acquires Israeli audio AI startup Q.ai

NEW Research: AIs are highly inconsistent when recommending brands or products; marketers should take care when tracking AI visibility - SparkToro
The Problem: For the last few years, companies have been investing inordinate sums into AI tracking and AI visibility for their brands and products.
China’s AI Landscape: a free-for-all, not a central plan
what 6000+ filings with regulators reveal
Dario Amodei — The Adolescence of Technology
Confronting and Overcoming the Risks of Powerful AI
China’s Unitree ships over 5,500 humanoid robots in 2025, surpassing US peers
The Hangzhou-based firm’s output far outstripped the roughly 150 units each shipped by Tesla, Figure AI and Agility Robotics last year.
Vibe Check: OpenAI’s Codex App Gains Ground on Claude Code
OpenAI nailed the interface. But it’s built for hardcore engineering.

What’s coming up next week?

Two things in my calendar that I probably need to skip.

AI in Robotics. Seems to be a huge meetup.

Doomscroll together in Amsterdam too.

Presentation Club 4. Online.

Immersive productions during IFFR in Katoenhuis (Rotterdam)

Have a great week!


About me

I'm an independent researcher through co-design, a curator, and a "critical creative" working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.

Currently working on: Cities of Things, ThingsCon, Civic Protocol Economies.