The mundane education of AI. By us.
Duplex learning: we're learning to use AI, and AI is learning our mundane rituals—that changes everything about the relationship. These thoughts and more from last week’s news on human-AI collabs.
Dear reader!
Happy Davos (…). Parallel worlds are normally a silent power, but now they seem to be playing out in the open. Tomorrow, everyone will try to make sense of the Trump speech, and I hope next week will find a new balance, or rather, a push back towards the old balance.
In the meantime, we just continue to wonder how the new intelligences are shaping our reality and beyond… And how we are creating our own balance.
Week 373: The mundane education of AI
Last week I had some interesting interviews for the Cities of Things - 7 years later project. Different perspectives and contexts. I have also been thinking about how to organise the results later. I also had some nice catch-ups with Bram and Daniel discussing all kinds of human-AI collabs (I like this one), and I am looking forward to the ESC expo with the Wijkbot in the first half of February. And I came up with some new ideas for personal tools; I need to start a project in Cowork asap.
Oh, and I liked the extensive exposition on the work of Iris van Herpen, especially during a museum ‘rave’.
This week’s triggered thought
Last week Anthropic released Cowork, a conversational interface for Claude Code. The making-of, initiated by how many people were already using Claude Code for mundane tasks, plus an internal challenge, makes for good storytelling about how fast it was developed (the agent builds the next iteration of the agent). But the adoption and potential are much more interesting: who will use it? And there is a potentially intriguing extra impact.
Until now, AI coding tools primarily served people who already think like programmers. Even when they're not writing code themselves, developers bring a computational mindset—they know how to structure problems for machines. Cowork invites everyone else in. People who think in tasks, intuitions, and creative leaps rather than functions and loops.
This is not just a nice and accessible tool; it is also a kind of duplex learning. We're not just getting more accessible tools. We're giving AI systems access to how ordinary people want to build things, solve problems, and structure their work. Every conversation with a non-coder teaches the models something about mundane human rituals: the messy, personal, often illogical ways we actually get things done.
That education changes the relationship. When AI understands not just what we want to build but how we naturally think about building, the tools stop being translators between human intent and machine execution. They become participants in our workflows who grasp the texture of daily practice.
Consider what I'm planning for my monthly Cities of Things newsletter, which I need to resume. Currently, I drive the process: feeding the month's posts to Claude, extracting a thematic gist, drawing random cards from the Near Future Laboratory Design Fiction Work Kit deck to spark an idea, then having Claude write a day-in-the-life story around my concept.
I am thinking about vibe-coding this into something different: a tool that initiates the work itself. At month's end, it gathers posts, extracts the gist, pulls the cards, and comes to me: "Here's January. Here are your prompts. What thing do you see?" I provide the creative spark. It completes the rest.
In this arrangement, I'm no longer driving. I become a resource the tool consults for one specific contribution. The publication runs itself and sources me when it needs human imagination.
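That inverted arrangement can be sketched in a few lines of Python. This is a hypothetical outline, not the actual tool: all function names, the stub data, and the briefing structure are invented for illustration, and the real version would call a feed reader and an LLM where the stand-ins sit.

```python
# Hypothetical sketch of the monthly-newsletter tool: it gathers the
# month's posts, extracts a gist, draws prompt cards, and only then
# consults the human for the one creative spark.
import random

def gather_posts(month):
    # Stand-in for fetching the month's posts; a real version would read a feed.
    return [f"{month} post {i}" for i in range(1, 4)]

def extract_gist(posts):
    # Stand-in for an LLM summarisation step over the collected posts.
    return f"Gist of {len(posts)} posts"

def draw_cards(deck, n=2, seed=None):
    # Draw n random prompt cards, like the Design Fiction Work Kit deck.
    rng = random.Random(seed)
    return rng.sample(deck, n)

def prepare_briefing(month, deck):
    # The tool initiates: it assembles everything, then asks the human.
    posts = gather_posts(month)
    return {
        "month": month,
        "gist": extract_gist(posts),
        "cards": draw_cards(deck, seed=42),
        "question": "What thing do you see?",
    }

deck = ["archetype: sensor", "attribute: polite",
        "object: kiosk", "verb: negotiates"]
briefing = prepare_briefing("January", deck)
print(briefing["question"])
```

The design choice is in `prepare_briefing`: the human appears only as the answer to one question, exactly the "resource the tool consults" role described above.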
"We shape our tools, and then our tools shape us"—the saying risks cliché, but something real lives in it. If I build this, I'm defining where my contribution matters. And the tool will learn to see me that way too. Over time, it might extend its scope: managing the publication calendar, suggesting themes, proposing when to break format. Less my assistant, more a communications officer for Cities of Things who checks in with the founder for creative direction.
I don't know yet if that's liberation or something stranger. But as AI learns our mundane rituals, these questions stop being theoretical. They become design decisions to make now.
Every week, I like to connect this thought to the work of Cities of Things. The thought above mentions the monthly newsletter. That is not only a nice way to summarize the developments of a month, but also a way for me to reflect on them. You can find them, next to more general project-related updates, here: citiesofthings.substack.com
Notions from last week’s news
On with the captures of last week’s news.
Human-AI partnerships
Anthropic had the biggest news last week with Cowork, which is seen as a pivotal moment for Claude and AI.

News from the frontier AI labs. Gemini is adding access to your digital life if you are aiming for better answers, and have no problem with AI scraping your email; that is a setting you can uncheck. It might contribute to winning the race for the best-performing AI, especially with the embedding in iOS.


OpenAI is announcing ads and a cheaper subscription.


10 learnings working with AI agents for coding.

Is conscious AI a myth forever? It is even dangerous, according to Anil Seth, to attribute consciousness to machines and distract from the real issues.

And is it really intelligent?


An AI partner as a safe haven.

Matt has a good take again. It also relates to the triggered thought above: with mundane agents or agents performing mundane tasks, the communication layer is key, and the initiative is with the agent.

Robotic performances
Predictions for physical AI in 2026 and beyond. From a robot perspective.


Immersive connectedness
That category of VR glasses is a thing now, still on Kickstarter.
History repeats itself (think Second Life)

Tiny computers everywhere

Tech societies
Adoption of generative AI worldwide.

Can we have AI artists without doing harm?

Slop is here to stay, as it was already there before AI accelerated it.
Grok is made worse by the place where it lives.

Will we have a Polymarket disaster?

Chew a bit more on this concept of New Nature.

If you got your first computer in the 80s, the C64 is an icon.

And the promise of a one-person car…

Weekly paper to check
I am wondering what this would mean for perception by non-humans.
This paper develops and defends a theory of perceptual responsibility, according to which individuals are sometimes responsible for how they perceive. I argue that we are responsible for perceptual experiences because they can reflect our evaluative commitments, such as professional standards or moral values.
Prettyman, A. Responsibility for Perception. Erkenn (2025). https://doi.org/10.1007/s10670-025-01029-0
What’s up for the coming week?
Continuing to work on the research and more. I will check in, for the first time in a long time, at a ProductTank meetup; Picnic is doing meetups now too, on UX. Or robotics next week, a technical one on MicroPython.
Have a great week!
About me
I'm an independent researcher through co-design, curator, and “critical creative”, working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.
Currently working on: Cities of Things, ThingsCon, Civic Protocol Economies.