Stop speaking first and the need for conversational frameworks
Weeknotes 388 - Thoughts triggered by being in a bubble in Athens, making sense of reality, community, and genuine collaboration in times of AI. And of course, lots of captures from last week’s news on physical AI and beyond.
Dear reader!
Last week was packed with a conference in Athens (see below), which challenged the calendar a bit. So this edition is a bit delayed, just like the planes…
Week 388: Stop speaking first and the need for conversational frameworks
So, last week I attended the World Beautiful Business Forum in Athens—a conference for business leaders who believe enterprise can improve the world, with the right mindset. Or in their own words: “to shape a humanist future in and through business, in partnership with AI, and for the benefit of all life on earth.” As a fellow event organizer and guest of one of the curators, Monique, I was able to attend for the first time. Different from my usual crowd, but that difference surfaced some ideas worth carrying forward. See below.
And in other news, the draft articles of RIOT look great, and we are planning a very nice launch event on 26 June at a ThingsCon Salon themed Making Symbioscene in Rotterdam. Happy to partner with Tiwanee van der Horst's project “design together with algae” and connect this to more regenerative deep-dive articles. More information will follow soon!
Triggered Thought
Back to the World Beautiful Business Forum. The conference mission—to shape a humanist future in partnership with AI, for the benefit of all life on earth—is ambitious framing. Three moments stayed with me.
James Bridle argued that to understand how other beings navigate the world, we must stop speaking. But the fuller provocation goes further: ask the creatures you meet, the rivers you know, what we can do for them, and then listen when they speak. This flips the script from observation to service. Notably, Bridle dismissed AI as any kind of solution. The transformation required is cultural, not technological.
The emergent intelligence session featured talks by Léa Steinacker and Gianni Giacomelli. They challenged the assumption that intelligence lives in brains. Intelligence, they argued, is emergent: the capacity to sense, remember, decide, act, learn. Mycelial networks allocate resources across forests. Ant colonies solve logistics problems no individual ant comprehends. Modern AI exhibits similar emergence—attention mechanisms crystallizing into specialized clusters that interact like neural regions.
If intelligence is distributed and everywhere, then Bridle's call to listen becomes practical necessity. But it also suggests that AI might participate in this distributed conversation—not as solution, but as one voice among many.
In both a session and the closing panel, Indy Johar made a deep diagnosis: the crisis isn't polarization or democracy or climate. The crisis is our theory of self—a "thin, temporally present, divisible" conception that science has already shown to be delusional. We are stardust, entangled with microbiomes, genetically connected to other life. The cultural gap is that we haven't absorbed this.
From this follows his provocation about truth: "Truth is a shit idea." It creates object forms of certainty that generate conflict. What we need instead is conversation technology—not the opinion distribution technology of social media, but frameworks that hold uncertainty collectively, sustaining dialogue across difference. LLMs, used smartly, might help construct a "conversational civilization."
The conference articulated a more-than-human, AI-partnered vision. But the sessions I followed stayed human-centric—connection exercises, bonding rituals, workshops that ran out of time before sharing actual research. Beautiful conversations, but what systems survive after everyone disperses?
Johar's framing haunts me: we are entering decades of crisis. We cannot workshop our way to new societies. We need conversational architectures that hold complexity, sustain disagreement, and make space for more-than-human participants, including, perhaps, the AIs we're learning to build.
That's the design problem I want to carry forward: conversational frameworks that leverage LLMs in an enriching way, environments where we live in concert with the non-human fellow inhabitants of our lifeworld, and conversations that extend to the intelligences already at work around us.
Notions from last week’s news
The round of updates:
- Gemini 3.1 can handle more complex, multi-step requests
- Project Mariner is shut down (Google tasks manager)
- Claude is dreaming now, and there is a deal with SpaceX
Human-AI relations
Changing roles for designers in times of AI

How human creativity is changing in times of AI: unleashing more than ever

How to protect human autonomy in times of AI: also redesign the environments that shape us.

From digital literacy to literacy-slob. In times of AI.
In times of AI, am I becoming an AI?

A new type of self…

Understanding humans in times of AI.

AI research in times of AI.

Kid toys in times of AI.

Age caps in times of AI.

Who is the top dog now?

Physical AI
Are these monks just seeking attention, or is there really a belief in a role for other species?
Is the form factor of the robot arm entering our homes and offices?

Living the robot life (or live)

Not only matter is real; consciousness might shape reality by collapsing possibilities into one outcome.

Mind the robot.

A mini robot platform now with agentic capabilities.

Tech in civic societies
Is the company or the AI being sued for impersonation?

Where are the times of lite browsers? Or is this an interpretation of edge computing, according to Google?
Shifting design of computer chips and memory capacity for inference.
How healthy is AI business? And very entangled value systems.

Agents integrated in organizations.

And how trustworthy are these AI businessmen?

What should we be afraid of with AI and jobs?

Wealthy applications? And earthly ones.


Keeping an eye on the quantum developments.

Weekly paper to check
The labor market effect of generative artificial intelligence on artists
Makridis, C. A. (2026). The labor market effect of generative artificial intelligence on artists. Journal of Cultural Economics, 1-24.
What’s up for the coming week?
The key tasks for the coming week are proposals, shaping the consortium, and planning events.
There are no events on my calendar at the moment; it might be a low-event week with Ascension Day. Next week, there is another AI House event.
Have a great week!
About me
I'm an independent researcher through co-design, a curator, and a “critical creative”, working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.
Currently working on: Cities of Things, ThingsCon, Civic Protocol Economies.
