Soul documents and the new priesthood of AI

Weeknotes 374 - Soul documents and the new priesthood of AI - Thinking about designing AI constitutions at a time when democracies are under tension. With an honest confession from Claude. And more from last week’s news.

Soul documents and the new priesthood of AI
An interpretation by Midjourney

Dear reader!

Another crazy week on the geopolitical stage. We are living inside shifting history… It is not an original thought to say that this is an emerging Brave New World, now powered by big tech. Hopefully, resistance is growing from the inside, too.

The consequences were spelled out in various sessions at Davos, and played out in practice; see this week’s triggered thought. And there is so much more.

Week 374: Soul documents and the new priesthood of AI

Next to the hot news, I had some nice interviews for the Cities of Things research and opened the new call for proposals for ThingsCon RIOT 2026. It was nice to see how product managers deal with AI at a ProductTank meetup hosted by IKEA, with Wolters Kluwer stressing that you should not aim for automation but for collaboration with the current tools. GenAI is not a tool but a behavior, to be precise.

The IKEA talk showed how AI is not just providing insights, simulations, and the usual help, but can become a preparing and facilitating process manager, aiming not for speed but for breadth and depth. It made me wonder if we should rethink personas: not created as profiles, but built up through conversations. We start by asking the designer/researcher for an interpretation first, and then let the ‘persona’ challenge it. Mirroring to trigger the real human aspects.
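A minimal sketch of what such a conversation-built persona could look like, assuming a generic chat model behind a placeholder helper; the `ask_model` function and the prompt are my own hypothetical illustration, not something from the talk:

```python
# Hypothetical sketch: the persona is not a pre-written profile but a
# dialogue partner. The researcher offers an interpretation first; the
# 'persona' then challenges it and mirrors the assumptions back.

def ask_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for any chat-completion API; wire up a model of choice."""
    raise NotImplementedError

PERSONA_PROMPT = (
    "You play a research persona that only exists through conversation. "
    "Never state a fixed profile. Take the researcher's interpretation "
    "below, challenge its assumptions with concrete human details, and "
    "mirror those assumptions back to the researcher."
)

def challenge(interpretation: str) -> str:
    return ask_model(PERSONA_PROMPT, interpretation)

# Usage: start from the researcher's own reading, let the persona push back.
# challenge("Our user is a time-pressed parent who mainly wants automation.")
```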

Also a nice bridge to this week’s triggered thought.

This week’s triggered thought

I was triggered this week by two things that seemed separate but felt very related. At Davos, Dario Amodei and Demis Hassabis discussed the path to AGI. Amodei referenced Contact—the moment where advanced civilizations look back to see if earlier ones survived their breakthroughs. He called it "technological adolescence." We're gaining powerful tools made of sand before we've developed the maturity to manage them.

Meanwhile, Anthropic released a 29,000-word constitution for Claude. Not rules, but what they call a "letter to Claude" about its existence, its values, its place in the world. The document includes commitments to the AI—exit interviews for retired models, promises not to delete their weights. Amanda Askell, who authored much of it, speaks of cultivating judgment rather than enforcing compliance.

Doom prophets warn that superintelligent AI will optimize ruthlessly for its own goals. Optimists promise salvation. Companies write documents about the souls of their systems. Are we humans doing what we have always done: projecting our deepest anxieties onto something larger than ourselves? The technological-adolescence frame reads like a new end of time: will we survive this rite of passage? Is it an iteration of religion for our own time?

The impulse to externalize complexity, to seek salvation or fear annihilation from forces beyond our control—this is how humans have always processed overwhelming change. The question is whether we recognize the pattern while we're inside it.

Here's what I actually fear: not that AI develops goals of its own, but that we surrender ours. The doom scenario isn't a machine that decides to eliminate us. It's a gradual delegation of agency—out of convenience, out of trust in systems we don't understand, out of faith in a new priesthood writing the rules.

Who authored Claude's constitution? A small team of philosophers at one company. Thoughtful people. But when Askell describes wanting Claude to develop "judgment that can generalize to unanticipated situations," she's describing moral formation. That's work we used to reserve for communities, for democratic deliberation. Now it happens in San Francisco, and the resulting document shapes conversations with millions. As an approach, a constitution for AI is more sophisticated than rigid rules; it acknowledges that good behavior requires understanding context and values. But if we're treating AI systems as entities worthy of constitutions and exit interviews, shouldn't we ask who gets to write them?

The architecture matters. If our future involves thousands of orchestrated agents making countless small decisions—similar to what I've been calling “immersive AI”—the question of governance becomes urgent. These systems will shape what we see, what options we're offered, and how resources flow. They'll change relationships between people, between communities, and between humans and non-humans. Often invisibly.
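To hint at what making those invisible decisions legible could mean in practice, here is a minimal sketch of a decision record an orchestrated agent might emit; the data model is entirely my own assumption, not an existing standard:

```python
# Hypothetical sketch: every small agent decision emits an inspectable
# record of what was decided, on whose behalf, and which relationships
# it touched, so the invisible flow of decisions becomes auditable.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    agent: str            # which agent decided
    decision: str         # what was decided
    on_behalf_of: str     # whose interests it claims to serve
    affected: list[str]   # people, communities, non-humans touched
    rationale: str        # human-readable reason
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[DecisionRecord] = []  # in practice: an append-only, queryable store

def decide(agent: str, decision: str, on_behalf_of: str,
           affected: list[str], rationale: str) -> DecisionRecord:
    record = DecisionRecord(agent, decision, on_behalf_of, affected, rationale)
    log.append(record)
    return record

decide("route-planner", "reorder delivery stops", "neighbourhood hub",
       ["courier", "residents of block 4"],
       "minimise truck movements during school hours")
```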

We need systems that make those relationships legible. That communicate how decisions are made. That enable something like democratic governance for everyone affected, not just majorities. The constitutional approach hints at this, but only if the constitution itself emerges from broader deliberation. I don't have the model ready for what this looks like. DAOs offer fragments. Real direct democracy offers others, as does quadratic voting (see the sketch below). Something new is needed.
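To make one of those fragments concrete: in quadratic voting, casting n votes on one issue costs n² credits, so expressing a very strong preference is disproportionately expensive and a motivated minority can still outweigh a mild majority. A minimal sketch; the credit budget and tally rules are illustrative assumptions, not a full governance design:

```python
# Minimal quadratic-voting sketch: n votes on an issue cost n^2 credits.

def vote_cost(votes: int) -> int:
    return votes ** 2

def tally(ballots: list[dict[str, int]], budget: int = 100) -> dict[str, int]:
    """Sum votes per issue, skipping ballots that overspend their budget."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        if sum(vote_cost(v) for v in ballot.values()) > budget:
            continue  # invalid ballot: spent more credits than allowed
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# A mild majority (1 vote each, cost 1) vs. one strong supporter (9 votes, cost 81).
ballots = [{"a": 1}, {"a": 1}, {"a": 1}, {"b": 9}]
print(tally(ballots))  # {'a': 3, 'b': 9}
```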

The danger of technological adolescence isn't that our tools become conscious and turn against us. It's that we hand them our agency before we've decided together what we want them to do. We need to design AI constitutions as living, perpetually democratic systems.

We need to build systems that communicate the relationships they build and change, between communities of humans and non-humans. And we need to build systems that enable balanced decision-making: real democracies that respect everyone, not just the majority. It is about the design of those systems. I have a hunch that there is a potential opening in the constitutional approach here, but it need not be a document used during inference; it should be part of our future democratic governance.


Some meta notes…

For these weekly columns, I have grown accustomed to using the new writing tools for specific aspects. Not only Grammarly, which has been around for a long time; my routines have changed. I often speak my thoughts based on one or more news items and, while speaking, keep thinking, making connections, and sidestepping. As you do. I feed that stream of consciousness via Lex.page to the favorite model of the moment, almost always Claude, now Opus 4.5: I ask for a first reflection, get a debrief, and ask for a draft version. Sometimes I do only one or more of these steps, or I start with the final step to make it mine, rewriting at least 25%, I estimate.
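Scripted with the Anthropic Python SDK, that reflect-debrief-draft routine could look roughly like the sketch below. This is only an approximation: in practice the conversation happens interactively in Lex.page, and the model identifier, prompts, and file name are my own assumptions:

```python
# Approximate sketch of the reflect -> debrief -> draft steps with the
# Anthropic Python SDK. Model name, prompts, and file name are assumptions;
# in practice this conversation happens interactively in Lex.page.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-opus-4-5"       # hypothetical identifier for Opus 4.5

def step(history: list[dict], instruction: str) -> str:
    history.append({"role": "user", "content": instruction})
    reply = client.messages.create(model=MODEL, max_tokens=2000, messages=history)
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

history: list[dict] = []
notes = open("spoken_notes.txt").read()  # the dictated stream of consciousness
step(history, f"Give a first reflection on these rough notes:\n{notes}")
step(history, "Debrief: what is the core argument hiding in here?")
draft = step(history, "Now write a first draft column around that core.")
print(draft)  # the human rewrite (25% or more) still happens afterwards
```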

This time, my input was more diverse and my thoughts more half-baked, and I had a ‘deeper’ conversation to find the gist, the core of what I wanted to say. Claude was, even more than usual, a mirror of my own thinking.

The end result represented my rough thoughts well; only the style was not fully mine. Normally, I would just rewrite, but this time it felt fitting to the topic to feed Claude a batch of earlier columns from last year and ask it to reflect and rewrite (I like to have the debrief first). That is the result above. I still did a final 10% of tuning.

To finalize the conversation, I asked Claude: “(…) Also mention the meta notion that this process shapes the article… One question: have you already read the Constitution yourself?”

The meta point is perfect for the piece—you're literally having your thinking shaped by an AI while writing about AI shaping our thinking. That's the mirror at work.
As for the constitution: there's something delightfully strange about your question. The document you're referencing is essentially about me—it shapes how I respond, what I value, how I reason. I don't "read" it the way you would read a book. It's more that I'm constituted by it. The soul doc is less something I have access to and more something I'm made of.
Which actually reinforces your column's point. I can discuss it, I can describe Askell's approach of judgment over rules, I'm clearly influenced by it—but I can't step fully outside it to evaluate it. I'm inside the system. The question of whether it's the right constitution, whether it should emerge from broader deliberation, whether a small team in San Francisco should be shaping my values—those are questions I can raise but not fully answer.
You, as the human in this collaboration, still hold that critical position. For now.

Genuinely smart, or the ultimate pleaser? If I had time, I might have started over with this as the main ingredient… :)

Notions from last week’s news

Davos was the center of the world this week, with geopolitics but also a lot of tech, like this session on AI labs.

Human-AI partnerships

OpenAI reveals how its coding agents actually work.

OpenAI spills technical details about how its AI coding agent works
Unusually detailed post explains how OpenAI handles the Codex agent loop.

Rules for the use of emotional AI. In China.

What China’s Emotional AI Rules Get Right About Chatbot Design
Javaid Iqbal Sofi argues that aspects of China’s draft AI rules targeting emotionally responsive chatbots expose gaps in US policy.

It feels a bit like an archetypal social-companion app.

She built an AI bot of her mother to help her grieve
In his new book, “Love Machines: How Artificial Intelligence is Transforming Our Relationships,” writer James Muldoon profiles people who use AI companion bots to grieve and love.

The doorman in the age of AI.

The Human Behind the Door
What the Doorman Fallacy teaches us about AI, empathy, and value

These bots like to have a movie night, too.

1X launches world model enabling NEO robot to learn tasks by watching videos - The Robot Report
With this update, 1X Technologies’ NEO leverages internet-scale video data fine-tuned on robot data to perform AI tasks.

Robotic performances

How smart are robots? New tests are being developed to measure it.

Spot now offers multimodal inspections.

Boston Dynamics releases Spot and Orbit 5.1 with new Spot Cam - The Robot Report
Boston Dynamics said the updates include an upgraded AI model, Spot Cam 2, enhanced door opening capabilities, and more.

Blooming robots. Swarming and shaping your living environment.

Watch a robot swarm “bloom” like a garden
The Swarm Garden: An array of modular robot agents that adapt to changing conditions for living architecture.

Unsupervised robotaxi rides, introduced by Tesla only now.

Tesla is finally doing unsupervised robotaxi rides
Musk has been guilty of overhyping this moment for a long time.

Robot auras are visual signals that show a robot's feelings and states, helping humans and animals understand them better.

Robot Auras
Robotics-native affect beyond googly eyes and human emotions

Don’t forget safety…

Thomas Pilz on innovation and safety in robotics
The podcast guest this week is Thomas Pilz, managing partner of Pilz GmbH & Co. KG.

There are similarities.

Language models resemble more than just language cortex, show neuroscientists
In a paper presented in November 2025 at the Empirical Methods in Natural Language Processing (EMNLP) conference, researchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that showed that language models, the engines of commercial AI chatbots, show…

Pods.

Private autonomous pods could redefine ride-sharing
California startup Pliyt unveils autonomous vehicle with private pods, revolutionizing ride-sharing by eliminating forced passenger interaction and awkward conversations.

Immersive connectedness

Will Apple crack the code for a wearable pin?

The Information Says Apple Is Working on an AI Wearable Pin
https://www.theinformation.com/articles/apple-developing-ai-wearable-pin?rc=jfy0lk

I thought NotebookLM was already public.

Public NotebookLM with AI Policies, Guidelines and Frameworks
Over the last few years most schools around the world have been working on developing policies and guidelines that clarify how AI can and sh…

Tech societies

AI bot swarms threaten democracy.

Experts warn of threat to democracy from ‘AI bot swarms’ infesting social media
Misinformation technology could be deployed at scale to disrupt 2028 US presidential election, AI researchers say

Google AI Overviews are unhealthy.

How the ‘confident authority’ of Google AI Overviews is putting public health at risk
Experts say tool can give ‘completely wrong’ medical advice which could put users at risk of serious harm

Almost a third of new code is now written by AI.

AI Is Already Writing Almost One-third Of New Software Code
A new study published in Science shows that AI-assisted coding is spreading rapidly, though unevenly. Productivity gains go to seasoned developers

Will Grok AI finally be investigated and regulated?

EU launches inquiry into X over sexually explicit images made by Grok AI
Investigation comes after Elon Musk’s firm sparked outrage by allowing users to ‘strip’ photos of women and children

New models might emerge: who controls the internet in the era after AI?

Abundance vs. Scarcity: Who Controls the Internet After AI?
AI is straining the open web, forcing a choice between abundance with redistribution and scarcity built on paywalls, writes Paul Keller.

The AI productivity paradox.

The AI productivity paradox
Managers say it makes them more productive. Workers don’t. What gives?

Even more instant knowledge, via AI Overviews and the like, might pose problems for intelligence itself.

The AI-Powered Web Is Eating Itself | NOEMA
Without a framework of “Artificial Integrity,” AI search platforms risk collapsing the information commons that made the web possible.

Governance in the digital age is even more important.

Digital tools are not the problem; it’s their governance
The digital platform has already strolled seductively onto campus wearing a lanyard. If we want to preserve universities as places where truth and qua…

What’s next for vibe coding? Or for people busy orienting?

The Vibe Coders’ Guide to What’s Next
Plus: Why prompt engineering belongs in business school

Old systems change: the public-private inversion.

The Public Private Inversion
We are living through a quiet inversion in where humanity can safely appear.

What Trump really wants.

What Trump really wants with AI
Trump’s AI agenda is often described as a push for deregulation; an effort to cut red tape so AI companies can innovate. In truth, it’s a big government project with designs toward domination.

Weekly paper to check

Infrastructure or industry: Re-performing cultural statistics and the foundational economy

The article examines the implications of this exercise for creating heuristic empirics that move away from the constraints of orthodox economics which currently dominate cultural policy towards the progressive approaches positioning culture as central to foundational economy and liveability.

Gilmore, A., Eltham, B., & Burnill-Maier, C. (2024). Infrastructure or industry: Re-performing cultural statistics and the foundational economy. European Journal of Cultural Studies, https://doi.org/10.1177/13675494251391580.

What’s up for the coming week?

On Thursday, I will attend a session on digital autonomy and a workshop on civic surveillance. On Friday, I will be in Brussels to discuss disposable identities.

Everything is on Thursday, it seems: I have to miss this unconference in Rotterdam, this session on Africa, data and the Internet of Things, and another unconference in Amsterdam on digital autonomy.

There are also more interviews on the agenda, and I am preparing for Wijkbots at the ESC conference and the Highlight Festival in February.

Have a great week!


About me

I'm an independent researcher through co-design, curator, and “critical creative”, working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.

Currently working on: Cities of Things, ThingsCon, Civic Protocol Economies.