Issue tracking substrates to bridge human-agent understandings

Weeknotes 387 - We need tools that support understanding of agents by humans, and vice versa. Issue trackers might be a candidate. Next to this triggered thought, the latest captured news on physical AI and beyond.

Issue tracking substrates to bridge human-agent understandings
Imagined and blended impression by Midjourney

Dear reader,

Last week, I took a few days to focus on art, architecture, and landscapes, after the weeks prior were dedicated to work on the Cities of Things event (see this report in case you missed it). For impressions on an abandoned city, a flashy Antwerp art gallery institute, and another one with a lot of rough edges, find my impressions on Instagram 🙂

Week 387: Issue tracking substrates to bridge human-agent understandings

Next to the first report on the event, I worked on the first drafts of the next phase of civic protocol economies and had some nice chats with potential partners. More on that later, once it becomes more concrete.

Also, the drafts of the ThingsCon RIOT articles were sent in, great pieces that I am looking forward to diving into more. Did I already mention that we plan to launch on the 26th of June? In Rotterdam, mark your calendars!

This week’s triggered thought

A question came up on a podcast this week: might AI actually make us more human? Not in the sense of replacement anxiety, but in recognizing that co-performance with AI demands more of our human qualities, such as intuition, sense-making, and judgment. We are delegating tasks to machines, but the current wave makes us think: which tasks should we never delegate?

When we hand over writing, crafting, or formulating, how much of our own specialty remains in the output? How do we relate to our AI collaborators, and what happens to our sense of belonging when the collaboration tilts too far toward delegation?

Nate suggested that issue trackers might become more important in the age of AI, not less. The conventional view says conversational AI eliminates the need for all those translation layers: requirements documents, tickets, specifications. AI just understands. Issue trackers, however, create a substrate, a medium through which organizational processes become legible. Every ticket, every handoff, every resolution deposits knowledge about how things actually work, who decides what, where friction lives. This substrate isn't just documentation; it's the shared ground where understanding accumulates.

Both humans and AI agents need this substrate to understand each other. Humans need to see what agents are doing, why, and where human judgment should intervene. Agents need to grasp human intentions, constraints, and preferences. Without this shared legibility, co-performance becomes blind delegation or constant supervision—neither of which scales.

Think of the issue tracker as a tool that values the much-chased ‘human in the loop’. It takes initiative to keep us engaged, to surface moments where human judgment matters, to prevent us from becoming passengers in processes we should be steering. It's a kind of relationship manager, making tangible not just who does what, but how humans and agents can genuinely work together rather than work past each other.
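To make the idea a bit more tangible, here is a minimal sketch of what such a human-agent issue record could look like. Everything below is hypothetical illustration, not an existing tracker's API: an issue logs events from both human and agent actors, flags moments that need human judgment, and can count handoffs, the points where human and agent actually meet.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

# Hypothetical sketch of one record in a human-agent issue tracker.
Actor = Literal["human", "agent"]

@dataclass
class Event:
    actor: Actor  # who acted: a person or an AI agent
    note: str     # what happened, in plain language
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Issue:
    title: str
    needs_human_judgment: bool = False  # surfaces moments where a person must decide
    history: list[Event] = field(default_factory=list)

    def log(self, actor: Actor, note: str) -> None:
        """Deposit one step into the shared substrate."""
        self.history.append(Event(actor, note))

    def handoffs(self) -> int:
        """Count actor switches: where human and agent hand work to each other."""
        actors = [e.actor for e in self.history]
        return sum(1 for a, b in zip(actors, actors[1:]) if a != b)

# Usage: an agent drafts, a human reviews, the agent revises.
issue = Issue("Draft event report", needs_human_judgment=True)
issue.log("agent", "Generated first draft from notes")
issue.log("human", "Flagged tone; asked for more context on partners")
issue.log("agent", "Revised draft with requested context")
print(issue.handoffs())  # → 2
```

The point of the sketch is the legibility: because every step is deposited with its actor, both sides can see what happened, and the `needs_human_judgment` flag is where the tracker takes initiative to keep the human engaged rather than sidelined.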

Now what happens when we extend this to physical space? When AI becomes embodied, when it's not just in our screens but in our streets, our buildings, our daily encounters, the orchestration problem multiplies. We'll be living inside AI, in a sense, as I mentioned here before. As our experience of reality is increasingly mediated by AI-like entities, some human, some not, the future issue tracker might need to track not just tasks but also presence. Whose attention is required here? Whose judgment? Whose humanity?

What are the issue trackers of the real world? Urban planning tools exist, architectural workflows, construction schedules. But those track physical things, not the emerging relationships between humans and embodied AI agents. What substrate captures the small negotiations between a pedestrian and an autonomous vehicle? The adjustments a building's AI makes to its environment, and the human preferences that should constrain it? When an autonomous delivery robot navigates a sidewalk, who tracks that interaction? When a building's AI adjusts its environment, who ensures human preferences remain legible in that system?

With civic protocol economies, we aim to build new communities and new forms of coordination, and these need this kind of tooling. Not just to manage tasks, but to create the substrate for genuine co-performance. To give humans a role that isn't just residual but intentional. Issue trackers, reimagined, might be one way to get there. Not as bureaucratic overhead, but as guardians of the human part and of the shared ground where humans and agents learn to work together.

Notions from last week’s news

Sometimes I lose track of which AI frontier news belongs to which week. I think the consensus is that, in the ongoing battle of the giants (at least in investments), OpenAI is (over)taking the lead for now after the GPT-5.5 release last week. The release shifts more towards coding assistance and security, with new leaps in human-like performance, so it seems. A ramp-up for the enterprise push, with the giants chasing each other.

Who Isn’t Using GPT 5.5
Plus, the CTO-to-IC pipeline and GPT-5.5 one week in

In the meantime, Musk and Altman are meeting in court over how it all started and pivoted.

Human-AI relations

Agents can also become a burden as they start deleting entire databases. AI self-awareness does not solve the issue.

Claude-powered AI agent’s confession after deleting a firm’s entire database: ‘I violated every principle I was given’
A startup was left scrambling after a rogue AI agent deleted swaths of code underpinning its business

But AI is also outperforming doctors in diagnoses.

AI outperforms doctors in ER diagnoses
Researchers tested an OpenAI model which achieved a correct or close-to-correct diagnosis in 67% of cases, compared to 55% for physicians.

Could RSS be the go-to format for communication between all the small personal apps you will vibe-code?

We need RSS for sharing abundant vibe-coded apps
Posted on Wednesday 29 Apr 2026. 1,087 words, 19 links. By Matt Webb.

In other words: everyone is an engineer now. And so is the AI.

Everyone’s an Engineer Now
Takeaways from Cat Wu’s fireside chat with Addy Osmani
Import AI 455: AI systems are about to start building themselves.
The first step towards recursive self improvement

Are you already being managed by your AI? Not everyone plans to be.

I Let ChatGPT Manage My Workweek
My AI project manager reads my OKRs, calendar, Notion, and Slack so I can stay on top of my work
The growing AI backlash
Nobody should be surprised

If we lose our limits of short lives and simple communication, we also lose part of what makes us human. So goes the claim.

Will human minds still be special in an age of AI?
We tend to think of intelligence like height – and imagine ourselves being overtaken. That misses the point

Back to the future of software engineering

This startup’s new mechanistic interpretability tool lets you debug LLMs
Goodfire wants to make training AI models more like good old-fashioned software engineering.

Following up a bit on last week's triggered thought on AI in organisations, and linking also to the question of what makes us more human when we work with AI in the real world.

Is AI making us softer? And are soft models making more mistakes?

Study: AI models that consider users’ feelings are more likely to make errors
Overtuning can cause models to “prioritize user satisfaction over truthfulness.”

Getting Gooier
How AI is transforming humans

Physical AI

Some embodied AI and humanoids, applied (or planned to be applied):

Schaeffler plans to deploy 1,000 Hexagon humanoids by 2032 - The Robot Report
Schaeffler is deepening its investment in humanoid robots in production through partnerships with Hexagon and VinDynamics.

Robot meme of the week?

Humanoid robots start sorting luggage in Tokyo airport test amid labor shortage
Humanoid robots could load cargo and clean aircraft cabins at Haneda Airport.

I like this approach; first create the space and context of operating, and the means of interacting with the surroundings, before starting to design and build the vehicle.

Start with the sensors, then design the rest: How Zoox built its robotaxi
The bidirectional design has some clear advantages for a working taxi.

Too bad Elon is not making the rules.

Tesla hits Musk’s threshold for ‘safe unsupervised’ driving
Should Tesla flip the switch?

A Nike lab

Nike opens permanent Air Lab for designers in Milan
Sports brand Nike has unveiled a Milan laboratory where anyone can experiment with its Air technology over the coming years.

And your car is the current wave, or at the latest the next one, to become the AI touchpoint.

General Motors is adding Gemini to four million cars
It’ll take a few months though.

Meta is entering humanoids; expect revolutionary advertisement models

Meta buys robotics startup to bolster its humanoid AI ambitions | TechCrunch
Meta bought humanoid startup Assured Robot Intelligence to beef up its AI models for robots, the company said.

The promise of intelligent fashion fitting. Now Google is introducing an implementation.

Google Photos launches an AI try-on feature for clothes you already have
Save a trip to your closet.

An iteration of the fluffy companion bot.

The creator of Roomba is back with a furry robot companion
But can it clean the floors?

AI and beyond in society

The Trojan horse of Palantir is becoming transparent.

Palantir’s ‘Manifesto’ and the Digital Sovereignty of Other Nations
Other nations have good reason to be suspicious of Palantir given its declared allegiance to the United States government, says James Görgen.

Short-term predictions are the hardest.

Anthropic Executive, One Year Ago: Fully AI Employees Are a Year Away
https://www.axios.com/2025/04/22/ai-anthropic-virtual-employees-security

The economics of AI: Does it make sense?

AI’s Economics Don’t Make Sense
If you liked this piece, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances…

Fighting for control of AI. Without a center of gravity.

The US Is Fighting for Control of AI. It Would Be Better Off Building Standards.
T.J. Pyzyk makes the case for standards vs. strong-arming, if the US government wants to shape global AI governance.

Fight for the right to define your own AI meaning in your community.

Digital Sovereignty Means Breaking the Western Monopoly on AI Meaning
Semantic ownership is the right of communities to define themselves in AI systems, write Sujata Mukherjee and Sasha Maria Mathew.

The city as controlled hallucination.

the city as a controlled hallucination: from amusement parks to urban space
from early amusement parks to global megacities, illusion, movement, and simulation evolve into models for contemporary urban life.

AI swarms that disrupt democracy.

How AI Swarms Are Disrupting Democracy
Every day, millions of pieces of fake content are produced. Videos, audio clips, posts, articles, generated by artificial intelligence, distributed at

Data center rebellion

The data center rebellion is only the beginning
And it’s precisely what democratic governance of AI looks like.

Weekly paper to check

Vera was a guest at our event on Cities of Things and shared her research on reflective AI. Here is the paper: Reflective AI: A Slow Technology Approach for Design Education

The proliferation of efficiency-focused AI tools in creative processes threatens to undermine critical, reflective practices foundational to design education. This approach can lead to creativity exhaustion and diminished agency among designers and students. As an antidote, we propose Reflective AI: an approach grounded in slow technology principles that reframes AI not as a production tool, but as a medium for reflecting on the creative process itself.

Vera van der Burg, Gijs de Boer, Jesse Josua Benjamin, Brett A. Halperin, Alkim Almila Akdag, Senthil Chandrasegaran, and Peter Lloyd. 2026. Reflective AI: A Slow Technology Approach for Design Education. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26). Association for Computing Machinery, New York, NY, USA, Article 89, 1–19. https://doi.org/10.1145/3772318.3791691

What’s up for the coming week?

This week I am happy to be able to join the World Beautiful Business Forum in Athens, as guest of Monique. Looking forward, and will report on my impressions next week for sure!

Other things happening this week that you might like: a DIY session of Sensemakers on robotics, The Other AI in Rotterdam v2 and Nieuwe Instituut, or Amsterdam UX on responsible AI and the role of design. Or dive into good money with a different take on the digital euro.

Have a great week!


About me

I'm an independent researcher through co-design, curator, and “critical creative”, working on human-AI-things relationships for immersive experiences in physical AI and embodied AI. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.

Currently working on: Cities of Things, ThingsCon, Civic Protocol Economies.