Relations over functions will define AI-collabs
Weeknotes 354 - Relations over functions will define AI-collabs - What if we find ways to build relations with AI that make sense? Some thoughts, plus some nice articles found in last week's news. Check it out in the weeknotes.

Dear reader!
Welcome to another week as more people return from vacation. I am still waiting for that moment to arrive myself. The newsletter is still a bit in development; this week I am trying out a slightly different format for the captured news. I'll throw in a quick questionnaire in a couple of weeks to see what you think.
Week 354: Relations over functions will define AI-collabs
Exploring new projects and preparing for the design charrette, ThingsCon Salon, and conference. And other things. It was interesting to see how an experiment on coaching a student with AI only made the news. See thoughts below.
I did a quick post on the proposal for SXSW (voting is closed now) that boils down what I believe are the key topics of the moment.
This week’s triggered thought
I was triggered by an experiment where Professor Bas Haring allowed one of his graduate students to complete her entire thesis with coaching exclusively from an AI, not him. While the AI performed admirably with technical content, Haring concluded it fell short in critical thinking, academic reflection, and inducting students into academic culture.
This is a nice way to find out about the potential role AI can play, and it makes for a good story to share. I think it is valuable to explore a different angle on some of these issues and to connect it to what seems an inevitable future reality: rather than delegating completely to AI or excluding it entirely, hybrid collaborations feel more sensible. Hybrid collaborations that are shaped by the relation more than by the exchange of capabilities.
What if AI joined the table as a full member of the coaching team? Imagine weekly discussions where the AI presents progress reports and initiates conversations about the student's work. This approach offers dual benefits: students receive balanced feedback while learning critical evaluation of AI tools. The crucial question becomes: how do we develop critical thinking about these AI helpers? This becomes even more valuable when the tutor is also open about how the AI opens up their own thinking.
What's becoming increasingly clear is that the success of these collaborations isn't defined primarily by content contributions, but by the relationships built between humans and AI. With this realization, the graduation project scenario transforms: instead of simply adding AI as a tool, we create a situation where student, professor, and AI develop meaningful relationships that enhance learning outcomes for critical academic thinking—particularly important as AI becomes an integral part of academic research sources.
What would constitute an effective system card for such collaborations? How should we prime these partnerships? How can we evaluate different collaboration stages?
The implications extend beyond mentoring or developing (academic) knowledge. My longer-held belief is that (digital, computational) products and services are evolving towards adaptive systems. With AI, this starts to play out on a whole different level. AI and immersive technologies can transform design professions. Designers must move beyond creating end products to developing adaptive systems with AI components that evolve through use.
And even more profound: How will AI be integrated into every product and service? What does this mean for designers? The consequences are particularly significant for redesign work, which constitutes the majority of the design business. If AI can continuously adapt products, identify user friction points, and even initiate and execute redesigns, the designer's role shifts significantly. The human creative director might simply approve AI-generated solutions.
This creates a fuzzy space where designing with AI and designing the AI itself become intertwined. The boundaries between the product, the design process, and the designer blur, opening new possibilities for human-AI collaboration, one we should engage with critically while leveraging its unique capabilities and building meaningful relationships.
Notions from last week’s news
Is there an AI bubble on the verge of popping? Or is it just a strategy for more AI attention? Do 95% of the pilots indeed fail?
Human-AI partnerships
- Imagine a world where many AI personalities coexist with us; how can we recognize and engage with them?
- What will happen when our agents get emotional? Kevin Kelly is philosophizing.
- It takes time to understand how to use AI, Tim O’Reilly is making a case.
- We need AI that collaborates with experts, not replaces them.
- The hidden ingredients behind AI’s creativity.
- Amazon is betting on agents that can help you with your everyday tasks. While blocking others' bots.
- Useful overview on vibe coding by Kottke. And some real-world experience reported at Wired.
- We need human literacy as we start to live with AI in our daily lives.
- The reified mind. Does the human conversation risk being devalued by machine-mediated language?
- Chatbots need better safety rules.
- Another deep dive into context engineering as crucial for effective AI systems.
- Building an AI editor for a publication.
Robotic performances
- Teaching robots to perform mundane tasks. By capturing mundane tasks.
- Humanoids are a thing in China; we have seen them multiple times before. Now there are special World Games for humanoids.
- The future will bring moving grocery stores.
- And farming remains a fertile ground for deploying robot helpers.
- Hobby robots, always fun to check out.
Immersive connectedness
- Google is betting on AI for your home with Gemini. With a new home speaker.
- Hardware is hard, but it was hot. Google, however, is taking a pause. Meta has won the smart glasses race.
- Your phone is spying on you, in a different way than expected.
- Potentially even more intrusive: a sensor reading your inner mind.
Tech societies
- The AI bubble news is not over yet.
- While whole countries get a plus account.
- New type of problems: AI is messing up research.
- Do you trust Meta to create this layer of dubbed reality (is that a better word than digital twin)?
- The former Twitter CEO is now building an AI product: Parallel.ai, a deep-research API for advanced research.
- The whole game of talent is a bumpy ride. Meta is changing plans.
- This is not a good time for AI doomers. Apparently.
- Google is publishing its AI energy use.
- Safety by design is needed to address AI-facilitated online harms. Legislators can play a role in stopping the sexual harassment of kids by chatbots.
Weekly paper to check
Let’s do an agentic AI paper (pre-print) on small language models (SLMs).
Here we lay out the position that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and are therefore the future of agentic AI.
Belcak, P., Heinrich, G., Diao, S., Fu, Y., Dong, X., Muralidharan, S., ... & Molchanov, P. (2025). Small Language Models are the Future of Agentic AI. arXiv preprint arXiv:2506.02153.
https://arxiv.org/pdf/2506.02153
What’s up for the coming week?
Writing, meetings, and exploring plans. There is a nice Test_Lab at V2_ in Rotterdam on Thursday. Creative Mornings is happening in Rotterdam, too.
Have a great week!
About me
I'm an independent researcher through co-design, curator, and “critical creative”, working on human-AI-things relationships. You can contact me if you'd like to unravel the impact and opportunities through research, co-design, speculative workshops, community curation, and more.
Cities of Things, Wijkbot, ThingsCon, Civic Protocol Economies.