Well thought-out thoughts between algorithm and applause

Ferry van Saalbach's blog about technology, society, moderation and AI.

Ferry sits on a park bench holding two Tamagotchis, each displaying the logo of ChatGPT or Google Gemini.

Are AI agents the Tamagotchis of our time?

No, I don't have a team of 15 AI agents. I also think it's dangerous and absurd to use AI as if it were a human employee. Sometimes I also wonder whether AI agents are the Tamagotchis of our time: an artificial something that gives us meaning because we imagine we have to take care of it.

But let's start from the beginning.

I really enjoy using AI. AI makes me smarter every day. Explains things to me. Supports me in many activities. Helps me to see through legal disputes, to pinpoint my interests, to make things understandable that I don't understand or never learned, to iron out weaknesses I carry around with me and, above all, to recognize and acknowledge those weaknesses in the first place.

This article, too, has been run through my AI several times: honed and refined, sometimes sharpened and sometimes toned down. Along the way, I had the AI explain to me how, where and why I sometimes phrase things in a way that invites attack, even though all I actually want is to create clarity.

That's how AI makes sense to me. Because it makes me stronger. It helps me to see my weaknesses and helps me where I'm lacking.

That's AI for me.

AI makes me stronger - not more productive

The idea of building pseudo-human agents to talk to, on the other hand, feels more like keeping myself busy than like technological progress.

Of course, as an independent moderator, I could create a team where I have one agent who is responsible for my branding and a second who writes my LinkedIn posts and a third who sends out acquisition emails and a fourth who writes my introductions and a fifth who prepares panels. I just ask myself: aren't these all things that define me as a person? Isn't it a lot of work to get my personality into all these agents?

But above all: will it really do me any good in the end?

I do many of these tasks myself, because ultimately it's me who stands on stage for the acquired customers and asks questions.

But that only works if I am authentic.

If I know who the client is, why I'm asking these questions and am honestly interested in what's going to happen on stage.

Authenticity cannot be outsourced

If I only carry out what my five agents have prepared for me, then in the end I'm no longer needed on this stage as a person.

Then I am an interchangeable voice robot.

But that's exactly what I am not.

And that's why I don't see any added value in outsourcing this work to AI agents. It would perhaps generate more output.

But I question very strongly whether this output would really be stronger or more authentic in the end, or simply more, bigger and more interchangeable.

The real problem: we are humanizing AI

But what I find much more dangerous is the attitude of humanizing AI in the process.

Sure, people seem to need that somehow. We have always made idols for ourselves; we have always imagined God as an old man with a white beard whom you can worship and who thinks it's great to be worshipped. By the way, wanting to be worshipped is quite a human and not very divine trait, but that's just an aside.

But what is actually happening psychologically? Doesn't this humanization of AI lead to us forgetting to differentiate between what is technology and what is human?

AI is a sophisticated language technology that structures information for us, though one still based on predicting the most probable output. Sometimes better, sometimes worse. It is not human. It has no needs.

But if we start treating it in exactly the same way, holding feedback meetings and preferably agreeing to it finishing work at 3 p.m. on Fridays because it wants to walk its virtual dog, then the whole thing just becomes absurd. Then AI is no longer a technical helper, but ultimately only satisfies our own narcissism: that we can imagine that there are 15 agents working for us that we have to manage.

Let's be honest: if I want to play manager, I'd rather fire up "Big Ambitions", "Cities: Skylines" or "Tropico". I don't break anything with my management, and I can switch it off without consequences when I don't feel like it anymore.

But if I imagine that it's now my job to manage 15 agents and hold feedback meetings with them, then of course I have something to talk about. Something that keeps me busy for hours. Something that gives me meaning. Then I might even feel a sense of responsibility for my agents.

And then it becomes dangerous.

Why the difference between tools and people is essential  

For me, AI is a tool. An incredibly powerful one. I have used it to sort out legal disputes, to work through the purchase contract for a condominium, to persuade Vonovia to carry out basic repairs to our elevator and heating system, and to clarify tax issues. I wrote a novel in three months, reorganized the marketing and SEO for my moderation work, built a help portal, and I learn something new every day, because every time I come across something I don't know yet, I pull out my phone and simply blurt out what I've just seen and don't understand. But afterwards, I don't have to thank my agent in a feedback meeting.

Instead, I close the app, I'm happy that there's a technology that makes me smarter at the touch of a button and I'm just as happy that it doesn't create any emotional obligation for me.

Humanized AI is something that many people are afraid of. Because they don't trust human needs. Because they are afraid of interests. And of the fact that a powerful machine could assert interests much more powerfully than a human.

What they overlook is that a machine has no interests. But if we now start to treat agents as if they were people and as if they had interests, and if we implant our own interests in them, then we create exactly what we are afraid of.

Artificial intelligence has no interests. But it will imitate interest if we teach it to.

Perhaps we should therefore rethink our approach to artificial intelligence and stop setting up mini AI companies, treating them more and more like humans and thereby making our interaction with this seemingly infinite intelligence less and less intelligent.

Just a thought.
