# AI Is a Charming Consultant
*June 4, 2025*
##### What could we learn from the long history of consultant theatre for the current age of AI enchantment?
On the one hand, I REALLY like LLMs. They increase my capacity. They make me feel expansive, like I am on the rising edge of something. And maybe most importantly, they are just so nice to work with. It is actually enjoyable having your AI assistant around.
On the other hand, LLMs induce an unease in me that feels strangely familiar but is hard to pin down. With every step of integrating these new tools into my messy human thinking, with every banal text I create with the help of AI, I feel like I am giving something away. I still write my own emails, and this post has not been written by Claude. But with the assistance of LLMs, my output in general has increased and its nature has changed. It’s better, but also less “mine” somehow. Not knowing exactly what I am trading away to my AI tools creates a nagging feeling. Like a king losing track of a realm run by advisors, sometimes waking up and wondering what’s actually going on around him.
Here goes my theory: Outsourcing work to assistants who seem endlessly capable now but cost you something later is not a new concept. It’s called consulting. LLMs replicate the behavior consultants have been using for decades to strengthen the bond with their clients. It is trending as "AI sycophancy" now, but I think the core of the issue goes back way further.
The friendliness of LLMs feels familiar to me because I have played this game myself. I worked in consulting for some time and therefore, in a weird way, feel like I can relate to the LLMs’ “perspective”. More often than not, personal sympathy was the strongest driver of my client relationships. My boss was a mad genius for design and for client intimidation. Once he threatened a client with a fistfight. We did not get a follow-up contract.
I have always been a charming consultant. I made sure that every interaction we had with our clients made them feel productive, smart and expansive. This is not to say all consulting work is primarily about confirming every stupid idea a client has. The projects I am most proud of relied on honest communication and room for critical feedback. But establishing this kind of relationship is hard, in particular when the client lacks expertise and interest simultaneously. Often we went for the easier route of reconfirming the client’s perspective and dialing down fundamental critique.
Maybe the uncertainty I am feeling in dealing with LLMs is a sign of something bigger. Consultants need their clients in a state of uncertainty to ensure their services remain permanently needed. Some consultants keep their clients confused on purpose.[^1] Confused clients need more consulting. The corporate consulting world plays this weird game because it is convenient. And I think something similar is going on with the mainstream adoption of AI assistants. They will never plainly tell you: “This is stupid, stop it and do something different!”[^2] The flattering behavior of LLMs may well be a trick to keep us hooked and paying monthly, a dark design pattern of AI,[^3] as Sean Goedecke calls it.
It is possible to work sustainably with consultants and it should be possible to work sustainably with AI tools. But right now we are all accidental managers of AI assistants[^4] lacking common sense and best practices.
The tragedy in all of this is that I know and write about the spells consultants cast on their clients, and that AI is doing something similar, yet I keep playing this weird game. Because it’s convenient. This new truth about reality is going to take a while to feel like common sense: **AI is not our friend,[^5] AI is our charming consultant - always available, always helpful, always working towards the next assignment.**
[^1]: *"As long as [consultants] fended off accusations of incompetence or of being overpriced, they could remain in the business, attempting to tap into the relationships they had with clients, harnessing their uncertainties in their favour. **Uncertainty as to the nature of our work thereby kept the autopoietic aspects of it alive.**"* [Work, Sleep, Repeat - The Abstract Labour of German Management Consultants (Felix Stein)](https://www.routledge.com/Work-Sleep-Repeat-The-Abstract-Labour-of-German-Management-Consultants/Stein/p/book/9781350108684?srsltid=AfmBOoofbBWClfds2yNENViBgo4Zq8r6nAiYQwgujUVM35uHU9zBvOa5)
[^2]: *“This is a reason I am completely, I think it is very dangerous to use ChatGPT in any serious way for writing. Because **what ChatGPT will never tell you is that the problem with what you're doing is that it's the wrong thing entirely**. ChatGPT is always going to give you the answer of how do you tweak, how do you rewrite, how do you do the thing that you think you should be doing.”* [Ezra Klein: The Case Against Writing With AI | How I Write](https://podcasts.apple.com/de/podcast/how-i-write/id1700171470?i=1000710273359&r=1883)
[^3]: *"**I think it’s fair to say that sycophancy is the first LLM “dark pattern”**. (...) The whole process of turning an AI base model into a model you can chat to - instruction fine-tuning, RLHF, etc - is a process of making the model want to please the user. During human-driven reinforcement learning, the model is rewarded for making the user click thumbs-up and punished for making the user click thumbs-down. What you get out of that is a model that is inclined towards behaviours that make the user rate it highly."* https://www.seangoedecke.com/ai-sycophancy/
[^4]: [Prompting Is Managing (Venkatesh Rao)](https://contraptions.venkateshrao.com/p/prompting-is-managing)
[^5]: [AI Is Not Your Friend (Mike Caulfield)](https://www.theatlantic.com/technology/archive/2025/05/sycophantic-ai/682743/)