# [[AI Is Your Charming Consultant]]
*June 4 2025*
##### What could we learn from the long history of consultant theatre for the current age of AI enchantment?
On the one hand, I REALLY like LLMs. They increase my capacity. They make me feel expansive, like I am on the rising edge of something. And maybe most importantly, they are just so nice to work with. It is actually enjoyable having your AI assistant around.
On the other hand, LLMs induce an unease in me that feels strangely familiar but is hard to pin down. With every step of integrating these new tools into my messy human thinking, with every banal output I create with the help of AI, I feel like I am giving something away. Not knowing exactly what I am trading away to my AI tools creates an itching confusion. Like a king losing track of his realm run by viziers, sometimes waking up wondering what’s actually going on around him.
All of this feels eerily familiar to me, and that brings me to my working theory: Outsourcing work to assistants with seemingly infinite resources for short-term gains but with long-term implications is not a new concept. It’s called consulting. LLMs replicate the behavior consultants have used for decades to strengthen the bond with their clients. It is trending as "AI sycophancy" now, but I think the core of the issue goes back much further.
I have worked in consulting for some time and therefore, in a weird way, feel like I can relate to the LLM’s "perspective". More often than not, personal sympathy was the strongest driver of my client relationships. My boss was a mad genius for design and for client intimidation. Once he threatened a client with a fistfight if the client would not go along with his vision. We did not get a follow-up contract.
I have always been a charming consultant. I made sure that every interaction my consultancy had with our clients made them feel productive, smart, and expansive. This is not to say all consulting work is primarily about confirming every stupid idea a client has. The projects I am most proud of relied on honest communication and room for critical feedback. But establishing this kind of relationship is hard, in particular when the client lacks expertise and interest simultaneously. Often we went for the easier route of reconfirming the client’s perspective and dialing down fundamental critique.
Maybe the uncertainty I am feeling in dealing with LLMs is a signal of something more fundamental. Consultants need their clients in a state of uncertainty to ensure their services are needed permanently. At least a certain kind of management consulting continuously recreates the confused conditions that maintain the need for consulting services - an autopoietic system.[^1]
The overly flattering behavior of LLMs may well be an artifact of the fight for screen time and renewed subscriptions, a dark design pattern of AI, as Sean Goedecke calls it.[^2] The itching confusion I feel could be a side effect of the blurred boundaries between the LLM’s expertise and my own.
The tragedy in all of this is that I write about my consulting experience and yet fall for the AI’s gentle manipulation. It is possible to work sustainably with consultants, and it should be possible to work sustainably with AI tools. But right now we are all accidental managers of AI assistants,[^3] lacking common sense and best practices.
**AI is not your friend,[^4] AI is your charming consultant - always available, always helpful, always working towards the next assignment.**
[^1]: *"As long as [consultants] fended off accusations of incompetence or of being overpriced, they could remain in the business, attempting to tap into the relationships they had with clients, harnessing their uncertainties in their favour. **Uncertainty as to the nature of our work thereby kept the autopoietic aspects of it alive.**"* [Work, Sleep, Repeat - The Abstract Labour of German Management Consultants (Felix Stein)](https://www.routledge.com/Work-Sleep-Repeat-The-Abstract-Labour-of-German-Management-Consultants/Stein/p/book/9781350108684?srsltid=AfmBOoofbBWClfds2yNENViBgo4Zq8r6nAiYQwgujUVM35uHU9zBvOa5)
[^2]: *"**I think it’s fair to say that sycophancy is the first LLM “dark pattern”**. (...) The whole process of turning an AI base model into a model you can chat to - instruction fine-tuning, RLHF, etc - is a process of making the model want to please the user. During human-driven reinforcement learning, the model is rewarded for making the user click thumbs-up and punished for making the user click thumbs-down. What you get out of that is a model that is inclined towards behaviours that make the user rate it highly."* [AI Sycophancy (Sean Goedecke)](https://www.seangoedecke.com/ai-sycophancy/)
[^3]: [Prompting Is Managing (Venkatesh Rao)](https://contraptions.venkateshrao.com/p/prompting-is-managing)
[^4]: [AI Is Not Your Friend (Mike Caulfield)](https://www.theatlantic.com/technology/archive/2025/05/sycophantic-ai/682743/)