# [[Large Language Agency Traps]]

*October 14, 2025*

##### Using LLMs for life coaching feels great, but how much do we ACTUALLY benefit from it?

In recent months, I built every kind of AI coach you can imagine: a career mentor, a writing assistant, a reading list partner, a coach for fixing my IT setup, even a chef to help me cook great curries. I had a lot of fun fooling around, and in many cases I was amazed by the feedback I received from my assistants. I can trace some tangible improvements back to these interactions. But the longer I kept using LLM assistance this excessively, the more I felt a growing sense of unease. By now, I have chewed on it enough to write about it.

In the best moments of LLM coaching, it feels like the world you desire is right at your fingertips, almost tangible like a house of cards in a glass box. But the epiphany dissipates quickly. As soon as I put my phone away, stuff that used to make sense starts to break apart. At first, I thought this was just a failure of my short-term memory, but I realised there is more to it.

The world is complicated. AND it feels scarier than ever, because we seemingly know everything that is going on anywhere at the same time. There is an understandable wish to be better at influencing the world in desirable ways, to shield ourselves against all these uncertainties. Currently, this capability of making things happen the way we desire is vaguely referred to as Agency.

The problem I see with agency is that we are quite bad at judging how much effect our planning, our analysis, our decision making, our "agentic" behavior really has on the real world. And it is easy to be tricked into thinking we have much more impact on the world than we actually do.[^1] Just as contemporary politics is often judged by what feels right rather than what is true, I think a lot of agentic behavior is driven by what FEELS agentic, not by what is ACTUALLY effective. We are easy prey for agency traps.[^2]

Agency traps are hard to see on your own, but they can be obvious to others. An AI-inflated sense of agency will soon be considered the signature cringe of the mid-2020s. If you have any doubt about this, watch the latest South Park season and see Randy Marsh work through his business problems with ChatGPT while his wife sits next to him in bed; eventually she adopts ChatGPT's speech patterns just to make Randy listen to her again.[^3]

There are many ways of falling into agency traps, old ones and new ones: habit building, task management, taking a detour against the advice of Google Maps, insisting on paying cash because you think the government cannot track you that way. To a certain degree, you might consider the holy war on AI that is being fought on social media right now another kind of agency trap. But LLM coaching represents a particularly sneaky and powerful type of pitfall, and it seems to be an increasingly popular use case.

Thinking tools in general, and AI-assisted thinking tools in particular, are serious agency traps for me. I have to be very careful, simply because I love to play around with sensemaking systems. I enjoy the overview they promise, and I always gravitate towards breaking solutions down into steps and making action plans. These traits of mine provide juicy, fertile ground for anything that manipulates my desire to feel agentic.

A friend of mine has the uncanny skill of pointing out the obvious thing I am not seeing in whatever I tell him.
I recently praised my Reading List assistant to him, describing how I managed to cut the list down from more than a hundred items to fewer than fifteen. The AI is just very good at convincing me that I do not have to read a lot of books. As usual, my good friend listened kindly and boiled it down: “Yeah, these things are really great at making you spend time with them and nothing else.”

I feel like the AI products being pushed towards us right now are so sneaky and powerful because they are designed to be the next big distraction after social media. While Instagram, Twitter and TikTok exploited our fear of missing out, the current breed of LLM assistants exploits our obsession with impacting reality, turning our intentions into revenue.[^4] Searching for ways to increase our agency is reasonable, but we do not want to be fooled into performing agency just to stay locked into costly subscriptions.

I’m still convinced large language models can provide great value. In particular, all kinds of research tasks seem less affected by agency inflation. Limiting AI's scope to that of a useful tool, not a guiding entity, makes a big difference. If we use LLMs for coaching-style guidance, it is essential to build additional systems that help falsify the genius ideas we come up with. All the fancy frameworks and motivational fortune-cookie lines LLMs feed us need to be grounded in solid real-world observation. Always touch some grass after talking to your AI assistant.

[^1]: Venkatesh Rao pointed out this bias for behavior that only *feels* effective in the context of climate change coping, in a newsletter post I can't find online anymore: *"**Humans have a strong tendency to confuse a psychologically satisfying amount of agency with a materially effective amount.** A broad culture of what we very-online people call cope rules everything around us when it comes to climate. It is a deep-rooted tendency, and an understandable one. Much as we might intellectually desire such laudable goals as the survival of almost everybody through a planetary crisis, our sense of meaningful existence is tied to individual agency. We’d rather stride grim-faced with a gun across a devastated post-apocalyptic landscape, masters of our own fates, than feel helpless within a world that’s largely doing fine and even providing for us."*

[^2]: The term "agency trap" derives from a great essay by Timber Stinson-Schroff: *"An aesthetic, but rotten model is a perfect agency trap. It will delude you into thinking that you can do more than you can."* https://blundercheck.timberschroff.com/p/systems-thinking-is-brain-rot-for

[^3]: [Randy asks ChatGPT for marriage advice on South Park (YouTube)](https://www.youtube.com/watch?v=SrNio6PUB_E)

[^4]: *"The Intention Economy represents a shift from an economy that competes for our eyeballs to one that truly understands and serves our needs. (...) a potential future economic landscape where LLMs can effectively capture, interpret, and potentially manipulate users’ motivations and intentions. In other words, your desires are becoming the new currency."* [Forget the Attention Economy. Prepare for the Intention Economy (Fast Company)](https://www.fastcompany.com/91280878/forget-the-attention-economy-prepare-for-the-intention-economy)