# [[Dancing With the Shoggoth]]
##### A year of writing with AI taught me one thing: make it less convenient.
About a year ago I started using Claude for dictation while out walking the dog. I enjoyed strolling through the parks AI-accompanied, getting thoughts onto a digital page that would otherwise have stayed in my head. But at some point I realized Claude occasionally made strong edits while cleaning up my raw input. Usually I would not notice, because I did not remember the exact phrases I had dictated ten minutes earlier. But here and there I stumbled across it, catching Claude rearranging words slightly in a way that made the sentence flow better, a bit closer to proper written English than my non-native stumbling.
Knowing my dictation assistant would take the liberty of making minor corrections made me feel weird. On the one hand, I could constrain Claude more strictly, adding something like "NEVER CORRECT SENTENCE STRUCTURE!!" to the assistant's instructions, and it would likely work. On the other hand, this made me think about the specific division of labor in AI augmentation I am willing to maintain and defend in my daily practice, and, more importantly, how it will hold up in the near future. Despite public critique of AI, I think society is moving continuously towards more AI integration. In the 2010s there was a similar dynamic with the growing number of things an average person would do on their phone, but nowadays the rate of adoption for AI tools seems to be much faster than past waves of adopting new digital tools.[^1] Is fighting back against AI modifying my dictations the same thing as rejecting auto-correct on smartphones in the 2010s? Maybe all of this is just the transitional noise of us getting used to AI cleaning up after us in everything we do?
There is one continuous thread that pulls through my writing: a persistent doubt that whatever system improvement I introduce to my workflow may not actually enable me, but rather fool me into thinking I improved while I actually stagnate. My self-improvement persona is a genius at building sophisticated cathedrals of self-deception, eager to shield me from frustration. Handing this unreliable narrator (my procrastinating self) virtually unlimited access to Large Language Models is a dangerous decision. Yet I have already moved far down this treacherous path. Why?
It is obvious that the "shoggoth with a smiley face"[^2] poses a risk to intellectual autonomy for anyone, not just me. The reason I and many others keep experimenting with LLMs is not uncertainty about the downsides; it is fear of missing out on upsides that may be uncapped. Essentially this is about whether AI within our lifetime will remain a merely sophisticated tool or turn out to be a beyond-human-level intelligent actor we have to deal with. If the latter is true (and the number of serious voices leaning towards AI becoming as smart as or smarter than us seems to be growing by the day), a lot of things will change very quickly. Writing as a medium will change far more significantly than it already has in recent months.
The current AI-critical discourse in the writing community is, in my eyes, largely focused on the assumption of AI becoming an extremely sophisticated auto-correct: the perfect tool for cheating your way towards mass-publishing things you could not have written before. But if AI actually reaches human-like levels of intelligence, the discussion is not only about ethical, ecological and aesthetic trade-offs. It will also have to be a discussion about epistemic decisions: would you choose to reject a unique source of feedback and insight for the sake of all the bad things AI is associated with? Despite all concerns, I am certain AI tools of human-level capability would eventually make their way into every thinking person's toolkit, just as the Internet and Wikipedia have in past years.
As Anton Korinek recently framed it on Hard Fork,[^3] assuming either that AI will surpass human-level intelligence or that it will fail to do so is, right now, a speculative position to take; embracing the current uncertainty about this question is the most sensible path there is. Considering this, figuring out how to work with AI in writing workflows in 2026 should be considered less a question of mere toolchain optimization and more a probing of what a likely near-term future of writing with human-like assistance may feel like, and what levers of influence we will have.
What follows are examples of me testing the ground with LLM assistance in my writing workflows, iterating from simple auto-correct-like convenience towards a full-blown dance with the Shoggoth:
**1) Summarizing highlighted paragraphs from books & papers in Readwise Reader with ChatGPT**
I do most of my reading in Readwise Reader, which offers a ChatGPT integration. Over the past few years I have developed a habit of giving each reading note a dedicated header with a neutral summary of the quote's content, followed by my personal take on it. Initially I would write these summaries myself, which was also a good exercise to double-check my understanding of a highlighted section. But I realized this slowed down my reading significantly, so I started using a custom "TLDR" prompt for summarizing these highlights. This made my reading significantly quicker, but I always felt a bit uneasy with this solution. Do I really want an LLM to extract the gist from what I just read? What if it omits exactly the bits that make my understanding of a paragraph MINE? Eventually I settled on a compromise. The current TLDR prompt creates not one but three different summaries of the paragraph, letting me pick one to keep. I feel quite confident with this solution and noticed that more often than not I clearly reject two options and modify the most suitable proposal until it matches my understanding. By presenting three alternative interpretations of the same thing, it not only lets me choose the most fitting option, it also keeps reminding me that the entity I am interacting with never offers universal truth, only statistical approximations. LLMs are contingency machines and we should treat them as such.
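As a minimal sketch, the three-summaries idea can be expressed as a reusable prompt template. The function name and exact wording below are illustrative assumptions; my actual Readwise Reader custom prompt is phrased differently:

```python
# Sketch of a "three summaries" TLDR prompt. Illustrative only: the
# wording of the real custom prompt in Readwise Reader differs.

def tldr_prompt(highlight: str) -> str:
    """Build a prompt asking for three distinct one-sentence summaries."""
    return (
        "Summarize the following highlight in one sentence, three times, "
        "each time from a noticeably different angle. Number them 1 to 3 "
        "and add no commentary.\n\n"
        f"Highlight:\n{highlight}"
    )

print(tldr_prompt("LLMs are contingency machines."))
```

Whichever model receives this, the point of the design is that the output forces a choice instead of delivering one authoritative answer.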
**2) Generating follow-up prompts while taking notes on the go with Apple's Private Cloud Compute and ChatGPT**
A lot of my thinking happens on the go, which is why I built a shortcut that lets me take notes with the press of a button on my phone, automatically appended to a file in my Obsidian vault. At some point I realized I would like this workflow of mobile thinking to dig a bit deeper. So I started sending these raw on-the-go notes to LLMs to generate a follow-up prompt for myself, for a richer elaboration of the current idea. I started with Apple's Private Cloud Compute AI, which is very private but also tends to be quite generic and bland in its responses. Eventually I added a switcher that lets me toggle between PCC and ChatGPT, so I can choose more private or less generic prompting for each individual thought I capture. Whichever option I take, it generates several prompts for me to choose from. The weaker proposals I reject often expose the limits of the LLM's understanding of the deeper meaning. Choosing which question to answer triggers my sense of intellectual agency. Sometimes I consider all the questions meaningless and close the cycle by typing "whatever". With time I started guessing what PCC or ChatGPT might ask as a follow-up, and often my guesses would actually show up a step later. I also noticed some patterns in my on-the-go thinking, which is a kind of meta insight I appreciate. For example, despite all the benefits I see in my practice of on-the-go thinking, I have come to realize that I tend to draw conclusions from weak evidence much more often when I am on the go. Overall I am quite confident about this particular solution with regard to my intellectual autonomy. Using LLMs to ask me questions seems like a good use case with transparent implications.
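The capture step of this workflow fits in a few lines. The real thing is an Apple Shortcut writing into Obsidian, so the file name, note format and function name below are illustrative assumptions, not the actual implementation:

```python
# Minimal sketch of the capture step: append one dictated note as a
# timestamped bullet to a markdown file in the vault. Illustrative only;
# the actual workflow is an Apple Shortcut writing into Obsidian.
from datetime import datetime
from pathlib import Path

def append_note(vault_file: Path, text: str) -> None:
    """Append a dictated note, creating the file if it does not exist."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with vault_file.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {text.strip()}\n")
```

The append-only design matters: capture never overwrites, it only accumulates, which keeps the friction of the button press near zero.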
**3) Using Claude as Writing Assistant**
The most complex application of LLMs in my writing is clearly using Claude as a writing assistant / coach. I have never really been interested in "write this essay for me" applications, but rather in using it as a critical editor, which is of particular value to an untrained writer self-publishing without any institutional quality gate. This use case of AI assistance in writing is, to me, the hardest to get right, but also the most rewarding. Claude is shockingly good at understanding outlines and identifying tensions at the intention level of essays. By now I hate to receive "It seems you are trying to write three essays at the same time. Which one are you actually interested in?" as feedback, in exactly the same way you would hate it when your teacher is able to point out exactly what is wrong with the paper you delivered. I can keep saying this is only sophisticated pattern matching 100 times in a row; still, it feels like a real person giving me pointed feedback on my attempts.
But despite strict instructions for Claude to be non-flattering and critical, it is very easy to chat myself into a rabbit hole of "OMG THIS MAKES SO MUCH SENSE". It is worth ending up there sometimes, because I think it builds an intuition for spotting self-built echo chambers and getting out of them again. Frequently I now notice Claude just mirroring my initial input in a more refined way. This usually marks the point for me to stop the ping-pong. After countless iterations I realized the most valuable instructions force Claude to stay away from pretending to make judgments and instead focus on identifying patterns, structures and tensions in my writing. I added a "hard-stop" option that reminds me of my own principles; it fires off frequently with statements like "Here I can not help you, make a call for yourself."
Overall, Claude has helped me stay on track with my writing over the last year and, despite having less free time, publish two to three times faster than in the years before. This mode of writing assistance, based on vast context from my past writing, custom instructions and iterative work on full texts from concept to publication, is currently the use case of highest leverage, both for the LLM involved and for myself. I gain the most here, but the risks of deceiving myself are also higher than when using AI for smaller-scale tasks like dictation or auto-headlines.
What did I learn from messing around with LLMs in my writing workflow in the last twelve months?
To protect intellectual autonomy, it is important to define the scope of LLM tools deliberately. The higher up in the sense-making hierarchy AIs operate, the more important this gets. ChatGPT streamlining the meaning of a highlighted paragraph is one thing; Claude coaching me into self-confirming rabbit holes is a whole different tier. The two most powerful rules for instructions I have come to value here are a prohibition of judgment and a mandate to provide alternative options. Instead of responding "Your draft lacks clear stakes, add a paragraph explaining why you care about this.", my Claude will usually respond with something like "It seems your draft is elaborating a concept without explaining why readers should care. This may be good or bad, your call. You could keep it theoretical or you might consider adding something that explains the impact of your idea: a personal story, a historic example, a recent datapoint." Now you might say it is essentially saying the same thing, just more verbosely, and that is right to a degree. But I noticed that adding deliberate friction to any LLM conversation that touches judgment and optionality changes something in my interaction with these tools.
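Boiled down, the two rules might read something like this in a custom-instruction block. This is a stripped-down sketch; my actual instructions are longer and more specific:

```
Never judge my drafts as good or bad, strong or weak.
Instead, name the patterns, structures and tensions you find.
For every issue you point out, offer at least two alternative
ways to handle it and state explicitly that the choice is mine.
```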
Just as some recommend being polite towards AI because of how it changes the behaviour of the human user, not the AI, I think it is valuable to make AI intentionally less convenient in the areas I care about. It is a bad idea to use LLMs to reduce "friction" in thinking and writing processes, because the friction is the whole point. When an intellectual challenge becomes "easy" with the help of AI, you are no longer working the challenge, but a more satisfying replica of it. There are notions of AI becoming a new kind of medium, and I think in the realm of workflows this is to be taken seriously. A tool that makes you feel comfortable no matter the actual results you achieve should be considered a distraction, more an entertainment device than a tool. Sustainable AI augmentation needs to be seamful, not seamless. It must not reduce friction; quite the opposite, it has to produce new kinds of friction.
I still use voice-to-text for talking with Claude while walking in the park, but somehow my workflow has evolved away from full-blown first-draft dictations, the kind where I caught Claude in the act of manipulating my voice while transcribing. By now I use LLMs much more often on a daily basis, and within more granular applications, like the ones I described above. My personal trajectory of using Large Language Models right now seems to point towards smaller-scale but deeper integration, with LLMs becoming the "intelligent" glue that makes my digital writing and thinking coherent. Less Magic-8-Ball "tell me what I should do" prompting and less "I can do the same thing 10x faster while doing the laundry" self-deception, but more deliberate design of highly capable custom workflows.
[^1]: *"The current generative AI [three year] adoption rate of 54.6% exceeds the 19.7% adoption rate of the personal computer (PC) in 1984, three years after the first mass-market computer (the IBM PC in 1981), and the internet’s 30.1% adoption rate in 1998, three years after the internet was opened to commercial traffic."* [The State of Generative AI Adoption in 2025 (Federal Reserve Bank of St. Louis)](https://www.stlouisfed.org/on-the-economy/2025/nov/state-generative-ai-adoption-2025#:~:text=The%20current%20generative%20AI%20adoption,was%20opened%20to%20commercial%20traffic)
[^2]: ![[2603_Dancing-With-The-Shoggoth.png]] https://knowyourmeme.com/memes/shoggoth-with-smiley-face-artificial-intelligence
[^3]: *"Some people just naturally jump to the conclusion [of] these systems are going to be way smarter than any human within just a small number of years and then there is another camp that says, well, what our brains are doing is so special that machines won't be able to replicate it for a very long time (...). And frankly, both is a speculative position. I personally, I'm first of all willing to embrace the uncertainty about it and I think we all should."* Anton Korinek on [Hard Fork: Is A.I. Eating the Labor Market? + The Latest on the Pentagon, OpenClaw and Alpha School, 27. Feb. 2026](https://podcasts.apple.com/de/podcast/is-a-i-eating-the-labor-market-the-latest/id1528594034?i=1000751908591&r=1036.223)