By George Zotalis (Philadelphia, USA)
Doomsayers and Cheerleaders
Less than 10 years separate us from Attention Is All You Need, the 2017 paper that introduced the Transformer architecture and ushered in the era of Large Language Models. ChatGPT appeared only on November 30, 2022. Artificial intelligence, as a product accessible to the general public, is in its infancy, and our relationship with it reflects the shock of the new.
What kind of writing does artificial intelligence produce? And can it become literature of ambition and consequence? Many experts have opinions, but no one knows the future. Answers to whether AI writing can reach the point where it is indistinguishable from human writing (call it a literary Turing test or the writerly Imitation Game) span the spectrum from a categorical No, often accompanied by furious calls to resist the new steamroller (1), to emphatic acceptance, with every imaginable reservation and yes-but in between.
The result is an already extensive and ever-growing bibliography on AI writing and on the material, economic, and political conditions, incentives, systems, and ideological assumptions that support, shape, and constrain it.
Many among those who engage with the subject tend to divide into Doomsayers and Cheerleaders. The former warn of traps, dangers, and impending catastrophes of Terminator-style AI. The latter preach the promises and benefits of a brave new world of generalized abundance, where among other things, machines will produce quality literature.
Both camps have subdivisions. The Doomsayers include Alarmists, Censors, and Resolute Cassandras. Alarmists sound the alarm but are otherwise harmless. Censors go a step further: they want to protect us from the dangers of AI, which makes them potentially dangerous since the only hammer they know is censorship. And the Resolute Cassandras, a fringe neo-Luddite minority, demand that we pull the plug yesterday.
The Cheerleaders divide into Advertisers and Propagandists. The Advertisers (aka Accelerationists, the pedal-to-the-metal enthusiasts who can’t wait to get to The Singularity) belong to marketing and need no explanation. When Sam Altman instructed ChatGPT to write a metafictional short story about artificial intelligence and grief, the result was a 1,100-word self-advertisement. The Propagandists want the masses to regard the whole issue as settled and inevitable. They present marketing’s claims as facts.
Both Doomsayers and Cheerleaders present evidence to support their views, but I wonder whether reactions to AI – whatever they may be (against, for, que sera sera) – bubble up from something primary and unconscious that is not easy to identify or articulate. I cannot prove it, but I suspect that on this matter we are first instinctive Doomsayers, Cheerleaders (or whatever else), and only afterward do we gravitate toward evidence and factoids that confirm the opinion our brains formed automatically, before us and for us. As with so many other things, views about artificial intelligence do not begin with a tabula rasa but resemble the cowboy cliché: first we shoot, then we look to see what happened.
Calling someone a Doomsayer or a Cheerleader does not imply that they are wrong. In the end, someone will turn out to be right. Or both may end up wrong. The problem is that today there is not sufficient evidence for certainty in either direction (and a large portion of what is presented as empirical data evaporates upon closer inspection). In this uncertain, fast-changing landscape of undecidability, we bring our blind spots, biases, expectations, predispositions, and interests to overcompensate for the lack of proof.
Stochastic Parrots
Large Language Models (Claude 4.5 Opus, ChatGPT 5.2, Gemini 3, Grok 4.1 – to stick with the celebrities of the moment) contain and process billions of statistical relationships among words, sentences, paragraphs, and pages (ChatGPT-5 works with over one trillion). They do not understand letters, do not see words, sentences, or paragraphs, do not feel emotions, and do not think ideas. The text machines operate through word embeddings: text is split into tokens, each token is mapped to a vector in an unimaginably high-dimensional space, and from those sequences of numbers the model predicts the most likely next token.
If we give as input “Last Christmas…,” the overwhelmingly likely output is “…I gave you my heart.” When I start a text message on my phone and type “what,” the overzealous little robot, eager to please me, suggests “are,” “is,” “to,” “the,” and so on.
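To make that mechanism concrete, here is a minimal sketch of next-token prediction in Python. It assumes the Hugging Face transformers library and the small, freely downloadable GPT-2 model (illustrative choices of mine, not anything the models named above expose), but the principle is the same: the machine ranks candidate continuations by probability and nothing else.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a small causal language model and its tokenizer (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The prompt is turned into tokens: integer IDs the model can work with.
inputs = tokenizer("Last Christmas I gave you my", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Keep only the prediction for the position after the last word, convert the
# scores to probabilities, and list the five likeliest continuations.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")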
This is why stochastic parrot is an apt description of AI. The phrase comes from the 2021 paper “On the Dangers of Stochastic Parrots,” co-authored by Emily Bender, and the concept was elaborated further in the book she wrote with Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2), where LLMs are described as synthetic text-extruding machines.
Reviewing The AI Con in the New York Review of Books, James Gleick (author of the landmark book on Chaos) expressed the prevailing unease about AI in a sentence that could have appeared in Das Kapital had Marx been writing in the 21st century:
The artificial intelligence industry depends on plagiarism, mimicry, and exploited labor, not intelligence. (3)
LLMs process the inputs we give them. Whatever the je ne sais quoi happening within the layers of neural networks between input and output, the product of AI is determined, shaped, and limited by what we feed it. Outputs cannot exceed the limits of inputs, which is another way of saying garbage in, garbage out.
Artificial intelligence is uneven. In repetitive tasks and jobs amenable to algorithmic management, iteration, automation, and optimization, it surpasses human capabilities with ease. In writing, it produces predictable, digestible, cliché-ridden texts – user manuals, DIY instructions, brochures, summaries, advertising copy, standard medical advice, political speeches, memoirs of trauma, travel tips, ideas for interior decoration, and so on.
Artificial intelligence is a perfect research assistant and information retrieval tool. It can find in minutes (often seconds) things that would take hundreds of work-hours to locate on your own – or that you might never find at all. But it stumbles and falls at anything that escapes the data on which it was trained. And as for de novo creation of style and voice – the sine qua non of creative writing and art – machines cannot even dream of either one, even if we assume, like Philip K. Dick, that androids dream of electric sheep.
What does this mean for AI writing? I’ll answer using the example of Don DeLillo, with whose work I am a bit familiar.
Because DeLillo’s novels are part of the vast corpus used to train LLMs, machines can easily reproduce the style and voice of the author of White Noise and The Names. Put in deep research mode, they will deliver cold DeLillo-esque irony in subtle homeopathic doses. They can overwhelm you with excessive paranoia from the Libra period. They can give you post-9/11 DeLillo. Or they can channel the author of The Silence, where the internet has collapsed and we live without screens.
And like a blender, AI can mix whichever version of DeLillo we want with Augustine’s Confessions, the Collected Works of Osama bin Laden, Fifty Shades of Grey, the Book of Revelation, Asimov’s short stories (many of which were prophetic First Encounters between humans and smart machines), along with nonexistent books by nonexistent authors, and so on. The options are endless. It is no coincidence that the word hybrid is a signifier of the times.
Once a literary text becomes input, the model can serve it in whatever form you prefer – memes and deepfakes, high-end PowerPoint presentations, podcasts, book reviews, dissertations, and much more.
But the model cannot take flight or free itself from what it has learned in order to generalize and write something original. Machines ruminate on existing texts ad infinitum. They cannot create something that has not already existed.
What Is the Writing Style of Artificial Intelligence?
This does not mean that artificial intelligence lacks style or voice.
In a recent piece titled “Will A.I. Writing Ever Be Good?,” Max Read described recurring patterns in short stories written by machines. For unknown reasons, AI favors ghosts, shadows, memories, murmurs, and silence. In small doses, its human-sounding prose seems coherent and appeals to undemanding readers. But in larger doses, AI writing betrays itself through verbal and stylistic tics that bring its strangeness to the surface. Max Read explains:
Even as LLMs get better at producing fluid and plausibly human text, these persistent stylistic tics remain interestingly abrasive – AI text is smooth in a single short answer presented in isolation, but when you’re confronted with an overwhelming amount of it, the strangeness that’s been fine-tuned out begins to reassert itself (4).
In small amounts, the text may feel smooth. In large quantities, it becomes strange and outright bad.
Why?
Why is AI writing irritating?
So far, we have three explanations: technological, economic, and ontological.
The technological explanation stays close to the idea of stochastic parrots. Since machines process preexisting material, sooner or later the constant regurgitation under the hood crystallizes into patterns that the model (for its own reasons) rewards as “quality” writing, without having a clue that an abyss separates them from human writing – perhaps this is a mild case of machine hallucination. The technological argument implies a barrier for AI, but it is reluctant to offer an opinion on whether the barrier is insurmountable or whether future technologies will transcend it.
The second explanation posits the lack of economic incentives to build models that write as well as, or better than, humans. According to this argument, there is no demand for high-quality creative writing by machines. The money in AI lies elsewhere: automating systems, infrastructures, and processes in the real economy, along with the precarization – if not decimation – of large segments of workers in the knowledge economy (the white-collar bloodbath is becoming a thing). In healthcare, AI reshapes doctor-patient encounters and automates administrative and bureaucratic tasks so the system can be more productive and hopefully less error-prone. AI in hospitals is not about writing the next House of God (5).
Even if AI Chekhovs, Kafkas, Becketts and Houellebecqs could be built, there is no market for literature by intelligent machines. Other than (perhaps) esoteric demographic groups that cohabitate with AI girlfriends and boyfriends, who would care about these books? And even if they bought them, what would the profit margins be?
Economic analyses also point to a zero-sum game between AI’s literary ambitions and the pursuit of profit. GPT-4.5 was designed to write good prose but proved too slow and expensive for non-literary, profitable tasks and was withdrawn (6).
The ontological approach shifts the discussion to a different level. Human writing is unique because we are biological things.
Bio-intelligences – from simple unicellular forms with the ability to react to outside stimuli, to the human brain – are products of unimaginably deep evolutionary processes. The first nervous systems appeared 600 million years ago. The history of the human brain began 6-7 million years ago, when hominids diverged from chimpanzees. Anatomically modern Homo sapiens appeared around 300,000 years ago. And human brains with speech, numbers, abstract thought and art have roughly 35,000 years behind them.
All this depth translates into anatomical and functional advantages far beyond efficiency, optimization, and automation at scale – the strong suits of LLMs and their instrumental rationality. The human brain reads nuances, gray zones, ambiguities, multiplicities, implications, ironies, and countless unpredictable situations that cannot be algorithmized.
At the individual level, there is each unique person, with their singular experiences, memories, obsessions, pathologies, dead ends, inhibitions, drives, motives, talents, comedies, investments, opportunities, and so on – all products of long-term, uncontrolled, random interactions with human and non-human environments.
Human intelligence is inseparable from our bodily realities, and here the ontological explanation echoes Maurice Merleau-Ponty’s corporéité. The disembodied intelligences of machines lack lived, sensory experiences and knowledge of the surrounding world. They cannot transcend the limits of Nvidia microchips. They have neither interaction with, nor understanding of, the world, which confines them to a narrow and shallow range of functions and skills. From this point of view, the instrumental rationality of artificial intelligence is late-stage capitalism du jour, aimed at profit, not creativity.
Perhaps Gleick and others have a point when they argue that the AI industry is not based on intelligence and that using the term intelligence to describe LLMs is a misappropriation of the word.
Slop
Any discussion of what artificial intelligence can and cannot write is incomplete if it ignores the infinitely more common phenomenon of AI slop – flooding the internet with robotic garbage.
Low- or zero-effort texts, images, and videos, spam advertising, and endless brain-rot scrolling constitute the most immediate and tangible outcome of artificial intelligence. Talking cats starring in soap operas, Shrimp Jesus, little girls saved from tsunamis while clutching their puppies, zombies playing soccer, handshakes between the Pope and Satan at the Vatican, rabbits bouncing on trampolines, fake parades, ads for non-existent products, kangaroos working at airports, flight attendants identical to Hollywood celebrities, and countless similar non-products led to slop being declared Word of the Year in 2025.
This is the present and the future – not artificial Virginia Woolfs.
Artificial intelligence is diffusing into creative fields, but it does not threaten human writing. Writers who use AI will likely replace writers who do not. And the interesting writers will use LLMs in order to avoid and sabotage the neural networks’ advice and writing.
Notes
1. “The Large Language Muddle.” An angry attack on Large Language Models by the editors of n+1, Fall 2025.
2. Emily M. Bender and Alex Hanna. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Harper, 2025. 274 pages.
3. James Gleick. “The Parrot in the Machine.” New York Review of Books, July 24, 2025.
4. Max Read. “Will A.I. Writing Ever Be Good? Some Notes on A.I. Writing.” December 5, 2025.
5. Samuel Shem. The House of God. A pioneering medical novel satirizing the training of young doctors. When published in 1978, it caused a scandal; today it is required reading.
6. Nathan Lambert. “Why AI Writing Is Mid.” Interconnects, November 16, 2025.
The article was originally written in Greek for the online magazine O Anagnostis. An English draft created by (what else?) AI was edited by GZ. Given the nature of AI, this brief opinion piece became outdated before it was even published. The author is a physician who uses AI in his daily work.





