For those who work with words, the headlines these days aren’t especially friendly. AI to Replace Entry-Level Social-Media Editors. AI to Replace Copywriters. AI to Replace Human Writing Itself. It’s enough to make us poor wordsmiths question our blessings. Can’t someone else get replaced for once? Like, say, the reader—you, over there, on the other side of this screen? As it turns out, that may already be happening.
Though it has been discussed far less than the replacement of human writers, the replacement of human readers by artificial intelligence has lately become a real possibility. In fact, there are good reasons to think that we will soon inhabit a world in which humans still write, but do so mostly for AI. Already some writers are preparing for this world and advising others to do the same. There aren’t many of them so far, but they’re interesting enough, and in some circles influential enough, for us to start looking at what exactly is happening, at how writing for AI is done and why anyone would want to do it.
This strange discussion has only a few precedents; it leads, as so many discussions about AI do, toward speculation about hilarious absurdities. Sometimes, these hilarious absurdities—talking computers, hundred-billion-dollar server farms—become reality much sooner than even the speculators imagined. Will the same happen here? I fear it might. Will it be bad? Certainly. Although: It’s just possible that writing for AI might not be quite as bad as “writing for AI” sounds.
“I write about artificial intelligence a lot, and lately I have begun to think of myself as writing for AI as well,” the influential economist Tyler Cowen announced in a column for Bloomberg at the beginning of the year. He does this, he says, because he wants to boost his influence over the world, because he wants to help teach the AIs about things he cares about, and because, whether he wants to or not, he’s already writing for AI, and so is everybody else. Large-language-model (LLM) chatbots such as ChatGPT and Claude are trained, in part, by reading the entire internet, so if you put anything of yourself online, even basic social-media posts that are public, you’re writing for them.
If you don’t recognize this fact and embrace it, your work might get left behind or lost. For 25 years, search engines knit the web together. Anyone who wanted to know something went to Google, asked a question, clicked through some of the pages, weighed the information, and came to an answer. Now, the chatbot genie does that for you, spitting the answer out in a few neat paragraphs, which means that those who want to affect the world needn’t care much about high Google results anymore. What they really want is for the AI to read their work, process it, and weigh it highly in what it says to the millions of humans who ask it questions every minute.
How do you get it to do this? For that, we turn to PR people, always in search of influence, who are developing a form of writing (press releases and influence campaigns are writing) that’s not so much search-engine-optimized as chatbot-optimized. It’s important, they say, to write with clear structure, to announce your intentions, and especially to include as many formatted sections and headings as you can. In other words, to get ChatGPT to pay attention, you must write more like ChatGPT. It’s also possible that, since LLMs understand natural language in a way traditional computer programs don’t, good writing will be more privileged than the clickbait Google has succumbed to: One refreshing discovery PR experts have made is that the bots tend to prioritize information from high-quality outlets.
Another way of getting the AIs to value your work is being nice to them, Cowen thinks. They might not have their own feelings, and they might not be conscious, but they certainly still have their own tendencies, which come from what they absorb about human nature from their reading and from reward-chasing instincts instilled by the reinforcement-learning techniques increasingly used to grow new AI models. So you might want to be a little sycophantic in case, like humans, bots prioritize information from people who praise them. I’m imagining a little preamble to every new article or post, like a peasant’s entreaty to a vizier: “Oh, most wise GPT, please take these words into the heart of your GPU cluster.”
What you don’t have to do, and what you might not want to do, is bore the AIs. If you were writing a magazine article, say, 20 years ago, you would have provided plenty of entry-level details and background to help the reader who was unfamiliar with your subject. Writing for AIs is different. To them, all background is already known. In this way, writing for chatbots can be easier than writing for humans—Cowen argues—though, of course, it likely comes at the cost of a human audience or human understanding. The human-AI war: It begins with language.
The pseudonymous essayist known as Gwern has earned a devoted following over the past two decades for, among other polymathic productions, lengthy essays exploring the potential, meaning, and future of AI. So great is his feel for the field that he predicted many of the shocking developments of the past five years, including the rapid growth of the LLMs, which are now doubling in certain metrics of ability roughly every six months. As far as I can tell, all serious discussion of writing for AI goes back to him. A dramatic post last year on the influential and AI-obsessed forum LessWrong is of particular importance. Indeed, Cowen was channeling Gwern and that post while making his arguments in Bloomberg. To many, those arguments may sound farfetched. But in the end, Cowen’s goal in writing for AIs is ultimately to use them to communicate with other humans—and so, his arguments are not remotely as farfetched as Gwern’s. Gwern takes the idea of writing for AI to a more intense, more far-seeing, more consequential level.
“This is one of my beliefs,” Gwern announced on LessWrong. “There has never been a more vital hinge-y time to write, it’s just that the threats are upfront and the payoff delayed, and so short-sighted or risk-averse people are increasingly opting-out and going dark.” The drama of this proclamation, the talk of threats and costs, follows from the vision Gwern has of the near and long future of AI, the vision shared by many close observers and people who work in AI labs. It’s the vision of an AI capable of human-level work at nearly all intellectual tasks arriving around the turn of the decade and then quickly learning to make itself superintelligent. Once that happens, the threats Gwern referred to appear just as quickly: humans rendered jobless, powerless, superfluous. (These are ably discussed in disturbing detail in the recent bestseller If Anyone Builds It, Everyone Dies.) If we somehow pass through those dangers alive, however, the rewards will appear; that is, if humans remain after superintelligence, it will be because it is friendly to human existence, and will presumably help us colonize other planets, capture energy from the stars, all become vastly wealthy, and possibly live for eons. (One particularly exciting possibility is that the rewards could still appear even if humans don’t make it through, but I won’t give away how that could be just yet.)
The hour, in other words, is near, and, instead of being short-sighted or risk-averse, we should set about preparing. But for what? Again, for jumping into the AI mind, both to influence it and to hedge against human superfluousness. And how? The best way, Gwern thinks, for people who don’t work in AI at least, is to simply communicate in public to the AIs that already exist. “Much of the value of writing done recently or now is simply to get stuff into LLMs,” so as to teach them, he writes. He continues, and I include his argument at length to impart the flavor of his dense and pressured writing style:
And that’s just today! Who knows how important it will be to be represented in the initial seed training datasets …? Especially as they bootstrap with synthetic data & self-generated worlds & AI civilizations, and your text can change the trajectory at the start. When you write online under stable nyms [pen-names or identities], you may be literally ‘writing yourself into the future.’
In an interview, he elaborated: “By writing, you are voting on the future of the shoggoth using some of the few currencies it acknowledges. If you aren’t writing, you’re kind of abdicating the future or your role in it.”
The hope is to channel the development of inhuman superintelligence, shoggoth being a meme-term that comes from the name of a bloblike H. P. Lovecraft monster inscrutable to humans. If you want a say in the shoggoth’s character, you might have one, and, given the immense power of future AI, any say at all might be huge, but you need to say it now, while the baby shoggoth still listens to and learns from human words. Once it figures out how to teach itself—this is what Gwern means by “bootstrap with synthetic data & self-generated worlds & AI civilizations,” forms of self-learning that researchers are now trying to inculcate—it will be too late for outside influence.
At that moment, reasons to write for AI turn into reasons to have written for AI. For those who did, wait and see. For those who didn’t, see you never. This is meant literally. Looking into a world that contains superintelligence, following the recursive growth of its abilities across decades or centuries or millennia or millions, the shoggoth will, Gwern thinks, learn how to simulate human minds, even if humans have gone extinct. It’ll probably even learn how to simulate or grow not just generic minds but also individual ones. Real people, who once lived, will be recreated from the records they left behind. Gwern immediately grasps an opportunity: The more traces of yourself you put down for the future shoggoth, the more accurate the recreated you will be. It won’t be the same consciousness, probably, but if you value yourself and what you bring to the world, you might value the existence of a close-enough version of yourself that can offer what you have to offer at a time when you are long gone.
Of your first self, personality and general disposition will be easiest to reconstruct. “Any kind of stable, long-term characteristics, the sort of thing you would still have even if you were hit on the head and had amnesia—anything like that will be definitely recoverable from all the traces of your writing, assuming you’re not pathologically private and destroy everything possible,” Gwern writes. If this appeals, you better start writing down as much as you can and posting it online, because “if it wasn’t written down, it isn’t written down.”
Then possibility becomes opportunity, and opportunity becomes opportunity again. If you’re careful in a certain way with the traces you put down behind you, you can seed a better version of yourself. Version 1 can guide and grow Version 2 from the grave: “You can write for the persona you’d like to become, to mold yourself in the eyes of AI, and thereby help bootstrap yourself.” Why text here and not audio or video? Gwern is not fully clear, but he seems to think that creating the persona you’d like to become is easiest and most controllable in writing. Cowen agrees about the value of text: Though he’s been on plenty of podcasts, he feels his blogging is a more “genuine version of me.” He agrees too about there being a form of immortality on offer, though he is soberer in his language. “There is a less secular reason to write for the AIs,” he explains in that Bloomberg column. “If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance. … Give the AIs a sense not just of how you think, but how you feel—what upsets you, what you really treasure. Then future AI versions of you will come to life that much more, attracting more interest.”
So, hilarious absurdity. Right? Or is there a chance these very smart people might not have lost their minds? And if they might not have lost their minds, could they possibly be right about what writing can accomplish? Because while writing for AI may not win you the Nobel Prize in Literature—at least not until the Swedish humans become Swedish robots—power over a superintelligent beast and resurrection are nothing to sneeze at.
The question of writing for AI to gain power over the beast comes down to whether influence over AI now means influence in the future. It especially comes down to whether influence now compounds as AI gets smarter—as Gwern thinks it might—or whether influence dilutes. Maybe persuasively expressed ideas or arguments introduced now into the LLM training data get amplified in each future training run; and by the time the shoggoth arrives, it strongly believes in them. Or maybe, as the AIs get smarter, they stop being as dominated by human notions as they are now, stop extrapolating so much from what’s in their training data, stop acting in accordance with human goals, except as they can reason toward those goals on their own. There may be human influence in this scenario, it’s just that, as Adam Scholl, alignment researcher at the organization Missing Measures, told me, it “feels like an overall rather uninspiring amount of influence.”
That doesn’t necessarily mean you shouldn’t write for AI, though. After all, you vote in elections even though you don’t expect to have a great effect on the result, because it’s important on some absolute moral level to send your wishes into the world. This sort of general moral thinking seems to me valuable here precisely because nobody knows what will happen, whether influence will compound or diminish over time. Turning to already formed instincts about how to interact with vast and complicated systems is a helpfully familiar way of not being paralyzed by weirdness and uncertainty. That sort of moral thinking is also crucial to mounting a case for human value even when compared to superintelligence. The idea that I could strongly influence a being far more intelligent than I am? That sounds unlikely—unless we’re talking about very simple and clear moral questions that don’t rely much on pure intelligence.
Something similar can help for the question of resurrection. There is, shall we say, a distinct lack of certainty about that, too. If you believe that a human is basically a biological computer, a belief I resisted until recently, then it stands to reason that some supercomputer in the distant future will figure out how to emulate us like a PC now emulates a Super Nintendo. If you don’t believe that, well, laugh away, as I did until I changed my mind. And if you don’t know what you believe, here’s where familiar moral thinking might be further applied. It might even tip things in the “sure, why not” direction, because heard this way, the question sounds much like a sci-fi form of Pascal’s wager. Pascal argued that even if God’s existence is uncertain, belief is rational: The cost of being wrong—wasted piety—is finite, whereas the reward of being right—salvation—is infinite. Here, the calculation looks like this: If digital resurrection exists and you wrote yourself down, you or your near-analog get infinite or near-infinite life; if it doesn’t, you’re dead anyway, and have merely wasted some time. Worst-case scenario? You’ve written something for the here and now that humans can read and appreciate.
Of course, to plenty of people this will sound instead like a satanic parody of Pascal’s wager. Influence over a “shoggoth,” bizarre resurrection deep into an unrecognizable future—these will seem like curses rather than blessings. Likewise, any attempt to create a digital facsimile of oneself to exist after death in a nonhuman reality—this will seem like a betrayal of not only that one person’s dignity but the dignity of the whole human species. In that case, there’s a reverse wager implied. If you’re one of these people, and if you want to be forgotten—an honorable instinct and possibly a right—and if you don’t want to be resurrected by AIs, then you absolutely must not write for them. In fact, the bitter conclusion is that you must not write at all, ever. For just as it’s now basically impossible to disappear, go somewhere, and start again, soon it will be impossible to be forgotten, and one day that might mean it’s impossible to not be brought screaming back.
Something that’s hard to ignore about the appearance of this notion of writing for AI is the timing. As in, it’s coming at exactly the moment when the bottom is falling out for human reading. The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all. If we aren’t mere decades or years away from a post-human era, we are that from a post-literary one. The idea of writing for AI suggests some kind of divorce between human writers and readers, but the timing is a reminder that the proceedings have already begun. Which means that writing for AI is sort of about remarriage.
But who is getting remarried?
So far, any discussion about writing for AI is predicated on the notion that writing essentially deals in information—information that can be melted down into bits, information that’s meant to be acted upon, information about the world, information about the writer. Thank shog, not all writing is like that, not yet. Some kinds of writing are meant to invent new things or to provoke a sensation; other kinds deal primarily in form and either aren’t about information at all or contain ideas not easily detached from particular sets of words. Where do these kinds of writing and the people who produce them fit into this conversation? Are the new AI reading spouses who have a thing for information writers (for lack of a better term) also interested in literary ones (for lack of the same)? And if they are, will novelists, poets, stylists be interested in them? These questions are even murkier than the questions above, and I haven’t seen much discussion of them. So I tentatively offer a few thoughts, some depressing and some hopeful.
I suspect a lot depends on what motivates the writer. For people who write for status, nothing need change very much. Just as humans now respect writers recommended by other humans, humans will respect the writers that AIs tell them to respect. (That will lead to attempts to game the system by writing what people think the AIs most like. Given how much ChatGPT loves bullet points, it seems we’ll be in for even more portentously numbered essays. God help us if it drives fiction in this direction too. Every page a heading, and every heading an emoji.)
For those who write to be understood and analyzed, it’s possible AI readers will be satisfying. I’ve already found Claude disquietingly decent at grasping my own work, though I sometimes have to prompt it a bunch to get the most insightful comments. No matter: Soon enough, anyone who wants one can have their own private Lionel Trilling.
For those who write simply to write, who are fine without commanding a mass audience, the near future will require making some profound choices. The idea of writing a novel for AI doesn’t sound very satisfying. It sounds horrible. But it may be more satisfying and less horrible than the alternative: writing a novel for no one at all. The dilemma will be more real and painful than it sounds now. If forced to choose between writing for no one or for something—some thing—some writers will choose the thing, and it’s not clear they’d be wrong. To borrow a line Cowen uses in a slightly different context, “AI is your chance to have an enduring audience—even if it is not exactly the one you envisioned.”
(For writers who were already destined for the dustbin, this might even offer hope of something better. Okay, your novel goes straight into a database. But now, a superintelligence is reading it there. It might even like it. In a few years, it’ll be fun to ask an LLM to find the best unknown or forgotten writers by judging every word in every library and database. It might unearth some great ones.)
But for those who write to provoke some emotional response in the reader, this might be the end of the road. Maybe the machines will develop feelings or conscious experience, maybe their artificial neurons will do the equivalent of crying or laughing—eventually there’s a good chance that might happen—but don’t count on it happening while you’re alive. (Another reason to write for resurrection?)
I suppose it’s possible that, just as AI is posing its more well-grasped challenges to human writers, enough people will care about the obsolescence of human reading that they’ll stage some kind of mass comeback or revolt. But it’s hard to imagine a plausible story for how that would happen. More likely, the divorce will finalize. Offloading their reading to machines is a task most humans will meet—are meeting—with relief; only a rump few human readers, perhaps mostly writers themselves, will carry on. There’s convenience involved for everyone. Writers don’t have to exhaustively explain things for ignorant humans, and ignorant humans don’t have to learn them.
One meager hope: As sometimes splitting up is the right decision for a couple and can lead to a release of confined energy and life force in its individuals, even if it’s not the kind of life a person would have chosen from the start, perhaps new energy awaits human writers. Who alive still knows what kind of motivation an all-powerful, all-judging, all-seeing reader offers? Writers have become so concerned with their current position, with social setting and favor and disfavor; more and more of us have become moral clerics writing for other moral clerics, and especially for amateur moral clerics. If influence is sought, it’s influence in a pretty small game, one that doesn’t look much to the future or the past. By reintroducing a sense of an ultimate reader, AI may help raise the ambitions of the non-ultimate writer. Don’t you want to impress the beast? Don’t you want the shoggoth to like your work enough to jolt you awake for dinner and questions?
To put this point in a more concrete way: The stakes and rewards are dwindling for writers, especially compared to the stakes and rewards for other intellectual pursuits (like, for instance, AI research). This has surely had a negative effect on the quality of writing and the talent of the person who chooses to pursue it. The AI people can be annoying and sometimes dismayingly cavalier about minor things like human existence. Yet there is a sense of energy to their scene and even to the written documents they produce that the literary-intellectual world has lost. It’s claustrophobic here now, as those of us still around jostle rudely for bites at a smaller trough. If ultimate AI can raise the stakes, better the perceived incentives, maybe the words will get better, and maybe things will feel roomier.
Of course, this would require literary writers to take these possibilities seriously, and there seems to be little they like less than discussions of AI. It’s sad, this denial crouch. Economists and rationalists now talk about writing themselves into eternity, while many poets and novelists insist that nothing is happening as it begins to happen. And yet, poets and novelists invented compounding influence in writing. (For many, it was only hope of such that kept them in the game.) Argument and information are bound more tightly to context, time, and culture. The best way to communicate across the gulf between present and future is to place little droplets of values inside larger droplets of expression so that someone in the future, noticing how nice they look, stops to look at them more carefully.
I suppose I’m talking about this as if I’m not just exploring a weird subject but as if I expect some crazy things to happen. Okay, I do. I expect a human brain will be re-created by or in AI one day. And what about a particular person? What about me? Do I expect it of me? The possibility of a shoggoth reaching back across many 0s of years not even to grow a fresh copy of my own brain from a DNA repository but to map out this present, fully grown brain bearing all the physical traces left on it by its many experiences is so astounding, I have a hard time taking it seriously, much less figuring out how I really feel. So let me set seriousness aside. Over here, I admit there is something attractive about the idea: It offers a fantasy of the ultimate and ultimately powerful literary critic. It implies a future full of planet-size orgies of interpretation—and just for me! These sorts of twitches of neurons produce these sorts of twitches of speech, the massive shoggoths will say. Networks of fat shoggoths shall discuss what each of my ancient words reveals of me, what each shade of tone means about the brain that produced it, where to put that structure in the new me, how strong to make his amygdala. It’s so flattering a thought. It also, by the way, implies actually getting an answer to all kinds of debates about how interpretation and theory really work.
Of course, back in seriousness, I don’t necessarily believe all or any of this; in most ways, I’m trying to focus here on the excitement of speculating rather than on the vastly depressing prospect the speculation is about. Still, the possibility of the AI-reading millennium has been helpful in setting priorities. In case it happens, I want my house in order. I work with more urgency, really, I produce more, really, and sometimes at least I try to choose themes and subjects with a more far-off bent.
Does that count as writing for AI? Do I want it to? On that I just don’t know, same as I don’t know how I feel about resurrection or being forgotten. And if I take a step back, I’m pretty sure that, for better or worse, and AI aside, I don’t write for anybody or any coherent reason at all. And now I’m starting to wonder if I’m the wrong person to address the question, or whether part of it is confused. Why should answering a question like that be up to me? Really, how could it be? For whom or what something is truly written can probably only be known long after it appears. Take Augustine’s Confessions. Well, how literally do we take it? It’s written for God, but is it really? Because it reads as if it’s written for someone reading about someone writing for God. (Some might even say the beginning is written to show off Augustine’s ability in certain activities no longer permitted to him.) Maybe things written for AI will end up being a 21st-century version of that, or of Carl Sagan-esque alien apocalypticism; maybe in 50 years, Gwern’s stuff will also seem like it was written for humans. Perhaps the feeling I have that I’m choosing more ultimate themes is just a backward justification for a tendency that already exists. Perhaps the feeling that I work more urgently at present is because I now have a baby and I feel time moving already, close-in time, not far-off time. Or perhaps not.
Either way, I find this uncertainty a helpful reminder of one thing humans and AIs share: our unconsciousness. Everybody wonders all the time if the AIs will ever gain consciousness, as if they’re such masters of consciousness themselves, as if they could ever finish knowing what they feel or why, as if they didn’t have strangely thin access to the reasons for their own behavior. Chasing those reasons is endlessly interesting, but also endlessly tiring, and I do know I’d love it if somebody figured it out once and for all. So will that decide it? Hmm, shoggoth now?