Professional Writers Are Using AI, but Not Established Authors
A Class Divide in the Writing Landscape
I use AI tools in my writing, and I’m constantly checking to see that my voice comes through. My trick is to take an AI’s first draft and heavily edit it.
Sixty-one percent of professional writers now use AI tools. And yet, more than seventy prominent authors recently signed an open letter asking publishers to promise they would “never release books created by machines.”
Both of these things are true at the same time. That tension tells us something important: this isn’t really a story about technology adoption. It’s a story about identity, craft, and what it means to be a writer in an age when the machines have learned to string words together.
The Adopters
The numbers are striking. According to a November 2025 report from Gotham Ghostwriters and researcher Josh Bernoff, more than a quarter of professional writers now use AI tools daily. Those who do report an average productivity boost of 31 percent.
But the adoption isn’t evenly distributed. Thought leadership writers lead the pack at 84 percent, followed by PR and communications professionals at 73 percent, and content marketers close behind. Fiction authors, by contrast, trail significantly.
What are they using it for? Mostly the unglamorous work: brainstorming ideas, accelerating research, rewriting clunky sentences, unsticking themselves when the words won’t come.
A University of Washington study found that creative writers who use AI are deliberate about when and how they engage it, making conscious decisions based on core values like authenticity and craftsmanship. That’s the mode I’ve tried to adopt.
Here’s the key insight: only 7 percent of writers surveyed have actually published AI-generated text. The tool is behind the scenes, not center stage. For most adopters, AI is a backstage assistant, not a ghostwriter.
The Refusers
Billy Ray has never opened ChatGPT. Not once.
The Oscar-nominated screenwriter behind Captain Phillips, Shattered Glass, and The Hunger Games hasn’t used it to fix a clunky line, win a bar trivia argument, or figure out what to do with leftovers in his fridge. To Ray, generative AI represents something he wants no part of.
He’s not alone. In June 2025, more than seventy authors including Dennis Lehane, Gregory Maguire, and Lauren Groff released an open letter addressed to the “big five” U.S. publishers. Their request was simple: promise you will never release books created by machines.
The resistance has organized. The Authors Guild has filed class-action suits against OpenAI for copyright infringement. The Hollywood screenwriters’ strike made protections against AI a central demand. An academic movement called “Refusing Generative AI in Writing Studies” has been growing for over a year.
The reasons vary, but they circle the same core: concerns about voice erosion, copyright theft, and the belief that writing is fundamentally a human act. For these writers, using AI isn’t just a workflow question. It’s an existential one.
When It Goes Wrong
In May 2025, readers of fantasy and romance novels made an unsettling discovery. Two authors, K.C. Crowne and Lena McDonald, had published books with ChatGPT editing notes still embedded in the text.
“I’ve rewritten the passage to align more with J. Bree’s style,” read one note left in McDonald’s Darkhollow Academy: Year 2, referencing another author in the genre. The excerpts spread across Reddit, Goodreads, and Bluesky. Readers were not impressed.
Both authors defended their use of AI as an editing tool, but the damage was done. The incident crystallized something readers had been feeling: they care about who wrote the book, and they care about transparency.
Then there was National Novel Writing Month. In September 2024, the organization behind the annual 50,000-word writing challenge released a statement that declined to condemn AI use and, worse, suggested that opposing it was “classist and ableist.” Writers revolted. Many abandoned the event entirely, sparking a fierce debate about whether AI-assisted work should count as “writing” at all.
The lesson from both controversies is clear: transparency matters, and the audience has opinions about authenticity that can’t be waved away.
The Middle Path
Between the enthusiasts and the refusers, a quieter consensus is emerging among thoughtful writers: treat AI as a collaborator, not a shortcut. This is how I’ve always tried to think of it.
The distinction matters. Using AI to generate a first draft and publish it with minimal editing is one thing. Using AI to break through a block, research a setting, or stress-test a plot is another. Successful writers who use AI tend to maintain their voice while deploying it for specific, bounded tasks. They know when to lean in and when to step away.
What remains irreplaceable is harder to name but easy to feel: lived experience, emotional truth, the ability to surprise. AI can imitate style, but it can’t live a life. It can generate plausible sentences, but it struggles to know which ones matter. The writers who thrive in this moment will be the ones who understand what they bring that the machine cannot.
What This Moment Tells Us
The real question isn’t “Will AI replace writers?” It’s “What kind of writing do we value?”
As AI-generated content floods the market, one counterintuitive prediction keeps surfacing: people will hunger for something raw, real, and human. The more polished and frictionless AI prose becomes, the more readers may crave the rough edges and unexpected turns that only come from a person who has lived and felt and struggled to find the right word.
Writers are choosing their relationship with AI right now. That choice isn’t just practical; it’s philosophical. It reveals what each writer believes writing is for.
Some will use AI heavily. Some will refuse it entirely. Most will land somewhere in between, navigating the tension day by day, draft by draft. What matters is that the choice is conscious, the values are clear, and the voice on the page remains unmistakably their own.





I was hesitant at first when OpenAI’s tools became available. By then, I had already written four chapters of a novel, gotten stuck, and shelved the manuscript for ten years.
I’m not a known or established writer, just a self-published author who detests the closed circle of Amazon. Yes, I’ve sold a few books, but the problem is that what sells on Amazon stays on Amazon. My work has hardly reached readers outside that ecosystem. And don’t get me started on the meager royalties; it’s a cruel joke for the effort we put in.
Still, I persisted. Two of my stories were eventually published in graphic novels. That happened for two reasons: sheer persistence, and the fact that I personally knew the publishers’ favorite contributing authors. After several submissions, each publisher finally found a story of mine that fit their preference.
Unable to afford a proofreader or editor, I still longed to complete my novel. I knew that finishing it would restore my sense of purpose. So, with great reluctance, I registered with OpenAI—not for anything else, but simply for assistance.
Over the course of a year and a half, Hans (my AI collaborator) helped me finish the novel. He preserved the tone I had set, the English of the 18th century turning into the 19th, which Grammarly and ProWritingAid could not. Those apps kept modernizing the language into 21st-century speech, which was never my intention.
My point, in response to your essay, is this: struggling writers like me (I am also a painter and photographer) often have no choice but to turn to AI tools. I don’t use AI generators for my images; I create those myself, digitally or by practical means. But when it comes to writing, no professional publishing company will give us the time of day unless we pay upfront.
We write feverishly, sometimes for months or years, only to be paid pennies for our efforts. So what is wrong with consulting OpenAI for assistance? For many of us, it’s the only way to get through the door—and perhaps, finally, to be paid fairly for the work we pour our lives into.