
Without better gatekeeping on AI, publishing threatens to collapse
A few months ago, I went to buy Callie Hart’s breakout novel, Quicksilver, on Kobo. I couldn’t find it. What I did find, however, was Quicksilver, published by a dude named *Kevin, sporting Callie Hart’s cover. Oh, Hart’s name was still present…buried somewhere in the description. But it was clearly Kevin who would be turning a profit here.
Kevin’s theft of Hart’s novel, it turns out, was just the tip of the iceberg. Under a veritable tsunami of AI copy thefts, AI-generated submissions, and now, of course, AI scammers, publishing risks becoming a redundant, crime-riddled industry. The current moment is an existential crisis, and to save itself, the industry as a whole, readers included, will need to act swiftly and decisively.
The Oversized Impact of ‘Everywhere AI’
Let’s talk for a moment about where abuses with AI are occurring. First, there are the Kevin scammers, who simply use AI to copy a book and sell it as their own—often using the same cover (to cash in on all those people who genuinely want to read the title). Additionally, there are the agents, magazines and publishing houses that are being flooded with AI-‘authored’ submissions. Some admit their creations are AI; others will try to pass off their submissions as human-authored.
Agencies and publishing houses are run by humans, however, who have to take time out of their day to vet each submission. AI-generated submissions increase their workload by an order of magnitude. AI slop promises to turn traditionally slow publishing paths into an utterly glacial crawl.
AI in the hands of ‘bad actors’, or even just ordinary folk who have not been given enough information to make informed choices about their use of AI, puts us all at risk.
But indie publishing services are also being impacted by AI slop. Just the other week, I attended a session with Draft2Digital, an indie publishing support company. The representatives were asked how the company is dealing with AI. Their answer? While the company tries to identify and remove AI-generated content as quickly as it can, it is being inundated. Imagine this on the scale experienced by platforms such as Amazon’s Kindle store and Kobo, which often rely on readers reporting bad-actor content to the respective sites.
The point here is simple: no company is going to be able to catch every AI slop scam. And even the scams they do manage to catch won’t be caught quickly enough to mitigate the damage they do.
And then there are the authors whose inboxes are being stuffed with AI-generated letters promising them book clubs, marketing support, and book reviews. No one can trust what they read or see these days, and that is producing a crisis in reality itself.
Ethical Dilemma, Practical Effects
At base, AI scams are about greed, and we need to be clear about one thing: greed is going to be the downfall of the human race. Forget for a moment the millions of metric tons of fresh drinking water consumed (often for free) to train AI models and run AI infrastructure. Yes, AI is destroying the last, tattered shreds of our environment, and it is robbing future generations of their right to clean drinking water.
AI scientists (and I have personally interviewed many) are typically very moral and want AI to be used ethically. But the companies and individuals who utilize AI don’t adhere to the same ethical frameworks as science. They are not policed. Companies cannot, at heart, be moral, because they are entities (not people) meant to generate profits for someone.
With publishing, the impact of a general lack of policing could be devastating, for a very simple reason.
Consider the AI slop hawker who generates books using AI and posts them on Amazon. The marketplace becomes flooded with AI-generated fodder that is difficult to read and turns most readers off (because, news flash, they are not good stories!).
Underpinning all of this is the fact that the AI used to generate these inadequate stories was trained on millions of human-authored creations by authors who were never paid for their works. This goes beyond copyright infringement: this is human scammers replacing human authors with stolen, machine-generated versions of their own work. The scammers make the money, rather than the humans who sometimes spent years inventing their fictions and were likely compensated only peanuts.
It’s not much of a leap to imagine how platforms such as Kobo and Amazon might fail to keep up with speedy con artists who publish multiple AI-generated books per day, under several aliases. At that speed of creation, AI slop hawkers don’t need to work hard to make a dishonest buck. Human authors, meanwhile, can’t afford to compete.
Let me now return to readers, who, in this scenario (which is happening right now!), face a marketplace inundated by AI slop hawkers, with no clear indicator of what is AI-originated and what was created by human authors, whose voices are unique and whose stories are important.
That’s the point of scammers, of course: to come across as authentic in order to fleece people out of their hard-earned cash. It doesn’t matter whether they steal $0.99 or $9.99 from you; either way, they degrade your experience of reading.
If the books people read are poorly written, if the Substack you peruse is nothing more than word salad with AI prompts left in the margins, readers won’t return. They will turn their backs on you, on books, and potentially on reading as a whole.
In this scenario, much like machine-learning models trained on their own slop, the entire industry collapses under the weight.
Readers Matter
Unfortunately, the real gatekeepers of the publishing industry are going to be readers like you and me. Readers deserve to know whether they are buying and consuming AI-generated work, and they will need to hold companies’ feet to the fire to ensure their platforms are doing the work of distinguishing human from AI before those so-called creations reach the marketplace.
It’s not fair that we, the consumers, need to fight tooth and nail to protect the sacred trust that books create with readers. It’s truly indecent that so many unethical and deeply unwise people are profiting as they collapse, perhaps within years, an industry that has survived for centuries. But this is just one fight among many when it comes to how AI is being utilized.
We need to teach our neighbours and friends the ethical and moral stakes attached to AI. Assure that first-time ‘author’ who used AI to ‘finally write that book’ that their authentic work would have been more interesting and more valuable to readers. We need gatekeepers to hold AI slop hawkers accountable, and the companies who stand to profit from them accountable, too. It can’t all be on the backs of readers.
Publishing as a whole needs to speak up—and quickly, too. Otherwise, I fear, we’re all writing the last chapter on books.