Just three years have passed since OpenAI released ChatGPT. Today, more than 1 billion people use standalone artificial intelligence tools each month, according to a recent report from DataReportal. Consumer demand has driven software companies like Meta and Anthropic to scramble to find more data to train their AI models. An easy target? Books.
This past March, I stumbled on an article on TheAtlantic.com titled, "Search LibGen, the Pirated-Books Database That Meta Used to Train AI." So, I did just that, entering my name in the search box.
Imagine my surprise when I discovered that The Last Letter from Sicily had been illegally uploaded and potentially used for training. I wasn't alone; the dataset includes millions of books and academic papers. Original works are fueling the AI revolution without our permission, driving AI-forward corporations like Meta to net billions in revenue growth, leaving readers and writers to pay the ultimate price.
About a month after my discovery, Kathie Giorgio, a Wisconsin author, fellow Authors Guild member, and founder and director of AllWriters' Workplace & Workshop LLC, posted an invitation for authors to join her on a panel at the Southeast Wisconsin Festival of Books to discuss this topic and how AI has impacted reading, writing, and publishing.
On November 7, I sat with two other authors, Ross Hightower and Liesel Shurtliff, to discuss what has happened and how the industry has changed. We were joined by Aaron Nodolf, an intellectual property and patent attorney and partner at Michael Best & Friedrich LLP.

The session was well attended by curious readers and writers, many of whom raised important concerns and asked insightful questions. We spoke about how human storytelling remains irreplaceable and highlighted recent legal battles over AI training data, including Anthropic's $1.5 billion class-action settlement and ongoing cases against Meta, with implications for the future of copyright law and creative work.
One author asked for our thoughts on AI use cases. My response: I don't recommend trusting AI the way you'd trust a calculator. It's just too unreliable, especially for research and citations.

Case in point: I searched Google for the rest of Mussolini's quote, "We must be a warlike nation." The AI response, which appeared above my search results, asserted that Mussolini actually said, "We must be a peace-like nation," and that the Fascist dictator had won the Nobel Peace Prize.
In case you're wondering, he was not a Nobel Peace Prize winner.
Finally, I expressed my overarching concern: the use of AI in publishing devalues authors and reduces per-project pay rates. I shared a job post in which an advertiser offered $100 for 25 articles. Unfortunately, this is not a one-off. In the past few months alone, I have seen several jobs asking writers to "humanize" AI-generated text, some posted by respected national media companies.

Publishers justify such rates by claiming that writers can use AI to streamline their output. If this trend continues, gone will be the days of paying writers $2 per word, as I did two decades ago as an editor at Shape, or even the $0.50 per word I've seen in recent years.
Publishing is a costly business, and increasingly, publishers are turning to low-priced solutions. Bloomsbury Founder and CEO Nigel Newton recently shared his views on AI, suggesting, in an interview reported in The Guardian, that the technology could benefit writers.
"I think AI will probably help creativity, because it will enable the 8 billion people on the planet to get started on some creative area where they might have hesitated to take the first step," he said. "AI gets them going and writes the first paragraph, or first chapter, and gets them back in the zone. And it can do similar things with painting and music composition and with almost all of the creative arts."
But will the writer stop after that first paragraph? Should writers use AI to write an entire chapter? These are important ethical questions in a rapidly changing publishing landscape. 
Newer genre-fiction publisher Inkitt provides infinitely customizable AI-driven content. While its authors receive small royalties, a Bloomberg article revealed that their contracts grant Inkitt exclusive intellectual property rights to revisions, expansions, and adaptations without author input or approval. Interviewed authors reported seeing the resulting sequels only after publication.
How can we trust that a human has written the books we see listed for sale, even when allegedly written by our favorite authors? That's a legitimate question, especially at a time when authors, including publishing expert Jane Friedman, are stumbling on unauthorized AI-generated works bearing their byline. Just as maddening, authors are now forced to compete with AI knockoffs of their human-authored books.
Tens of thousands of TikTok videos promote high-volume AI publishing as a way to game the system and earn passive income. And enterprising "authors" are taking to Reddit to share their schemes.

We know how this started, but where will it end? And at what cost to readers, writers, and publishers?
I have much more to share on this topic based on my research and experience, and I look forward to presenting to future audiences. You can find a list of all of my speaking topics on my "Book Lindsay" page of this site.
Catch me at an upcoming event! Find my list of appearances on my Events page.
