To AI or Not to AI… Is that Really the Question?
Some Thoughts on the Pros and Cons of a New Technology
AI tools have sparked heated debates across creative industries, with positions often polarized between wholesale rejection and uncritical embrace. My stance sits in the thoughtful middle ground: AI can be a valuable assistant for writers and creatives, helping with tedious tasks and offering fresh perspectives on our work. However, this utility doesn’t erase the serious ethical concerns about how these tools were developed—particularly regarding consent and compensation for the creators whose work was used without permission to train these systems. I wanted to explore both the benefits of AI as a tool and some of the troublesome issues that still haven’t been resolved.
What AI Does Well
I generally believe that AI could be a useful tool. I don’t think it can replace human creativity. But I do think it can help with the more tedious things in the world. When I worked in customer service, I would have loved to have an AI assistant that could give me suggestions for what to tell the customer. Note that I don’t think the AI could replace me—it wouldn’t have my empathy for the customer, nor would it have my ability to suggest creative solutions. But it would have helped me find better resources faster.
In writing, I think AI can be used as a tool to help writers improve their writing. I’ve been using some tools to look at sections of my work and suggest things I could do to make it better.
Note: I’m not saying I asked AI to re-write my story. Instead, the AI-assistant gave me suggestions like:
Make the subtext clearer: If Oliver is so at home in the woods, does that mean she isn’t? Perhaps she notices how freely he moves, while she feels weighted down by the expectation that she should be like him.
I still have to go into that chapter and figure out how to make the subtext clearer. I’m not asking the AI to write it for me. Why would I want it to do that? That’s the fun part!
What’s Problematic About AI
Setting aside the clunky, repetitive text you often get if you make the mistake of asking an AI to write a story for you, there are some issues regarding AI that still haven't been resolved:
- Management using AI to replace workers rather than assist them
- Many AI LLMs used content scraped from the web without permission
Cory Doctorow has covered the first concern much better than I ever could. If you haven’t read his blog, Pluralistic, a good article to start with is: AI can’t do your job.
“AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job”
A chatbot can’t do your job because it’s not human.
AI is Not Human
Many people fall for a logical fallacy when defending how AI companies scraped their training content: the assumption that AI is somehow human. Every comment section on every AI article will have at least one person saying something like this:
“The reason it’s not stealing is because this is exactly how we humans create new content anyways….” ~ @littleripper312 from a comment on Why the AI is stealing argument is irrelevant
But this argument equivocates on the word "human." In his view, because humans have always read the works of other artists and reinterpreted them in their own words, it is okay for a non-human computer AI to do the same. But AIs are not human. They are computers programmed by humans.
And the large, very wealthy corporations that built the AIs 100% want us to think of AI as human. That way, the above argument is no longer a logical fallacy of equivocation; it is just fact. But they have a vested interest in pushing that idea. If they suddenly had to report what content they used to create their models and pay the authors of that content, their profits would sink significantly.
This fundamental mischaracterization of AI systems as being similar to human learning isn’t just a philosophical problem—it has real consequences for creators. By promoting this false equivalence between human inspiration and machine training, AI companies have conveniently sidestepped a crucial ethical consideration: obtaining consent from the creators whose work they use. While humans naturally absorb influences through cultural exposure, AI companies made deliberate decisions about whose content to scrape and process, all without asking permission.
The saying goes, "It is easier to ask forgiveness than it is to get permission." But as far as I've been able to tell, AI companies aren't even doing that. They just shrug and say, "prove we took your content." Or they imply that your content (in my case, over 18 years of weekly articles on web design and HTML, comprising 500,000 words or more) is just a drop in the bucket.
They never asked for forgiveness for scraping it, nor did they ask for my consent. And consent is important, too.
Consent is Always Forgotten
If the AI companies had approached me (or more likely, the parent company hosting the content) and asked to suck in all our writing to train their AIs, chances are we would have said, “yes.” If they had offered to pay—even a nominal amount—for that content, the probability of a positive answer would have increased astronomically.
But they didn’t ask. They just took it, making the claim, “It’s freely available on the web, so anyone can read it. And that’s all we’re doing, we’re having our AI models ‘read’ the content.”
But AI isn't human. It doesn't read. In fact, if you search for less biased, more analytical sources online, you'll learn that AI models process words in a vastly different fashion than humans do: they break text into chunks called tokens and reason over those, not over individual letters. That is why some models insist there are only two letter Rs in the word "strawberry" (source, source, source - OpenAI). Even a human writer who can't spell strawberry can count the number of Rs in the word when it is correctly spelled. And their non-AI spellchecker would tell them that "strawbery" is spelled incorrectly.
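To make the tokenization point concrete, here is a minimal Python sketch. The token split shown is hypothetical (real tokenizers vary in how they chop up a word); the point is that a human or spellchecker scans letters directly, while a language model only sees opaque subword chunks:

```python
def count_letter(word: str, letter: str) -> int:
    """What a human or a spellchecker does: scan the letters directly."""
    return word.count(letter)

# A model never sees "strawberry" as ten letters. A tokenizer first chops
# the text into subword tokens, and the model reasons over token IDs.
# (This particular split is hypothetical, for illustration only.)
hypothetical_tokens = ["str", "aw", "berry"]

word = "".join(hypothetical_tokens)
print(count_letter(word, "r"))  # 3 -- trivial when you can see the letters

# The model, working with whole tokens rather than characters, has no such
# direct access to individual letters, which is one reason it can miscount.
```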
Writing is Hard, Writers Have Rights, and Writing Should be Compensated
Some people argue that AI models are going to begin training on “synthetic” data (content generated by other AI tools and systems rather than by humans). And that because of this, the argument that AIs stole content will be irrelevant.
I would argue that while that may be true for the models trained on synthetic data, it doesn't mean it's also true for the synthetic data itself. How was that data created? Did they pay authors for the rights to use their content in the tool that wrote the synthetic content? If they can show, incontrovertibly, that they got consent and, better yet, paid for the rights, then sure.
But companies are run by humans. And humans are greedy. And “ask forgiveness, not permission” is a human trait that they are 100% relying on to get away with using the creative works of thousands of creators without permission, credit, or compensation. AGAIN!
I Am Not Opposed to Using AI as a Writing Tool
It can help improve clunky sentences and do boring tasks like writing summaries. It can even write full-blown novels that people enjoy and are willing to read.
But none of those facts negates the underlying evil that AI companies scraped content from unsuspecting web authors without permission and are now trying to claim that it’s “okay” because humans read the web all the time.
Rather than trying to sweep the ugly truth under an equivocal rug, why not just admit that consent is important and that, at the very least, authors should have been given the chance to say "no, thank you" when the AI companies were deciding to scrape the web and use that content as their data source?
It’s About Consent, Not Rejection
AI tools are just that—tools made by companies with specific goals in mind. I’ll continue to use AI as an assistant in my writing process while maintaining my creative and editorial control. However, the way these tools were created raises important ethical questions that shouldn’t be dismissed.
The real issue isn’t whether writers and other creators should use AI. It’s that AI companies should have sought permission beforehand, and should be working to compensate and repair relationships with the people they harmed.
I hope that in the future, people recognize that creative work has value. Acknowledging that value through consent and compensation isn’t anti-tech—it’s pro-creator!
Let’s stand together to demand useful tools that are developed ethically!
If you enjoyed this rant and would like to get notified of future books, posts, or other mentions, join my newsletter: Dryads, Dragons, and Druids.