Published Works and AI: How Far Will We Trust The Machine?

Authors around the globe have been reading, interpreting, and processing the allegations that have been hurled against an author over AI use in their novel. I am not a journalist; I am an editor and a reader. This issue has been so widespread that I feel naming it directly would detract from my points.
I have been reading the published articles, listening to podcasts, and watching videos because I want to understand, as fully as possible, where my fellow rhetors are coming from.
When large language models began to enter online spaces in a more overt manner, people, as with other new pieces of technology, began to test them to see if they could save time or money. Eventually, people used LLMs to revise their resumes, to generate pictures or videos, to write text, and to edit text, to varying effect.
Alongside the prevalence of these models came AI detectors. While a detector of this kind can make predictions about the possibility of AI use, such detectors are highly fallible. Essentially, LLMs produce certain types of text because humans produce the same kind. Em dashes, multiple metaphors, lists of three, overuse of semicolons, and grammatical “perfection,” among other indicators, come out of these models because humans have been shown to write that way or have been shown to be engaged by certain styles of writing.
Granted, when I read indicators that make me suspect AI use, I do raise an eyebrow. If feedback was solicited, I might say something like, “While I think that the topic of this article is very timely, note that this section might raise suspicion of AI because…”
It has become more difficult to tell the difference between writing produced by a human and the result of a prompt given to a machine, even when some changes were made after the result was given. This is unsettling, to say the least. Additionally, we know that any suspicion of AI use [no matter how light] can vastly damage a writer’s credibility and access to opportunities, particularly if that writer is still learning the language they are writing in and/or is part of a marginalized community.
My stance has always been that AI tools do not provide the kind of assistance that many writers believe they do. Additionally, I cannot get past the negative environmental impacts. I am not dismissing the writers who have felt assisted by AI in their work, and I understand that my words will not dissuade people who have already made the choice to trust these programs. After reviewing different authors’ feedback sets as well as the prompts they chose to use, I have found the following:
- The default for feedback from AI tools is to somewhat refine grammar [though these refinements are not correct for all style guides or manuscript needs] and to provide a positive feedback loop. These models are designed so that people will continue to use them.
- When asked for constructive criticism, the model cannot act as a stand-in for a given audience member the way a beta or ARC reader can. When asked to criticize, the model will search for problems so as to satisfy the user; if the user pushes back, however, the model will often relent. Should a model have difficulty finding patterns in a piece of writing to criticize, it may look for problems that aren’t there and, as a result, cause writers to undermine themselves rather than think critically about their own work.
- A model like this cannot be “genuine.” It will mold itself to whoever is using it and to whatever prompt they provide. If you’ve received something that felt genuine, then you’ve probably gotten a result that has accurately molded itself to what you provided. Unfortunately, that’s not the same thing.
- When you ask an AI detector to find AI, it searches for patterns common in generated text. If an author does not share that vocabulary, has a preferred set of vocabulary, or has a distinctive style, their text might be flagged as AI where there is none. For those who might become authors in the future, detectors will be deterrents. Students have already been unfairly penalized for work that was later proven not to be AI-generated.
I anticipate that this issue will both escalate and change shape over time, and while this guidance does not claim to stem the flow of AI-based commentary and nitpicking where it doesn’t belong, there are ways you can help your case if you are accused of using AI when you did not.
First and foremost, document everything and retain document history. If you have notes, outlines, mind maps, or anything else that can prove you originated an idea and personally drafted your piece, keep it and take pictures of it. If everything is mostly electronic, dedicate a folder or set of folders to your project and timestamp each.
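For the mostly-electronic case, one low-effort way to timestamp a project folder is to keep a running log of dated checksums. This is a sketch of my own, assuming a Unix-like shell with the common `sha256sum` utility; the file and folder names are invented for illustration:

```shell
# Create a stand-in draft so the sketch is self-contained.
mkdir -p drafts
printf 'Chapter one, first draft.\n' > drafts/chapter-01.txt

# Append "UTC date  checksum  filename" lines to a running provenance log.
# A matching checksum later proves the file existed in this exact state
# on or before the logged date.
for f in drafts/*.txt; do
  printf '%s  %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(sha256sum "$f")"
done >> provenance.log

cat provenance.log
```

Any method that pairs dates with file states serves the same purpose; version-control history or a cloud drive’s revision log can do this for you automatically.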
If you have a completed manuscript and are seeking services from an editor or beta reader, keep an original digital copy. Send a copy to your collaborator. They will likely use “track changes” and the “comments” function. Once you get back a copy of your document with changes tracked, notes, deliverables, or anything else, save those files as they are. Then, create a clean copy for yourself with all of the revisions you want to keep.
Personally, I do not use any AI software in my editing nor do the editors I know and work with. If your collaborator does use these, please take great care when reviewing the resulting manuscript to ensure that your needs have been met. There is a higher possibility that presses will not accept work that has integrated AI. While, as I have said, it has become difficult to distinguish generated texts, if a publisher believes you or a collaborator has used AI, they are less likely to give you a chance. Self-publishing platforms have become equally wary.
Additionally, there are initiatives such as the Human Authored scheme, which allows authors to register their books as human authored; a small logo then appears on those publications, designating them as written by a person.
By the same token, while it is unfortunate that anyone should feel the need to do this in the first place, you can put a note within your book that states something to the effect of:
“The entirety of this text was human authored and has not utilized any generative AI for conception or drafting.
Programs used to aid this text were as follows: Microsoft Word.
Research material used includes: [Book Title], [Article Title]...”
“Are you saying we have to…?” No. I am not saying you have to do anything.
I am saying that you are right to distrust generative AI, that you are right to be wary of the presence of AI in written work, and that AI detectors are notoriously fallible. To protect yourself and your work against accusations of AI use, should the need ever arise, documentation of all material is valuable, and disclaimers that list the material you did use can be helpful.
Everything that you publish, regardless of the name you choose to publish under, will be a reflection of you. Your choices and the work you put into your manuscript will show and shape that reflection. Consider yourself worth the time and effort, and each carefully chosen word will speak to someone.


