Who (or what) wrote this article?

Freelance writer Theo Green explains that determining whether an article has been written by a human or an AI tool is not as simple as some would have you believe.

Artificial intelligence (AI) writing tools such as ChatGPT, Claude, and Gemini have become increasingly powerful and easy to use in recent years. These tools can generate essays, reports, blog posts, and more with remarkable fluency and speed. They often have free versions which, while lacking some of the functions of the paid versions, are still very useful. As a result, they are being used across education, journalism, marketing, and even creative writing.

With this rise in popularity comes a growing concern: how can we distinguish between content written by a human and content generated by an AI? The question is particularly important in areas where trust, originality, and authenticity matter, such as academic integrity, misinformation detection, and journalistic transparency.

As a (recent) student, for example, I was aware that ChatGPT was available to me as a tool for planning my assignments. But if I had passed off its unmodified output as my own original work, I would quite likely have been marked down. This was because AI doesn’t always get things right, and because at the time it was frequently rather out of date. It was also because the style of the output was often not at all what my tutors expected: AI-generated text often has certain characteristics that identify it.

There are a number of tools and techniques that look for these characteristics. However, while the characteristics can be detected, they are by no means a certain way of establishing whether an article has in fact been written by a machine.

Key characteristics of AI-generated text

What are the signs of AI writing? Detecting AI-written articles is an important skill for anyone wishing to test the trustworthiness and accuracy of content. However, doing so reliably is hard: more an art than a science.

Luckily, there are some useful indicators to help you detect AI-written articles with a little practice. I’ve outlined a few of them below.


Repetition and reliance on certain phrases. One of the most common giveaways of AI-generated writing is repetition. Language models often repeat phrases, sentence structures, or even entire ideas within a single article. This happens because the model is trying to reinforce a message, but it may do so unnecessarily, or without the subtle variation a human naturally introduces.

In addition, AI tends to overuse transitional phrases such as “in conclusion,” “moreover,” or “it is important to note.” While these are perfectly acceptable, excessive reliance on them can make the writing feel mechanical.
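
Both signals are easy to quantify in a rough way. Here is a minimal sketch in Python that counts a hand-picked list of suspect transitions and flags three-word phrases repeated verbatim; the phrase list and the repeat threshold are illustrative assumptions, not settings taken from any real detector.

```python
import re
from collections import Counter

# Hand-picked transitions AI text tends to overuse; this list and the
# "three or more repeats" threshold are illustrative assumptions.
TRANSITIONS = ["in conclusion", "moreover", "it is important to note",
               "furthermore", "in summary", "overall"]

def repetition_signals(text: str) -> dict:
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)

    # Frequency of the suspect transitions, normalised per 1,000 words.
    hits = sum(lowered.count(phrase) for phrase in TRANSITIONS)
    per_1k = 1000 * hits / max(len(words), 1)

    # Three-word phrases repeated verbatim within the same piece.
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeated = [" ".join(t) for t, n in trigrams.items() if n >= 3]

    return {"transitions_per_1k_words": round(per_1k, 2),
            "repeated_trigrams": repeated}
```

High scores on either measure are a hint, not proof: plenty of human writers lean on “moreover” too.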


Unnatural formality. AI writing often sounds overly polished—grammatically perfect but strangely lifeless. It can sometimes adopt a formal, neutral tone that lacks the spontaneity or emotional nuance common in human writing. Slang, colloquialisms, and regional idioms are rarely used, unless explicitly requested.


Lack of personal experience or insight. AI can summarise known facts and synthesise information well, but it struggles to provide original thought. Its responses are built on patterns in the data it has been trained on, not on personal understanding or critical reflection.


As a result, AI-generated content may be technically accurate but lacking in depth. There’s often a noticeable absence of opinion, surprising insights, or unique angles on a topic. The result is writing that feels safe, generalised, and impersonal.


When I am writing, I often draw on memory, emotion, or sensory experience to support my points. AI, which lacks consciousness or experience (despite what a few people claim), cannot do this convincingly and typically avoids it altogether.


Predictable or generic structure. Most AI tools follow a textbook format: introduction, body, and conclusion. While this is common in human writing as well, AI tends to apply it with excessive symmetry and rigidity. Paragraphs are often similar in length and structure, and the logical flow is neat—sometimes unnaturally so. This predictability can make articles feel like they were assembled from a template, rather than developed organically by a person with a clear argument or voice.
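
That symmetry can be put on a crude numerical footing. The sketch below, which assumes paragraphs are separated by blank lines, computes the coefficient of variation of paragraph lengths; values near zero mean suspiciously uniform paragraphs, though no particular cut-off is authoritative.

```python
import statistics

def paragraph_uniformity(text: str) -> float:
    """Coefficient of variation of paragraph word counts.

    Assumes paragraphs are separated by blank lines. Values near 0
    suggest uniform, template-like paragraphs; treat the number as
    a hint only, as there is no agreed threshold."""
    lengths = [len(p.split()) for p in text.split("\n\n") if p.strip()]
    if len(lengths) < 2:
        return float("nan")
    return statistics.pstdev(lengths) / statistics.mean(lengths)
```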


Errors and inconsistency. Despite their strengths, AI tools occasionally generate incorrect information. These errors, known as “hallucinations”, occur when the model lacks reliable information to answer a question and instead produces plausible-sounding content from loosely related patterns in its training data.

They may even contradict themselves within the same piece of writing, asserting one idea in one paragraph and undermining it in the next. AI models are, after all, simply machines: they depend on the data used to train them, and if that data contains two contradictory pieces of information, the model may have no way of deciding which one to use.

AIs can also present outdated information. Certain models have limited access to the internet and to data beyond certain dates. ChatGPT, for example, couldn’t update its knowledge by accessing the internet when it was first released, being reliant on training data that had already been scraped from it; as a result, it lacked knowledge of any events more recent than its 2021 training cut-off.

The danger here is that because AI doesn’t understand what it’s writing—it merely predicts word sequences based on statistical likelihood—it can produce content that appears to be very credible but that is misleading or illogical upon closer inspection.


Detecting machine-generated text

Because AI-generated text has these and other typical characteristics, it is often claimed that detecting it is possible. There are a number of techniques that are frequently used to attempt this.

AI detection software. A number of detection tools aim to determine whether text was written by a language model. Popular ones include GPTZero, Originality.ai, and Turnitin’s AI detection module. These systems use metrics like perplexity (how unpredictable the text is) and burstiness (variation in sentence complexity) to assess authorship.
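
Neither metric is proprietary magic, and both can be approximated with open tools. The sketch below scores burstiness as the spread of sentence lengths, and perplexity with GPT-2, a small open model standing in for whatever model each commercial detector actually uses; the naive sentence-splitting regex and the choice of GPT-2 are my assumptions.

```python
import re
import statistics
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def burstiness(text: str) -> float:
    """Spread of sentence lengths: low values suggest the uniform,
    machine-like rhythm detectors look for. Uses a naive split."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2 (lower = more predictable).
    GPT-2 is a stand-in; real detectors use their own models."""
    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    enc = tok(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))
```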


However, these tools are not foolproof. Their accuracy varies depending on the prompt (the question or statement designed to elicit a certain response from an AI), writing style, and how much editing has occurred after the AI draft. Different tools will use different metrics, and their outputs will be influenced by the feedback they receive from their individual human users.


As a result, false positives and negatives remain a concern with these tools. For instance, at the time of writing and prior to sub-editing, the tool QuillBot rated this article as 55% written by AI, whereas Grammarly rated it as 0% written by AI. Which one do you want to believe? 


Searching for online matches. Another technique is to copy part of an article into a search engine and look for matches online. While this is a technique normally used for catching instances of plagiarism, it can also be used to catch AI-generated text because AI often generates content that is structurally similar to other public domain or previously generated text, especially when dealing with common topics.
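
Once you have a candidate match, the comparison step itself is easy to automate. A minimal sketch, assuming you have already saved the suspect passage and a search result as strings: it measures what fraction of the passage’s eight-word “shingles” reappear verbatim in the candidate (the shingle size is an arbitrary choice).

```python
def shingles(text: str, n: int = 8) -> set:
    """All n-word sequences in the text, lower-cased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(passage: str, candidate: str, n: int = 8) -> float:
    """Fraction of the passage's shingles found verbatim in the
    candidate; values near 1.0 suggest copied or regenerated text."""
    ours = shingles(passage, n)
    return len(ours & shingles(candidate, n)) / max(len(ours), 1)
```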


A similar technique is to think about the question the author may have used to generate an article and pose that question to an AI. Depending on how similar the AI’s answer is to the article you are investigating, you should have a good idea about whether AI was used at all, used for planning, or used for wholesale text generation. In this case, as I acknowledge at the bottom, I have used AI to help me plan the article but not to write it.


Human review and editorial experience. Experienced editors and educators can often detect AI-authored content through close reading. They look for tonal inconsistencies, suspiciously generic arguments, or a lack of personality in the writing. While subjective, this “gut instinct” is still one of the most effective tools we have—particularly when combined with contextual knowledge, such as knowing the most common writing styles, sentence structure, and word order used by AI models.


The future of detecting AI-generated text

AI-generated writing is becoming more sophisticated, but it still carries subtle signs of its origins: repetition, unnatural tone, lack of originality, structural predictability, and occasional factual errors. 


However, despite this, it is hard to be 100% certain that a piece of text has been generated by an AI (unless, of course, tell-tale content is included, such as, in this case, “Would you like me to guide you through a few practical ways to spot AI writing using some examples?” at the end of the article). And a reasonably sophisticated user of AI may well be capable of going through a piece of AI-generated text and editing it so that it seems to have been written by a human.

AI writing tools are rapidly improving, and distinguishing machine-generated content from human writing will become increasingly difficult. This reality calls for a multi-pronged approach: stronger detection tools, improved digital literacy, and updated policies on disclosure and authorship.


Ultimately, we are likely to move toward a model where human–AI collaboration becomes the norm. In such a landscape, the focus may shift from whether AI was used to how it was used—and whether the result meets ethical and professional standards.


Theo Green is a freelance writer. This article was planned in part using ChatGPT and DeepSeek AIs but was written by a human.

Main image courtesy of iStockPhoto.com and sompong_tom
