Business Reporter

Prompting the AI writing machine

Freelance writer Theo Green explores the new literacy behind AI-assisted journalism and explains how writers can make the most of powerful generative AI tools

 

As AI tools become increasingly normalised in everyday workflows, a new form of editorial craft is emerging: one focused not on style, sourcing or even fact-checking, but on the ability to write effective AI prompts.

 

From drafting headlines and summarising interviews to generating initial drafts or suggesting angles, large language models (LLMs) such as ChatGPT, Claude and Gemini have become auxiliary writers, researchers and editors for journalists who need article drafts, or complex topics synthesised into more digestible chunks.

 

But these tools do not work autonomously; their value hinges on how well they’re instructed. That instruction takes the form of “prompts”: language commands that guide the model’s output. Well-constructed prompts are critical to generating the information you require, accurately and precisely.

 

In this context, prompting is no longer a simple interaction. It’s a form of professional literacy in its own right, and one that writers can take advantage of.

 

 

From curiosity to editorial utility

The early use of generative AI in journalism was often tentative, as most models had knowledge cut-off dates and no connection to the internet. Journalists tested the waters with low-risk tasks such as summarising long transcripts, rewriting social posts or drafting simple content.

 

But as the tools became more sophisticated, so too did the ambitions of their users. Now, AI is used across every layer of content production: story ideation, audience targeting, structural editing, even linguistic localisation.

 

Yet amid this growing utility, one reality has become apparent: poor prompts produce poor journalism. Drafts built on poor prompting can be almost useless, even damaging to the writer’s personal brand.

 

Broad or vague inputs such as “Write an article about climate change” may yield passable drafts, but rarely anything close to publishable. On the other hand, carefully constructed prompts (those that specify tone, target readership, editorial stance, structure and sourcing expectations) can produce outputs that resemble a solid first draft.

 

The difference lies in the intentionality of the prompt and how much information the AI can glean from it to help it fulfil its task.

 

 

Prompting as editorial framing

For journalists, prompting is more than functional. It’s a form of editorial framing: an extension of the decisions the human author already makes about topic, audience and scope. The best prompts operate on a few relatively simple principles.

 

Take the following example:

  • Weak prompt: “Summarise the recent EU climate policy.”
  • Stronger prompt: “You are a policy analyst writing for a general audience. Summarise the key provisions of the EU’s recent climate policy reform in 500 words, with emphasis on emissions targets, funding mechanisms and political opposition. Use a neutral tone suitable for a centre-left digital news outlet.”

The second version introduces multiple parameters: role, purpose, format, focus, tone and publication context. Each of these helps constrain and sharpen the model’s response, allowing it to provide a better, more concise and far more useful draft.

 

This isn’t just about output quality: it’s about alignment. Journalists are expected to write with a specific voice, meet editorial standards and anticipate audience knowledge levels. LLMs cannot allow for these factors unless they are embedded within the prompt itself; when they are, the tool can produce relatively reliable drafts.

 

 

The rise of prompt frameworks

With prompting taking on the structure of a formal editorial tool, frameworks have emerged that can guide this process. Although many were initially developed for educators and coders, several are highly useful in journalism, marketing and creative writing.

 

A particularly adaptable one in this situation is the PARTS model:

  • Persona: Who is the AI simulating (e.g. an investigative journalist, a climate correspondent)?
  • Action: What is the task (e.g. summarise, draft, analyse, propose)?
  • Recipient: Who is the target audience?
  • Topic: What is the subject matter or thematic domain?
  • Structure: How should the output be organised (e.g. list, article, question-answer)?

Applying this model correctly ensures that prompts are intentional, constrained and editorially aligned with what the human author requires. It encourages journalists to make their assumptions explicit, giving the model clearer direction when it scans and draws on websites and other data.
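To make the framework concrete, the five PARTS fields lend themselves to a simple template. The sketch below is a hypothetical helper (not part of any particular AI product or tool) showing how a journalist might assemble the fields into a single, consistent prompt string:

```python
def build_parts_prompt(persona, action, recipient, topic, structure):
    """Assemble a PARTS-style prompt: Persona, Action, Recipient, Topic, Structure."""
    return (
        f"You are {persona}. "
        f"{action} on the topic of {topic}, "
        f"written for {recipient}. "
        f"Structure the output as {structure}."
    )

# Example values mirroring the EU climate policy prompt discussed above.
prompt = build_parts_prompt(
    persona="a climate correspondent",
    action="Summarise the key developments",
    recipient="a general news audience",
    topic="the EU's recent climate policy reform",
    structure="a 500-word article with a neutral tone",
)
print(prompt)
```

Filling in each field forces the author to state their assumptions explicitly, which is the whole point of the framework: a blank field is a decision that has not yet been made.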

 

 

Iteration and conversational drafting

An underappreciated strength of LLMs is their ability to work iteratively. Rather than treating AI interactions as one-off queries, journalists are increasingly engaging them in conversational drafting—adjusting prompts mid-stream, refining angles and probing alternative framings. This can be used to hone a draft in a particular style or focus it more on a certain point.

 

This iterative approach mirrors how humans write most articles without AI: an initial plan becomes a rough draft, which is then reshaped and finalised by further consideration and feedback. AI becomes a drafting partner rather than a ghostwriter.

 

For example, a prompt may begin with a request to “Summarise new electric vehicle legislation,” followed by refinements: “Make the tone more analytical,” “Add three quotes from lawmakers,” or “Reframe this as a business story rather than environmental policy.”
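In practical terms, this kind of conversational drafting amounts to maintaining a running message history, with each refinement appended as a new turn. The sketch below is a minimal illustration of that pattern (the dictionary shape is an assumption loosely modelled on common chat-style interfaces; in a real exchange the model’s replies would be interleaved between the user turns):

```python
# A conversational draft kept as an ordered message history; each editorial
# refinement is appended as a new user turn, and the full history is re-sent
# to the model so earlier decisions stay in effect.
conversation = [
    {"role": "user", "content": "Summarise new electric vehicle legislation."},
]

def refine(history, instruction):
    """Append a follow-up editorial instruction to the running conversation."""
    history.append({"role": "user", "content": instruction})
    return history

refine(conversation, "Make the tone more analytical.")
refine(conversation, "Reframe this as a business story rather than environmental policy.")

# The accumulated history preserves every editorial decision, in order.
for turn in conversation:
    print(turn["content"])
```

The value of the pattern is that the history itself becomes an editorial record: each turn documents a decision about tone, framing or focus.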

 

Each prompt becomes part of an ongoing dialogue. The author’s job is not simply to prompt and publish, but to guide, interrogate, and revise, ensuring the AI produces the most accurate and concise output possible.

 

 

Pitfalls and risks: Not just about bias

Much has been written about LLMs producing biased, fabricated or plagiarised content. These concerns remain valid, particularly in journalism, where accuracy and attribution are non-negotiable. But a subtler risk lies in over-reliance on defaults, especially when prompts are poorly constructed. Poor prompts can lead to sub-optimal work and can hinder projects rather than aid them.

 

Generic outputs can reinforce generic journalism: predictable formatting, formulaic framing and surface-level analysis. Worse still, without precise instruction, AI models may reproduce only dominant cultural narratives and omit less prominent perspectives; the output may appear authoritative even when a sizable amount of information is missing. This can substantially diminish the value of the article.

 

Prompt engineering becomes a guardrail against these risks. Specificity, nuance and editorial judgment must be encoded from the outset if AI tools are to serve as a useful starting point for drafts.

 

There are also ethical considerations around transparency. Should an author disclose when an AI model was used on certain aspects of an article? Should journalists keep records of the prompts used during the writing process and state the extent to which they relied on AI? And what are the limits of ethical AI use? Concealing biased prompts behind published articles is unlikely to be acceptable to anyone.

 

As prompt literacy deepens, so too could the professional standards for its documentation within work produced. (And for transparency: yes, I have used AI to help me plan this article, but I have done the heavy lifting of constructing the article myself.)

 

 

Prompting as professional practice

It’s tempting to treat prompt writing as a technical trick: a way to increase efficiency and enable quicker production. However, it is increasingly being viewed as a distinct component of the writing process, with its own values, skills, and limitations, both in relation to the human author and the AI.

 

Creating an effective prompt is not merely asking the AI model to answer a question. It’s about translating your intent into a format that machines can interpret. It’s the latest evolution of a long-standing journalistic skill: using language precisely, persuasively and purposefully.

 

In a landscape where AI tools are ever more widely used, the ability to write effective prompts is becoming an increasingly important skill for creating helpful, informative and well-written articles.

 

 

Toward a new editorial literacy

As AI continues to reshape how articles are written and constructed, prompt literacy will probably be integrated into journalism education, and perhaps writing education more broadly. Already, some websites and news outlets have started to collect reusable prompt templates for common tasks: translating news summaries, adapting stories for newsletters or preparing interview questions.

 

But beyond basic templates lies something more useful: a shift in mindset. Authors must approach AI not just as users, but as designers of the draft the AI will produce. Prompting becomes a kind of craft: when a well-formed prompt conveys clear intention, the author can significantly shape the usability and informational value of the AI’s output.

 


 

Theo Green is a freelance writer

 

Main image courtesy of iStockPhoto.com and gece33

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543