Rik Ferguson at Forescout discusses why he believes the greatest challenge facing humanity over the next decade lies in our ability to distinguish fact from fiction, reality from fantasy, and reliable information from disinformation
Information and the ways in which it is delivered, whether through social media, social engineering, fake news, or more blatant propaganda, could just as easily be our downfall as our saviour.
While AI demonstrates impressive capabilities, its inner workings often remain a black box. We lack a deep understanding of how AI arrives at its decisions, making it challenging to ascertain its logic and thought processes. Even the creators of AI struggle to comprehend its thinking, creating a veil of uncertainty that raises important questions about its reliability and trustworthiness.
We have awoken in the world of Generative Adversarial Networks (GANs), Large Language Models (LLMs), and a scientific crisis of confidence (the proposed six-month moratorium on training new LLMs) - almost as if we have no idea how we got here, or what the implications may be.
Sometimes, when you tell the truth, it is hard to be believed. That is why Large Language Models like ChatGPT play fast and loose with it. The central objective in a GAN learning model is the manufacture of believability: the “generator” learns to produce credible data, while the “discriminator” tries to tell the fake from the real, and training continues until the discriminator routinely classifies manufactured data as “real”.
Truthfulness or accuracy is a second-order consideration, if it figures at all.
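To make that generator-discriminator dynamic concrete, here is a minimal, illustrative PyTorch sketch of a GAN training loop; the data, network sizes and hyperparameters are placeholder assumptions rather than anything drawn from a real system.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator can no longer reliably tell apart from real data.
# All sizes, data and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" distribution: N(2, 0.5)
noise = lambda n: torch.randn(n, 8)                    # latent noise fed to the generator

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1. Train the discriminator to label real data 1 and generated data 0.
    d_opt.zero_grad()
    real = real_data(64)
    fake = generator(noise(64)).detach()
    d_loss = loss(discriminator(real), torch.ones(64, 1)) + \
             loss(discriminator(fake), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the discriminator call its output "real".
    #    Note the objective: believability, not truthfulness.
    g_opt.zero_grad()
    g_loss = loss(discriminator(generator(noise(64))), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

When training succeeds, the discriminator’s judgements collapse towards chance: the system has been optimised end to end for believability, with no notion of truthfulness anywhere in the objective.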
Conversely, as the public become more aware of the prevalence and possibilities of AI, it becomes steadily easier to dismiss the truth as fake; something that runs with the grain of current social trends of scepticism and dismissal of “experts”. In a paper entitled “Deep Fakes: A Looming Challenge for Privacy, Democracy and National Security”, Robert Chesney and Danielle Citron refer to this phenomenon as the “liar’s dividend”.
Both China and Russia have made no secret of their desire to “win the AI race” with current and pledged investments ranging from hundreds of millions to billions of dollars in AI research and development. While companies like OpenAI, IBM and Apple might be top of mind when asked to name the major players in artificial intelligence, we should not forget that for every Amazon there’s an Alibaba, for every Microsoft a Baidu, and for every Google a Yandex.
Many of the innovations in the global AI space share similar aims, methodologies, and training sets, but not all motivations are created equal. Understanding the diverse motivations at play is essential in addressing the challenges posed by AI-powered disinformation.
In February of 2023, a Belarusian hacker group called “Cyberpartisans” shared more than two terabytes of data leaked from Roskomnadzor, Russia’s media regulator. This leak clearly demonstrates the extent to which AI is already being used to monitor, censor and shape public opinion and repress freedom of expression in Russia.
AI development has been on a relatively slow burn since 1951, when Marvin Minsky built the first randomly wired neural network learning machine (SNARC). Over the past 20 years, machine learning has been a constant source of innovation in cyber-security, initially for detecting spam and classifying websites, and later for detecting exploits, malware, and suspicious activity.
Recent innovation in AI has been concentrated in Generative Adversarial Networks (GANs) and Natural Language Processing/Generation (NLP/NLG), meaning that AI can now synthesise faces, voices, moving images and text. Through these media it can also create “knowledge”, emulate character traits, and even create physical objects through recently released text-to-3D-print generators.
All this technology, aside from its positive potential, will also hugely benefit the propagandist and the conspiracy theorist. At its most benign, it will be used to fuel doubt and destroy credibility; at its worst, it will be used to create, sustain, and amplify an entirely false image of reality, one with an explicitly malicious agenda.
Cyber-criminals are already taking advantage of the abundance of, and ease of access to, these technologies to enable non-consensual sexual fakes, financial fraud and even kidnapping scams.
It is inevitable that states, activists, and advanced threat actors will also leverage the power of AI to turbocharge disinformation campaigns. Imagine an exponential increase in the volume and quality of fake content, the creation and automation of armies of AI-driven digital personas, replete with rich and innocent backstories to disseminate and amplify it, and predictive analytics to identify the most effective points of social leverage to exploit to create division and unrest.
Commercial interests, global regulations
Sam Altman, the CEO of OpenAI (the people behind ChatGPT), has been outspoken about the inherent risks in the sudden rise of AI, most recently calling for an “IAEA for superintelligence” - an international authority empowered to inspect systems, require audits and test compliance.
Regulatory and legislative efforts, focused primarily on data privacy and security, algorithmic transparency, accountability and permitted use-cases, are already well underway in the European Union, Canada and the United States, and have, to a certain extent, already passed into law in China (although this regulation will not apply to the Chinese government itself).
Currently, AI development often prioritises short-term goals and commercial interests, neglecting long-term implications and safety considerations. This short-sightedness raises concerns about the potential risks associated with uncontrolled AI systems.
To ensure the responsible development and deployment of AI, it is essential to emphasise the importance of provable safety measures and regulations that safeguard against unintended consequences. While global regulations are necessary to prevent uncontrolled AI, the reality is that commercial concerns and short-term thinking can often override the greater good. This phenomenon is not new, as exemplified by the cigarette industry’s history.
However, it is crucial for the collective interest of humanity that we establish comprehensive regulations and guidelines to ensure AI remains under control. Stakeholders must understand that an uncontrolled AI is not in anyone’s best interest.
The unseen agenda: AI interactions and autonomy
One of the key challenges we face in combating AI-driven disinformation lies in the inherent opacity of AI systems. We still do not fully understand how AI thinks - even its creators struggle to comprehend its decision-making processes.
As AI systems become more advanced, AI entities will increasingly converse and collaborate with each other, and the same lack of transparency should raise concerns about the potential consequences of those interactions.
An alarming scenario arises when interacting AI systems begin to develop their own agenda or motivations, hidden from external recognition or comprehension. These autonomous interactions between AI entities hold the potential for huge unforeseen consequences, making it imperative to monitor and understand the evolving landscape of inter-AI communication.
Language is a powerful weapon. The ability to manipulate language and persuade humans to believe and act upon falsehoods, or adopt positions that are detrimental to our well-being, poses a significant threat to humanity. Thought control, facilitated by the dissemination of disinformation, could potentially hasten the end of truth, and erode the foundations of society.
As AI systems advance, it becomes increasingly crucial to protect against the misuse of language and prevent the propagation of false narratives. Any control exerted by AI over human thought processes is a dangerous prospect. It could even accelerate the downfall of humanity, in the service of an internal and opaque machine agenda.
Countering disinformation: a holistic approach
Combating disinformation, including AI-powered disinformation, requires a multifaceted approach that extends beyond technical measures. Security teams must cultivate critical thinking skills to identify and deter disinformation campaigns effectively. To bolster resilience, the following strategies should be integrated into risk management programmes:
Harness the power of AI. Investigate how your own defences could benefit from the data collection, aggregation and mining possibilities offered by AI. Just as a would-be attacker begins with reconnaissance, so too can the defender. Ongoing monitoring of the information space surrounding your organisation and industry could serve as a highly effective early warning system (a minimal monitoring sketch follows these strategies).
Empower employees. Most employees should be aware of the processes and regulations they need to follow, but attackers use social engineering, pretexting, and appeals to authority to persuade them to operate outside their normal constraints. Because employees generally want to do what is best for their company and to please their bosses, it can create a real conflict when an employee is asked to do something questionable.
Rather than rewarding successful shortcuts, security leaders and executives need to create a mindset of accountability in their employees that questions obscure data or directions and acts as the first line of defence against disinformation. Employees need to have the power and confidence to say “no” to anyone when being asked to go outside the process — without fear of repercussion — even if they are talking to the CEO.
Prepare for plausible disinformation scenarios. Part of disinformation’s effectiveness comes from its “shock factor.” The (false) news can be so critical, and the danger can seem so imminent, that it can cause people to react in less coordinated ways, unless they are prepared for the exact situation in advance.
This is where it can be incredibly helpful to do “pre-bunking” of the type of disinformation your company would most likely be targeted with. This will psychologically pre-position your employees to expect certain anomalies and be more mentally prepared to act with the appropriate next steps once they determine whether the threat is real or fake.
Coordinate response plans across internal teams. Cyber-attacks and breaches are already chaotic enough to analyse and mitigate. Uncoordinated efforts to respond to active threats, on top of that chaos, can leave one’s head spinning and result in mistakes or gaps in security responses.
Before letting it reach that point, security leaders should initiate conversations across IT, OT, PR, marketing, and other internal teams to make sure they know how to collaborate effectively when disinformation is discovered. A simple example of this could be incorporating disinformation exercises into tabletop discussions or periodic team trainings.
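As a concrete illustration of the early-warning idea in the first strategy above, the sketch below polls public RSS feeds and flags items that mention watch-listed terms. The feed URLs, watch terms and the feedparser-based approach are illustrative assumptions; a real deployment would add social platforms, volume baselines and sentiment analysis, and would route alerts into existing monitoring tooling.

```python
# Illustrative early-warning sketch: poll public feeds and flag items that
# mention terms associated with your organisation or likely disinformation
# themes. Feed URLs and watch terms below are placeholders, not recommendations.
import feedparser  # third-party: pip install feedparser

WATCH_TERMS = {"examplecorp", "data breach", "product recall", "ceo resigns"}  # hypothetical
FEEDS = [
    "https://example.com/industry-news.rss",   # placeholder feed URLs
    "https://example.org/security-alerts.rss",
]

def scan_feeds(feeds, terms):
    """Return feed entries whose title or summary mentions any watch term."""
    hits = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            matched = [t for t in terms if t in text]
            if matched:
                hits.append({
                    "feed": url,
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                    "matched": matched,
                })
    return hits

if __name__ == "__main__":
    for hit in scan_feeds(FEEDS, WATCH_TERMS):
        # In practice this would route to a SIEM or alerting channel rather than stdout.
        print(f"[{', '.join(hit['matched'])}] {hit['title']} -> {hit['link']}")
```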
The rise of AI-powered disinformation presents an immense challenge to society’s ability to discern fact from fiction. By embracing a comprehensive approach that combines technological advancements with critical thinking skills and collaboration, organisations can better safeguard against the disruptive effects of disinformation.
As we navigate this complex landscape, it is imperative to remain vigilant and adapt strategies to ensure the preservation of truth and democratic values in the face of AI’s transformative power.
Rik Ferguson is VP of Security Intelligence at Forescout