
AI is facing its Oppenheimer moment

Why IT researchers should sleep on unleashing new technologies, rather than losing sleep over them

 


The list of inventions whose creators later had second thoughts is a long one, ranging from the small – the Keurig coffee pod, an environmental scourge whose inventor wishes people would simply brew their coffee in a pot – to the world-ending – the atomic bomb, which its architect, J Robert Oppenheimer, so famously came to regret.

 

As the lightning development of ChatGPT’s capabilities stuns not just users and experts in the AI field but the very inventors who paved the way for its emergence, a certain sense of remorse is setting in.

 

One of those raising the alarm is neural network pioneer Geoffrey Hinton, the “godfather of AI”, who, on retiring from Google, alerted the public to the dangers of chatbots and the ways they can be exploited by bad actors.

 

Another innovator who has voiced concerns about the power of generative AI is Sam Altman, CEO of OpenAI, the company that launched ChatGPT in November 2022. In a live panel interview with the Times of India in June, he said he was losing sleep over the dangers of ChatGPT.

 

These comments are no doubt cause for concern. The more closely a researcher is involved in the development of a new technology, the less their words of caution sound like scaremongering. Even though algorithms can remain opaque to their own creators, these are the people with the deepest understanding of generative AI systems – and of the ways in which they can be leveraged by bad actors.

 

Ask for forgiveness, not permission

 

Responsible approaches to innovation today are constrained by the relentless pressure of time-to-market, or TTM. The stakes are clear from the fact that it is ChatGPT that has been spiking in search engines and dominating public discourse, not its Google equivalent, Bard – even though the gap between their launch dates was only six months.

 

Agile product development, beta versions and cloud-based plug-and-play solutions are all geared towards slashing TTM – an essential feature of effective sales pitches. The speed of product development is so crucial that it can override all other priorities: the richness of product features, product quality, even security.

 

In general, businesses that rush a minimum viable product (MVP) to market aren’t seen as risking reputational damage but as acting competitively. Launching a new product with only the core features fundamental to competitive advantage is today the norm, not the exception.

 

However, product failures don’t always go unpunished. Alphabet, Google’s parent company, lost $100 billion in market value after a factual error in Bard’s first demo.

 

Although practices inspired by Silicon Valley have served businesses well in adapting to a whirlwind market rife with fierce competition, volatility and accelerating technological change, they can be incongruent with responsible innovation.

 

Facebook founder Mark Zuckerberg’s motto, “move fast and break things”, is exactly what is playing out in front of our eyes. Thanks to ChatGPT, a new creative tool of unprecedented popularity, academic integrity officers must endlessly scour the net for quotes made up by generative AI, and lawyers run the risk of being sanctioned for filing “bogus judicial decisions with bogus quotes and bogus internal citations”.

 

The cautionary tale of responsible innovation

 

BlackBerry’s seemingly unstoppable rise in the early 2000s, when its email-capable smartphones revolutionised the way people communicated, ended when it was eclipsed by Apple’s iPhone and rendered firmly obsolete by the emergence of Android handsets. It is a salient lesson in how security by design lost out to the emerging Silicon Valley ethos.

 

It wasn’t just BlackBerry’s short-sighted refusal to take Apple’s advances seriously that caused its downfall, but also its single-minded focus on security. Supposedly, the only way to make BlackBerry’s software malfunction was to fire a bullet at the device running it. The fact that President Obama was allowed to keep his personal BlackBerry when moving into the White House also testifies to the historic smartphone’s robust security. BlackBerry kept its security patches regularly updated, but it never introduced new features and resisted the new trends of touchscreens and downloadable apps.

 

That said, it’s never too late to play to your strengths. BlackBerry has since been reborn as one of the world’s largest cyber-security firms, having acquired Cylance, an AI and cyber-security company, in an age when security and cyber-threats have been pushed to the forefront.

 

Bold and responsible

 

What big tech is trying to do, at least at the level of policies and announcements, is marry the boldness of Silicon Valley entrepreneurs with a responsibility for “societal values and the greater well-being of humanity.” But the question is how this balance between disruption and the preservation of values can be struck.

 

There are signs that even big tech companies find these two contradictory approaches hard to reconcile. The voices of ethical AI researchers have increasingly been drowned out by the business imperative to get new products onto the market as soon as possible. Several of these researchers have been dismissed as a result.

 

As a technology executive at Microsoft put it in an internal email viewed by the New York Times, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later.”

 

In the current setting of breakneck competition, the six-month (or longer) moratorium on AI research that tech leaders called for in March this year is unrealistic. What a safer future for humanity really requires is less innovators expressing contrition after the fact and more enforcement of responsible AI – along with ethics researchers whose voices are actually listened to.

 
