Technology

Opening Shots: AI, friend or foe?

AI is still in its early stages, but with even high-profile figures behind its development, such as Elon Musk, raising concerns about it, do we really know enough about the risks?

I recently attended the Web Summit in Lisbon, where the humanoid robot Sophia spoke at a press conference and was interviewed by journalists. While I wasn’t the one asking the questions, she appeared able to engage in complex conversation, albeit with the occasional slip-up.

Developed by Hong Kong-based firm Hanson Robotics, Sophia is also the first robot to be granted citizenship of a country, Saudi Arabia. While that’s a bit of a gimmick, it does illustrate how fast things are moving in AI and where the technology might be heading.

AI is everywhere. There is scarcely a sector of business that has not started using artificial intelligence in some way, whether through chatbots and voice-powered personal assistants, algorithms that help organisations understand their customers, or driverless vehicles.

Although these advances are for the good of people and are helping to make businesses more efficient, the development of AI does not come without a warning. High-profile figures such as Elon Musk, founder of SpaceX and co-founder of OpenAI, and Professor Stephen Hawking have been vocal with their concerns.

Musk, who has been a driving force behind the advancement of AI, has gone on record as saying that with AI “we are summoning the demon,” while Hawking has warned that it could be the “best or worst thing ever to happen to humanity.”

Both have signed an open letter calling for research on the societal impacts of AI.

This doesn’t necessarily mean we have to worry that our voice-powered personal assistants are going to suddenly turn rogue and become the Terminator. Rather, what Musk and Hawking are saying is that there is still a lot we do not know about AI – and while it is beneficial, its development should proceed slowly and be regulated. Their main concern is the possibility that people could use it to develop lethal autonomous weapons.

Musk, along with 116 founders of robotics and AI companies from 26 countries, has called for such weapons to be banned. The letter they signed warned that these weapons threaten to become the third revolution in warfare, that they could be used by dictators or terrorists against innocent populations, and that once this Pandora’s box is opened, it will be hard to close.

Other concerns that have been raised include the question of what happens when AI exceeds human intelligence. This year Google DeepMind’s AlphaGo beat the world Go champion, Ke Jie – the ancient Chinese game has been described as more complex than chess.

As AI becomes more sophisticated in everyday life, it is likely to become better than humans at almost everything: it can process information faster than we can, it never gets tired, and it doesn’t need to take breaks to eat or sleep.

A former Google executive, Anthony Levandowski, has even founded a non-profit religious organisation called Way of the Future, whose mission is to worship a god-like AI figure, as he believes AI’s intelligence will one day surpass that of humans.

This all raises the question of what we are potentially getting ourselves into with AI. Musk has said on Twitter that Levandowski should be “on the list of people who should absolutely not be allowed to develop digital superintelligence.”

By contrast, Musk is pushing for AI to be developed slowly and with regulation to help control it before we invent something we might regret. As he said on Twitter: “Got to regulate AI/robotics like we do food, drugs, aircraft and cars. Public risks require public oversight. Getting rid of the FAA wouldn’t make flying safer. They’re there for good reason.”

AI is already helping us to do many things in our everyday lives and will change the way businesses work. But if we don’t put measures in place to regulate it and carry out appropriate research, the end result may be a long way from what we originally intended.


by Joanne Frearson