‘Godfather’ of AI is among hundreds of experts calling for urgent action to prevent the ‘potentially catastrophic’ risks posed by technology
- Yoshua Bengio has signed an open letter warning of the danger AI poses
- Over half of AI researchers estimate there's a more than 10% chance AI could lead to human extinction
A godfather of AI is among hundreds of tech bosses and academics calling for an international treaty to avoid the technology’s ‘catastrophic’ risk to humanity.
On the eve of the AI Safety Summit, Turing award winner Yoshua Bengio has signed an open letter warning the danger it poses ‘warrants immediate and serious attention’.
It cites a survey that found over half of AI researchers estimate there is more than a 10 per cent chance advances in machine learning could lead to human extinction.
Notably, among the signatories is one of China’s leading AI academics, Professor Yi Zeng, a key representative of Beijing who is set to lead one of the sessions at the event in Bletchley Park.
Government officials may well see his backing as a positive signal that China – whose invitation to the summit has proven highly controversial – is willing to cooperate on international regulation.
Who has signed it?
Yoshua Bengio – Godfather of AI, and 2018 A.M. Turing Award Laureate
Yi Zeng – The leading expert on AI safety and governance in China, who recently briefed the UN Security Council on AI risks
Max Tegmark – Distinguished physicist and co-founder of the Future of Life Institute
Connor Leahy – CEO of Conjecture, an AI Safety lab
Bart Selman – Former President of the Association for the Advancement of Artificial Intelligence, a leading authority on artificial intelligence
Victoria Krakovna – Senior Research Scientist at Google DeepMind, Co-founder of the Future of Life Institute
Eleanor ‘Nell’ Watson – President of the European Responsible AI Office, Senior Fellow of the Atlantic Council
Gary Marcus – Renowned cognitive scientist, who recently testified in the US Senate alongside OpenAI CEO Sam Altman
Jaan Tallinn – Co-founder of Skype, Centre for the Study of Existential Risk, and Future of Life Institute
Luke Muehlhauser – Board member of Anthropic, a leading AI lab
Ramana Kumar – Former Senior Research Scientist at DeepMind
Geoffrey Odlum – Former Director at the US National Security Council, responsible for G-8 affairs
The letter states: ‘We call on governments worldwide to actively respond to the potentially catastrophic risks posed by advanced AI systems to humanity, encompassing threats from misuse, systemic risks, and loss of control.’
It continues: ‘This responsibility does not rest solely on a few shoulders, but on the collective strength of the global community. Our future hangs in the balance. We must act now to ensure AI is developed safely, responsibly, and for the betterment of all humanity.’
Among the 250 high-profile signatories is Mr Bengio, considered one of the three ‘godfathers’ of modern AI, who has been heavily involved in the upcoming summit. This included sitting on the expert panel reviewing a government paper setting out in stark terms the dangers AI poses.
The Canadian computer scientist – recipient of the Turing Award in 2018, considered the ‘Nobel Prize of Computing’ – fears the pace of development and warns that the risk of extinction from the technology is on a par with nuclear war.
Professor Zeng, from the state-controlled Chinese Academy of Sciences, is set to lead a private discussion at the summit on the risk that AI tools could ‘unexpectedly develop dangerous capabilities’.
He is among several scientists from top universities to represent China, alongside tech giants Alibaba and Tencent.
The decision to invite China has sparked fury among some MPs, with former prime minister Liz Truss leading calls to rescind the invitation.
Writing on X, formerly Twitter, she said: ‘The regime in Beijing has a fundamentally different attitude to the West about AI, seeing it as a means of state control and a tool for national security.’
However, Rishi Sunak, who invited President Xi Jinping himself, has remained firm – arguing there can be ‘no serious strategy’ without engaging all of the world’s leading AI powers.
Elon Musk’s hatred of AI explained: Billionaire believes it will spell the end of humans – a fear Stephen Hawking shared
Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence.
The billionaire first shared his distaste for AI in 2014, calling it humanity’s ‘biggest existential threat’ and comparing it to ‘summoning the demon.’
At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand.
His main fear is that if AI becomes advanced enough, it could overtake humans and spell the end of mankind – a scenario known as The Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: ‘The development of full artificial intelligence could spell the end of the human race.
‘It would take off on its own and redesign itself at an ever-increasing rate.’
Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious; in DeepMind, which has since been acquired by Google; and in OpenAI, creator of the popular ChatGPT program that has taken the world by storm in recent months.
During a 2016 interview, Musk said OpenAI was created to ‘have democratisation of AI technology to make it widely available.’
Musk founded OpenAI with Sam Altman, the company’s CEO, but in 2018 the billionaire attempted to take control of the start-up.
His request was rejected, forcing him to quit OpenAI and move on with his other projects.
In November, OpenAI launched ChatGPT, which became an instant success worldwide.
The chatbot uses ‘large language model’ software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt.
ChatGPT is used to write research papers, books, news articles, emails and more.
But while Altman is basking in its glory, Musk is attacking ChatGPT.
He says the AI is ‘woke’ and deviates from OpenAI’s original non-profit mission.
‘OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,’ Musk tweeted in February.
The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction – but what does it actually mean?
In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans.
There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.
For example, humans could scan their consciousness and store it in a computer, in which they would live forever.
The second scenario is that AI becomes more powerful than humans, taking control and enslaving mankind – though if this ever happens, it is far off in the distant future.
Researchers are now looking for signs of AI reaching The Singularity, such as the technology’s ability to translate speech with the accuracy of a human and perform tasks faster.
Former Google engineer Ray Kurzweil predicts it will be reached by 2045.
He has made 147 predictions about technology advancements since the early 1990s – and 86 per cent have been correct.