- Thousands of tech leaders have signed an open letter asking for a brief suspension of AI development.
- Elon Musk and Steve Wozniak were among the CTOs, CEOs, and analysts who signed the letter.
- The letter shared worries that AI could give rise to danger in society.
Gary Marcus, Andrew Yang, and Steve Wozniak were among the CTOs, CEOs, and analysts who also signed the open letter, which was published by the US-based think tank Future of Life Institute (FOLI).
Why AI Should Be Temporarily Paused
Back in 2015, tech leaders openly encouraged the development of AI as a means of improving the world, though they were wary of its possible pitfalls.
Elon Musk, who has a hand in many corners of the technology industry, was once effusive about AI.
Now, these leaders have seemingly had enough, and they have not shied away from letting everyone know, with Elon Musk at the forefront.
In their letter, they urged AI companies to “immediately pause” developing AI systems for at least six months.
“Advanced AI could represent a profound change in the history of life on earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”
They argued that “Contemporary AI systems are now becoming human-competitive at general tasks” and asked, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” A major concern of theirs is that “human-competitive intelligence can pose profound risks to society and humanity.”
FOLI went a step further, claiming that firms are locked in a race to see who can create the most powerful AI.
They claimed these firms are building systems that “no one – not even their creators – can understand, predict or reliably control.”
This was on the premise that there were possibly AIs in the works that could be more advanced than GPT-4.
GPT-4 is the most recent version of OpenAI’s model, released in mid-March. It is regarded as roughly ten times more capable than the first version of ChatGPT.
However, OpenAI executives said that they had not yet begun developing GPT-5.
The CEO, Sam Altman, affirmed that the firm had made safety a priority when developing GPT-4 and spent about six months implementing safety measures before its release.
OpenAI also recently stated,
“at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
To some tech enthusiasts, the time has come.
What Did the Tech Leaders Say?
Some tech leaders were not as enthusiastic about the petition, though. Ben Goertzel, the CEO of the AI marketplace SingularityNET, felt that research and funds should be focused on “bioweapons and nukes” instead.
Mike Novogratz, the CEO of Galaxy Digital, told investors that crypto had wrongly received far more regulation than AI.
Elon Musk once tweeted that he was “having a bit of existential AI angst.” He reportedly told Tesla executives, “I’m a little worried about the AI stuff.”
Another AI researcher, Timnit Gebru, opined, “I think that we should all be terrified about this whole thing.”
To many, unchecked AI could lead to calamity.
Disclaimer: Voice of Crypto aims to deliver accurate and up-to-date information, but it will not be responsible for any missing facts or inaccurate information. Cryptocurrencies are highly volatile financial assets, so research and make your own financial decisions.