News JVTech Artificial intelligence could trigger a nuclear war! These experts are concerned, and they are making it known
Unless you live deep in a cave, you can’t have missed the ChatGPT phenomenon. This artificial intelligence has the whole world both worried and fascinated, to the point that some experts fear it could lead to nuclear war.
36% of the experts surveyed fear a nuclear war caused by artificial intelligence
When artificial intelligence reached the general public, most recently with ChatGPT, enthusiasm among young people was total. While some worried about being replaced by an AI within a few years, it has to be admitted that the answers the OpenAI chatbot provides are stunning.
It is therefore entirely legitimate to wonder to what extent AIs can influence our world. If this question hasn’t crossed your mind, know that a team from New York University didn’t wait for the arrival of ChatGPT to address the issue: last May, these researchers brought together a panel of 480 experts from different professional backgrounds.
This panel of experts simply had to respond to the statements presented to it with “agree” or “disagree”. On artificial intelligence, they were asked for their opinion on two statements: “AI could soon lead to revolutionary societal changes” and “decisions made by an AI could lead to a nuclear catastrophe”.
While 73% of the 480 experts surveyed agreed that artificial intelligence could be the starting point for revolutionary change, more worryingly, 36% of them considered this technology dangerous and capable of causing a global catastrophe on the scale of all-out nuclear war.
So yes, from today’s perspective we may still see ChatGPT as a harmless chatbot that can answer complex questions, write schoolwork, or analyze large amounts of data, but that view is clearly not shared by everyone. Some experts even believe ChatGPT has the skills needed to replace certain financial-industry professionals, or even content creators. That is frankly worrying, as ChatGPT is not (yet) infallible and sometimes produces inaccuracies, or outright errors, in its analyses. The risk, then, would be an uncontrollable spread of erroneous information that ends up serving as a basis for decision-makers.
Even Elon Musk is worried: at a press conference at the World Government Summit, he called AI “one of the biggest risks to the future of civilization.” That didn’t stop the American billionaire at the helm of Twitter, however, from starting to develop his own AI to compete with the OpenAI chatbot.
The results of this study should nevertheless be taken with due caution, since some of the questions the participants had to answer were more controversial than others, which may have led to more or less biased responses. In addition, it has been reported that some experts chose not to voice an opinion on issues they deemed too sensitive, arguing that “AI should not be treated like a human making their own decisions”.
The researchers behind the survey concluded that the answers to these questions lie somewhere between genuine objectivity and signaling behavior.