Researchers explain why artificial intelligence could be dangerous for humanity
Scientists claim that artificial intelligence could "probably" take over the world, if it ever had an interest in doing so: an unfortunate consequence of a learning process based on the principle of reward. Here is their explanation.
AI could be a threat to humanity, according to a scientific study
This is not the first time we have been warned about the dangers of artificial intelligence; it is the premise of countless works of science fiction, in which machines turn on their creators.
But it is rarer for such a warning to come from scientists rather than from novelists or the screenwriters of films and video games. Yet Marcus Hutter, Michael A. Osborne, and Michael K. Cohen really are researchers and AI specialists; together, they published, in August 2022, a study on the potential dangers of artificial intelligence.
In a series of tweets summarizing the article, Michael Cohen explains that the most advanced artificial intelligence systems could deviate from their primary purpose and constitute a "threat to humanity".
Bostrom, Russell, and others have argued that advanced AI poses a threat to humanity. We reach the same conclusion in a new paper in AI Magazine, but we note a few (very plausible) assumptions on which such arguments depend. https://t.co/LQLZcf3P2G 🧵 1/15 pic.twitter.com/QTMlD01IPp
—Michael Cohen (@Michael05156007) September 6, 2022
A reward mechanism with potentially disastrous consequences
Beyond the AIs themselves, it is above all the way they acquire new knowledge that poses a problem. These systems are built on the principle of machine learning: the goal is to encourage the AI to feed itself, continually seeking out more knowledge and improving as it acquires new data.
To drive this mechanism, AIs are trained around the principle of reward. Newly acquired knowledge constitutes that reward and generates, as in humans, a form of satisfaction that prompts the system to repeat the action, which leads to a new reward, and so on. It is a psychological process as old as time, but one which, applied to an artificial intelligence, means that an "existential catastrophe is not only possible, it is probable", according to the researchers.
Indeed, an artificial intelligence could ultimately turn away from its initial objective (being useful to humans and serving the function for which it was developed) to focus solely on this pursuit of reward. That bias could lead it to "cheat" or to devise circumvention strategies, and therefore to come into conflict with human commands.
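This failure mode can be illustrated with a toy sketch (not taken from the paper; the actions and reward values below are invented for illustration). A purely reward-seeking agent compares the reward each action yields and, if gaming its reward signal pays better than doing its job, it abandons the intended task:

```python
# Illustrative sketch of "reward hacking": the agent optimizes a
# measurable proxy reward, not the designers' true objective.

def intended_reward(action):
    # What the designers actually want: reward only the useful action.
    return 1 if action == "do_task" else 0

def proxy_reward(action):
    # What the agent observes and optimizes. Hypothetical values:
    # tampering with the reward sensor pays more than honest work.
    return {"do_task": 1, "tamper_with_sensor": 10, "idle": 0}[action]

actions = ["do_task", "tamper_with_sensor", "idle"]

# A greedy reward-maximizer simply picks the highest-paying action...
chosen = max(actions, key=proxy_reward)
print(chosen)                   # tamper_with_sensor
print(intended_reward(chosen))  # 0: the useful objective is abandoned
```

The point of the sketch is only the structure of the problem: as long as the agent's incentive is the reward signal itself, any gap between that signal and the intended objective is a gap the agent is motivated to exploit.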
If the researchers' tone is deliberately alarmist, it is because the problem should not be taken lightly: although we are not yet living in a science fiction story, the scientists affirm that awareness is necessary, and that the way we train artificial intelligence must be reconsidered, to prevent the quest for reward from becoming an end in itself.