What happened to Facebook's AI

If you celebrate fictional anniversaries, last Friday marked the 20th anniversary of one of the most popular apocalypses in film history. On August 4, 1997, in James Cameron's Terminator films, the artificial intelligence Skynet went online and soon developed consciousness. Shortly afterwards, little was left of humanity: everything went up in nuclear fire.

In keeping with this macabre day of remembrance, dozens of reports haunted the web over the past week: in Facebook's artificial intelligence (AI) laboratory, two programs had developed their own language and used it to talk to each other. The researchers, the reports went on, then "panicked" and "pulled the plug" in order to "kill the AI". Of course, not a single article could do without a photo of the menacingly grinning chrome skeletons from the Terminator films.

The perceived threat level was once again quite high. No wonder: Facebook founder Mark Zuckerberg and Elon Musk, Tesla CEO and a major backer of the AI research institute OpenAI, had only recently traded public blows online over the dangers of artificial intelligence. For Zuckerberg, AI is the solution to almost every human problem; Musk considers it the greatest problem of all.

The Facebook experiment, however, does not show how close we are to the rule of the machines; rather, it reveals the still very narrow limits of today's AI research. The starting point of the experiment, which was by no means stopped in a panic, was the question of whether chatbots can be taught to negotiate with people. That would be a rather practical skill, especially when it comes to outsourcing jobs to machines.

For the experiment, two bots named Bob and Alice were programmed to divide a collection of virtual objects among themselves: hats, balls and books. That worked quite well at first, until the programs started talking in strange sentences. "I want the hat" turned into "Balls have zero to me to me to me to me to me to me to me to", preceded by the bon mot "I can do everything else". Nevertheless, the negotiation ended successfully.

The researchers stopped the test run. But not out of fear.

The origin of this "new language" lies in the way the bots were programmed. A method called reinforcement learning was used: put simply, the software receives a virtual reward whenever it performs a certain action correctly, in this case negotiating successfully. It is, so to speak, the digital equivalent of training an animal with treats.
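The reward loop behind this can be sketched in a few lines. The following is a toy illustration, not Facebook's actual code: a hypothetical agent chooses between two made-up "utterance styles" and, episode by episode, drifts toward whichever one the simulated environment happens to reward more often.

```python
import random

STYLES = ["plain_english", "gibberish"]

def negotiate(style):
    # Toy environment (an assumption for illustration): gibberish happens
    # to close the simulated deal slightly more often than plain English.
    success_rate = {"plain_english": 0.7, "gibberish": 0.9}
    return 1 if random.random() < success_rate[style] else 0

def train(episodes=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    value = {s: 0.0 for s in STYLES}   # estimated reward per style
    counts = {s: 0 for s in STYLES}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-looking style,
        # occasionally explore the other one.
        if random.random() < epsilon:
            style = random.choice(STYLES)
        else:
            style = max(STYLES, key=value.get)
        reward = negotiate(style)
        counts[style] += 1
        value[style] += (reward - value[style]) / counts[style]  # running mean
    return value
```

Run long enough, the agent's value estimate for "gibberish" climbs above the one for "plain_english", and it keeps choosing gibberish: nothing in the reward tells it that humans should be able to follow along.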

In the experiment, however, only the successful outcome of a negotiation was rewarded, not the comprehensibility of the language used to reach it. The gibberish was simply more efficient for the programs, and like a junkie they tried to maximize the reward. The Facebook researchers then stopped the test run. Not because it scared them, but because they considered the experiment a failure.
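The design flaw is easy to reproduce with two hypothetical reward functions (illustrative only; the real experiment's scoring was more involved). The first scores the deal alone, so the repetitive gibberish is never penalized; the second adds a crude comprehensibility term that favors non-repetitive sentences.

```python
def reward_deal_only(deal_succeeded, message):
    # What the experiment effectively optimized: outcome only.
    return 1.0 if deal_succeeded else 0.0

def reward_with_language(deal_succeeded, message):
    # Crude comprehensibility proxy (an assumption for illustration):
    # the fraction of distinct words, which punishes "to me to me to me".
    words = message.split()
    diversity = len(set(words)) / max(len(words), 1)
    return (1.0 if deal_succeeded else 0.0) + 0.5 * diversity

garbled = "balls have zero to me to me to me to me to"
plain = "i want the hat"
```

Under the deal-only reward, both messages score exactly the same whenever the deal succeeds; with the language term, the plain sentence wins. Any learner maximizing the first function has no reason to keep speaking English.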

After all, the task of the bots was not to communicate with one another, but with people. Apocalypse postponed.