How can artificial intelligence fight terrorism?

Artificial intelligence: liability terror

In contrast to conventional algorithms, artificial intelligence is supposed to have a self-learning component and to be able to "program" itself. This raises the crucial question of trust: Should this newly learned content be checked? Can it, indeed must it, be determined, or even manipulated, in which direction this (pre-programmed) self-learning runs? Are there perhaps even (secret) possibilities for manipulation?

In 2013, Snowden, at great personal sacrifice, revealed the NSA's tapping of the Internet, which until then had been (publicly) unknown as a leak. After the Chancellor declared, as the only effective countermeasure, "Spying among friends, that's simply not done", everything has remained the same to this day, except for the public's knowledge of it and, beyond that, Snowden's life situation.

Now, tapping information is not yet manipulation of it; espionage is not yet sabotage. The possibility of interference through a leak, however, is the same: something happens that is "actually" not wanted. If one wants to keep control of artificial intelligence in the face of this danger, one might as well abandon the whole project. Pure (data-mining) algorithms and other assistance programs are not AI; they are not self-learning and can therefore be traced and controlled with natural, human intelligence. Without such controllability, responsibility for AI decisions would evaporate, or else external responsibility and liability terror would arise.

Dr. med. Alexander Ulbrich, 70599 Stuttgart