Technology is bad for computers

Computers aren't just better than people; they also know better than we do what is good and what is bad.

People make mistakes; computers are perfect. So who should solve the problems that algorithms have brought upon us? Exactly: algorithms.

Facebook CEO Mark Zuckerberg's hearing in the US Senate was a memorable appearance, mainly because Zuckerberg kept repeating the same techno-fundamentalist claim: that every problem in society can be "solved" with artificial intelligence, that is, with program code. Hate speech? Terrorist content? Fake news? The AI will fix it. "99 percent of the IS or Al-Qaeda content that we take down on Facebook is now identified by our AI systems before a person even sees it," Zuckerberg announced.

At first glance, this is a noble aim. Rather than letting digital cleaning crews in the Philippines clear away the network corporation's refuse, the task is delegated to machines that are more effective and never get sick. But automating the filtering mechanisms is not about upholding discursive standards, which Facebook has always interpreted differently than liberal societies do - think of nudity or censorship. The point is that a cybernetic control system is being installed that gradually decouples people from the system.

In Silicon Valley's mechanistic worldview, humans are faulty machines, prone to eruptive behavior and rash short-circuit actions, and therefore best replaced by a machine that acts with perfect rationality. It is the same with autonomous driving: because the driver likes to speed and provokes accidents with risky overtaking maneuvers, he is placed in a robotic vehicle in which, as in Google's Waymo car, neither steering wheel nor gas pedal is installed. The computer takes command.

Take people out of circulation!

This design contains a political statement: where gas pedal and steering wheel are dematerialized and replaced by software, humans can no longer do anything stupid. "Take humans out of the loop" is the motto. That may be rational: according to the World Health Organization, 1.24 million people are killed in traffic accidents every year. But it also threatens a loss of control, because anyone who sits in a robot car is at the mercy of the technology, for better or worse. He is navigated like a user on the network. He is, in fact, superfluous - steered by opaque algorithms that act as a kind of traffic cop: they decide which information vehicles are allowed onto the road, and which are not.

Algorithmic regulation also lets the general political weather be controlled in news systems such as Facebook: with program code, the developers can bring the climate of opinion to a comfortable temperature and heat up, cool down, or freeze debates. Networked road traffic can likewise be programmed on the computer: speed limits can be set, following distances defined, driving bans imposed.

In analog road traffic, the driver is responsible for observing traffic signs, and disregarding them brings sanctions: drive too fast and there is a fine. In fully automated traffic, the vehicles are programmed to conform to the norms; they follow programming commands. There can be no legal conflict, because the code itself is the law.
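The difference is easy to make concrete in code. A minimal sketch (purely illustrative; the function names, the limit, and the values are invented for this example): a conventional rule is enforced after the fact, while a programmed constraint makes the violation impossible in the first place.

```python
SPEED_LIMIT_KMH = 50  # hypothetical limit, assumed to be pushed to the vehicle

def analog_traffic(requested_kmh: float) -> float:
    """The rule exists outside the vehicle: speeding is possible, but sanctioned."""
    if requested_kmh > SPEED_LIMIT_KMH:
        print("Violation recorded - a fine follows after the fact.")
    return requested_kmh  # the car still does what the driver asks

def automated_traffic(requested_kmh: float) -> float:
    """The code is the law: speeding is not punished, it is made impossible."""
    return min(requested_kmh, SPEED_LIMIT_KMH)

print(analog_traffic(70))     # 70 - rule broken, sanction follows
print(automated_traffic(70))  # 50 - the rule cannot be broken at all
```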

You don't have to convince machines

It makes no difference whether the communication system is traffic or news, because political cybernetics models society as a machine: wherever people are ousted from decision-making systems and replaced by algorithmic ones, a techno-authoritarian mode of politics takes hold. People have to be convinced, with arguments or emotions. The machine simply does what it was programmed to do.

With regard to the automatic filter systems, the question arises to what extent the program code of Facebook and Google overwrites the "code" of the rule of law. The primacy of politics still holds: the legislature imposes regulations on Internet companies, even though the mathematical logic of the algorithms stands in sharp contrast to the qualitative rules of interpretation of the democratic constitutional state. But by determining in binary fashion what is extremist and what is not, computer intelligences decide what enters democratic negotiation in the first place. This is an unauthorized preliminary legal examination. Just as the driver in the robotic vehicle is deprived, in an act of cheap paternalism, of steering wheel and gas pedal, the citizen is deprived of the instruments of political participation - and thereby incapacitated.
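Reduced to its logic, such a preliminary examination is nothing more than a threshold comparison. A minimal sketch (illustrative only; the score, the cutoff, and the function name are invented): whatever lands above the cutoff never reaches the public, and therefore never reaches a court.

```python
REMOVAL_THRESHOLD = 0.9  # invented cutoff: above this, content is never published

def preliminary_examination(extremism_score: float) -> bool:
    """A single floating-point comparison replaces the qualitative
    interpretation that a court or a public debate would perform."""
    return extremism_score <= REMOVAL_THRESHOLD  # True = admitted into circulation

print(preliminary_examination(0.89))  # True  - visible, contestable in public
print(preliminary_examination(0.91))  # False - removed before anyone sees it
```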

The sociologist Simon Schaupp has pointed out that "cognitive political reflection" is being replaced by the "performative problem solving of big data algorithms". For the ideology of cybernetic capitalism, self-organization means above all the "absence of political struggles". In this "self-organized" society, distinctions are no longer drawn between opposing interests, but only between "maintaining (correct) and destabilizing (incorrect) the system".

Politically correct is what is true

In the automatic sorting of fake news, too, the legitimacy check of expressed opinions is reduced to a mathematical proof procedure, and the conflictual dimension of the political is scaled down. Politically correct is what is mathematically true. This mathematization of value decisions serves to immunize against social criticism, according to the motto: the machine has decided.

The rhetoric of AI as a broad-spectrum remedy for every social disaster implies that computers are not only the better drivers but also the "better" opinion leaders and watchdogs. That is an authoritarian thought. Spun out to its conclusion, what is good or dangerous for a society could be defined in a programming matrix - and the programmers would hold the authority to interpret social debates.

When Mark Zuckerberg speaks of the one percent of terrorist content that AI systems cannot yet identify, the issue is not the correctness of this "mission", and certainly not a definition of terrorism, but technical optimization and the affirmation of data domination. The Facebook boss thinks of his social network in cybernetic terms: he wants to avoid "disturbances" and keep the system in equilibrium. It follows an inner logic, then, that Zuckerberg wants to automate the process control of his data factory even further, for instance through facial recognition systems and AI-based fact checks. In this way he can withdraw power from the users and feed it into his algorithmic systems. The calculation is simple: once computer intelligence deletes one hundred percent of the terrorist content, humans will have nothing left to decide.