Who is responsible for the robot's decision?
Is it ethical to argue with a navigator or a robot vacuum cleaner? Developers actually welcome this: the more diverse the modalities of communication with artificial intelligence, the better it learns. But aren't we humanizing it by communicating with it as an equal?
This is, of course, a philosophical question. But describing the world of the future, creating a single integrated picture of it on which legislative and executive authorities can base their actions, is a job for philosophers, isn't it? This is what the founder of our journal, Gennady Klimov, devoted his last works to. He was very wary of what he called the "digitalization of stupidity", and today we see clear examples of this phenomenon.
Isaac Asimov, whose 100th anniversary we recently celebrated, formulated the three basic laws of robotics back in 1942: "A robot cannot harm a person or allow a person to be harmed by its inaction. A robot must obey all orders given by a human, except when those orders contradict the First Law. A robot must take care of its own safety to the extent that this does not contradict the First or Second Laws."
At the time, it seemed that robots would be humanoid machines that obey our orders but are frightening to share a planet with (see the Terminator films). Perhaps it is precisely because of science fiction's warnings that robotics took a completely different path, and now we must worry about how our relationship with far less personalized entities will develop.
In the global IT industry, questions about the ethical relationship between humans and artificial intelligence became more acute in 2018. All the major corporations have established ethics commissions that review emerging precedents. The urgency of this problem reached Russia, as usual, a little later. In 2019, platform solutions for digitalization were actively discussed, but experts in the theory and practice of this process agree that 2020 will be devoted to discussing ethical issues.
One of the main issues is the distribution of responsibility. Who is responsible for an algorithm's decisions? This is the first question that officials ask. What if artificial intelligence awarded benefits to the wrong person, or let smuggled goods through? When a person makes a wrong decision, there is someone to fine or put in jail. But here, what is to be done?
Last year, IT industry experts discussed the concept of a "technical person" (alongside individuals and legal entities) with the Ministry of Justice of the Russian Federation. At that time, the Ministry of Justice was not ready to discuss the issue. Perhaps, as our world moves to a new cyber-physical level, the "technical person" that has decided something on its own will become a full-fledged actor in laws and regulations. But the question of punishment remains open. For now, when algorithms are wrong, it is the unlucky people who suffer, not the developers.
The next problem: how much are citizens willing to trust decisions made by algorithms? Surveys show that more people in Russia than in the rest of the world are ready to entrust decision-making to a robot judge (perhaps because of a deep distrust of the existing judicial system). But if you train a robot judge on the decisions that judges make now, we will get the same thing, only faster. Self-learning neural networks are not transparent, and it is impossible to trace why an algorithm came to its conclusions.
Here is a classic case from this field. Amazon developed an automated recruitment system and trained it on its previous hiring decisions. The artificial intelligence concluded that the best candidates were white men, since it was white men whom human personnel officers had preferred to hire. In other words, digitalization reinforces old prejudices. So far, in all areas related to governance or law, it has not been possible to avoid this effect of history reproducing itself; the algorithm has to be switched off so that it does not duplicate old errors. At the moment, there are no solutions that implement the principle of fairness in the digital world. The result is only deterioration: that very digitalization of stupidity.
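The mechanism behind this case is simple enough to sketch. The following toy example (entirely hypothetical data, and a deliberately trivial "model" that just takes the majority decision per group, standing in for a real classifier) shows how training on biased historical decisions reproduces the bias:

```python
from collections import defaultdict

# Hypothetical past hiring records: (candidate_group, was_hired).
# The skew is in the data: group_a was usually hired, group_b usually rejected.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def train(records):
    """'Learn' the majority hiring decision for each group."""
    tally = defaultdict(lambda: [0, 0])  # group -> [hired, rejected]
    for group, hired in records:
        tally[group][0 if hired else 1] += 1
    return {g: counts[0] > counts[1] for g, counts in tally.items()}

model = train(history)

# Two equally qualified candidates now get opposite recommendations,
# purely because of the historical skew in the training data.
print(model["group_a"])  # True: recommend hire
print(model["group_b"])  # False: recommend reject
```

A real recruitment system is far more complex, but the failure mode is the same: the model has no notion of fairness, only of patterns in past decisions, so it faithfully amplifies whatever prejudice those decisions contained.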
RANEPA experts, who prepared a report on the ethical problems of digitalization for the Gaidar Forum held in early 2020, named three areas of such problems: big data, the "inhumanity" of artificial intelligence, and social inequality. We can already see what is happening to data: the government cannot keep up with controlling and protecting it. The concept of privacy is disappearing completely. We are constantly monitored by cameras. In China or Singapore, no one warns you about them; they are everywhere, even in the restrooms. Moscow is heading the same way, and its surveillance cameras are equipped with a facial recognition system. In effect, we give up our data without consent, with the potential for total surveillance. When we sign a contract for any service, we never read it, and as a rule we allow the provider to do anything with our data. We wrote about what this can lead to in the previous column.
If public services are algorithmized, social inequality will worsen. Pensioners, the elderly, residents of remote localities, and those with less digital literacy will suffer.
Even such convenient things as digital food delivery and taxi-ordering systems have led to increased... exploitation of people; in fact, to modern slavery. Technology platforms have divided the population into those who can quickly and cheaply order a taxi or a pizza, and those who spend 20 hours at the wheel or on their feet delivering that service to you.
So far, there are more questions than answers. And we will continue to write about these issues.