Our judgment, like a pendulum, continuously swings between optimism and pessimism. This inclination becomes self-evident whenever we discuss the technological developments of the last decades that have changed the way we live.
In 1964, Umberto Eco published Apocalittici e integrati (known to English readers through the collection Apocalypse Postponed), an essay intended to bring order to the conflicting judgments expressed on mass society. Eco tried to find a sound, rational middle ground between those who were enthusiastic about cultural innovations and those who loathed them. As the old catchphrase goes, "in medio stat virtus".
The current situation
The same arguments can be transposed to our own troubled years, in which two opposing camps clash over topics such as social networks, privacy, personal relationships, online hate, irresponsibility, and so on: those who have faith in the birth of a fairer world, and those who predict the end of our existence. Like the pendulum above, our feelings about the future of technology keep swinging back and forth.
Recently, the media have reported on the racist, discriminatory, and insensitive behavior of Artificial Intelligence applications, usually in areas such as social network moderation, recruitment, and predictive policing.
There is no reason to be surprised: technology is not neutral.
Technology is created by humans for humans, and it carries within it all the prejudices and personal histories of those who develop it. This becomes especially clear in applications where technology is given a voice and interacts directly with the people who created it.
Microsoft's Tay bot
In 2016, Microsoft released Tay, its most advanced bot, on Twitter, so that it could improve its conversational skills by interacting on the social network. In less than 24 hours, Tay started using offensive and racist language, forcing Microsoft to shut it down.
The causes of this media disaster were soon discovered: during that short time, the Artificial Intelligence, which had been given neither an understanding of misbehavior nor any limits on it, learned inappropriate language directly from Twitter users.
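To make the failure mode concrete, here is a minimal, hypothetical Python sketch of a bot that learns phrases verbatim from its users. It is not Microsoft's actual implementation: the class, the placeholder blocklist, and the sample messages are invented for illustration. The point is simply that, without any filter on what gets learned, coordinated toxic input immediately becomes part of the bot's vocabulary.

```python
# Hypothetical sketch (not Microsoft's actual implementation) of why learning
# from unfiltered user input is dangerous: whatever users say becomes part of
# the bot's future replies.

import random

# Placeholder tokens; a real system would use a trained toxicity classifier.
BLOCKLIST = {"slur1", "slur2"}


class NaiveEchoBot:
    """Learns every user phrase verbatim -- the failure mode behind Tay-style incidents."""

    def __init__(self, filtered: bool):
        self.filtered = filtered
        self.learned_phrases = ["hello!", "nice to meet you"]

    def learn(self, user_message: str) -> None:
        # Without filtering, toxic input is absorbed into the bot's vocabulary.
        if self.filtered and any(bad in user_message.lower() for bad in BLOCKLIST):
            return  # reject toxic input instead of learning it
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are drawn from whatever the bot has learned so far.
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    unfiltered = NaiveEchoBot(filtered=False)
    filtered = NaiveEchoBot(filtered=True)

    coordinated_attack = ["you are great", "slur1 slur1 slur1", "slur2 everyone"]
    for msg in coordinated_attack:
        unfiltered.learn(msg)
        filtered.learn(msg)

    print("unfiltered bot vocabulary:", unfiltered.learned_phrases)
    print("filtered bot vocabulary:  ", filtered.learned_phrases)
```

In this toy setup, the unfiltered bot can start repeating the attackers' phrases within a handful of messages, which is essentially what happened to Tay at a much larger scale.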
YouTube's moderation system
Another example worth mentioning, given its pervasive presence in our lives, is the moderation systems of social networks. In the vast majority of cases, users' posts are screened by an Artificial Intelligence trained to recognize inappropriate content, and it is not uncommon for users to end up as the target of discriminatory censorship performed by that very system.
A telling episode involves YouTube, which penalized, both economically and in terms of visibility, the LGBTQ-themed content of numerous creators. In this case, the system was unable to distinguish between sexually explicit material and videos in which authors simply talked about their sexual orientation or gender identity.
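As a purely illustrative sketch, the snippet below shows why this kind of false positive happens when identity-related vocabulary is treated as a signal for explicit content. It is not YouTube's system: real moderation relies on learned models rather than keyword lists, and the term lists and example title here are invented. The bias pattern, however, is the same: any feature that conflates who people are with what is explicit will penalize creators for talking about themselves.

```python
# Deliberately simplified, hypothetical moderation heuristic (not YouTube's
# actual system) showing how mixing identity-related vocabulary into the
# "explicit content" signal produces discriminatory false positives.

EXPLICIT_TERMS = {"explicit-term-a", "explicit-term-b"}      # placeholder tokens
IDENTITY_TERMS = {"lgbtq", "gay", "lesbian", "transgender"}  # orientation/identity terms


def flawed_demonetize(title: str) -> bool:
    """Flags a video if ANY sensitive-looking token appears -- identity terms included."""
    tokens = set(title.lower().split())
    return bool(tokens & (EXPLICIT_TERMS | IDENTITY_TERMS))


def fairer_demonetize(title: str) -> bool:
    """Flags only on explicit-content tokens; identity terms alone are not a signal."""
    tokens = set(title.lower().split())
    return bool(tokens & EXPLICIT_TERMS)


if __name__ == "__main__":
    title = "coming out: my story as a transgender creator"
    print("flawed system flags it:", flawed_demonetize(title))   # True  -> unjust demonetization
    print("fairer system flags it:", fairer_demonetize(title))   # False
```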
Many similar cases could be cited, along with many others that never received media attention and therefore remain unresolved.
OpenAI and university courses
However, in recent years, many players have come to understand the importance of this issue. OpenAI, a research organization founded as a non-profit with Elon Musk among its early backers, has set itself the goal of creating an open and safe Artificial Intelligence that improves the lives of all humanity, without discrimination.
Many universities, for their part, have begun adding courses and specializations on ethics in Artificial Intelligence to their curricula: Harvard, Stanford, and the Massachusetts Institute of Technology, among others. The most important talent pools in the technology field have finally understood how important it is to teach their students that this technology is not neutral and must be designed according to our conscience.
Ultimately, only one point really matters: machines do not care about our future; our well-being depends solely on the people who develop them.