Ethics of Algorithms – Why Should We Care?

Have you ever been discriminated against by an algorithm? Probably. You just don’t know about it.

von Kilian Vieth and Joanna Bronowicka, January 3, 2018

Have you ever been discriminated against by an algorithm? Probably. You just don’t know about it. Software built on algorithms is at the core of new business models of both start-ups and NGOs. And those algorithms carry the potential to discriminate against their users. This is why the ethics of algorithms is not just a question for big tech giants. If you want to do good with the software you build, manage or use, you should be aware of the ethical issues algorithms pose.

Algorithms are increasingly used to make decisions for us, about us, or with us – oftentimes without us realizing it. So far, research has focused on how big data is produced and stored, but we also have to scrutinize how algorithms make sense of this growing amount of data. Being aware of how automated decision-making works is becoming a crucial skill for professionals in many fields.

What is an algorithm?
It is computer code that carries out a set of instructions. Algorithms are essential to the way computers process data. Theoretically speaking, they are encoded procedures that transform data based on specific calculations. They consist of a series of steps that are undertaken to solve a particular problem, much like a recipe. An algorithm takes inputs (ingredients), breaks a task into its constituent parts, carries out those parts one by one, and then produces an output (e.g. a cake). A simple example of an algorithm is “find the largest number in this series of numbers”.
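As a minimal sketch, that “largest number” example could look like this in Python (the function name and the sample numbers are ours, purely for illustration):

```python
def find_largest(numbers):
    """Return the largest number in a series of numbers."""
    largest = numbers[0]            # start with the first input
    for number in numbers[1:]:      # step through the rest one by one
        if number > largest:        # compare each value to the current best
            largest = number
    return largest                  # the output of the algorithm

print(find_largest([3, 41, 7, 12]))  # prints 41
```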

Why do algorithms raise ethical concerns?

Let’s take a closer look at some of the critical features of algorithms. What typical functions do they perform? What negative impacts can they have on human rights? Here are some examples that probably affect you too.

One key issue with algorithms is that they keep information away from us. Increasingly, algorithms decide what gets attention and what is ignored – and even what gets published at all and what is censored. This is true for all kinds of rankings, for example search results or the way your social media newsfeed looks. In other words, algorithms perform a gate-keeping function.

For example, algorithms, rather than managers, are increasingly taking part in the hiring (and firing) of employees. Deciding who gets a job and who does not is among the most powerful gate-keeping functions in society. Research shows that human managers display many different biases in hiring decisions, for example based on social class, race and gender. Clearly, human hiring systems are far from perfect. Nevertheless, we should not simply assume that algorithmic hiring can easily overcome human biases. Algorithms might be more accurate in some areas, but they can also create new, sometimes unintended, problems depending on how they are programmed and what input data is used, as the sketch below illustrates.
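A toy sketch can show how a seemingly neutral rule inherits bias from its input data. Everything below is invented for illustration: if past hires mostly came from one background, a rule that rewards “similarity to past hires” quietly penalizes everyone else.

```python
# Hypothetical example: a naive screening rule derived from biased history.
past_hires = ["University A", "University A", "University A", "University B"]

def score_applicant(university):
    # "Fit" is measured as how often this background appears among past hires.
    return past_hires.count(university) / len(past_hires)

for applicant in ["University A", "University B", "University C"]:
    print(applicant, score_applicant(applicant))
# University A scores 0.75, University B scores 0.25, University C scores 0.0 --
# the rule contains no explicit prejudice, yet it reproduces the old hiring pattern.
```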

Beyond the workplace, too, algorithms work as gatekeepers that influence how we perceive the world, often without us realizing it. They channel our attention, which implies tremendous power.

Some algorithms also deal with questions that do not have a clear ‘yes or no’ answer. They move away from the checkbox question “Is this right or wrong?” to more complex judgments, such as “What is important? Who is the right person for the job? Who is a threat to public safety? Who should I date?” Quietly, these kinds of subjective decisions, previously made by humans, are being handed over to algorithms.

We can observe this development in policing, for example. In early 2014, the Chicago Police Department made national headlines in the US for visiting residents who were considered most likely to be involved in violent crime. The selection of individuals, who were not necessarily under investigation, was guided by a computer-generated “heat list” – an algorithm that seeks to predict future involvement in violent crime. A key concern about predictive policing is that such automated systems may create an echo chamber or a self-fulfilling prophecy. Heavy policing of a specific area increases the likelihood that crime will be detected there: since more police means more opportunities to observe residents’ activities, the algorithm might simply confirm its own prediction. Right now, police departments around the globe are testing and implementing predictive policing algorithms, but often lack safeguards against discriminatory biases. Predictions made by algorithms come with no guarantee that they are right, and officials acting on incorrect predictions may trigger unjustified or biased investigations.
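The self-confirming mechanism can be sketched with a deliberately crude toy model (the districts, the crime rate and the patrol numbers below are all invented assumptions, not data about any real system):

```python
import random

random.seed(1)
TRUE_CRIME_RATE = 0.3   # hypothetical: IDENTICAL underlying crime in both districts
patrols = {"District 1": 80, "District 2": 20}   # the "heat list" sends most patrols to District 1

recorded = {}
for district, n_patrols in patrols.items():
    # Each patrol is one opportunity to observe and record an incident.
    recorded[district] = sum(random.random() < TRUE_CRIME_RATE for _ in range(n_patrols))

print(recorded)
# On average District 1 records roughly 24 incidents and District 2 roughly 6 --
# not because District 1 is more dangerous, but because it was watched more
# closely. Fed back into the model, these records "confirm" the prediction.
```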

The complexity and opacity of many algorithms are major problems as well. Many present-day algorithms are so complicated that they can be hard for humans to understand, even if their source code is shared with competent observers. A lack of transparency about the code adds to the problem. Algorithms perform complex calculations that follow many potential steps along the way and can draw on thousands, or even millions, of individual data points. Sometimes not even the programmers can predict how an algorithm will decide a certain case.

Facebook's newsfeed algorithm is an everyday example of this complexity and opacity. Many users are not aware that when they open Facebook, an algorithm decides what to show them and what to hold back. The newsfeed algorithm filters the content you see, but do you know the principles it uses to withhold information from you? In fact, a team of researchers tweaks this algorithm every week, taking thousands and thousands of metrics into consideration. This is why the effects of newsfeed algorithms are hard to predict – even by Facebook engineers. If we asked Facebook how the algorithm works, they would not tell us: the principles behind the way the newsfeed works (the source code) are a business secret. Without knowing the exact code, nobody can evaluate how your newsfeed is composed. Complex algorithms are often practically incomprehensible to outsiders, but they inevitably have values, biases, and potential discrimination built in.
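Nobody outside Facebook knows the real ranking formula, but a heavily simplified sketch shows the principle of gate-keeping by score. Every post, weight and threshold below is an invented assumption; the point is only that whoever sets the weights decides what you see and what stays hidden.

```python
# Hypothetical, heavily simplified newsfeed ranking -- the weights are invented.
posts = [
    {"text": "Friend's wedding photos", "likes": 120, "comments": 30, "is_ad": False},
    {"text": "Local news report",       "likes": 15,  "comments": 4,  "is_ad": False},
    {"text": "Sponsored product post",  "likes": 5,   "comments": 1,  "is_ad": True},
]

def rank(post):
    # Whoever chooses these weights decides what gets attention.
    score = 1.0 * post["likes"] + 3.0 * post["comments"]
    if post["is_ad"]:
        score += 200        # a business goal baked directly into the ranking
    return score

feed = sorted(posts, key=rank, reverse=True)
for post in feed[:2]:       # only the top of the ranking is ever shown
    print(post["text"])
# The local news report is filtered out -- not by an editor, but by the weights.
```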

Should an algorithm decide your future?

Without the help of algorithms, many present-day applications would be unusable. We need them to cope with the enormous amounts of data we produce every day. Algorithms make our lives easier and more productive, and we certainly don’t want to lose those advantages. But we need to be aware of what they do and how they decide.

Algorithms can discriminate against you, just like humans can. Computers are often regarded as objective and rational machines, but algorithms are made by humans and can be just as biased. We need to be critical of the assumption that algorithms make “better” decisions than human beings. There are racist algorithms and sexist ones. Algorithms are not neutral; rather, they perpetuate the prejudices of their creators. And their creators, such as businesses or governments, may have different goals in mind than their users. Questions about the ethics of algorithms are certainly not for policy-makers only. The professionals who design algorithms should also be part of the debates about ethical standards and legal safeguards for algorithms.

Further Reading:

This article is a condensed version of a publication prepared by Kilian Vieth and Joanna Bronowicka from the Centre for Internet and Human Rights at European University Viadrina. It is based on the publication “The Ethics of Algorithms: from radical content to self-driving cars”, with contributions from Zeynep Tufekci, Jillian C. York, Ben Wagner and Frederike Kaltheuner, and on an event on the Ethics of Algorithms, which took place on March 9-10, 2015 in Berlin. The research was supported by the Dutch Ministry of Foreign Affairs. You can find the printable version in pdf format here. It is also available in an Arabic version.

About the Authors:

The Centre for the Internet and Human Rights (CIHR) is a vibrant hub for academic research about technology and society. Kilian Vieth is a researcher and communications assistant at the CIHR. His research interests include critical security and surveillance studies, the digitalization of labor, and the politics of anonymity. Joanna Bronowicka is a sociologist and Project Coordinator at the CIHR. Her research interests include digital policies, migration and human rights. You can find out more at cihr.eu.