Tech & Rights

Are Decision-Making Algorithms Always Right, Fair and Reliable?

Algorithmic decision-making (ADM) is swiftly changing our societies. But does it live up to its promise of objectivity, or does it, in the end, do more harm than good?

by Anna Ackermann

ADMs offer the possibility of decision-making without the drawbacks of human bias. In reality, however, the data fed into these systems is often already tainted, which in turn leads to discriminatory outcomes from the machine learning models trained on it. Rather than preventing discrimination, this bakes in the discrimination of marginalized social groups even further.

There are two types of ADMs that make decisions about us: machines that have the capacity to learn, and those that don't. Learning in this case means that the algorithm changes through experience. Both types can cause problems, but in this article we focus only on the former: algorithmic decision-making systems that 'learn'.

What is the purpose of decision-making algorithms?

The rationale for employing algorithmic decision-making is to increase efficiency and cut out human biases. At first glance, this sounds great. If we apply for a loan, for example, we want the application to be processed quickly and fairly. Unfortunately, this promise seldom holds up. While the use of ADMs is promoted on the basis of improving objectivity, often the main rationale is saving money and resources - for which society pays a dear price.

How do decision making algorithms work?

Algorithmic decision-making is the delegation of decision-making and its implementation to machines. For this to be possible, a machine learning model draws conclusions from patterns it has identified in a training data set. The big advantage of algorithms over humans is that they can detect correlations - and therefore patterns - in huge datasets, although a correlation alone does not establish causation.
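To make this concrete, here is a minimal sketch of that train-then-decide loop in Python, assuming the scikit-learn library; the loan "data" and the features are entirely invented for illustration and are not taken from any real system.

```python
# Minimal sketch: a model is fitted to historical decisions, then
# applied to new cases it has never seen. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: [income in 1000s, years in employment],
# labelled with past human decisions (1 = loan approved, 0 = denied).
X_train = [[30, 1], [45, 3], [60, 5], [25, 0], [80, 10], [35, 2]]
y_train = [0, 1, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # identify patterns in the past decisions

# The model now decides about a new applicant purely by comparing
# them to the patterns found in the historical data.
new_applicant = [[40, 2]]
print(model.predict(new_applicant))  # outputs [0] or [1]
```

Whatever regularities are present in those historical decisions - fair or unfair - the fitted model will carry forward.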

Where do we use algorithmic decision-making?

Algorithms are already part of our daily lives. From curating social media feeds to detecting cancer in X-rays - our reality in the 21st century is constructed by algorithms. While these may be amongst the more prominent use cases, algorithms are also increasingly found in places where one might not expect them. The police rely on algorithms to decide how likely you are to commit a crime, the company you are applying to uses them to decide whether to hire you, and public authorities use them to assign you to welfare programs.


In some cases the use of algorithms can support humans and improve the quality of our lives, such as helping doctors detect diseases after they've worked a 12-hour shift. Unfortunately, too often the opposite happens. Objectivity is paramount in any decision-making that impacts a person's life; however, algorithms have been responsible for decisions that were clearly flawed and/or reproduced existing discrimination.

What are the dangers of algorithms making decisions?

Some believe that algorithmic decision-making might be more objective than human decision-making, but this is not necessarily true. And because their biases are less likely to be detected and fixed, ADMs might end up even more discriminatory than human decision-makers.

While such mistakes might seem easy to fix through appropriate monitoring and correction - a process called debiasing - the nature of AI means an effective remedy remains elusive. With its internal workings hidden and not readily understood, AI decision-making remains obscure. The resulting lack of transparency has earned self-learning algorithms the nickname "black box AI". A significant new challenge with these machine learning systems is, therefore, ascertaining when and how they introduce bias into the decision-making process.

Algorithms are reliable in the sense that they always produce the same output for the exact same input (at least in the case of systems that do not keep learning 'on the go'). They are expected to output objective decisions, and they can't have a bad day or take a dislike to someone.

So far so good. But this also means that if they come to false or harmful conclusions, these are output just as steadily - and this is exactly the problem. Since they have to learn from the past, two issues arise. First, they are not responsive to changes in reality until they are retrained. Second, they only ever see an individual in comparison to others, and therefore inadvertently reproduce or magnify historical patterns of bias. This happens because the input data used to train the systems is almost always skewed, whether through past discriminatory practices or through the under-representation of members of marginalized groups. Of course, humans are often no champions of changing their behaviour or values either - but unlike algorithms, we don't carry a reputation for objective decision-making.

In our society, some people can take an elevator to the top, while others have to take the stairs. Imagine an employer relies on AI to sort through hundreds of applications for a new role. There's Joe, who looks great on paper - his parents could afford to send him to good schools and he benefited from their connections - but he isn't the right fit for the job. Now imagine Jessica, a single mom with some gaps in her CV, but with the passion and knowledge to be a good fit. While a human would at least theoretically be able to evaluate each candidate individually in a job interview, an algorithm would learn to hire only the candidate whose CV matches the profile of the generic ideal worker. This robs people of the chance to control their own fate.

In a world that is constantly changing, in which we are collectively trying to break down harmful stereotypes, the logic of algorithmic decision-making is counterproductive. We all want to be judged by our own actions, not by those of our "in-group" before us.

How does AI learning work in practice?

Say you want your machine learning model to find you the best candidate for a given job. You do not tell the algorithm what makes the perfect candidate, since you might not be sure yourself. Instead, you feed the model the resumes of former applicants and mark some as excellent (those belonging to people you hired who went on to perform very well), while giving others lower marks.
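As a minimal sketch of this setup, assuming Python with scikit-learn - the resume snippets and labels below are invented purely for illustration:

```python
# Sketch of the hiring setup described above: past resumes are
# labelled by their outcome, and a text classifier learns what an
# "excellent" resume looks like. All snippets here are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

resumes = [
    "software engineer, chess club captain, python",
    "software engineer, led robotics team, java",
    "teacher, career break, retrained as developer, python",
    "developer, women's chess club captain, python",
]
labels = [1, 1, 0, 0]  # 1 = hired and performed very well, 0 = lower mark

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(resumes, labels)

# Any new resume is now scored by its resemblance to the resumes
# marked "excellent" in the past - including whatever biases shaped
# those past judgements.
print(model.predict(["developer, women's rugby team, python"]))
```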

Is it likely to judge all the applicants fairly?

An algorithm-based hiring tool at Amazon used this strategy, and the results were less than desirable. In 2015, experts found that Amazon's new recruiting engine was biased against women. The company's experimental hiring AI ranked job candidates on the basis of whether their resumes resembled those submitted by successful applicants over a 10-year period. Unsurprisingly, most of these came from men: male dominance within the tech industry is well documented. As a result, Amazon's AI system identified a pattern that the candidates Amazon judged to be desirable were male. The tool penalised resumes that included the word "women's," as in "women's chess club captain."
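To see how such a skew can be surfaced - and to be clear, this is not Amazon's actual system or data, just a toy reconstruction of the mechanism with invented, deliberately skewed labels - one can train a small text model and inspect which words drag a resume's score down:

```python
# Toy reconstruction of the mechanism (not Amazon's system or data):
# train on skewed labels, then inspect the learned word weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "engineer, chess club captain",          # hired, performed well
    "engineer, football team captain",       # hired, performed well
    "engineer, women's chess club captain",  # not hired
    "engineer, women's football team",       # not hired
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

# Words with the most negative weights are "penalised" by the model.
for word, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                           key=lambda pair: pair[1])[:3]:
    print(f"{word}: {weight:.2f}")
```

In this toy setup, "women" receives the most negative weight, because it is the one word that separates the lower-marked resumes: the model has learned the skew in its training labels, not anything about job performance.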

The important question to ask here is: does this make algorithm-based hiring tools any worse than regular employers? After all, it is not uncommon for employers to make decisions based on personal biases. The answer is yes, partly because of the so-called "control problem." Industrial psychologists and engineers studying the human operators of complex machines have long identified one particular danger of the devolution of responsibility to machines: algorithms generate a false impression of objectivity, making it difficult to question their findings.

How do algorithms affect society?

In a democracy, we expect authorities like courts or banks to be able to explain the decisions they make. If we are denied a loan or put into prison, we have the right to know why. By contrast, ADMs - and the way they are currently used - are not compatible with these democratic standards.

A further red flag is that complex machine learning models make the surveillance of humans far easier. Thirty years ago it was very difficult to track someone's movements; now a person's smartphone may give away all the information needed to predict where they will be in the future. Especially worrisome is biometric mass surveillance, which makes the real-life identification of people possible. This is particularly dangerous for marginalized communities (think of members of the LGBTI community or undocumented people, for example) who have good reason to fear being identified in certain places. No one should have to fear persecution or lose their right to privacy. Authoritarian governments have been and continue to be in power, and such technologies in their hands pose a great risk to people, deterring them from expressing disagreement - for example, by attending a protest.

Conclusion

If algorithmic decision-making lived up to its promises - being more objective and less discriminatory than humans - it would hold great potential. Unfortunately, systems using machine learning tend to amplify human bias, while letting that bias go undetected more easily. The implications for minorities facing discrimination, and for the goal of creating a more equal society, are troubling. With the proliferation of ADM systems, it is crucial that we direct our attention towards the careful regulation of these technologies. Technologies should be used to increase the freedom and independence of all of us, not to roll them back.

Image credits:

Christina@wocintechchat.com / Unsplash
