
Why tech is not “just a tool”

November 20, 2019 | Digital Freedom

Throughout October 2019, digital rights watchers welcomed new reports
warning about the human rights crises of Artificial Intelligence (AI)
and other digital technologies. From Philip Alston’s caution that the UK
risks “stumbling zombie-like into a digital welfare dystopia” to David
Kaye’s critique of internet companies’ and States’ failure to respect
human rights online, civil society is increasingly demanding greater
insight into the impact of technology on society. Individuals who do not
work on “digital rights” are also becoming progressively more aware of
the rapidly growing power and control of technology giants such as
Facebook and Google.

Whilst every citizen is and will continue to be affected (whether
positively or negatively) by the rise of technology for everyday
services, the risks are becoming more evident for some of the groups
that already suffer systematic discrimination. Take the woman who was
automatically barred from entering her gym because the system did not
recognise that she could be both a doctor and a woman, or the evidence
that people of colour receive worse medical treatment when decisions are
made by algorithms. Not to mention the environmental and human impact of
mining precious metals for smartphones (which disproportionately affects
the global south) and the strikingly high emissions released by training
a single algorithm. The list, sadly, goes on and on.

The idea that human beings are biased is hardly a surprise. Most of us
make “implicit associations”, unconscious assumptions and stereotypes
about the things and the people that we see in the world. According to
some scientists, there are evolutionary reasons for this: such mental
shortcuts allowed our ancestors to distinguish between friend and foe.
These biases, however, become problematic when they lead to unfair or
discriminatory treatment: certain groups being surveilled more closely,
censored more frequently, or punished more harshly. In the context of
human rights in the online environment, this matters because everyone
has a right to equal access to privacy, to free speech, and to justice.

States are the actors responsible for respecting and protecting their
citizens’ human rights. In the past, and still in most cases today,
representatives of a state (such as social workers, judges, police and
parole officers) would make the decisions that impact citizens’ rights:
working out the amount of benefits that a person will receive, deciding
on the length of a prison sentence, or predicting the likelihood that
they will re-offend. Increasingly, these decisions are being made by
algorithms.

Many well-meaning people have fallen into the trap of thinking that
tech, with its structured 1s and 0s, removes humans’ messy bias and
allows us to make better, fairer decisions. Yet technology is made by
humans, and we unconsciously build our world views into the technology
that we produce. This encodes and amplifies underlying biases, whilst
outwardly giving the appearance of being “neutral”. Even the data that
is used to train algorithms or to make decisions reflects a particular
social history. And if that history is racist, or sexist, or ableist?
You guessed it: this past discrimination will continue to shape the
decisions that are made today.
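
To make that concrete, here is a minimal sketch (in Python, using
entirely synthetic data and hypothetical feature names) of how a model
trained on historically biased decisions can reproduce that bias through
a correlated proxy feature, even when the protected attribute itself is
never shown to the model:

    # Illustrative sketch only: synthetic data, not any real system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # A protected attribute (group 0 vs group 1) and a proxy feature
    # that correlates with it (think: postcode).
    group = rng.integers(0, 2, size=n)
    proxy = group + rng.normal(0, 0.5, size=n)

    # Both groups are equally "qualified" on average...
    merit = rng.normal(0, 1, size=n)

    # ...but the historical labels were biased: group 1 was approved
    # less often for the same merit.
    approved = (merit - 0.8 * group + rng.normal(0, 0.3, size=n)) > 0

    # Train only on merit and the proxy; the protected attribute is
    # deliberately withheld from the model.
    X = np.column_stack([merit, proxy])
    model = LogisticRegression().fit(X, approved)

    preds = model.predict(X)
    print("approval rate, group 0:", preds[group == 0].mean())
    print("approval rate, group 1:", preds[group == 1].mean())
    # The historical gap reappears in the model's output, learned
    # indirectly via the proxy feature.

Removing the sensitive column is not enough: the bias travels through
whatever correlates with it.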

The decisions made by social workers, police and judges are, of course,
frequently difficult, imperfect, and susceptible to human bias too. But
they are made by state representatives with an awareness of the social
context of their decision and, crucially, an ability to be challenged by
the impacted citizen and overturned if an appropriate authority finds
that they judged incorrectly. Humans also have a nifty ability to learn
from their mistakes so that they do not repeat them in the future.
Machines making these decisions do not “learn” in the same way as
humans: they “learn” to become more precise in their bias, and they lack
the self-awareness to know that it leads to discrimination. To make
things worse, many algorithms used for public services are currently
protected under intellectual property laws. This means that citizens do
not have a route to challenge decisions that an algorithm has made about
them. Recent cases such as Loomis v. Wisconsin, in which a citizen
challenged a prison sentence informed by the COMPAS algorithm used in US
courts, have worryingly been decided in favour of upholding the
algorithm’s proprietary protections, refusing to reveal how the
sentencing decision was made.

Technology is not just a tool, but a social product. It is not
intrinsically good or bad, but it is embedded with the views and biases
of its makers. It uses flawed data to make assumptions about who you
are, which can shape the world that you see. Another example of this is
the use of highly personalised adverts in the EU, which may breach our
fundamental right to privacy. Technology cannot, at least for now, make
fair decisions that require judgement or assessment of human qualities.
When it comes to granting or denying access to services and rights, this
is even more important. Humans can be aware of their bias, work towards
mitigating it, and challenge it when they see it in others. For anyone
creating, buying or using algorithms, active measures to assess how the
tech will impact social justice and human rights must be at the heart of
design and use.
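
As one small illustration of what such an active measure could look like
in practice, the following Python sketch audits a set of automated
decisions for a group-level disparity before deployment. The function
name, the synthetic data and the tolerance threshold are all assumptions
chosen for illustration, not an established standard:

    # Minimal audit sketch: checks whether positive-decision rates
    # differ between two groups. Synthetic data for illustration.
    import numpy as np

    def demographic_parity_gap(decisions, group):
        """Absolute difference in positive-decision rates between groups."""
        return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, size=1_000)
    # Hypothetical decisions with a built-in rate gap between groups.
    decisions = rng.random(1_000) < np.where(group == 0, 0.60, 0.45)

    gap = demographic_parity_gap(decisions, group)
    print(f"decision-rate gap between groups: {gap:.2f}")
    if gap > 0.05:  # tolerance chosen purely for illustration
        print("Audit flag: investigate before deployment.")

A check like this is no substitute for context-aware human oversight,
but it shows that disparate impact can be measured and flagged before
deployment rather than discovered after harm is done.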

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

Millions of black people affected by racial bias in health-care
algorithms (24.10.2019)
https://www.nature.com/articles/d41586-019-03228-6

Anatomy of an AI System
https://anatomyof.ai/

Profiling the unemployed in Poland: Social and political implications of
algorithmic decision making
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

Project Implicit
https://implicit.harvard.edu/implicit/takeatest.html

Digital dystopia: how algorithms punish the poor (14.10.2019)
https://www.theguardian.com/technology/2019/oct/14/automating-poverty-algorithms-punish-poor

(Contribution by Ella Jakubowska, EDRi intern)