Today, algorithms are ubiquitous. Intelligent systems decide what news you’ll read, how likely you are to default on your loan, or whether you’ll make a productive employee. Driven by big data analytics, human decision-making in finance, business, health care, policing, and public policy is increasingly being replaced by automated systems. The Big Data Promise vows to support decision-making in a fair and unbiased fashion. Circuits can’t be prejudiced – computers don’t care about your gender, skin colour or sexuality, the argument goes.
In this talk we’ll look at a number of alarming cases that challenge the dictum of radical objectivity and fairness in algorithmic decision-making and machine learning. At a growing rate, people who perform well lose their jobs because an algorithm says so. In the US, machine learning already supports judges in assessing the probability that a defendant will reoffend, which can lead to harsher prison sentences. In many instances, such systems have been found to be considerably biased against ethnic minority groups and people of colour. Quite contrary to the promise of radical efficiency, fairness and objectivity, algorithmic bias can entrench and reproduce inequality and injustice. How is this possible?
Together, we’ll discuss some of the ethical, legal, social and political challenges that handing over decision-making capacities to machines entails. How do these systems work, and what is the role of Big Data and surveillance? Who is to blame when a machine makes the wrong decision? What should be done in terms of regulation, education and campaigning to make sure that Big Data doesn’t just benefit Big Business?
Juljan Krause is the Editor of the journal Evental Aesthetics and a researcher at the School of Electronics & Computer Science at the University of Southampton. His current research project investigates the social and political implications of building the first quantum internet.