Algorithms making decisions: the problems

When algorithms take the driver's seat, they can be as biased as humans - a problem when "computer says no"...
22 September 2020

Interview with Karen Yeung, University of Birmingham



Recently in the UK, the government attempted to use an algorithm to generate replacement grades for public exams interrupted by the COVID-19 pandemic. The results were catastrophic: many students claimed they had been treated unfairly and cheated of cherished university places. The government were forced to backtrack. But this is far from the only critical social decision that’s being automated nowadays, as Phil Sansom heard from law and computer science expert Karen Yeung…

Karen - There is a long history of using statistics to inform all sorts of allocation decisions. The problem here was that grades were predicted largely from each school's past results, and someone somewhere lost sight of the critical importance of assessing each student on their own merit. And that of course is grossly unfair, and hence we saw the outcry that emerged after those grades were published.

Phil - This idea then, of using what's happened in the past to automate decisions about the future: is this something that happens in other places that I wouldn't expect?

Karen - It happens everywhere. This is one of the really serious problems that we haven't yet been able to find a way to address, because if we take past patterns of shopping behaviour, for example, to build a profile of what you like and what you don't like, then we're going to assume that you'll like the same kinds of things tomorrow as you did yesterday. Now that's a fairly benign example, but to use one that I find quite powerful to illustrate this problem: a number of years ago, a set of researchers at Carnegie Mellon ran an experiment with a thousand simulated users who were set up to search for jobs on the internet. They let them search away, and then had them visit the same news sites to see what kinds of personalised ads were served up to them. And what they found is that male users were shown high-paying job ads six times more frequently than female users. The historic data showed that women did not tend to hold high-paying jobs and did not click on ads for them, so the assumption built into the system is that women are not interested in high-paying jobs and don't have the capacity to meet their criteria.

Phil - Wow. So you're saying that, not only do you lose the fact that people can be unpredictable, but also you get stereotyped.

Karen - Absolutely, absolutely. There is a basic stereotyping logic when you use historic data as a predictor of the future.
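To make that stereotyping logic concrete, here is a minimal sketch in Python (our own illustration with invented numbers, not code from the Carnegie Mellon study): a predictor that learns only from past click rates will reproduce whatever imbalance shaped those clicks.

```python
# Toy illustration (invented numbers): a predictor that only learns from
# historic click rates reproduces whatever bias shaped those clicks.

# Hypothetical records of past behaviour: (group, clicked_high_paying_ad)
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", True), ("female", False), ("female", False),
]

def click_rate(group):
    """Estimate P(click | group) purely from past frequency."""
    clicks = [clicked for g, clicked in history if g == group]
    return sum(clicks) / len(clicks)

# The ad server "decides" whom the ad is worth showing to.
for group in ("male", "female"):
    rate = click_rate(group)
    decision = "show high-paying job ad" if rate >= 0.5 else "rarely show it"
    print(f"{group}: past click rate {rate:.2f} -> {decision}")

# Women were historically offered fewer such jobs, so they clicked less,
# so the model now shows them fewer ads: yesterday's pattern becomes
# tomorrow's decision.
```

Nothing in the code is malicious; the bias arrives entirely through the historic data it is fed.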

Phil - Is that something that you can fix by just doing better at your statistics or your analytics?

Karen - I'm not convinced that we can. How could you eliminate, in a non-arbitrary, non-subjective way, historic bias from your dataset? You would actually be making it up. You would have a vision of your ideal society, and you would try and reflect it by altering your dataset accordingly, but you would effectively be doing that on the basis of arbitrary judgment.
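As a rough sketch of what "altering the dataset" would involve (again our own illustration with invented numbers, not a method Karen proposes): any correction has to encode the outcome you believe the past should have produced, and the "de-biased" figures depend entirely on that choice.

```python
# Toy illustration (invented numbers): "correcting" historic data means
# choosing the numbers you think the data *should* have shown.

observed = {"male": 0.75, "female": 0.25}   # click rates measured from history

def corrected(ideal_gap):
    """Replace the observed gap with a chosen 'ideal' gap, keeping the mean.
    The value of ideal_gap is exactly the arbitrary judgment in question."""
    mean = sum(observed.values()) / len(observed)
    return {"male": round(mean + ideal_gap / 2, 2),
            "female": round(mean - ideal_gap / 2, 2)}

for gap in (0.5, 0.2, 0.0):   # status quo, partial correction, full equality
    print(f"assumed ideal gap {gap:.1f} -> {corrected(gap)}")
```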

Phil - We talked about getting a job; we talked about crucial exam results; are there other areas where this kind of thing is a problem for crucial life moments or decisions?

Karen - Yeah, so I think one of the things that has emerged in recent years in particular is that public sector decision-making, particularly in relation to eligibility for certain kinds of public services, has increasingly been automated.

Phil - That actually happened in the UK, like automated universal credit or something?

Karen - Automated universal credit is a nice illustration. There's the Robodebt fiasco that you may have heard of in Australia, where attempts were made to claim back predicted overpayments, and many, many thousands of people were deprived of benefit payments. And in fact one young man even took his own life after being erroneously pursued for a debt.

Phil - Is this technology a straight-up bugbear? Because it seems like it can be quite useful for chewing through huge, complicated, tedious tasks.

Karen - That's absolutely true. Computational systems are wonderful at automating very repetitive, straightforward tasks, and there are so many tasks whose automation we should celebrate. But I think what we need to do is think about these technologies as complex sociotechnical systems, particularly when the consequences are concrete for people's lives. And of course the rich are able to escape these kinds of systems, and can usually speak to a human if they want to; but there are many other stories of algorithmic horror shows, if you like, where people have been essentially trapped, or find it impossible to challenge the outcomes these systems produce, because they simply don't have an entry point.

Phil - Do we have a proper plan to train people to be really good at using data and automation like you're talking about, and to figure out where it's right to use them?

Karen - I don't think we have yet.
