Zweig urges critical interaction with algorithms

Professor Dr. Katharina Anna Zweig. Credit: Thomas Koziel

Algorithms affect our daily life: they decide which online advertisements we see and even determine who is approved for a loan. At the University of Kaiserslautern, computer science professor Dr. Katharina Anna Zweig and her team at the Algorithm Accountability Lab are working on the critical examination of such calculation methods, among other things exploring ways to supervise them. At the Cebit computer trade fair in Hannover, from 20 to 24 March at the research stand of the Federal State of Rhineland-Palatinate (hall 6, stand C17), they will illustrate how algorithms work, what decision-making processes they use and what they learn from collected data.

When we surf the internet, we leave behind a digital footprint, which online shops use, for example, to provide personalised purchase recommendations. There are many examples like this: algorithms make predictions and model human behaviour. However, it is often unclear which data these calculations are based on. “Today, decisions are increasingly being made by such calculation methods. In some cases this can be useful, for example because the decisions are not affected by discrimination”, says Professor Katharina Anna Zweig from the University of Kaiserslautern, who heads the Graph Theory and Complex Network Analysis Group and has long researched algorithms. “However, errors can occur, for example in an algorithm that predicts whether or not a person will repay a loan. In such a case, would it be better to approve as few loans as possible, so that hardly any default, or rather more loans, granting more people access to the related opportunities?”
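The loan question Zweig poses is, at its core, a question about where to set a classifier's decision threshold: every choice of threshold trades one kind of error against the other. A minimal sketch of that trade-off, using invented applicants and risk scores (all numbers here are hypothetical, not from the lab's work):

```python
# Hypothetical applicants: (actually_repaid, predicted_default_risk).
# The scores are invented purely to illustrate the threshold trade-off.
applicants = [
    (True, 0.10), (True, 0.35), (True, 0.55),
    (False, 0.45), (False, 0.70), (False, 0.90),
]

def approval_errors(threshold):
    """Approve everyone whose predicted default risk is below the threshold."""
    # Bad loans approved: applicant defaults despite being approved.
    bad_approved = sum(1 for repaid, risk in applicants
                       if not repaid and risk < threshold)
    # Good loans denied: applicant would have repaid but was rejected.
    good_denied = sum(1 for repaid, risk in applicants
                      if repaid and risk >= threshold)
    return bad_approved, good_denied

# A strict threshold denies many creditworthy applicants;
# a lax one lets more bad loans through. No threshold avoids both.
for t in (0.2, 0.5, 0.8):
    bad, denied = approval_errors(t)
    print(f"threshold={t}: bad_approved={bad}, good_denied={denied}")
```

On this toy data, tightening the threshold from 0.8 to 0.2 shifts the errors entirely from approved-but-defaulting loans to rejected-but-creditworthy applicants, which is exactly the policy question Zweig raises.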

What if calculation methods could even predict the probability of a young offender reoffending? If that probability is high, a judge might impose a prison sentence instead of probation. Should as few people as possible mistakenly go to jail, or as few as possible be wrongly released on probation? The computer scientist from Kaiserslautern addresses such issues and studies algorithms that shape important decisions in people's lives. “Decisions made by algorithms require supervision”, explains Professor Zweig. “Therefore, we need a system that inspects them in a democratically legitimised manner”.

At the Cebit, Zweig and her team will provide information on the opportunities and risks such calculation methods entail and on how they learn from collected data. Using an exhibit, they will show how algorithms reach decisions via “decision trees”. “The principle is similar to a marble run, along which the balls roll until they reach a switch that determines their further direction”, explains Zweig.
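The marble-run analogy maps directly onto how a decision tree executes: each switch is a yes/no test on one feature of the input, and the leaf the "marble" lands in is the decision. A minimal hand-written sketch, assuming a hypothetical loan scenario (the features and thresholds are invented for illustration, not taken from any real system):

```python
def decide_loan(income, years_employed, existing_debt):
    """Each 'if' is one switch in the marble run; returning is landing in a leaf."""
    if income < 20_000:               # switch 1: enough income?
        return "reject"
    if years_employed < 2:            # switch 2: stable employment?
        if existing_debt > 5_000:     # switch 3: heavy existing debt?
            return "reject"
        return "approve"
    return "approve"

print(decide_loan(income=35_000, years_employed=5, existing_debt=1_000))  # approve
print(decide_loan(income=25_000, years_employed=1, existing_debt=8_000))  # reject
```

In a learned tree, the tests and thresholds at each switch are not written by hand but derived from the collected data during training, which is precisely why Zweig argues the resulting decision paths need inspection.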

Together with the philosopher Lorena Jaume-Palasí and the journalists Lorenz Matzat and Matthias Spielkamp, Zweig last year founded the platform “AlgorithmWatch” to better inform the public about the power of algorithms. Zweig has in mind a form of algorithm inspection as suggested by the Austrian jurist Viktor Mayer-Schönberger. “A team of experts, similar to auditors, could check the code and either approve it or not”, says the computer scientist. Trustworthy calculation methods would then receive an official seal, letting customers see that there is nothing to be concerned about.