"Discrimination by artificial intelligence" was the topic of Professor Dr. Georg Borges' lecture on June 1. He illustrated the complex interconnections when algorithmic decision making is compared to decision making by humans. More than 80 people followed the exciting online event, which was part of the "Trier Talks on Law and Digitalization (TGRD)".
Artificial intelligence (AI) has become a buzzword and is already an integral part of modern life. AI offers purchase suggestions, screens job applications, or predicts the recidivism risk of released offenders. All of these processes are based on algorithmic decisions that are made by machines and implemented by humans.
When machines select people, legitimate concerns arise due to the lack of transparency and comprehensibility of the decision-making process: no information is available on the criteria used to select and decide. Yet are decisions made by humans any more comprehensible, for example in examination situations or when granting a loan? Is it discriminatory if a landlord does not rent his apartment to a foreign citizen who can provide guarantees, but to a young blonde waitress?
In his lecture, Professor Borges first looked into the question of what discrimination actually means. He noted that AI can discriminate if the data it is fed is biased accordingly. It is therefore extremely important to protect against erroneous decisions and to test decision-making systems on a regular basis. He presented research results from the "ExamAI" project, which examines what effective control and testing procedures for AI systems might look like. After all, decision-making by AI can also have advantages: machine decisions are replicable and open up the possibility of verification, in contrast to human decision-making.
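To make the idea of regular, replicable testing concrete, here is a minimal sketch of one well-known kind of fairness check, a demographic-parity comparison of approval rates across groups. This is a hypothetical illustration, not the ExamAI project's actual testing procedure; the function name and the toy data are invented for this example.

```python
# Hypothetical illustration (not the ExamAI project's method): a simple
# demographic-parity check on the recorded outcomes of an automated
# decision system, e.g. loan approvals split by a protected attribute.

def demographic_parity_gap(decisions, groups):
    """Return the absolute gap in approval rates between groups.

    decisions: list of 0/1 outcomes (1 = approved)
    groups:    list of group labels, same length as decisions
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, approved = counts.get(g, (0, 0))
        counts[g] = (total + 1, approved + d)
    rates = [approved / total for total, approved in counts.values()]
    return max(rates) - min(rates)

# Toy example: group A is approved 3/4 of the time, group B only 1/4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # prints 0.5
```

Because machine decisions are replicable, such a check can be rerun on the same system at any time, which is exactly the kind of verification that human decision-making does not allow.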
The Trier Talks on Law and Digitalization are being held in the summer semester of 2021 under the theme "The Digital Dimension of Law". The public lecture series is moderated by Professor Dr. Benjamin Raue.