List of current ML epistemology projects

There seems to be a flurry of funding for ML epistemology projects, with many of them starting in 2023. This list is my attempt to get an overview of what is going on in the field.

I try to include only projects with a very specific ML epistemology focus (or at least epistemology has to be in the project title). I’ll try to keep this list updated at least throughout 2023.

Caveat: All data is to the best of my knowledge; errors are my fault. If you happen to have any updates or feel misrepresented, please leave a comment or drop me a mail.

Update January 2024: The rate of new ML epistemology projects has slowed down considerably over the course of the last year. Maybe the field is now saturated.

The List (last updated January 16th, 2024)

Complexity reduction, explainability and interpretability

Lead: Eric Raidl and Miriam Klopotek
Institution: Uni Tübingen, Uni Stuttgart
Type: Heidelberger Akademie der Wissenschaften
Status: starting (Q2 2024)

Goals/Research questions:
To use methods from many-particle physics to reduce opacity in ML methods. It is hoped that such methods offer a more principled approach to complexity reduction than current XAI methods.

Scientific Understanding and Deep Neural Networks

Lead: Florian Boge
Institution: TU Dortmund (Germany)
Type: Emmy-Noether Group (DFG)
Status: active (Summer 2023)

Goals/Research questions:
Analyze XAI research with respect to concepts of explanation and understanding. Is explanation without understanding possible? Also: the impact of AI on science, specifically via case studies in physics and biology.

The Epistemology of Statistical Learning Theory

Lead: Tom Sterkenburg
Institution: LMU (Germany)
Type: Single researcher project (DFG)
Status: ended (Spring 2023)

Goals/Research questions:
Statistical learning theory as a general epistemology for ML. A defense of formal results as opposed to pragmatism in epistemology.

From Bias to Knowledge: The Epistemology of Machine Learning

Lead: Tom Sterkenburg
Institution: LMU (Germany)
Type: Emmy-Noether Group (DFG)
Status: active

Goals/Research questions:

To close epistemological gaps in the treatment of inductive bias in statistical learning theory and to develop a pragmatist epistemology of real ML systems. The hope is to apply this epistemology to problems of algorithmic fairness.

The Epistemology of AI systems

Lead: Thomas Raleigh
Institution: U Luxembourg (Luxembourg)
Type: Research group? (FNR)
Status: active (September 2023)

Goals/Research questions:
Application of social epistemology to improve the human–AI relationship. Trusting and understanding AI-generated testimony.

Note: Apart from the job posting, I couldn’t find anything about this project yet.

BRIO – Bias, Risk and Opacity in AI

Lead: Giuseppe Primiero
Institutions: several (Italy)
Type: PRIN group
Status: active (2022–2025)

Goals/Research questions:
Epistemological analysis of trustworthy AI. Application and development of different formal logics with respect to ML problems; in the epistemology context, the stated goal is reducing opacity.

Note: A huge project spread across different Italian universities with a clear focus on the ethics of (X)AI. They seem to do some epistemology “on the side”.

Epistemology and Ethics of Machine Learning

Lead: Konstantin Genin
Institution: Uni Tübingen (Germany)
Type: DFG excellence cluster group
Status: active

Goals/Research questions:

It is not clear from the description on their website what the epistemology part of their research is. The project seems fairly ethics-focused.

Data science in psychopathology: gold rush in the data mine

Lead: Jan-Willem Romeijn
Institution: U Groningen (Netherlands)
Type: VICI grant (NWO, Netherlands)
Status: started 2023

Goals/Research questions:

To make hidden inductive assumptions of data science methods in psychopathology explicit using tools from philosophy of science.

Explainable Intelligent Systems

Lead: Lena Kästner
Institution: Uni Bayreuth, Uni Saarland, TU Dortmund (Germany)
Type: VW Foundation
Status: active

Goals/Research questions: Explore the connections between explainability and understandability. Evaluate current XAI methods.

Note: This project consists of three work packages, two of which are ethics-related. Judging from their publication record, the focus is indeed on the ethics of XAI, and they are mainly interested in the use of XAI in human resources.

Machine Discovery and Creation

Lead: Finola Finn, Donal Khosrowi
Institution: Uni Hannover
Type: Uni funded
Status: active

Like many projects, this one considers the normative and epistemological implications of AI. Its main focus is generative AI, while its main epistemological interest seems to be novel modes of scientific discovery.
