**Meta-Inductive Justification of Universal Generalizations – Gerhard Schurz**

In this talk Gerhard defended his account of induction against objections Tom published in an earlier paper. Unfortunately I am not very well acquainted with meta-induction, and Gerhard’s presentation was a Word document which he quickly skimmed through, so I had a hard time following. The approach is to justify all (or a collection of) inductive rules at once rather than just one specific rule (quite similar to Solomonoff induction again!). This allows one to prove some neat mathematical theorems that guarantee convergence to the true inductive generalization. Not having read Tom’s paper I didn’t quite get what his objection was, but it seemed to be that a philosophically satisfactory justification of induction does not follow from the maths. For example, meta-induction is not stable across time – a grue problem. Gerhard basically accepted this critique and tried to counter it by introducing a bridging principle between formal and informal epistemology. He called this the optimality principle; it normatively requires one to choose the optimal inductive method as selected by his optimality theorem. There was lots of back and forth between him and Tom considering possible objections. In the end Tom seemed unconvinced.
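To make the "justify a collection of rules at once" idea concrete: Schurz's optimality theorems draw on the prediction-with-expert-advice literature, where a meta-forecaster that weights candidate methods by their track record provably approaches the best method's performance. Here is a minimal sketch of the standard exponentially weighted forecaster in that spirit; the toy "methods", the target sequence, and all parameters are my own illustration, not Gerhard's exact construction.

```python
import math
import random

def meta_induct(n_rounds=2000, eta=0.5):
    """Exponentially weighted forecaster over a pool of candidate 'inductive methods'."""
    random.seed(0)  # deterministic toy run

    def truth(t):  # the (to the methods unknown) pattern to be predicted
        return 1 if t % 3 != 0 else 0

    methods = [
        lambda t: truth(t),              # a reliable method
        lambda t: random.randint(0, 1),  # blind guessing
        lambda t: 1 - truth(t),          # systematically anti-inductive
    ]
    weights = [1.0] * len(methods)
    meta_loss = 0.0
    method_loss = [0.0] * len(methods)
    for t in range(n_rounds):
        preds = [m(t) for m in methods]
        # Meta-prediction: success-weighted average of the methods' predictions.
        forecast = sum(w * p for w, p in zip(weights, preds)) / sum(weights)
        y = truth(t)
        meta_loss += abs(forecast - y)  # absolute loss of the weighted forecast
        for i, p in enumerate(preds):
            loss = abs(p - y)
            method_loss[i] += loss
            weights[i] *= math.exp(-eta * loss)  # exponentially down-weight failures
    return meta_loss, method_loss
```

The guarantee (the mathematical core of the optimality results) is that the meta-forecaster's cumulative loss exceeds the best method's by at most a slowly growing regret term, no matter which method turns out to be best.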

**The Limits of Explainable Machine Learning – Rianne de Heide**

Similar to Gitta’s talk, this was about a limiting result Rianne proved with her collaborators on what properties one can expect from post-hoc XAI explanations. Rianne proposed two criteria for good post-hoc explanations. The first is continuous recourse sensitivity, meaning that changes in recourse should depend continuously on the input. The second is robustness of the explanation with respect to label changes. With this definition in mind one can then prove that explanations satisfying both conditions do not exist. Of course this leads one to wonder whether any good XAI explanations exist at all. The following discussion revolved mostly around the continuity demand on good explanations, making it clear that continuity does most of the work in the proof; the question is whether it is a sensible metric for recourse sensitivity. I would imagine that one could come up with cases where continuity conflicts with the obligation to treat individuals individually.
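To get a feel for why continuous recourse sensitivity is such a strong demand, here is a toy illustration (entirely my own, not from Rianne's talk): a 1-D "score" with two disjoint approved regions, where the recourse recommendation is the nearest approved point. Two inputs straddling the midpoint receive recourse recommendations that jump discontinuously.

```python
def recourse(x):
    """Nearest approved point for a toy 1-D model with two disjoint approved intervals."""
    approved = [(-3.0, -2.0), (2.0, 3.0)]  # regions the model classifies as 'approved'
    # Clamping x into each interval yields that interval's nearest point to x.
    candidates = [min(max(x, lo), hi) for lo, hi in approved]
    return min(candidates, key=lambda c: abs(c - x))

print(recourse(-0.01))  # -2.0: move left
print(recourse(0.01))   #  2.0: move right -- a jump of 4 for an input change of 0.02
```

Any nearest-boundary recourse over a disconnected decision region behaves like this, which is one intuition for why the continuity condition ends up doing so much work in the impossibility proof.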

**An Approach to Solve the Abstraction and Reasoning Corpus by Means of Scientific Discovery – Rolf Pfister**

As you might know, the Abstraction and Reasoning Corpus (ARC) is one of the more recent competitions to evaluate AI methods on human-centric tasks. Current human performance on the questions is 80% while the best AIs achieve 30%, so there is quite some room for improvement. Rolf suggests using ideas from scientific discovery to improve performance. He and his coworkers specifically employed conceptual spaces, Mill’s methods of induction and heuristic derivation of laws to come up with an algorithm. So far this algorithm solves 3 of the ARC’s problems, which is still far behind the best competitors, but he hopes to improve the performance.
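For flavour: an ARC task gives a handful of input→output grid pairs from which a transformation rule must be induced and then applied to a test grid. A Mill-style "method of agreement" over a toy hypothesis space of grid transformations might look like the sketch below; this is my own minimal illustration of the idea, not Rolf's algorithm.

```python
def induce_rule(train_pairs):
    """Keep only the candidate transformations consistent with every training pair."""
    # A toy hypothesis space of grid transformations (my own choice).
    hypotheses = {
        "identity":  lambda g: g,
        "transpose": lambda g: [list(r) for r in zip(*g)],
        "flip_h":    lambda g: [row[::-1] for row in g],
        "flip_v":    lambda g: g[::-1],
    }
    return {name: f for name, f in hypotheses.items()
            if all(f(inp) == out for inp, out in train_pairs)}

# Two training pairs whose outputs are the inputs mirrored left-to-right.
train = [([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
         ([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(list(induce_rule(train)))  # ['flip_h']
```

Real ARC tasks need a vastly richer hypothesis space (object detection, counting, recoloring), which is where the conceptual-spaces and heuristic-law-derivation machinery would come in.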

**Artificial Agency – Daniel Herrmann**

Okay… this was a very formal talk with the stated aim of creating a model of agency which conforms to rational choice theory. Basically Daniel presented lots of definitions on how a variation of possible-world semantics can be used to talk about agents and actions. The philosophically salient point here was, if I didn’t get too confused by the references to category theory, that actions pick out the agent and not vice versa. So first you set up your possible actions, then the formalism tells you who the agents are. I guess this might result in some very counterintuitive agents…

**Statistical Learning Theory and Occam’s Razor – Tom Sterkenburg**

The concluding talk of the workshop, which also somewhat concluded Tom’s project on the epistemology of statistical learning theory (SLT), pushed the mainly mathematical results of learning theory to their philosophical limits. If one is inclined to think of SLT as an overarching framework for ML, then results like the no-free-lunch theorems and the fundamental theorem should be of utmost importance for the epistemology of ML. For example, the no-free-lunch theorems would lead one to a skeptical conclusion about the possibility of learning, while the fundamental theorem suggests otherwise. Tom argued that the conditions built into the fundamental theorem prevent it from being used in any foundational epistemic enterprise, but that it can be used for what he termed forward-looking epistemology. This is a pragmatist epistemology which accepts that any justification we might mathematically get is model (hypothesis class) relative and depends on the situation we are learning in. Now a problematic case from current ML comes immediately to mind: for most practically applied learning methods (DNNs for example) we just don’t know which function class they are biased towards. This means that, at least at the moment, we cannot use SLT as a general framework for ML epistemology. There are other problems too: algorithms with infinite VC dimension tend to work well in practice, but SLT can say nothing about them. My take-home from this talk is that there is still lots of work to be done in the epistemology of ML, especially on the (non-formal) philosophy side. Tom’s talk suggests to me that we should consider pragmatist approaches very carefully, maybe taking into account the practical engineering success of ML methods in greater detail.
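As a concrete anchor for the VC-dimension notion that the fundamental theorem turns on: a set of points is *shattered* by a hypothesis class when the class realizes every possible labelling of those points, and the VC dimension is the size of the largest shattered set. A brute-force shatter check for a finite grid of 1-D threshold classifiers (my own illustration; the true threshold class is infinite, but a finite grid suffices to show the pattern) looks like this:

```python
def shatters(hypotheses, points):
    """True iff the hypotheses realize all 2^n labelings of the given points."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

# Threshold classifiers h_t(x) = 1 iff x >= t, over a finite grid of thresholds.
thresholds = [lambda x, t=t: int(x >= t) for t in (-2, -1, 0, 1, 2)]

print(shatters(thresholds, [0.5]))        # True: any single point is shattered
print(shatters(thresholds, [-1.5, 1.5]))  # False: the labeling (1, 0) is unrealizable
```

No two-point set can be shattered by thresholds (a point left of another can never be labeled 1 while the right one gets 0), so the class has VC dimension 1 and is, by the fundamental theorem, PAC learnable; the epistemological worry in the talk is precisely that such clean statements are unavailable for classes like DNNs, whose effective hypothesis class we cannot pin down.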