ML epistemology workshop – Day 2
Notes for Day 1 are here. Meta-Inductive Justification of Universal Generalizations – Gerhard Schurz. In this talk, Gerhard defended his account of induction against some objections Tom published in an earlier paper. Unfortunately, I am not very well acquainted with meta-induction, and Gerhard’s presentation was a Word document which he quickly skimmed through. So I had… Continue reading ML epistemology workshop – Day 2
ML epistemology workshop – Day 1
I recently attended Tom’s closing workshop on his Philosophy of statistical learning theory project. It was a great workshop and I learned a great deal from the talks. I provide a streamlined version of the notes I took for all those who were interested but couldn’t attend. The abstracts of the talks can be found here: https://www.mcmp.philosophie.uni-muenchen.de/events/workshops/container/ml_2023/index.html#schuster.… Continue reading ML epistemology workshop – Day 1
What is anticipatory ethics about?
It was at ESDiT22 that I first encountered the term anticipatory ethics. This got my hopes up that there might be a new approach to the problem of unintended consequences in philosophy of technology. The problem of unintended consequences: The most troubling ethical problems in technology arise from unintended consequences. Unintended consequences can be known in advance, but most… Continue reading What is anticipatory ethics about?
Two positions in epistemology of ML
My friend Florian Boge is looking for a PhD student and a postdoc for his upcoming project Scientific Understanding and Deep Neural Networks at TU Dortmund. This project will try to get a better grasp of the concepts of explanation and understanding in XAI using the philosophy of science toolbox. It was also featured in The List… Continue reading Two positions in epistemology of ML
The inverse Bananarama Conjecture
“Like all tools, ChatGPT is best in experienced hands. Following the opening quote of this article, we term this the Bananarama Conjecture.” (https://doi.org/10.1016/j.frl.2023.103662) For some reason, the enticingly titled ChatGPT for (Finance) research: The Bananarama Conjecture ended up in my feed. Finance is not something I usually follow, but I try to follow the ChatGPT… Continue reading The inverse Bananarama Conjecture
The strangest error
If the problem of induction is unsolvable, then we won’t have a theory of strange errors. If the new problem of induction is unsolvable, then we won’t have a theory of artifacts in ML. A problem in ML that has not received much attention in philosophical circles so far is the problem of strange… Continue reading The strangest error
List of current ML epistemology projects
There seems to be a flurry of funding for ML epistemology projects, with many of them starting in 2023. This list is my attempt to get an overview of what is going on in the field. I try to include only projects with a very specific ML epistemology focus (or at least epistemology has to be in… Continue reading List of current ML epistemology projects
Researchers have an obligation to publish as little as possible
This first post is about why this blog might have been a mistake. You don’t have any obligation to read it. Global scientific output is increasing every year. And that is only in publication counting schemes that include nothing but “serious” scientific publications like long-form research articles and patents. Add less rigorous outlets like blog posts… Continue reading Researchers have an obligation to publish as little as possible