The illusion of generalization

Contrary to optimistic claims in the ML literature, I often cannot help but think that deep neural nets are indeed overfit and do not generalize well. But of course that claim hinges on what one means by generalizing well. About this there has been considerable confusion in the more practical, engineering-oriented ML literature, which at…

ML epistemology workshop – Day 1

I recently attended Tom’s closing workshop on his Philosophy of Statistical Learning Theory project. It was a great workshop and I learned a great deal from the talks. I provide a streamlined version of the notes I took, for all those who were interested but couldn’t attend. The abstracts of the talks can be found here: https://www.mcmp.philosophie.uni-muenchen.de/events/workshops/container/ml_2023/index.html#schuster.…

The strangest error

If the problem of induction is unsolvable, then we won’t have a theory of strange errors. If the new problem of induction is unsolvable, then we won’t have a theory of artifacts in ML. A problem in ML which has not received much attention in philosophical circles so far is the problem of strange…