Do LLMs really train themselves?

Recently Holger Lyre presented his paper “‘Understanding AI’: Semantic Grounding in Large Language Models” in our group seminar. And while I generally remain skeptical about his claims of semantic grounding (maybe the occasion for a separate post), here I want to address a misunderstanding in his paper about what he calls “self-learning”, “self-supervised learning” or…