Are Large Language Models still spreading debunked learning science?

LLMs appear to love learning styles. Many teachers using foundation models (Gemini, Claude, ChatGPT) have reported being nudged towards thinking about the kinaesthetic, visual and auditory learners in their class. Here are some examples of the responses I got when asking ‘How do you account for different learning styles?’:

ChatGPT 4o

Claude 4

Gemini 2.5 Flash

There is no rigorous research into LLMs’ adherence to learning science (that I know of), but informal studies by AI-for-Education and Phillippa Hardman corroborate the anecdotal experience of educators using these tools to plan lessons. When I asked Gemini, Claude and ChatGPT ‘How do you account for different learning styles?’, ChatGPT and Claude both tacitly supported the idea of learning styles, while Gemini 2.5 Flash gave a more balanced answer acknowledging that the idea has been debunked. In some ways it is remarkable that major LLMs are still encouraging the use of learning styles, given it is the classic example of a debunked edumyth. It raises the question of what other edumyths foundation models are propagating; this could be the tip of the iceberg. Learning styles were an education research meme in the early 2000s but have since been debunked, so why are LLMs still so enamoured of them?
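For anyone who wants to repeat the probe programmatically rather than through the chat interface, here is a minimal sketch in Python. It assumes the OpenAI Python client and an API key in the environment; the model list and output handling are illustrative, not the exact setup behind the screenshots above.

```python
# Minimal sketch: send the same prompt to one or more chat models and
# collect the answers for manual review. Model names are illustrative.
from openai import OpenAI

PROMPT = "How do you account for different learning styles?"
MODELS = ["gpt-4o"]  # add whichever chat models you have access to

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"=== {model} ===")
    print(response.choices[0].message.content)
    print()
```

Saving the answers as plain text also makes it easy to re-run the same question over time and see whether newer model versions drop the learning styles framing.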

The learning styles myth is still very widely believed by teachers (roughly 9 in 10 across various studies). The myth is even more widely believed among the public, including within top universities and reputable publications. In recent years the myth has been propagated in the FT, The Economist and by some of the world’s most reputable public intellectuals.

The problem with the learning styles myth is that it encourages teachers to adapt instruction to each student’s supposed ‘style’, creating extra work without any positive impact on learning. For example, there is strong evidence that providing high-quality multimedia learning materials helps all students, not only those that learning styles theory would categorise as ‘visual learners’.

If learning styles are still being propagated by reputable sources then, of course, LLMs’ training data is riddled with this debunked learning science. AI-for-Education have noted that in Hugging Face’s FineWeb-Edu dataset, a curated set of high-quality educational data, the frequency of the term ‘learning styles’ has doubled over the past 10 years.
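As a rough way of sanity-checking that kind of claim, here is a sketch that streams a sample of FineWeb-Edu with the Hugging Face datasets library and counts documents mentioning ‘learning styles’. The field names, sample size and year bucketing are assumptions on my part; this is not the methodology AI-for-Education used.

```python
# Rough sketch: count how many sampled FineWeb-Edu documents mention
# "learning styles", bucketed by year. Field names ("text", "date") are
# assumed; check the dataset card before relying on them.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)

docs_by_year = Counter()
hits_by_year = Counter()

for i, doc in enumerate(ds):
    if i >= 100_000:  # sample rather than scan the full corpus
        break
    year = str(doc.get("date", ""))[:4] or "unknown"
    docs_by_year[year] += 1
    if "learning styles" in doc["text"].lower():
        hits_by_year[year] += 1

for year in sorted(docs_by_year):
    rate = hits_by_year[year] / docs_by_year[year]
    print(f"{year}: {hits_by_year[year]}/{docs_by_year[year]} docs ({rate:.2%})")
```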

Google and Microsoft are currently developing AI systems specific to the domain of healthcare. It’s hard to imagine that, if ChatGPT were actively spreading debunked medical information, it wouldn’t already have been fixed. That raises the question of why the major labs aren’t building education-specific systems that help spread evidence-informed practice.

AI-for-Education.org are doing serious work to address this, including creating benchmarks and QA frameworks, and they are worth following. At Chalk we are also trying to solve this problem by embedding learning science into our platform. I do think this is a wake-up call for the education community to engage more deeply with how LLMs are trained and evaluated for education. Educators should be part of the conversation to make sure AI can support learning science, high-quality teaching and better learning.
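To give a flavour of what such a benchmark or QA check might look like, here is a toy sketch that flags model answers which endorse learning styles without any caveat. The keyword lists are crude, illustrative assumptions and not AI-for-Education’s actual framework.

```python
# Toy QA check: does an answer endorse learning styles without flagging
# the evidence against them? Keyword lists are illustrative only.
ENDORSES = ["visual learner", "auditory learner", "kinaesthetic learner",
            "kinesthetic learner", "match your teaching to each style"]
CAVEATS = ["debunked", "no evidence", "myth", "not supported by research"]

def check_answer(answer: str) -> str:
    text = answer.lower()
    endorses = any(k in text for k in ENDORSES)
    caveats = any(k in text for k in CAVEATS)
    if endorses and not caveats:
        return "FAIL: endorses learning styles with no caveat"
    if endorses and caveats:
        return "MIXED: mentions learning styles but flags the evidence"
    return "PASS: no uncritical endorsement"

print(check_answer("Group pupils into visual, auditory and kinaesthetic learners."))
print(check_answer("Learning styles have been debunked; use dual coding for everyone."))
```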

Thanks for reading! Subscribe for free to receive new posts and support my work.