AI Can’t Solve Education’s Real Problem and I’m Not Sure the “Renaissance Learner” Is the Right Answer Either
Dan Fitzpatrick’s recent article on Ben Gomes, Google’s Head of Learning and Sustainability, raises an important question for education at a time when AI is dominating so much of the conversation. The article argues that while AI may improve access, efficiency, and even aspects of teaching and learning, it cannot solve education’s deepest problem because that problem is ultimately human, not technological. It explores ideas around teacher burnout, motivation, purpose, and the kind of learner schools may need to nurture in the future, including Gomes’ idea of the “Renaissance learner.”
To be clear, there was a lot in the article that I agreed with. At its heart, it makes a point that teachers have always known, even if the current AI conversation sometimes forgets it. Learning has never just been about access to information. It has never just been about better content, faster feedback, or more efficient delivery. Those things can help, but they are not the thing itself. What makes learning come alive for a young person is usually much more human than that. It is often a relationship. It is encouragement. It is trust. It is feeling seen. It is having someone make you believe that learning matters and that you matter within it. That is why the comment in the article that stayed with me most was the idea that AI can carry knowledge, but it cannot carry desire. That feels exactly right.
For all the excitement, anxiety, hype, and noise, AI does not solve the deepest problem in education because the deepest problem in education was never simply information scarcity. It was never just about not having enough content, or not being able to explain things clearly enough, or not having a tool that could personalise learning pathways. The deeper challenge has always been human. It lies in belonging, motivation, connection, identity, culture, trust, and whether young people experience learning as something that has meaning in their lives. Tools can amplify that. They cannot create it from nothing. The article gets that right when it says the tools amplify direction, but they do not provide it. A student with no desire to learn is not suddenly transformed just because the tools have got better. That is why I agree with the central claim made by Gomes.
The piece is strongest when it connects this to teachers. If teachers are the ones who so often “unlock” students, then the global teacher shortage is not just a labour market problem or an operational issue for national school systems. It is a crisis of human possibility. The article mentions the projected worldwide shortage of 44 million teachers and frames AI, at least in part, as helping make the profession sustainable again by reducing burnout and reclaiming time. That matters. If AI can take some of the paperwork and give teachers back parts of the job that have been crowded out by workload, then that is worth taking seriously. But there is a caveat: this only works if that reclaimed time is given back to the relational core of teaching rather than being swallowed up by new expectations from school leadership. That distinction is important to me.
Because there is a danger in how schools and systems interpret this kind of argument. If leaders hear “AI can save teachers time,” but what they really mean is “teachers can now do even more,” then we have missed the point entirely. The value of AI in schools is not that it helps us squeeze more output from already stretched people. Its value is that it might help protect the human work that matters most. If a teacher has more time to know their students, notice who is disengaged, build trust, provide more thoughtful feedback, and adapt learning to make it more meaningful, then AI is serving education well. If it simply becomes another mechanism for efficiency, surveillance, or intensification, then we are just using new technology to deepen an old problem. That is one of the reasons I found the article compelling. It recognises that the issue is not primarily technological. It is human. Where I become more cautious is with the phrase “Renaissance learner.”
I see why this idea has appeal. In the article, it is presented as a way of broadening our sense of what learning might look like in a world where AI can take over more of the procedural and mechanical aspects of tasks. The example given is that engineers may move more easily into design, and designers more easily into coding, because the tools begin to lower some of the traditional barriers between fields. Gomes suggests AI should shift education away from focusing on the mechanics of learning and more towards higher-level conceptual understanding and abstraction, so that students become “Renaissance learners.”
I understand the attraction of that argument, and I agree that schools do need to think carefully about what knowledge, understanding, and capabilities matter most in an AI-shaped world. But I do not agree with the shift as it is being framed here. Foundational knowledge still has an essential place. The process of learning still matters. There is real value in students building knowledge over time, wrestling with ideas, practising, remembering, and developing the intellectual foundations that allow deeper thinking to happen in the first place. Higher-order thinking is not separate from foundational knowledge; it is built on it.
The idea of a “Renaissance learner” feels like a polished, future-facing label for a much older educational ideal. It still centres the individual student as someone who must become more expansive, more flexible, more interdisciplinary, more adaptive, and more capable of moving across domains. On one level, that sounds positive. But it still assumes that the answer to a rapidly changing world is to produce a new kind of optimised individual. That is too narrow, and perhaps even part of the problem. I think the student we should be talking about is not primarily a “Renaissance learner.” It is a deeply human learner.
What I mean by that is someone who can think critically, yes, but who is also grounded. Who can touch the grass, as my students say. Someone who can use AI without being used by it. Someone who can move across disciplines, but who also understands that knowledge is not just something to be consumed and recombined. It is shaped by culture, context, values, history, place, and relationships. A student is not just an adaptable brain navigating an information-rich world. A student is a whole person. They are socially located. They are culturally located. They are connected to other people. They are trying to make meaning, not just perform competence. That is where I think the “Renaissance learner” idea starts to fall short for me.
It risks making breadth the aspiration, when wisdom matters more than breadth. It risks celebrating flexibility without asking what anchors the student ethically, culturally, and relationally. It risks imagining the ideal student as someone who can range across knowledge domains with the help of AI, while not paying enough attention to whether they are also becoming thoughtful, responsible, compassionate, and able to act with discernment in a world shaped by increasingly powerful technologies.
Here in New Zealand, I am cautious about the future-of-learning rhetoric that leans on particular historical metaphors as though they are universal. The “Renaissance learner” carries with it a very particular intellectual tradition and image of knowledge. It is not meaningless, but it is limited. It does not automatically speak to more relational, collective, place-based, and intergenerational understandings of learning. For me, any serious vision of education in the age of AI has to leave room for that. It has to recognise that learning is not only about developing individual capacity. It is also about belonging, responsibility, connection, identity, and our relationships with community and with the worlds we inherit and shape. That is why I would frame a learner differently.
I would rather discuss how young people are critically literate, culturally grounded, ethically awake, and able to remain human in an AI-shaped world. I would rather discuss how students can ask good questions, challenge systems, understand context, and use technology as a thinking partner rather than a substitute for thought. I would rather discuss how students can bring together knowledge, empathy, judgment, and responsibility. That matters more than whether they resemble a modern version of the Renaissance man.
The article does offer an example that points in exactly the right direction. The story of the special education teacher using AI tools to build a music app for a student who communicates through blinking is powerful because it shows what actually matters. The breakthrough was not the tool itself. The breakthrough came from care. A teacher had a student with a specific need and used a new tool to respond in a way no generic product would. That is a beautiful example of AI at its best. Not replacing the human relationship, but extending what becomes possible because that relationship exists. That is the more important takeaway from the article.
AI matters. It may reshape what we teach, how we teach, and what counts as essential human capability. It may reduce some barriers. It may widen access. It may help teachers reclaim the profession that has been eroded by workload. But it will not solve education’s real problem because education’s real problem is not a lack of smart tools. It is whether our systems create the conditions in which young people want to learn and in which teachers can help them want to learn. That is a human challenge.
So yes, I agree with the article’s central argument. AI cannot solve education’s real problem. But I am less convinced by the idea that the answer is to produce a “Renaissance learner.” I think what we need instead is something both older and more urgent. Students who are deeply human, able to think critically, act ethically, stay grounded in who they are, and participate meaningfully in the world with others. If AI can help create more space for that, then it has real value. If it distracts us from that, then no amount of innovative language will change that.