AI Can’t Solve Education’s Real Problem and I’m Not Sure the “Renaissance Learner” Is the Right Answer Either

Dan Fitzpatrick’s recent article on Ben Gomes, Google’s Head of Learning and Sustainability, raises an important question for education at a time when AI is dominating so much of the conversation. The article argues that while AI may improve access, efficiency, and even aspects of teaching and learning, it cannot solve education’s deepest problem because that problem is ultimately human, not technological. It explores ideas around teacher burnout, motivation, purpose, and the kind of learner schools may need to nurture in the future, including Gomes’ idea of the “Renaissance learner.”
To be clear, there was a lot in the article that I agreed with. At its heart, it makes a point that I think teachers have always known, even if the current AI conversation sometimes forgets it. Learning has never just been about access to information. It has never just been about better content, faster feedback, or more efficient delivery. Those things can help, but they are not the thing itself. What makes learning come alive for a young person is usually much more human than that. It is often a relationship. It is encouragement. It is trust. It is feeling seen. It is having someone make you believe that learning matters and that you matter within it. That is why the comment in the article that stayed with me most was the idea that AI can carry knowledge, but it cannot carry desire. That feels exactly right.

For all the excitement, anxiety, hype, and noise, AI does not solve the deepest problem in education because the deepest problem in education was never simply information scarcity. It was never just about not having enough content, or not being able to explain things clearly enough, or not having a tool that could personalise learning pathways. The deeper challenge has always been human. It sits in belonging, motivation, connection, identity, culture, trust, and whether young people experience learning as something that has meaning in their lives. Tools can amplify that. They cannot create it from nothing. The article gets that right when it says the tools amplify direction, but they do not provide it. A student with no desire to learn does not suddenly become transformed because the tools have got better. That is why I agree with the central claim of the article.

I also think the piece is strongest when it connects this to teachers. If teachers are the ones who so often “unlock” students, then the global teacher shortage is not just a labour market problem or an operational issue for national school systems. It is a crisis of human possibility. The article points to the projected worldwide shortage of 44 million teachers and frames AI, at least in part, as something that could help make the profession sustainable again by reducing burnout and reclaiming time. That matters. If AI can genuinely remove some of the administrative burden and give teachers back parts of the job that have been crowded out by workload, then that is worth taking seriously. But only if that reclaimed time flows back into the relational core of teaching rather than being swallowed up by new expectations from school leadership. That distinction is important to me.

Because I do think there is a danger in how schools and systems interpret this kind of argument. If leaders hear “AI can save teachers time,” but what they really mean is “teachers can now do even more,” then we have missed the point entirely. The value of AI in schools is not that it helps us squeeze more output from already stretched people. The value is that it might help protect the human work that matters most. If a teacher has more time to know their students, to notice who is disengaged, to build trust, to give more thoughtful feedback, to adapt learning more meaningfully, then AI is serving education well. If it simply becomes another mechanism for efficiency, surveillance, or intensification, then we are just using new technology to deepen an old problem. That is one of the reasons I found the article compelling. It recognises that the issue is not primarily technological. It is human. Where I become more cautious is with the phrase “Renaissance learner.”

I can see why this idea has appeal. In the article, it is presented as a way of broadening what learning might look like in an age where AI can take over more of the procedural and mechanical aspects of some tasks. The example given is that engineers may move more easily into design, and designers more easily into coding, because the tools begin to lower some of the traditional barriers between fields. Gomes suggests this should shift education away from too much focus on the mechanics of learning and more towards higher-level conceptual understanding, abstraction, and the bigger ideas that sit underneath more routine forms of work.

I understand the attraction of that argument, and I agree that schools do need to think carefully about what knowledge, understanding, and capabilities matter most in an AI-shaped world. But I do not agree with the shift as it is being framed here. Foundational knowledge still has an essential place. The process of learning still matters. There is real value in students building knowledge over time, wrestling with ideas, practising, remembering, and developing the intellectual foundations that allow deeper thinking to happen in the first place. Higher-order thinking does not sit apart from foundational knowledge; it is built on it. 

Part of my caution about the "Renaissance learner" label is that it feels like a polished, future-facing name for a much older educational ideal. It still centres the individual student as someone who must become more expansive, more flexible, more interdisciplinary, more adaptive, and more capable of moving across domains. On one level, that sounds positive. But it still assumes that the answer to a rapidly changing world is to produce a new kind of optimised individual. I think that is too narrow, and perhaps even part of the problem. The student we should be talking about is not primarily a "Renaissance learner." It is a deeply human learner.

What I mean by that is someone who can think critically, yes, but who is also grounded. Someone who can use AI without being used by it. Someone who can move across disciplines, but who also understands that knowledge is not just something to be consumed and recombined. It is shaped by culture, by context, by values, by history, by place, and by relationships. A student is not just an adaptable brain navigating an information-rich world. A student is a whole person. They are socially located. They are culturally located. They are connected to other people. They are trying to make meaning, not just perform competence. That is where I think the “Renaissance learner” idea starts to fall short for me.

It risks making breadth the aspiration, when I would argue that wisdom matters more than breadth. It risks celebrating flexibility without asking what anchors the student ethically, culturally, and relationally. It risks imagining the ideal student as someone who can range across knowledge domains with the help of AI, while not paying enough attention to whether they are also becoming thoughtful, responsible, compassionate, and able to act with discernment in a world shaped by increasingly powerful technologies.

From where I stand in New Zealand, I would also say that I am cautious about the future-of-learning language that leans on Western historical metaphors as though they are universal. “Renaissance learner” carries with it a very particular intellectual tradition and image of knowledge. It is not meaningless, but it is limited. It does not automatically speak to more relational, collective, place-based, and intergenerational understandings of learning. For me, any serious vision of education in the age of AI has to leave room for that. It has to recognise that learning is not only about developing individual capacity. It is also about belonging, responsibility, connection, identity, and our relationships with community and with the worlds we inherit and shape. That is why I would frame a learner differently.

I would rather talk about young people who are critically literate, culturally grounded, ethically awake, and able to remain human in an AI-shaped world. I would rather talk about students who can ask good questions, challenge systems, understand context, and use technology as a thinking partner rather than a substitute for thought. I would rather talk about students who can bring together knowledge, empathy, judgment, and responsibility. To me, that matters more than whether they resemble a modern version of the Renaissance man.

The article does, however, offer an example that points in exactly the right direction. The story of the special education teacher using AI tools to build a music app for a student who communicates through blinking is powerful because it shows what actually matters. The breakthrough was not the tool on its own. The breakthrough came from care. A teacher saw a specific student, understood a specific need, and used a new tool to respond in a way no generic product ever could. That is a beautiful example of AI at its best in education. Not replacing the human relationship, but extending what becomes possible because that relationship exists. That, to me, is the more important takeaway from the article.

AI matters. It may well reshape what we teach, how we teach, and what counts as essential human capability. It may reduce some barriers. It may widen access. It may help teachers reclaim parts of the profession that have been eroded by workload and administration. But it will not solve education’s real problem because education’s real problem is not a lack of smart tools. It is whether our systems create the conditions in which young people want to learn and in which teachers can help them want to learn. That is a human challenge.

So yes, I agree with the article's central argument. AI cannot solve education's real problem. But I am less convinced by the idea that the answer is to produce a "Renaissance learner." I think what we need instead is something both older and more urgent. Students who are deeply human, able to think critically, act ethically, stay grounded in who they are, and participate meaningfully in the world with others. If AI can help create more space for that, then it has real value. If it distracts us from that, then no amount of innovative language will compensate.
