Are we throwing the baby out with the bathwater?


One of my school's curriculum goals this year is to implement digital technologies in the classroom. As part of working towards this goal, staff have had to give input on how they will incorporate technology into their classrooms. I sat, I listened, and I came to this conclusion: it's time to admit we don't know what we're doing when it comes to educational technology.


Ten years ago, the common practice was to buy computers and drop them into schools: every school got a computer room, or every classroom got a smartboard. The Ministry of Education has finally realized that such purchases, on their own, don't boost student learning.

The problem is that decisions about purchasing educational technology are based on about as much evidence as your decision to buy the latest iPhone. Our experience as educators has shown that perfectly sensible assumptions about how devices ought to work in classrooms are often way off the mark.


As an example, the most common assumption I hear is that digital technology, and in particular Google (so much so that it is now a verb!), has changed what knowledge itself means. As Google executive Marissa Mayer stated in 2010:
"The internet has relegated memorization of rote facts to mental exercise or enjoyment" 
She is right in principle: you can look anything up. But in the classroom, students don't, because it takes effort. You actually have to understand your search results. And, as anyone who has trawled through pages and pages of results can attest, Google searches return large amounts of irrelevant information.

Depending on the search terms I use, the top hits may be unrelated to the information I am after. The brain can beat Google at filtering out irrelevant information, but only if the right information is already in your head. Students do not know what they do not know.

Students can look things up, but they don't. Once the proportion of words you don't know in a text reaches two percent, the odds go way up that readers will find the text difficult and quit.

The 2% figure comes from Carver (1994), who called texts with 0% unknown words "easy," 1% unknown words "appropriate" and 2% or more "difficult." 
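To make that threshold concrete, here is a minimal sketch in Python of how Carver's bands could be applied to a passage. The vocabulary set, sample sentence, and function names are my own invention for illustration; they are not from Carver's study.

def unknown_word_rate(text, known_words):
    # Fraction of words in the text that are not in the reader's known vocabulary.
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    words = [w for w in words if w]
    unknown = sum(1 for w in words if w not in known_words)
    return unknown / len(words) if words else 0.0

def carver_band(rate):
    # Map an unknown-word rate onto Carver's (1994) difficulty bands.
    if rate >= 0.02:
        return "difficult"    # 2% or more unknown words
    if rate >= 0.01:
        return "appropriate"  # around 1% unknown words
    return "easy"             # roughly 0% unknown words

known = {"the", "cat", "sat", "on", "mat", "and", "looked", "at", "moon"}
passage = "The cat sat on the mat and looked at the gibbous moon."
rate = unknown_word_rate(passage, known)
print(f"{rate:.1%} unknown words -> {carver_band(rate)}")   # 8.3% unknown words -> difficult

One unfamiliar word in a short passage is already enough to tip it into the "difficult" band, which is the point: small gaps in vocabulary add up quickly.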

The fact is that "looking things up" is trickier than it sounds and requires underlying knowledge to get right, a point made forcefully by George Miller in 1987. As he points out, meaning comes from context, and internet searching can't provide much of it. So students look up a concept, take away an abbreviated, context-free understanding of it, and develop misconceptions about the idea.


It's not that "looking things up" isn't useful. It's that teachers, who are not the ones doing the looking up, underestimate the extent to which students experience it as mental work, and overestimate the likelihood that an accurate understanding will be learned.

From this example we can see how our assumptions can lead us astray when deciding whether a new technology will actually improve student learning.

Conversely, we cannot simply continue to use traditional practices that are not effective out of fear that change might make things worse. Moving forward calls for different strategies, depending on whether a new technology changes how we deliver instruction or whether it changes the content itself.

My greatest concern is decisions about changing what students will learn, based on our assumptions about how technology is changing the world. An example of this is the emphasis in recent years on introducing coding in schools. The consequences of diminishing the role of learning facts won't be known for years.

In my conversations with fellow teachers, many have been doubtful about new classroom technologies, and they are simply brushed off as dinosaurs or as frightened of change.


However, I think there's something to this. The educators who are skeptical are speaking from experience and cannot be dismissed so easily. Nor do I think that the educators who are enthusiastic about technology are simply hypnotized by all that is shiny and new. But I do think our assumptions are a problem. It often seems obvious how students will interact with a new technology or approach, but these assumptions can easily mislead.

This is not a critique of educational technology itself. The term is just too broad (like "future-focused learning") to allow clear conclusions about effectiveness. Meta-analyses have been done (Hattie puts the effect size of classroom technology at around 0.3), but with so many exceptions and caveats, their usefulness seems limited.

However, when weighing our assumptions about introducing educational technology into the classroom, let's remember: technology may change quickly, but the way the human brain interacts with the world doesn't. Let's make sure evidence drives our decision making.

Comments

Matthew Mac said…
I've just discovered your blog and am impressed with the clarity with which you summarise your thoughts! A lot of the posts you have blogged on, I have had my own thoughts about, but often cannot put my finger on what is bugging me. I read your posts and think "That's it!". Keep it up!
Doctor_Harves said…
Thank you for your support. I sometimes think I'm just a crazy man howling at the moon.
