What Teachers Should Know
By Pedro De Bruyckere, Paul A. Kirschner, Casper D. Hulshof
One of the most frequently cited reasons for justifying the need for change in education, or at least for labeling education as old-fashioned, is the enormous technological (r)evolution our world has undergone in recent years. Nowadays, we have the Internet in our pocket, in the form of a smartphone, which has exponentially more computing power than the Apollo Guidance Computer that put the first men on the moon! A school with desks, blackboards or whiteboards, and—perish the thought—books seems like some kind of archaic institution, one that, even if it does use a smartboard or a learning platform, operates in a manner that bears a suspiciously strong resemblance to the way things were done in the past.
In education, we often have the feeling that we are finding it harder and harder to reach our students. That is why we are so feverishly interested in smartboards or learning platforms or anything new on the market that might help. Every new tool seems like a possible solution, although sometimes we really don’t know what the problem is or even if there is one.
Regrettably, we have become saddled with a multiplicity of tools, methods, approaches, theories, and pseudotheories, many of which have been shown by science to be wrong or, at best, only partially effective. In this article, which is drawn from our book Urban Myths about Learning and Education, we discuss these miracle tools and the idea that young people today are somehow “digital natives,” and we examine the fear that technology is making our society and our students less intelligent. To illustrate that many claims about technology in education are in fact spurious, we will focus in this article on five specific myths and present the research findings that dispel them.
Myth 1: New technology is causing a revolution in education.
School television, computers, smartboards, and tablets such as the iPad—it was thought that all these new tools would, or will, change education beyond recognition. But if you look at the research of someone like Larry Cuban, it seems that classroom practice has remained remarkably stable during recent years.1 Even Microsoft cofounder Bill Gates—whom you would hardly suspect of being against technology in education—summarized his view on the matter as follows: “Just giving people devices has a really horrible track record.”
The correct use of tools and resources nevertheless does have the potential to change education. Very often these change phenomena are general rather than specific. For example, the influence of the printed word is gigantic, but this influence—like so many other tools and resources—is anchored in society as a whole. You need to come down to the level of something like the book or the blackboard if you want to consider a resource that has specifically changed education.
In 1983, Richard Clark published a definitive study on how it was pedagogy (i.e., teaching practice) and not the medium (i.e., technological tools and resources, such as whiteboards, hand-held devices, blogs, chat boards) that made a difference in learning, stating that instructional media are “mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.”2
In 1994, Clark went as far as to make a daring prediction: namely, that a single medium would never influence education. He based this position on his opinion that, at that time, there was no proof to show that a medium was capable of ensuring that pupils and students could learn more or more effectively. He saw the medium as a means, a vehicle for instruction, but that the essence of learning remained—thankfully—in the hands of the teacher.3
We are now 20 years further down the line, and the question needs to be asked: Does Clark’s position still hold true? During those 20 years, we have seen the explosion of almost unimaginable technological possibilities. Even so, Clark and Richard Mayer continue to assert that nothing has fundamentally changed.4 They argue that 60 years of comparative studies about teaching methods and teaching resources all confirm that it is not the medium that decides how effectively learners learn.
Clark and David Feldon confirm that the effectiveness of learning is determined primarily by the way the medium is used and by the quality of the instruction accompanying that use.5 When media (or multimedia) are used for instruction, the choice of medium does not influence learning. John Hattie described, for example, how instructional methods that are more effective within conventional environments, such as learner control and explanative feedback, are also more effective within computer-based environments.6
This can be called the “method-not-media” hypothesis, as tested in a study where students received an online multimedia lesson on how a solar cell works that consisted of 11 narrated slides with a script of 800 words.7 Focusing on the instructional media being used, students received the lesson on an iMac in a lab or on an iPad in a courtyard. But they also used different instructional methods.
Students received either a continuous lesson with no headings (this was the standard method) or a segmented lesson in which the learner clicked on a button to go to the next slide, with each slide having a heading corresponding to the key idea in the script for the slide (this was the enhanced method). By combining changes in both medium and method, we can see what matters most. Across both media, the enhanced group outperformed the standard group on a transfer test where students had to use the information in settings other than those in the text, yielding a method effect on learning outcomes for both the desktop and mobile media.
Across both methods, looking at the medium, the mobile group produced stronger ratings than the desktop group on self-reported willingness to continue learning, yielding a media effect on motivational ratings for both standard and enhanced methods. Effective instructional methods can improve learning outcomes across different media, whereas using hand-held instructional media may increase students’ willingness to continue to engage in learning.
If we look at the influence of technology on the effectiveness of instruction, the picture is not fully clear. This can partly be explained by the fact that relatively little research has been carried out that involves the comparison of two similar groups, one group learning with and the other group learning without the benefits of a new technology.
The different metastudies on this subject, analyzed by Hattie, reveal a considerable variation in results.8 A review study on the implementation of technology, more specifically Web 2.0 tools such as wikis, blogs, and virtual worlds, in K–12 and higher education, suggests that actual evidence regarding the impact of those technologies on student learning is fairly weak.9 A number of studies do point to positive learning gains,10 but the majority attribute the positive learning effects of well-used technology to good teaching. The crucial factor for learning improvement is to make sure that you do not replace the teacher as the instrument of instruction, allowing computers to do what teachers would normally do, but instead use computers to supplement and amplify what the teacher does.
A 2009 metastudy about e-learning did, however, tentatively conclude that combining e-learning with contact education—which is known as blended learning—produces better results than lessons given without technology.11 The same holds for computer game–based learning: instructional support is still needed to achieve a significant learning effect, as one meta-analysis concluded.12 Such instructional support may take several forms, such as providing feedback, scaffolding, and giving advice.
Still, there remain some questionable claims that technology can change, by itself, the present system of education. Clark and Feldon summarize the various claims and responses:13
The claim: Multimedia instruction accommodates different learning styles and so maximizes learning for more students. Clark and Feldon describe how learning styles have not proven to be “robust foundations on which to customize instruction.” And, as we explained in our book, the idea of learning styles* in themselves is already a very stubborn and harmful urban myth in education.
The claim: Multimedia instruction facilitates student-managed constructivist and discovery approaches that are beneficial to learning. In fact, Clark and Feldon found that “Discovery-based multimedia programs seem to benefit experts or students with higher levels of prior knowledge about the topic being learned. Students with novice to intermediate levels of prior knowledge learn best from fully guided instruction.”† This is another example of how the medium does not influence the learning. Prior knowledge is an individual difference that leads to learning benefits from more guidance at low to moderate levels but not at higher levels, regardless of the media used to deliver instruction.
The claim: Multimedia instruction provides students with autonomy and control over the sequencing of instruction. Although technology can deliver this, the more important question is whether this is a good thing. Letting students decide the pace of learning (e.g., by allowing them to pause or slow down videos or presentations) is beneficial to learning. But only a small group of students benefits from being given the chance to select the order of lessons, learning tasks, and learning support. For the majority of students, this has a mostly negative influence on learning.14
The point that teachers should remember is this: the medium seldom influences teaching, learning, and education, nor is it likely that one single medium will ever be the best one for all situations.
Myth 2: The Internet belongs in the classroom because it is part of the personal world experienced by children.
How often have you heard this? It sounds so logical, doesn’t it? At the same time, many teachers have discovered, at their expense, that using information and communications technology in their lessons “randomly,” in an unstructured way, does not always have lasting success. The problem is that most research studies have evaluated relatively short-term projects. Some research, for instance, focuses on the extent to which participants liked the medium being used during the actual test—a test that, for each student, lasted only about 12 minutes.15
Also note that in this research, being motivated because of the medium did not help learning as much as the chosen pedagogical approach. But when we discuss implementing technology and the Internet in the classroom, people argue not for using it once or only for a short period, but for long-term implementation. Therefore, it is the impact over a longer period that really needs to be determined.
A study by the Canadian Higher Education Strategy Associates described how students had a preference for “ordinary, real life” lessons rather than e-learning or the use of some other technology.16 It was a result that surprised the researchers. “It is not the portrait we expected, whereby students would embrace anything that happens on a more highly technological level. On the contrary—they really seem to like access to human interaction, a smart person at the front of the classroom.”
The findings also revealed that the more technology was used to teach a particular course, the fewer the students who felt they were able to get something out of that course. While the 1,380 students from 60 Canadian universities questioned for this survey were generally satisfied with the courses they took, the level of satisfaction fell significantly when more digital forums, online interactions, or other technological elements were involved. Yet, at the same time, more than half the respondents said that they would skip a lesson if there was more information or a comparable video lesson online.
Although these results at first glance seem to be fairly negative for e-learning, the responses to some additional questions were more positive. The majority of students (59.6 percent) said that they would like more electronic content in their courses. When asked what they would specifically like to see online, 53.6 percent answered that they would like more online course notes, with 46.4 percent advocating more recordings of lessons on the web.
These findings are broadly in keeping with the results of a 2011 literature study that investigated the expectations of young people with regard to new forms of education and information and communications technology.17
The study reached the following conclusions: First, the technological gap between the students and their teachers is not enormous, and certainly not so large that it cannot be bridged. In fact, the relationship is determined by the requirements teachers place on their students to make use of new technologies. There is little evidence that students expect the use of these new technologies. Second, in all the studies consulted, the students persistently report that they prefer moderate use of information and communications technology in their courses. (“Moderate” is, of course, an imprecise term that is difficult to quantify.) Third, students do not naturally make extensive use of many of the newest technologies, such as blogs, wikis, and virtual worlds. Students who need or are required to use these technologies in their courses are unlikely to object to them, but there is not a natural demand among students for any such use.
Maybe this will change as technology becomes more and more ingrained. However, a study of students in Glasgow, Scotland, found little change; these students appeared to conform to fairly traditional pedagogies, albeit with minor uses of technology tools that deliver content. Research comparing traditional books with e-readers shows that students prefer paper.18
The sad thing is that even if students did prefer to use technology in school, this would not mean that they would learn more. In 2005, Clark and Feldon wrote, “The best conclusion at this point is that, overall, multimedia courses may be more attractive to students and so they tend to choose them when offered options, but student interest does not result in more learning and overall it appears to actually result in significantly less learning than would have occurred in ‘instructor led’ courses.”19 A decade later, based on 10 years of additional research, Clark and Feldon stand by this conclusion.20
In her book, danah boyd describes the main reasons young people use technology. These reasons are mainly social, such as sharing information with each other, and meeting each other online and in real life. They do discuss schoolwork with each other, but this is very different from using Facebook as a learning tool or their phone as a learning machine.21
Myth 3: Today’s “digital natives” are a new generation who want a new style of education.
Digital natives! Whenever the question of digital innovation in education is discussed, this is a term that immediately comes to the surface. But it should be avoided. Even the person who coined the term digital natives, Marc Prensky, admitted in his most recent book, Brain Gain, that the term is now obsolete.22
The concept is usually used to describe young people who were born in the digital world and for whom all forms of information and communications technology are natural. The adults who were born earlier are therefore “digital immigrants,” who try with difficulty to keep up with the natives. Prensky first coined both terms in 2001.23
With this concept, he referred to a group of young people who have been immersed in technology all their lives, giving them distinct and unique characteristics that set them apart from previous generations, and who have sophisticated technical skills and learning preferences for which traditional education is unprepared. However, Prensky’s coining of this term—and its counterpart for people who are not digitally native—was not based on research into this generation, but rather created by rationalizing phenomena that he had observed.24
As the digital native concept became popular, extra claims were added to the initial concept. Erika Smith, of the University of Alberta, describes eight unsubstantiated claims in the different present discourses on digital natives:25
They possess new ways of knowing and being.
They are driving a digital revolution and thereby transforming society.
They are innately or inherently tech savvy.
They are multitaskers,‡ team oriented, and collaborative.
They are native speakers of the language of technologies and have unique viewpoints and abilities.
They embrace gaming, interaction, and simulation.
They demand immediate gratification.
They reflect and respond to the knowledge economy.
Smith is not alone in concluding that there is little to no proof for these claims. A meta-analysis conducted in 2008 had already shown that there was little hard evidence to support the use of the term digital natives.26
But maybe the concept of digital natives was more a kind of prediction, and we just had to wait. Perhaps today’s young people are true digital natives. If we look at the research performed in high-tech Hong Kong by David M. Kennedy and Bob Fox, the answer is more nuanced.27 Kennedy and Fox investigated how first-year undergraduate students used and understood various digital technologies. They discovered, as danah boyd did with American teenagers, that the first-year undergraduate students at Hong Kong University do use a wide range of digital technologies.
The students use a large quantity and variety of technologies for communicating, learning, staying connected with their friends, and engaging with the world around them. But they are using them primarily for “personal empowerment and entertainment.” More importantly, Kennedy and Fox describe that the students are “not always digitally literate in using technology to support their learning. This is particularly evident when it comes to student use of technology as consumers of content rather than creators of content specifically for academic purposes.”
Other researchers have reported that university students use only a limited range of technologies for learning and socialization. For example, one study found that “the tools these students used were largely established technologies, in particular mobile phones, media player, Google, [and] Wikipedia. The use of handheld computers as well as gaming, social networking sites, blogs, and other emergent social technologies was very low.”
By Pedro De Bruyckere, Paul A. Kirschner, Casper D. Hulshof
One of the most frequently cited reasons for justifying the need for change in education, or at least for labeling education as old-fashioned, is the enormous technological (r)evolution our world has undergone in recent years. Nowadays, we have the Internet in our pocket, in the form of a smartphone, which has exponentially more computing power than the Apollo Guidance Computer that put the first men on the moon! A school with desks, blackboards or whiteboards, and—perish the thought—books seems like some kind of archaic institution, one that, even if it does use a smartboard or a learning platform, operates in a manner that bears a suspiciously strong resemblance to the way things were done in the past.
In education, we often have the feeling that we are finding it harder and harder to reach our students. That is why we are so feverishly interested in smartboards or learning platforms or anything new on the market that might help. Every new tool seems like a possible solution, although sometimes we really don’t know what the problem is or even if there is one.
Regrettably, we have become saddled with a multiplicity of tools, methods, approaches, theories, and pseudotheories, many of which have been shown by science to be wrong or, at best, only partially effective. In this article, which is drawn from our book Urban Myths about Learning and Education, we discuss these miracle tools and the idea that young people today are somehow “digital natives,” and we examine the fear that technology is making our society and our students less intelligent. To illustrate that many claims about technology in education are in fact spurious, we will focus in this article on five specific myths and present the research findings that dispel them.
Myth 1: New technology is causing a revolution in education.
School television, computers, smartboards, and tablets such as the iPad—it was thought that all these new tools would, or will, change education beyond recognition. But if you look at the research of someone like Larry Cuban, it seems that classroom practice has remained remarkably stable during recent years.1 Even Microsoft cofounder Bill Gates—whom you would hardly suspect of being against technology in education—summarized his view on the matter as follows: “Just giving people devices has a really horrible track record.”
The correct use of tools and resources nevertheless does have the potential to change education. Very often these change phenomena are general rather than specific. For example, the influence of the printed word is gigantic, but this influence—like so many other tools and resources—is anchored in society as a whole. You need to come down to the level of something like the book or the blackboard if you want to consider a resource that has specifically changed education.
In 1983, Richard Clark published a definitive study on how it was pedagogy (i.e., teaching practice) and not the medium (i.e., technological tools and resources, such as whiteboards, hand-held devices, blogs, chat boards) that made a difference in learning, stating that instructional media are “mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.”2
In 1994, Clark went as far as to make a daring prediction: namely, that a single medium would never influence education. He based this position on his opinion that, at that time, there was no proof to show that a medium was capable of ensuring that pupils and students could learn more or more effectively. He saw the medium as a means, a vehicle for instruction, but that the essence of learning remained—thankfully—in the hands of the teacher.3
We are now 20 years further down the line, and the question needs to be asked: Does Clark’s position still hold true? During those 20 years, we have seen the explosion of almost unimaginable technological possibilities. Even so, Clark and Richard Mayer continue to assert that nothing has fundamentally changed.4 They argue that 60 years of comparative studies about teaching methods and teaching resources all confirm that it is not the medium that decides how effectively learners learn.
Clark and David Feldon confirm that the effectiveness of learning is determined primarily by the way the medium is used and by the quality of the instruction accompanying that use.5 When media (or multimedia) are used for instruction, the choice of medium does not influence learning. John Hattie described, for example, how instructional methods that are more effective within conventional environments, such as learner control and explanative feedback, are also more effective within computer-based environments.6
This can be called the “method-not-media” hypothesis, as tested in a study where students received an online multimedia lesson on how a solar cell works that consisted of 11 narrated slides with a script of 800 words.7 Focusing on the instructional media being used, students received the lesson on an iMac in a lab or on an iPad in a courtyard. But they also used different instructional methods.
Students received either a continuous lesson with no headings (this was the standard method) or a segmented lesson in which the learner clicked on a button to go to the next slide, with each slide having a heading corresponding to the key idea in the script for the slide (this was the enhanced method). By combining changes in both medium and method, we can see what matters most. Across both media, the enhanced group outperformed the standard group on a transfer test where students had to use the information in settings other than those in the text, yielding a method effect on learning outcomes for both desktop and mobile medium.
Across both methods, looking at the medium, the mobile group produced stronger ratings than the desktop group on self-reported willingness to continue learning, yielding a media effect on motivational ratings for both standard and enhanced methods. Effective instructional methods can improve learning outcomes across different media, whereas using hand-held instructional media may increase students’ willingness to continue to engage in learning.
If we look at the influence of technology on the effectiveness of instruction, the picture is not fully clear. This can partly be explained by the fact that relatively little research has been carried out that involves the comparison of two similar groups, one group learning with and the other group learning without the benefits of a new technology.
The different metastudies on this subject, analyzed by Hattie, reveal a considerable variation in results.8 A review study on the implementation of technology, more specifically Web 2.0 tools such as wikis, blogs, and virtual worlds, in K–12 and higher education, suggests that actual evidence regarding the impact of those technologies on student learning is fairly weak.9 There are still a number of studies that point to a positive gain in learning terms,10 but the majority equate the positive learning effect resulting from the good use of technology with good teaching. The crucial factor for learning improvement is to make sure that you do not replace the teacher as the instrument of instruction, allowing computers to do what teachers would normally do, but instead use computers to supplement and amplify what the teacher does.
A 2009 metastudy about e-learning did, however, tentatively conclude that the use of both e-learning and contact education—which is known as blended learning—produces better results than lessons given without technology.11 This is also the case when you use computer game–based learning; the role of instruction still needs to have a real significant learning effect, reflecting the conclusion of one meta-analysis.12 Such instructional support may appear in several forms, such as providing feedback, scaffolding, and giving advice.
Still, there remain some questionable claims that technology can change, by itself, the present system of education. Clark and Feldon summarize the various claims and responses:13
The claim: Multimedia instruction accommodates different learning styles and so maximizes learning for more students. Clark and Feldon describe how learning styles have not proven to be “robust foundations on which to customize instruction.” And, as we explained in our book, the idea of learning styles* in themselves is already a very stubborn and harmful urban myth in education.
The claim: Multimedia instruction facilitates student-managed constructivist and discovery approaches that are beneficial to learning. In fact, Clark and Feldon found that “Discovery-based multimedia programs seem to benefit experts or students with higher levels of prior knowledge about the topic being learned. Students with novice to intermediate levels of prior knowledge learn best from fully guided instruction.”† This is another example of how the medium does not influence the learning. Prior knowledge is an individual difference that leads to learning benefits from more guidance at low to moderate levels but not at higher levels, regardless of the media used to deliver instruction.
The claim: Multimedia instruction provides students with autonomy and control over the sequencing of instruction. Although technology can deliver this, the more important question is whether this is a good thing. Letting students decide the pace of learning (e.g., by allowing them to pause or slow down videos or presentations) is beneficial to learning. But only a small group of students has the benefit of being given the chance to select the order of lessons, learning tasks, and learning support. For the majority of students, this has a mostly negative influence on learning.14
The point that teachers should remember is this: the medium seldom influences teaching, learning, and education, nor is it likely that one single medium will ever be the best one for all situations.
Myth 2: The Internet belongs in the classroom because it is part of the personal world experienced by children.
How often have you heard this? It sounds so logical, doesn’t it? At the same time, many teachers have discovered, at their expense, that using information and communications technology in their lesson “randomly,” in an unstructured way, does not always have lasting success. The problem is that most research studies have been evaluations of relatively short-term projects. Some research, for instance, focuses on the extent to which participants liked the medium being used during the actual test, which for a student actually lasted for about 12 minutes.15
Also note that in this research, being motivated because of the medium did not help learning as much as the chosen pedagogical approach. But when we discuss implementing technology and the Internet in the classroom, people argue not for using it once or only for a short period, but for long-term implementation. Therefore, it is the impact over a longer period that really needs to be determined.
A study by the Canadian Higher Education Strategy Associates described how students had a preference for “ordinary, real life” lessons rather than e-learning or the use of some other technology.16 It was a result that surprised the researchers. “It is not the portrait we expected, whereby students would embrace anything that happens on a more highly technological level. On the contrary—they really seem to like access to human interaction, a smart person at the front of the classroom.”
The findings also revealed that the more technology was used to teach a particular course, the fewer the students who felt they were able to get something out of that course. While the 1,380 students from 60 Canadian universities questioned for this survey were generally satisfied with the courses they took, the level of satisfaction fell significantly when more digital forums, online interactions, or other technological elements were involved. Yet, at the same time, more than half the respondents said that they would skip a lesson if there was more information or a comparable video lesson online.
Although these results at first glance seem to be fairly negative for e-learning, the responses to some additional questions were more positive. The majority of students (59.6 percent) said that they would like more electronic content in their courses. When asked what they would specifically like to see online, 53.6 percent answered that they would like more online course notes, with 46.4 percent advocating more recordings of lessons on the web.
These findings are broadly in keeping with the results of a 2011 literature study that investigated the expectations of young people with regard to new forms of education and information and communications technology.17
The study reached the following conclusions: First, the technological gap between the students and their teachers is not enormous, and certainly not so large that it cannot be bridged. In fact, the relationship is determined by the requirements teachers place on their students to make use of new technologies. There is little evidence that students expect the use of these new technologies. Second, in all the studies consulted, the students persistently report that they prefer moderate use of information and communications technology in their courses. (“Moderate” is, of course, an imprecise term that is difficult to quantify.) Third, students do not naturally make extensive use of many of the newest technologies, such as blogs, wikis, and virtual worlds. Students who need or are required to use these technologies in their courses are unlikely to object to them, but there is not a natural demand among students for any such use.
Maybe this will change as technology becomes more and more ingrained. However, a study of students in Glasgow, Scotland, found little change: these students appeared to conform to fairly traditional pedagogies, making only minor use of technology tools that deliver content. And research comparing traditional books with e-readers shows that students prefer paper.18
The sad thing is that even if students did prefer to use technology in school, this would not mean that they would learn more. In 2005, Clark and Feldon wrote, “The best conclusion at this point is that, overall, multimedia courses may be more attractive to students and so they tend to choose them when offered options, but student interest does not result in more learning and overall it appears to actually result in significantly less learning than would have occurred in ‘instructor led’ courses.”19 A decade later, based on 10 years of additional research, Clark and Feldon stand by this conclusion.20
In her book, danah boyd describes the main reasons young people use technology. These reasons are mainly social: sharing information with one another and meeting up, both online and in real life. They do discuss schoolwork with each other, but this is very different from using Facebook as a learning tool or their phone as a learning machine.21
Myth 3: Today’s “digital natives” are a new generation who want a new style of education.
Digital natives! Whenever the question of digital innovation in education is discussed, this is a term that immediately comes to the surface. But it should be avoided. Even the person who coined the term digital natives, Marc Prensky, admitted in his most recent book, Brain Gain, that the term is now obsolete.22
The concept is usually used to describe young people who were born in the digital world and for whom all forms of information and communications technology are natural. The adults who were born earlier are therefore “digital immigrants,” who try with difficulty to keep up with the natives. Prensky first coined both terms in 2001.23
With this concept, he referred to a group of young people who have been immersed in technology all their lives, giving them distinct and unique characteristics that set them apart from previous generations, and who have sophisticated technical skills and learning preferences for which traditional education is unprepared. However, Prensky’s coining of this term—and its counterpart for people who are not digitally native—was not based on research into this generation, but rather created by rationalizing phenomena that he had observed.24
As the digital native concept became popular, extra claims were added to the original idea. Erika Smith, of the University of Alberta, describes eight unsubstantiated claims in the current discourses on digital natives:25
They possess new ways of knowing and being.
They are driving a digital revolution and thereby transforming society.
They are innately or inherently tech savvy.
They are multitaskers,‡ team oriented, and collaborative.
They are native speakers of the language of technologies and have unique viewpoints and abilities.
They embrace gaming, interaction, and simulation.
They demand immediate gratification.
They reflect and respond to the knowledge economy.
Smith is not alone in concluding that there is little to no proof for these claims. A meta-analysis conducted in 2008 had already shown that there was little hard evidence to support the use of the term digital natives.26
But maybe the concept of digital natives was more a kind of prediction, and we just had to wait. Perhaps today's young people are true digital natives. If we look at the research performed in high-tech Hong Kong by David M. Kennedy and Bob Fox, the answer is more nuanced.27 Kennedy and Fox investigated how first-year undergraduate students used and understood various digital technologies. They discovered, as danah boyd did with American teenagers, that the first-year undergraduate students at Hong Kong University do use a wide range of digital technologies.
The students use a large quantity and variety of technologies for communicating, learning, staying connected with their friends, and engaging with the world around them. But they are using them primarily for “personal empowerment and entertainment.” More importantly, Kennedy and Fox describe that the students are “not always digitally literate in using technology to support their learning. This is particularly evident when it comes to student use of technology as consumers of content rather than creators of content specifically for academic purposes.”
Other researchers have reported that university students use only a limited range of technologies for learning and socialization. For example, one study found that “the tools these students used were largely established technologies, in particular mobile phones, media player, Google, [and] Wikipedia. The use of handheld computers as well as gaming, social networking sites, blogs, and other emergent social technologies was very low.”