Challenges in localization of research

In educational research, we are often seeking to understand the practices of an individual, a classroom, or a larger selection of learning environments. In this work we can see, and often interact with, our participants.

Often when we learn from research, we take the lessons learned and try to apply them to other spaces. This could be the use of a pedagogical routine for scaffolding, or taking an assessment from one learning context and applying it to another.

I personally saw some of this in previous research looking at online reading comprehension. We developed pedagogy related to teaching online reading comprehension, along with a series of assessments that sought to authentically assess it.

One of the challenges that I saw in these efforts was the request from educators and other researchers to “get a copy” of a lesson or assessment and implement it with their learners. This was especially true in presentations I made about the online reading comprehension assessments. When asked to share these assessments, my usual response is that they should not simply take the items and structure of an online reading comprehension assessment and implement it with their students. They should also not look to the results of this instrument as a way to determine a student’s skill or ability as it relates to the focus of the instrument.

I believe educators and researchers should examine the philosophy, focus, and constructs behind the assessment or intervention, and adapt it for use with their learners. Revise, rewrite, and remix the materials using content that is meaningful and authentic for their targeted group.

Put simply, my argument is that you cannot just take the work and ideas from one area and use them in another. I think this is even more of a challenge as we look at using work in another language or context. The full complexities of language and culture need to be “translated” for the local group.

In the content creation industries, this process of “translating” content from one language and culture to another is called localization.

Localization

There is a lot of discussion about localization in software development. Put simply, you’re translating the product (or activity, in this instance) to better serve local users. Localization is connected with globalization as well as individualization, and has strong connections to education.

There are some classes and resources from Google and Microsoft to get a better handle on what this means and how to implement it.
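
To make the software sense of the term concrete, here is a minimal sketch of locale-based message bundles in Python. The keys, strings, and translations are purely illustrative, and production systems typically use a framework such as gettext rather than a hand-rolled dictionary.

```python
# A minimal, illustrative sketch of string localization using
# hand-rolled message bundles. Real projects typically use a
# framework such as gettext instead.

MESSAGES = {
    "en": {"welcome": "Welcome to the survey."},
    "es": {"welcome": "Bienvenido a la encuesta."},
    "fr": {"welcome": "Bienvenue dans le sondage."},
}

def localize(key: str, locale: str, fallback: str = "en") -> str:
    """Look up `key` in the bundle for `locale`, falling back to English."""
    bundle = MESSAGES.get(locale, MESSAGES[fallback])
    return bundle.get(key, MESSAGES[fallback][key])

print(localize("welcome", "fr"))  # Bienvenue dans le sondage.
print(localize("welcome", "de"))  # No German bundle: falls back to English.
```

Even in software, this lookup is only the starting point: date and number formats, reading direction, imagery, and cultural references all shift with the locale, which is the deeper sense of localization this post is after.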

One of the easiest ways for me to understand localization is through the work in film and television.

Localization in media

As in other industries, film and television are becoming increasingly connected in a globally networked society. As films and TV shows are released, the modern consumer expects this content to arrive in their native language almost simultaneously with its original-language release. The pressure is on content creators to translate work quickly; however, the job of the translator is not only to translate the content but also to ensure that the audience can enjoy the film. Localization therefore plays a vital role in the process.

To successfully translate and localize a film or TV show, the translator must understand the cultural perception of the target audience. Objects and ideas hold a variety of symbolic meanings in different nations, so it’s important that the connotations of colors, foods and animals, amongst many other things, are taken into account before undertaking a literal translation.

An example of this is shown in the movie Inside Out, in which the filmmakers substituted green bell peppers for broccoli to make a scene more suitable for children in Japan. The thinking is that broccoli may be viewed as disgusting in America, whereas children in Japan love broccoli, so the animators used green peppers in these sequences instead.

Still another scene in the movie replaces dreams and memories about ice hockey with soccer, a sport better understood by a global audience. Many of these changes may seem small, but this attention to detail shows that the content creators understand their materials, and their audience.

An excellent overview of this process is available in the video below.

Localization in research

As we conduct educational research across global contexts, I wonder about the “translation,” or localization, that we need to apply to our materials to make them more authentic. After some initial searches, it appears that localization of research has been largely ignored in qualitative and quantitative research manuals, and may constitute a relatively new area of research methodology.

We must note that, as detailed in this post, localization of research involves much more than translation from one language to another. The Teaching Literacies with Technology (TLT) survey used as the linchpin in this research was drafted in English, and went through content validation with experts who were primarily English speakers. We have recently translated the TLT into Spanish, French, and Chinese, and will release those soon. I wonder about the other specialized translation practices, requiring technical, cultural, and business considerations, that are needed to reach the global, technologically savvy educators we’re seeking.

I wonder about ways in which we can address any dialogue lost between the English-language version of the survey and its translations, and make sense of themes that may be lost and (possibly) gained in this instrument. At this point, I believe that we gain by simply translating the survey into other languages and sharing it online, at this website, with the same attention and respect that we’re providing for the English-language materials. As we move forward, we’ll need to account for these threats to our research and recognize our own perspectives. We’ll also need to trust each other as a research team, and build a mutual understanding with our participants, to make sure we are examining perspectives across languages and cultures while we examine our localization practices.


Challenges of studying digital literacy

The term digital native was coined and popularized by Marc Prensky in his 2001 article titled Digital Natives, Digital Immigrants. In it, he posited that the contemporary decline in American education was due to educators’ inability to understand the needs of modern students. Students were labeled digital natives and said to have an insider’s perspective on the web literacies and tools that inundate our society. Adults, on the other hand, and more specifically educators, were relegated to the position of immigrants or outsiders in these practices.

It should be noted that Prensky’s original paper was purely theoretical, and no empirical data has yet proven his claims. In fact, a growing body of research has steadily cast doubt on, or entirely disproven, the existence of the digital native. Put simply, there is no proof that one is more or less digitally savvy based on one’s date of birth. Prensky has since shifted his metaphor from the digital native to digital wisdom, and yet the belief in the digital native persists.

In this context, we believe that a 21st century educational system must educate all students in the effective and authentic use of the technologies that permeate society, to prepare them for the future. A central challenge is that educators have little or no guidance in how to embed these practices into their current work processes. Conducting research in these spaces is a challenge, as educators may be unsure of their own skillset while needing to build these skills in their students. There is a need not only to use these texts and tools in our own work, but also to guide students in this process.

Despite the transformative possibilities associated with the inclusion of the Internet and other information and communication technologies (ICTs) in instruction, relatively little is known about the regular use of these technologies in our daily lives. Perhaps educators need not resign themselves to life as exiled digital immigrants, and instead should identify opportunities to empower themselves.

Are we getting what we are after? Our validation process

Multiple revisions and notes took place both digitally and on paper

Preparation of the Teaching Literacies with Technology Survey was a lengthy process. From collaborating with peers to draft the content, to validating with experts in the field, to taking the feedback and rebuilding the survey, the process took almost eight months before we were ready to launch. I joined the Digitally Literate Research Team in October 2015, in the midst of the final revisions of the survey prior to validation, and of preparing applications for each institution’s research ethics board.

The initial survey was put together by Michelle Schira Hagerman, Ian W. O’Byrne, Denise Johnson, and Greg McVerry. This initial survey was then combed through by me (Heather), Michelle, and Ian. We wanted to make sure we covered basic demographic information, facets of technology use within and outside of work, and educators’ own digital literacy knowledge and skills.

Discussions at this stage among the research team centered on whether we were truly capturing educators’ digital literacy skills and teaching practices, the wording and tone of questions, and the repetitiveness of questions. Here we made some minor adjustments, and opted to put the demographics at the end of the survey. This decision was made to ensure that, should participants only fill out a portion of the survey, we would still have usable data about their teaching practices. However, we reexamined this decision later on for validation purposes, which I will discuss below.

Once we were comfortable with the survey, we invited close to 40 practitioners and scholars to provide feedback; we received 31 responses. Validators provided feedback at the end of each page of the survey. From this feedback, I was tasked with summarizing the comments and formulating a plan of attack to improve the survey. We then combed through the survey, page by page, question by question.

Informed Consent

The feedback we received on the informed consent questioned the length of this content. However, we were unable to change much of it, as it contains all the information that our respective Institutional Review Board (IRB) and Research Ethics Board (REB) required. As suggested, though, we did add clarifying language regarding the target audience for the survey: “You are invited to participate in a research study because you are a PK-16 educator who may use instructional technologies as part of literacy instruction.” We also added headings to help direct readers through the letter in a more systematic way. Finally, one of the most common points of feedback was that the survey would not take 60 minutes, as we had stated in the first informed consent letter. Validators reported that it would probably take about 30 minutes, and that listing 60 minutes might turn educators away from participating in the study.

Demographics

Despite our thoughts about moving the demographics to the end of the survey, many of the validators questioned this and felt it would be best to place them at the start. After discussion, we agreed, as this would allow us to systematically ensure that participants who completed the survey were not characteristically different from those who did not fully complete it. This, in turn, would add to the reliability of our anticipated analyses.
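
As a rough illustration of what such a check could look like, the sketch below compares completers and non-completers on a single demographic variable. The file name, column names, and choice of a chi-square test are my own assumptions for the example, not our actual analysis plan.

```python
# A hypothetical sketch of a completion-bias check. It assumes the
# responses sit in a CSV with a boolean "completed" flag and a
# categorical "age_group_taught" column; both names are illustrative.
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.read_csv("survey_responses.csv")  # hypothetical export

# Cross-tabulate completion status against the demographic variable.
table = pd.crosstab(responses["completed"], responses["age_group_taught"])

# Chi-square test of independence: a large p-value offers no evidence
# that completers differ from non-completers on this variable.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```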

It was also suggested that we add questions about experience and the grades being taught. As this is a global survey, we opted to ask what age group educators were teaching, to avoid confusion between various countries’ grade systems. Based on the feedback, we also added questions on where the educators were from and teaching, and the languages in which they taught and spoke.

Again, we clarified some of the language. For example, the type of institution in which participants were teaching was clarified from “public” and “private” to: Publicly funded by taxpayers; Private institution, not funded by government; and Other: [please specify]. We also added a description to help prompt respondents for the question “Please describe your current role”.

Main content

The main content of the survey underwent a fairly major overhaul after the validation feedback came in. One of the major adjustments was the addition of digital literacy specific questions. There was some feedback that we had not adequately captured educators’ digital literacy skills and how they implemented digital literacy education within the classroom, and we had to agree. There was a heavy focus on how educators used technology, but not enough on digital literacy itself. So we added questions regarding theoretical approaches that educators may use to teach digital literacy and integrate technology. Additionally, we added survey items regarding how educators encourage their students to Use, Participate, and Create in a digital world.

We initially had the question:

How, if at all, does the use of instructional technology positively impact students’ non-academic outcomes (e.g., social-emotional, behavioral, motivational)? (Select all that apply.)

  • The physical safety of students has improved.
  • Students are more actively engaged in class.
  • There are fewer classroom issues.
  • Improved classroom climate.
  • Increased time on task.
  • Students’ interactions with each other are more positive.
  • Other (please specify)

This question was altered to remove its positive lean, so as to assess whether educators thought instructional technology impacted these factors in a positive or a negative way. Each option was instead rated on a Likert scale from negative to positive impact on student outcomes.
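
As an illustration, responses to the revised items could be coded on a five-point scale centered at zero; the labels and numeric codes in the sketch below are hypothetical, not the survey’s published scheme.

```python
# A hypothetical coding of the revised Likert items: each response
# label maps to a number, so negative impact falls below zero and
# positive impact above it. The labels are illustrative only.
LIKERT_CODES = {
    "Strong negative impact": -2,
    "Some negative impact": -1,
    "No impact": 0,
    "Some positive impact": 1,
    "Strong positive impact": 2,
}

def code_response(label: str) -> int:
    """Convert a Likert response label to its numeric code."""
    return LIKERT_CODES[label]

# e.g., a rating for "Students are more actively engaged in class."
print(code_response("Some positive impact"))  # 1
```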

Additionally, we added examples throughout to aid in clarification. We also elaborated on, and generalized, the applications that educators may be using within their daily personal and professional lives. We opted to move away from name brands where alternatives were available. For example, we had originally included Microsoft Word as an option, but generalized this to word processing applications, so as to include Google Docs, OpenOffice, etc.

The biggest comment about the main content of the survey concerned its organization. Specifically, there was a lack of flow. The reviewers expressed that similar questions were scattered throughout, and it was difficult to assess which topics were being explored. Once we had reviewed and integrated all of the feedback, the survey was printed off so we could view it as a whole. This allowed me to visually group similar questions. Once this stage was done, Michelle and I reviewed the survey to ensure that the order of items seemed logical.

Overall, we aimed to assess how, and which, technologies educators were using within the classroom and at home, their digital literacy skills, and how they were fostering digital literacies in their students. We also wanted to include a short section on the potential barriers educators were facing in implementing technologies and digital literacy skills within the classroom. From the survey responses, we can not only gain a better understanding of how educators from around the world teach digital literacy, but can then select participants for the second stage of the research: the in-depth interviews, where we will be able to further explore how educators are teaching digital literacies.