To guide equitable and effective edtech use, district leaders need help identifying what evidence is relevant and how to gather it.
Context matters. To measure the 10 context factors most likely to influence edtech implementation success, the EdTech Evidence Exchange offers districts its Context Inventory, a free, validated survey.
In April, the Office of Educational Technology (OET) released the Edtech Evidence Toolkit, which can help districts use the Context Inventory to guide edtech research and decision-making and promote effective edtech use.
By: Marion Goldstein
Why do some edtech tools work well in one district and fail in another? The answer isn’t necessarily the tool, but the context in which the tool is used. Let’s look at how districts can consider their implementation context when looking at technologies they’re using or thinking about adopting.
The EdTech Evidence Exchange (the Exchange) helps district and state leaders gather evidence about their edtech implementation contexts. By understanding context strengths and weaknesses, education decision-makers can focus on making edtech decisions that are best for their unique circumstances.
Through the Exchange’s EdTech Genome Project, we identified 10 context factors, shown below, as those most likely to influence the success of edtech programming.
Until recently, educators conducting school-based research didn’t have a high-quality tool to measure context. By working with hundreds of educators, researchers, and context experts, our Exchange team developed the Edtech Context Inventory (CI), a survey that measures these 10 context factors. With the CI and the newly released Edtech Evidence Toolkit from the Office of Educational Technology (OET), districts, states, and even vendors now have the resources they need to easily integrate context into edtech research and decision-making.
Conducting Research to Guide EdTech Decision-Making
As an education decision-maker, you may want to know about the effectiveness of an edtech tool but struggle to find relevant research. That’s where the CI and the OET Toolkit come into play! The OET Toolkit describes the Elementary and Secondary Education Act’s (ESEA) four tiers of evidence and walks you through examples of how to gather evidence about a tool’s efficacy. Here’s a glimpse at how districts or states can use the CI and the Toolkit to guide research.
Tier 4 research involves building evidence about how an edtech tool is likely to improve educational outcomes when no research is available.
Teams can use the CI to integrate relevant context information into their research. Reviewing context data, a team might ask:
- What are the strengths in my district’s implementation context (e.g., access to devices, internet connectivity) and how can we leverage them to support good edtech use?
- Where should we focus efforts to improve conditions for edtech use?
- Given our context, what improvements will lead to outcomes we want to see?
- How can we ensure equitable edtech use?
Tier 3 research involves finding a relationship between at least two variables. For example, Tier 3 research might connect use of a particular math tool to higher test scores. These connections can be hard to pinpoint because multiple contextual factors can affect how an edtech tool performs.
Using data from the CI, a team can isolate factors influencing edtech use and its benefits. For example, if inventory results suggest educators struggle with finding time to plan how they’ll use a new technology, the team might ask:
- Among educators, is a lack of planning time associated with fewer benefits from edtech use? If so, how can we adjust the daily schedule to give educators more planning time?
- Is sustained use of edtech tools associated with better student outcomes? If so, how can we ensure all district educators have enough instructional time to use edtech tools with students?
Researchers building the highest levels of evidence are looking for cause-and-effect relationships, such as whether an edtech tool improves learning outcomes. This usually involves more people and more data than research at Tiers 3 or 4. OET’s Toolkit stresses the need to control for factors that may interfere with a tool producing an outcome. Because any aspect of an environment’s context can influence whether a tool is effective, it makes sense for education and research teams to integrate context data from the CI into study designs.
Research at Tiers 1 and 2 is more recognizable as traditional research with intervention and control groups. To measure a tool’s impact on learning, for example, some schools may use the tool and some may not. Researchers must ensure schools and student groups are equivalent in ways relevant to achieving outcomes.
The CI supports districts and states with this process by allowing them to review context data when assigning schools to control or intervention groups. Researchers might ask:
- Do schools have significantly different scores for leaders’ communication about technology’s value and purpose? If so, this context feature should be considered when grouping schools.
- Do schools differ significantly in offering professional learning opportunities? If so, this context feature should be considered when grouping schools.
Context matters! For education leaders looking to make evidence-based edtech decisions, it’s time to give context factors their due attention throughout research and decision-making processes. Now, with the EdTech Context Inventory and OET’s Edtech Evidence Toolkit, those conducting school-based research aligned with any ESEA tier of evidence have the resources they need to do so.