Written by Natalia Kucirkova. Natalia is a Professor of Early Childhood Education and Development at the University of Stavanger, Norway and a Professor of Reading and Children’s Development at The Open University, UK.
The global consensus is that EdTech has to be evidence-based: educational technology providers need to demonstrate that their solutions work as intended and improve children’s learning. However, the demands for evidence, and the routes to obtain it, vary across countries. This poses unique challenges for EdTech providers who seek to develop a portfolio of efficacy, effectiveness, and usability evidence.
The leading narrative in the USA is that all schools should use EdTech based on causal, and not casual, evidence. The US Department of Education has set clear expectations for what this means in the form of the ESSA (Every Student Succeeds Act) evidence standards. EdTech is part of the assessment and accountability process followed by state educational agencies when they apply for federal grant programs. In other words, EdTech falls under the same four-tier evidence framework (strong, moderate, promising, and demonstrates a rationale) that applies to all educational programs and resources used in US schools.
There are some issues with this approach. First, efficacy trials are expensive, so only the big players can afford large-scale research studies. A professor’s time is costly across the whole cycle of designing, running, and evaluating a research study, and commissioning a full university research team can become so expensive that, in practice, only EdTech providers with access to national-level funding can pursue it.
Some states and universities have been trying to fill the gap by connecting EdTech providers and academics in accelerator labs. Accelerator labs can help embed research evidence into a product’s design, which pays off in the long run when it comes to efficacy trials. Furthermore, university–EdTech partnerships often lead to a win-win situation in which an EdTech company and a research team apply for joint funding: the technology gets evaluated, and the researchers get a peer-reviewed study at the end.
However, for many EdTech entrepreneurs, especially those at the pre-scale stage, such partnerships are impractical. Without federal funding tied to supporting the development of evidence (rather than just its demonstration), smaller EdTech companies come away with the short end of the stick. Critics have warned before that efficacy requirements could stifle innovation in the market.
Furthermore, EdTech research teams need to be specially trained, and researchers’ expertise needs to be carefully matched with an EdTech provider’s needs. University education departments are only slowly adopting procedures to meet this demand. No surprise, then, that EdTech research consultancies are on the rise, especially for providers that need evidence to scale fast and access to a diverse group of pilot partners. The recent acquisition of LearnPlatform by Instructure, one of the leading LMS creators, reflects the demand for such services.
Outside of the USA, the leading narrative for EdTech evidence has been a combination of qualitative and quantitative studies. The World Bank and UNICEF’s catalogue of “Smart Buys” demonstrates an evaluation process that takes into account not only efficacy but also context, scale, and equity in an EdTech’s implementation. Just how much individual countries will use such catalogues as tight accountability tools is not clear. What is clear is that global EdTech needs to take into account both efficacy and effectiveness evidence. In addition, academic criteria need to be combined with teachers’ voices. This is where usability evidence comes in.
Usability is the extent to which EdTech is acceptable and can be feasibly used by teachers to achieve the goal intended by the EdTech producer. Usability studies take various forms when conducted by EdTech providers, researchers, or teachers. Typically, EdTech providers check the technical adequacy of their tools before launch. Research consultants are often hired to check whether the EdTech is appropriate for the intended use. Teachers, on the other hand, can provide unique insights into how feasible (or practical) an EdTech is for their classrooms. Teachers are the ones who can estimate the level of acceptance of a particular tool in their school and for their group of learners.
Teacher power is universal, but attention to teachers’ voices varies across countries. In Scandinavia, for example, it is currently teachers and municipalities’ ICT coordinators who decide which EdTech is bought for their schools. Globally, the rocketing rise of teacher influencers (Freed calls them leaders of transformation) indicates that EdTech markets need to follow the trends in teacher networks closely.
Globally oriented EdTech providers know that teachers want products that are tested in real classrooms and come with a package of recommended activities. In particular, teachers prefer EdTech solutions that come with whole-school professional development and responsive customer service. Such a package of services underscores the need for EdTech to adjust its international offers to national curricula and teacher training programs.
In sum, the current international variation means EdTech providers must tailor their story to both international scientific evidence and national experience of what works. An EdTech’s evidence portfolio requires both evidence of efficacy and teachers’ perspectives on usability. The 2023 EdTech winners will be those who can tackle this delicate balancing act.
Natalia is also the founder of the university spin-out Wikit, AS, which integrates science with the children’s EdTech industry.