Harvard Graduate School of Education

The READS lab is a university-based research initiative dedicated to improving children’s literacy at scale; creating relevant, rigorous, and replicable science; and advancing systemic equity through collective action with teachers and education leaders.


We believe that replication is an important aspect of research. Researchers can now request access to reproduction toolkits for our studies.


Want to know more about how to push back against summer slide? Complete our free summer learning module and learn how to keep your students reading! The module also provides access to the READS Lab library and app.


Learn about our research-based summer reading intervention for children in 2nd through 5th grade at risk of experiencing summer learning loss. We invite you to access our free summer learning modules to learn how to implement our summer learning program in your school.


We're crazy about dinosaurs! Are you?

Check out our fun dino jokes and activities!


READS Lab in the news . . . An interview with Dr. Jimmy Kim

"There's much debate in the literacy world about what's the best way to teach children to read. With two out of three children struggling to learn to read, the nation is questioning what actually works. Harvard Professor James Kim discusses why learning to read is so challenging and shares how his latest model called MORE offers another way."

_______________________ Spotlight on our work _______________________

_______________________ Recent Publications  _______________________

Kim J, Gilbert J, Yu Q, Gale C. Measures Matter: A Meta-Analysis of the Effects of Educational Apps on Preschool to Grade 3 Children’s Literacy and Math Skills. AERA Open. January 2021. doi:10.1177/23328584211004183


Thousands of educational apps are available to students, teachers, and parents, yet research on their effectiveness is limited. This meta-analysis synthesized findings from 36 intervention studies and 285 effect sizes evaluating the effectiveness of educational apps for preschool to Grade 3 children and the moderating role of methodological, participant, and intervention characteristics. Using random effects meta-regression with robust variance estimation, we summarized the overall impact of educational apps and examined potential moderator effects. First, results from rigorous experimental and quasi-experimental studies yielded a mean weighted effect size of +0.31 standard deviations on overall achievement and comparable effects in both math and literacy. Second, the positive overall effect masks substantial variability in app effectiveness, as meta-regression analyses revealed three significant moderators of treatment effects. Treatment effects were larger for studies involving preschool rather than K–3 students, for studies using researcher-developed rather than standardized outcomes, and for studies measuring constrained rather than unconstrained skills.

Kim JS. Making Every Study Count: Learning From Replication Failure to Improve Intervention Research. Educational Researcher. 2019;48(9):599-607. doi:10.3102/0013189X19891428


Why, when so many educational interventions demonstrate positive impact in tightly controlled efficacy trials, are null results common in follow-up effectiveness trials? Using case studies from literacy, this article suggests that replication failure can surface hidden moderators—contextual differences between an efficacy and an effectiveness trial—and generate new hypotheses and questions to guide future research. First, replication failure can reveal systemic barriers to program implementation. Second, it can highlight for whom and in what contexts a program theory of change works best. Third, it suggests that a fidelity first and adaptation second model of program implementation can enhance the effectiveness of evidence-based interventions and improve student outcomes. Ultimately, researchers can make every study count by learning from both replication success and failure to improve the rigor, relevance, and reproducibility of intervention research.


This study investigated the effectiveness of the Model of Reading Engagement (MORE), a content literacy intervention, on first graders’ science domain knowledge, reading engagement, and reading comprehension. The MORE intervention emphasizes the role of domain knowledge and reading engagement in supporting reading comprehension. MORE lessons included a 10-day thematic unit that provided a framework for students to connect new learning to a meaningful schema (i.e., Arctic animal survival) and to pursue mastery goals for acquiring domain knowledge. A total of 38 first-grade classrooms (N = 674 students) within 10 elementary schools were randomly assigned to (a) MORE at school (MS), (b) MORE at home (MS-H), in which the MS condition was supplemented with at-home reading, or (c) typical instruction. Since there were minimal differences in procedures between the MS and MS-H conditions, the main analyses combined the two treatment groups. . . .