Introducing the COVID-19 School Dashboard Newsletter

Welcome to the COVID-19 School Dashboard Newsletter.  The goal of this newsletter is to provide further updates, context and analysis for the data in the COVID-19 School Dashboard.

If you’re new to this effort, you can see the full dashboard here. At that link you can navigate around the dashboard, filter it, download some of the data and, if you’re an administrator, enroll your school or district.

Today’s newsletter will give a brief overview of this effort (why it started, what we do), walk you through what is actually in the dashboard and talk through the biggest question we get: What is the right “comparison” for the rates we see?  We’ll also introduce the organizations involved.

This newsletter will publish weekly.  In some weeks, we’ll update you on new data we have available. On off weeks, when data is not updated, we’ll do some analysis or cover frequently asked questions.  Have a question?  Email us at


This initiative began in August, motivated by the need for better data on COVID-19 in schools. We observed that, as schools opened, there was little systematic effort to track COVID-19 rates; individual cases were (sometimes) reported, but it was difficult to understand how prevalent COVID was in schools, whether some schools were faring better than others, and which mitigation factors were important. 

Generating these data required starting with context from schools and districts, including information on enrollments and in-person counts, and information on mitigation practices. It then required collecting ongoing data on COVID-19 cases in schools. 

We began this effort with individual schools and school districts; in later waves we expanded to include data collected by state governments. There is much more detail on our data collection approach in this document.  In short, we ask schools and districts to submit information on COVID cases on a biweekly cycle, and download state data as available for these same cycles.  We have endeavored to include state data as long as it includes at least information on cases and in-person enrollment.   

One very important point: our data asks about cases among people associated with in-person schooling. We do not differentiate between cases acquired at school and those acquired elsewhere.  It is clear to us that this differentiation is crucial, but it isn’t something that we collect, and our sense is it is not something that many schools or districts are systematically tracking. 

What’s Actually in the Dashboard?

When you visit the dashboard, you’ll see an introductory page, and three subpages.  The introductory page has more information on the effort, so we encourage you to read it.

Our primary data appears on the Primary Dashboard page. On this page we report counts of students and staff in our data.  This page can be filtered (see the top of the page) by location, school type and student age, as well as by community rate group.

This page also reports some basic information on mitigation practices in the schools in the data, where known, and information on geographic distribution.  We also report some comparison metrics; more on these below.

The second page is Variation and Mitigation.  On this page we illustrate the variation in infection rates by mitigation practices and in-person density.  We’ll note that the information on these pages should be interpreted with some caution; although they suggest associations, it’s hard to be convincing about causality given that mitigation practices tend to move in tandem and go along with community practices.  Still, there are some interesting patterns here (note the graph of masking). 

The final page of the dashboard is Raw State Data Links.  Here, we provide raw data which is drawn directly from state reports.  This includes the raw data from two states (New York and Texas) where we currently include cases in the overall dashboard. 

It also includes raw data from other states which do not report sufficient data to include in the dashboard.  In most cases the missing element is information on in-person attendance.  Although we cannot include these data in the primary calculations, we felt they were helpful to provide in a consistent and usable manner. 

Please note: this page does not contain data from individual districts or schools who provide their data to this project directly. We have committed not to release identified data publicly. 

What do the Case Numbers Mean? What are the Comparisons? 

The simplest way to think about the numbers is literally to think about the numbers.  In the most recent period covered by the dashboard, the figures suggest an expected 4 cases in a cohort of 1000 students, and 8 cases in a staff of 1000 over a two week period.  In the dashboard we report these as counts, and rates either per two week period or as a daily rate. They all represent the same underlying data; just different ways to look at it.
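To make the relationship between these figures concrete, here is a minimal sketch of the arithmetic linking counts, two-week rates, and daily rates. The function names and the input numbers are our own illustration (using the example figures above), not the dashboard’s actual code or data.

```python
# Illustrative conversions between a case count, a per-1,000 two-week
# rate, and an average daily rate. Numbers match the examples in the
# text above; they are not actual dashboard values.

def per_1000_biweekly(cases, population):
    """Cases per 1,000 people over one two-week reporting period."""
    return cases / population * 1000

def daily_rate(biweekly_rate, period_days=14):
    """Spread a two-week rate evenly across the period's days."""
    return biweekly_rate / period_days

student_rate = per_1000_biweekly(cases=4, population=1000)  # 4.0 per 1,000
staff_rate = per_1000_biweekly(cases=8, population=1000)    # 8.0 per 1,000

print(round(daily_rate(student_rate), 2))  # 0.29 cases per 1,000 per day
print(round(daily_rate(staff_rate), 2))    # 0.57 cases per 1,000 per day
```

As the text notes, these are simply three views of the same underlying data; dividing or multiplying by the population and the period length moves between them.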

It’s important to remember these numbers reflect cases among students and staff at schools regardless of where infection occurred. Some cases within school populations will be acquired outside of school grounds, because of community spread. Our data cannot separate cases contracted within schools from those contracted elsewhere, and we would expect to see some cases among school populations even if no transmission occurred in schools at all. To understand the risk posed by schools in data like this, we would like to know what rates we would expect to see given the underlying community rate.

This is difficult given our relatively limited data infrastructure around the virus. There are two natural community comparisons; neither is ideal, but we report both.  The first is community case rate, defined as the count of positive COVID test results per 100,000 people in the community overall.  The second is positivity rate, defined as the share of tests performed which are positive.
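The two comparison metrics can be sketched directly from their definitions above. This is an illustration only; the function names and input numbers are made up, not drawn from the dashboard.

```python
# The two community comparison metrics defined in the text.
# Input figures below are hypothetical, for illustration only.

def community_case_rate(positive_tests, population, per=100_000):
    """Positive COVID test results per 100,000 community residents."""
    return positive_tests / population * per

def positivity_rate(positive_tests, total_tests):
    """Share of tests performed that come back positive."""
    return positive_tests / total_tests

# A hypothetical community of 75,000 people running 3,000 tests,
# of which 150 come back positive:
print(community_case_rate(positive_tests=150, population=75_000))  # 200.0 per 100k
print(positivity_rate(positive_tests=150, total_tests=3_000))      # 0.05, i.e. 5%
```

Note that both metrics share a numerator (positive tests) but divide by different denominators, which is why, as discussed next, both move when testing volume changes.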

Community case rates depend on the total number of tests performed. When a community increases the number of tests performed, the measured case rate will be higher because more cases will be found. Therefore, the comparison between schools and their surrounding communities depends on the levels of testing within the school itself and the community as a whole.  

Positivity rate is also dependent on testing. When more people are tested, positivity rate tends to be lower at the same level of infection (since the highest-risk people tend to be tested first). 

For these reasons, it is appropriate to consider the rates in the dashboard both on their own as simple measures of the frequency of infection, and as compared to both community rates and positivity rates. 

In general, what we see in the dashboard is that student rates are consistently lower than either the community case rate or the positivity rate.  Staff rates are typically slightly higher than community rates, and much lower than positivity rates. The staff rates also exceed community rates in districts with remote learning, suggesting that some variation in testing across groups may play some role. 

Given the crucial need to understand staff risks, we are focused on how we can better compare these rates. 

In general we see school rates move with rates in communities, regardless of how they are measured: higher rates in the community translate to higher rates in the school population, as we would expect.

Who is the Team Behind This?

This project is led by a partnership between a number of school organizations (the Association of School Superintendents, National Association of Elementary School Principals, National Association of Secondary School Principals), the technology company Qualtrics and Emily Oster at Brown University.  

A number of other partners have assisted by distributing data requests to their schools or networks. These partners include: National Alliance for Public Charter Schools, Association of Independent Schools of New England, Washington Federation of Independent Schools, National Association of Independent Schools, Northwest Association for Independent Schools, Chiefs for Change, Charter School Growth Fund, National Association of Federally Impacted Schools.

The initiative is funded by a number of private foundations and Brown University.  Foundation funders include Chan Zuckerberg Initiative, Arnold Foundation, Templeton Foundation, Walton Family Foundation.  None of the funders have a role in data collection or interpretation. 


If you have questions, feel free to reach out to us at