Informing the Public or Information Overload? The influence of school accountability data format on public satisfaction

Download data and study materials from OSF

Principal investigator:

Rebecca Jacobsen

Michigan State University

Email: rjacobs@msu.edu

Homepage: http://education.msu.edu/search/FormView.aspx?email=rjacobs@msu.edu


Sample size: 1111

Field period: 7/18/2011-1/17/2012

Abstract
The 2001 No Child Left Behind Act requires local education agencies to publicly disseminate data on school performance. In response, districts and state departments of education have created unique "school report cards" that vary widely. In this study, respondents first allocated a set number of points across three common goals for schooling. Then, mimicking the variation seen in publicly available data formats, respondents were randomly assigned to one of four format conditions to examine whether and how format influences public perception. One set of findings suggests that data format significantly influences perceptions of school performance. A second set of findings shows that the expectations a respondent holds for what schools ought to do also influence satisfaction. Because our findings show that data are viewed differently across formats, and by those with different expectations, we conclude by considering the policy feedback effects that accountability data decisions may have on education politics.

Hypotheses
Do different data formats and expectations influence how the public responds to school data? We hypothesize that responses will differ, given respondents' differing goals for public schools and the varying ease of interpreting common school data measures.

Experimental Manipulations
Condition 1: Performance Index. Some states provide the public with a numerical performance index rating for each school; two notable examples are California and Ohio. These scores are often de-contextualized and can take on a variety of ranges (e.g., California's Academic Performance Index (API) ranges from 200 to 1,000, whereas Ohio's falls between 0 and 120). We presented respondents in this condition with a performance index score between 0 and 200.

Condition 2: Letter Grade. Many states provide the public with school letter grades. Similar to how students are graded, schools receive "A" through "F" letter grades for their performance. Florida and Michigan are two states where the public receives information in this format.

Condition 3: Percent Meeting Goal. By far the most common data format is reporting the percent of students meeting a specified goal. States display the goal differently: North Carolina uses the percent of students at or above grade level, while Wisconsin uses the percent scoring at each level of its state test. For our study, respondents in this condition were shown the percent of students, between 0 and 100, who met a "goal."

Condition 4: Achievement Level. Several states assign schools achievement levels to signal their performance. For example, Ohio labels each school with one of six designations ranging from "Academic Emergency" to "Excellent with Distinction." We utilized the achievement levels adopted by the National Assessment of Educational Progress (NAEP), which include four designations (below basic, basic, proficient, and advanced), and added a fifth category, failing, because of the increasing use of this label for schools.
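
Across the four conditions, the report cards conveyed roughly equivalent underlying performance; only the format changed. The sketch below (in Python) illustrates how one hypothetical school's score might be rendered in each format. The 0-200 index scaling and the grade/level cut points are illustrative assumptions of ours, not the mappings used to construct the study's actual stimuli.

```python
def render_formats(pct_meeting_goal: float) -> dict:
    """Render one hypothetical school's performance in all four study formats.

    The index scaling and cut points below are illustrative assumptions,
    not the study's actual stimulus mappings.
    """
    # Condition 1: performance index on a 0-200 scale
    index = round(pct_meeting_goal * 2)

    # Condition 2: letter grade, using conventional 10-point bands
    grade_bands = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
    letter = next(g for cut, g in grade_bands if pct_meeting_goal >= cut)

    # Condition 4: NAEP-style achievement level plus a "failing" category
    level_bands = [(85, "advanced"), (70, "proficient"), (50, "basic"),
                   (25, "below basic"), (0, "failing")]
    level = next(lv for cut, lv in level_bands if pct_meeting_goal >= cut)

    return {
        "performance_index": index,                 # Condition 1
        "letter_grade": letter,                     # Condition 2
        "percent_meeting_goal": pct_meeting_goal,   # Condition 3
        "achievement_level": level,                 # Condition 4
    }

print(render_formats(72.0))
# {'performance_index': 144, 'letter_grade': 'C',
#  'percent_meeting_goal': 72.0, 'achievement_level': 'proficient'}
```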

Outcomes
After viewing a school's data, respondents were asked to evaluate its performance on a seven-point rating scale. Using a modified version of the American Customer Satisfaction Index (ACSI), which is widely cited in the business and media literature (Fornell et al. 1996; see also http://www.theacsi.org/), respondents expressed their satisfaction with (1) the school's overall performance, (2) whether the school meets their expectations, and (3) how close the school is to their ideal school. (See Appendix A for a sample of the survey instrument, including the exact question wording for this section.) This trio of questions has been found to have strong internal and retest reliability and the strongest construct validity when compared with five other single- and multi-item satisfaction measures (Van Ryzin 2004a). ACSI measures have long been used to study consumer satisfaction and behavior; more recently, public administration scholars have used these questions to assess citizen satisfaction with public services (Van Ryzin 2004a; Van Ryzin 2004b; Van Ryzin et al. 2004). For each school in each condition, internal consistency (as measured by Cronbach's alpha) for the set of three satisfaction items was 0.9 or higher. This high level of internal consistency allowed us to average the three questions into a single outcome.
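
As a concrete illustration of this last step, the sketch below computes Cronbach's alpha for the three satisfaction items and the averaged composite outcome. The ratings array is invented toy data, not study responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy 7-point ratings on the three ACSI items
# (overall performance, meets expectations, closeness to ideal)
ratings = np.array([
    [6, 7, 6],
    [5, 5, 4],
    [7, 7, 7],
    [3, 4, 3],
    [6, 6, 5],
], dtype=float)

print(f"alpha = {cronbach_alpha(ratings):.2f}")
satisfaction = ratings.mean(axis=1)  # single composite outcome per respondent
```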

Summary of Results
We find that the format used to convey school performance data influences the public's satisfaction with its schools, even when the data contained in school report cards are roughly equivalent. Relative to the other commonly used formats, respondents viewing data conveyed through letter grades were best able to distinguish differences in school performance: they awarded a strong-performing school the highest average satisfaction rating and a weak-performing school the lowest. By contrast, respondents viewing performance indices were least likely to pick up on relative differences among schools and awarded the narrowest range of average satisfaction scores. We also find that individuals holding the strongest beliefs that schools should pursue a well-rounded set of goals were more satisfied with each of the three hypothetical schools shown.

Additional Information
Panelists were randomly drawn from the Knowledge Networks panel; 1,111 of the 1,833 panelists invited responded to the invitation, a final-stage response rate of 60.6 percent (see the additional note below).
KnowledgePanel™ respondents are recruited to join the panel through a random-digit-dial technique, which has a recruitment rate of 15.2 percent. To calculate the cumulative response rate, the final-stage response rate must be multiplied by the recruitment rate. Thus, the cumulative response rate for this study is 9.2 percent.
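
For reference, the arithmetic behind these figures:

```python
recruitment_rate = 0.152         # RDD panel recruitment rate
final_stage_rate = 1111 / 1833   # invited panelists who responded
cumulative_rate = recruitment_rate * final_stage_rate

print(f"final-stage rate: {final_stage_rate:.1%}")  # 60.6%
print(f"cumulative rate:  {cumulative_rate:.1%}")   # 9.2%
```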

References

Jacobsen, R., Snyder, J. W., & Saultz, A. (2015). Understanding satisfaction with schools: The role of expectations. Journal of Public Administration Research and Theory, 25 (3), 831-848.

Jacobsen, R., Snyder, J. W., & Saultz, A. (2014). Informing or shaping public opinion? The influence of school accountability data format on public perceptions of school quality. American Journal of Education, 121 (1), 1-27.