Wednesday, November 15, 2017

Notations From the Grid (Mid-Week Edition): On the State of Education in the United States

For this edition of "Notations", we present the following, courtesy of the team at Education Next:


As millennials grow up and become parents, find schools for their kids, and move into positions of leadership, what’s apt to change on the education reform front? The Fordham Institute and the Walton Family Foundation are convening a panel to discuss this on November 14 at 4 pm.
Watch the live stream or learn more about the event here.
— Education Next

Posted: 06 Nov 2017 06:01 AM PST
Recently submitted state plans for implementing the “Every Student Succeeds Act” (ESSA) provide insight into how research is making inroads into education policy at the state level. Based on my review of a sample of plans, a fair answer is that it is not. A previous postin this series by Martin West describes how ESSA created opportunities for states to use research and evidence in ways that improve student outcomes. [1] Opportunities, yes—but most of what is in the plans could have been written fifteen years ago.
To date, most of the attention on submitted plans has focused on the accountability structures they propose. ESSA requires each state to specify how it will hold schools and districts responsible for meeting the state’s education goals, unlike No Child Left Behind, which specified an accountability structure that applied to all states. Other organizations are reviewing these aspects of plans. [2] My focus emerges from another ESSA requirement: each state has to designate at least 5 percent of its schools, and high schools with graduation rates below 67 percent, as low-performing and use “evidence-based interventions” with them.
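To make the designation rule concrete, here is a small, hypothetical sketch of how a state might flag schools under it; the school records, index values, and thresholds below are invented for illustration and are not drawn from any plan.

```python
# Hypothetical sketch of the identification rule described above: flag the
# lowest-performing 5 percent of schools on a state's own index, plus any
# high school graduating fewer than 67 percent of its students.
# All records and index values below are made up for illustration.
import math

schools = [
    {"name": "School A", "index": 32.0, "is_high_school": True,  "grad_rate": 0.61},
    {"name": "School B", "index": 78.5, "is_high_school": True,  "grad_rate": 0.91},
    {"name": "School C", "index": 45.1, "is_high_school": False, "grad_rate": None},
    {"name": "School D", "index": 29.4, "is_high_school": False, "grad_rate": None},
]

def identify_low_performing(schools, share=0.05, grad_floor=0.67):
    """Return names of schools in the bottom `share` by index or below the graduation floor."""
    n_lowest = max(1, math.ceil(len(schools) * share))
    lowest_by_index = sorted(schools, key=lambda s: s["index"])[:n_lowest]
    flagged = {s["name"] for s in lowest_by_index}
    flagged |= {s["name"] for s in schools
                if s["is_high_school"] and s["grad_rate"] is not None
                and s["grad_rate"] < grad_floor}
    return sorted(flagged)

print(identify_low_performing(schools))  # ['School A', 'School D']
```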
I looked at one other aspect of plans—how they proposed to “use data to help educators be more effective.” Intervening in schools falls under a different funding stream than using data to improve educator skills (Title I for the first and Title II for the second), but both clearly involve a role for research and evidence.
Research can enter in various ways, some of which I do not focus on. For example, some plans cited statistical research to support their n-size determination (the minimum number of students in a subgroup, such as English learners, above which schools are held accountable for outcomes for that group). Some plans cited research to support their choice of a “nonacademic indicator” in their accountability structure. As has been reported elsewhere, chronic absenteeism has been a favorite choice, and research on it is cited in many plans. [3] Some plans cite research on early warning systems designed to flag students who may be in need of support to help them progress in school. Debates about n-size have been occurring at least since NCLB. Chronic absenteeism and early warning systems are relatively recent. [4]
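For readers unfamiliar with the n-size mechanics, here is a minimal, hypothetical illustration; the cutoff of 20 and the subgroup counts are invented, and actual n-sizes vary by state.

```python
# Minimal illustration of the n-size rule mentioned above: a school is held
# accountable for a subgroup's outcomes only if the subgroup has at least
# n students. The cutoff of 20 and the counts below are invented examples.
N_SIZE = 20

subgroup_counts = {"English learners": 14, "Students with disabilities": 35}
for group, count in subgroup_counts.items():
    status = "held accountable" if count >= N_SIZE else "too small to report separately"
    print(f"{group}: n = {count} -> {status}")
```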
What the plans say
Reviewing 51 plans (each between 100 and 200 pages plus appendixes) was too extensive an undertaking. Instead, I sampled 10 states with probability proportional to their 2015 K-12 student enrollment. That sampling process yielded California, Texas, Florida, Illinois, Ohio, Michigan, Indiana, Arizona, Colorado, and Alabama. The 10 states account for about half of the country’s K-12 enrollment.
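For readers curious how sampling with probability proportional to size works, the sketch below shows a standard systematic PPS draw; the enrollment figures are rough placeholders, not the actual 2015 counts used in the review.

```python
# Sketch of systematic probability-proportional-to-size (PPS) sampling, the
# general technique described above. Enrollment figures are rough placeholders.
import random

def pps_systematic(units, sizes, n):
    """Draw n units; a unit's chance of selection is proportional to its size."""
    total = float(sum(sizes))
    step = total / n
    start = random.uniform(0, step)
    points = [start + k * step for k in range(n)]  # evenly spaced selection points
    chosen, cumulative, i = [], 0.0, 0
    for point in points:
        # Advance until the current unit's cumulative-size interval contains the point.
        while cumulative + sizes[i] < point:
            cumulative += sizes[i]
            i += 1
        chosen.append(units[i])
    return chosen

enrollment = {"California": 6.2e6, "Texas": 5.2e6, "Florida": 2.8e6, "New York": 2.7e6}
states = list(enrollment)
print(pps_systematic(states, [enrollment[s] for s in states], 2))
```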
Table 1 displays how state plans describe their approaches for using evidence to support low-performing schools or for using data to help educators be more effective. The text in the table is mostly taken from the plans themselves, edited to remove acronyms and make the wording more concise.
The variability in the table is noticeable. California’s plan for improving low-performing schools essentially amounts to “we got this.” Ohio’s plan is much more detailed about the steps it will take to promote improvement. How long a school can underperform before the improvement requirements take effect varies from two to four years, depending on the state. Some states require low-performing schools to partner with external entities. Some states call for state longitudinal data to be a source that educators will use to improve their practices. Other plans simply say data will be used somehow.
Overall, it is hard not to reach the conclusion that plans mostly ignored research on what works and what does not to achieve particular outcomes (effectiveness research). The logic that is most evident in plans goes something like this: if a school does not improve after some number of years, the school and its host district will do a “needs assessment” and a “root-cause analysis,” which will support choosing appropriate evidence-based interventions. Not one of the ten plans offered an example of how that process might yield evidence-based interventions that schools could implement. Requiring a needs assessment and a root-cause analysis might fairly be interpreted as “intervening” with schools, but if your doctor tells you there’s something wrong, you would expect to hear a treatment plan. The Department of Education’s guidance to its peer reviewers who scrutinize the plans is unclear about what they should be looking for as treatments. [5] There is a glaring missed opportunity to tie effectiveness research more closely to identified needs.
States understand that using the expression “evidence-based” in these plans, over and over, is a plus. In using the term, however, plans do not provide enough detail to allow a reader to assess the evidence base being referred to. For example, Arizona’s plan indicates it will work with low-performing schools to implement “evidence-based interventions which are bold and based on data.” Yes, well. It goes on to say it will work with districts to support selection of “innovative, locally-selected evidence-based interventions leading to dramatic increases in student achievement.” If there is substance underlying Arizona’s claim to be able to help districts achieve dramatic success with students, why wait?
It is at least a bit uncomfortable that, under ESSA, states do not require schools to undertake interventions until years have passed and millions of students have continued to perform at low levels (ESSA requires that states designate at least 5 percent of schools, plus low-graduation-rate high schools, as having to improve, which amounts to more than 2 million students). Why schools are not immediately identifying needs and root causes is unclear. Perhaps conserving resources?
Another feature of some plans is that schools designated as needing to improve have to work with external partners or entities. How working with external partners will improve schools is unclear. It seems unlikely that the driving factor in why schools do not improve is that they are not working with external partners. Partners do not hold the keys to unlocking achievement. It would have been useful to see at least citations to evidence that working with partners is a pathway to improvement.
Plans to use data to improve educator skills are even more unclear than plans to implement evidence-based interventions. A number of plans appear to be describing what they currently do with data, which means their “plan” is to keep doing it. While that might be a good idea if they had evidence that what they were doing was working, none of the plans offer that.
Several states note that their teacher evaluation systems will use value-added models and that those evaluations will point the way to improving teacher skills. Using value-added models to evaluate teachers has been contentious, but the models certainly are grounded in research. Research demonstrating how to successfully use the outcomes of those evaluations to improve student learning, however, is a different matter, as I have documented previously in this series. [6]
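To show roughly what a value-added estimate is, the sketch below fits a simple regression of simulated current scores on prior scores plus teacher indicators and reads the teacher coefficients as value-added. Real models add student covariates, multiple years of data, and shrinkage, and none of the numbers here come from any state system.

```python
# Rough, simulated illustration of the value-added idea referenced above:
# regress current test scores on prior scores plus teacher indicators, and
# treat each teacher's coefficient as that teacher's estimated contribution.
import numpy as np

rng = np.random.default_rng(0)
n_students = 300
true_effects = np.array([0.3, 0.0, -0.2])          # simulated effects for teachers A, B, C
prior = rng.normal(0.0, 1.0, n_students)           # prior-year test score
teacher = rng.integers(0, 3, n_students)           # random assignment to a teacher
current = 0.7 * prior + true_effects[teacher] + rng.normal(0.0, 0.5, n_students)

# Design matrix: intercept, prior score, and indicators for teachers B and C
# (teacher A is the reference category).
X = np.column_stack([
    np.ones(n_students),
    prior,
    (teacher == 1).astype(float),
    (teacher == 2).astype(float),
])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
print("Estimated effects of teachers B and C relative to A:", np.round(beta[2:], 2))
```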
Noteworthy aspects
Five states (Ohio, Indiana, Michigan, Alabama, and Illinois) indicated they will set up “clearinghouses” or listings of interventions that have been vetted for evidence of their effectiveness. Clearinghouses seem like a reasonable idea—and in fact the Institute of Education Sciences has been operating one, the What Works Clearinghouse (WWC), for nearly 15 years. It has reviewed thousands of studies and released hundreds of reports. Re-inventing this concept seems inefficient, though it should be noted that ESSA’s evidence standards and WWC evidence standards differ, and creating vetted lists using the WWC as the starting point will require some effort. Indiana proposes that its clearinghouse only list programs that have evidence of effectiveness in Indiana. This seems a bit strict, like asking my doctor to prescribe me only medications that have been shown to work on patients living in New Jersey.
A number of plans mention “multi-tier systems of support.” The logic of these systems is that students, schools, or districts can be arrayed into tiers. The lowest tier applies to just about everybody. Those in higher tiers need more support. Arraying individuals into tiers can be cost-effective to the extent that lower-cost forms of assistance can be broadly applied and higher-cost forms of assistance can be narrowly applied to those showing they really need the assistance. It is like triage in hospital emergency rooms. However, what happens in the highest tier still needs to be identified. The notion of using tiers is simply structural—the tiers need to be filled with something.
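As a concrete (and entirely invented) picture of the tiering logic, the snippet below assigns a support tier from a single indicator; the absence-rate cutoffs are illustrative, not taken from any plan.

```python
# Minimal sketch of the tiering logic described above: most students receive
# universal, low-cost support, while students flagged by an indicator receive
# progressively more intensive help. Cutoffs are illustrative only.
def assign_tier(absence_rate: float) -> int:
    """Return a support tier: 1 = universal, 2 = targeted, 3 = intensive."""
    if absence_rate >= 0.20:
        return 3
    if absence_rate >= 0.10:   # a commonly used chronic-absenteeism threshold
        return 2
    return 1

for rate in (0.03, 0.12, 0.25):
    print(f"absence rate {rate:.0%} -> tier {assign_tier(rate)}")
```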
Devolved authority has its costs
ESSA moved more authority for K-12 education back to the states, but there are sensible reasons for the federal government to continue investing in education research. All states share in the benefits of those investments at no direct cost to themselves. It remains up to the states, however, to take advantage of them.
When I started reviewing plans, I thought I would see more concrete ways effectiveness research was or would be used. What I see is closer to leaps of faith—needs will be assessed, causes will be identified, and, then, suitable interventions will be selected. That last step is a big one, though, and it is where most of the resources will be spent. Researchers and states will need to work together more closely than these plans suggest to make this happen.
— Mark Dynarski
Mark Dynarski is a Nonresident Senior Fellow at Economic Studies, Center on Children and Families, at Brookings.
This post originally appeared as part of Evidence Speaks, a weekly series of reports and notes by a standing panel of researchers under the editorship of Russ Whitehurst.
The author(s) were not paid by any entity outside of Brookings to write this particular article and did not receive financial support from or serve in a leadership position with any entity whose political or financial interests could be affected by this article.

Notes:
4. State plans can be downloaded from https://www2.ed.gov/admins/lead/account/stateplan17/statesubmission.html. Each state also posted its submitted plan on its own website.

