<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="http://webfeeds.brookings.edu/feedblitz_rss.xslt"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	xmlns:event="https://www.brookings.edu/events/" xmlns:feedburner="http://rssnamespace.org/feedburner/ext/1.0">
<channel>
	<title>Brookings: Series - Evidence Speaks</title>
	<atom:link href="https://www.brookings.edu/series/evidence-speaks/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.brookings.edu</link>
	<description></description>
	<lastBuildDate>Mon, 12 Jul 2021 16:45:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.2</generator>
<item>
<feedburner:origLink>https://www.brookings.edu/research/a-good-enough-early-childhood/</feedburner:origLink>
		<title>A good-enough early childhood</title>
		<link>http://webfeeds.brookings.edu/~/588385516/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Grover J. "Russ" Whitehurst]]></dc:creator>
		<pubDate>Thu, 20 Dec 2018 10:00:19 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=553970</guid>
					<description><![CDATA[Executive Summary The standard model of the role of early experience in human development assumes that children’s environments in their first years of life are dominant influences on who they become as adults. The standard model favors interventions to improve children’s long-term outcomes that start early in life and are intensive in time and attention&hellip;<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/12/ES_20181219_Good-Enough-Model.jpg?w=265" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/12/ES_20181219_Good-Enough-Model.jpg?w=265"/></a></div>
]]>
</description>
										<content:encoded><![CDATA[<p>By Grover J. &quot;Russ&quot; Whitehurst</p>
<h2>Executive Summary</h2>
<p>The standard model of the role of early experience in human development assumes that children’s environments in their first years of life are dominant influences on who they become as adults. The standard model favors interventions to improve children’s long-term outcomes that start early in life and are intensive in time and attention from nurturing adults. The benefits of such interventions, including high-quality, universal preschool programs, are assumed to accrue to children from all socio-economic strata, and to be powerful enough to substantially eliminate racial and social class differences in children’s life outcomes.</p>
<p>I propose a different way of thinking about the role of early experience, which I call the good-enough model. It is an evolutionary perspective that sees the human species as having evolved in circumstances that support normal development of brain and behavior in a wide range of environments, including those in which parents and communities do not invest extraordinary time and attention in the rearing of their young. It posits a floor with respect to early stimulation, the good-enough point, above which the vast majority of children will experience normal development of brain and behavior without the need for special programs or expensive enrichment experiences. A corollary is that the returns to investment in intensive early childhood programs rapidly diminish beyond the good-enough point.</p>
<p>The good-enough model readily incorporates research findings that are anomalies within the standard model, including seemingly high-quality preschool programs that produce no long-term advantage for participants; normal later development of children reared in very impoverished early environments; weak associations between measures of cognitive abilities in toddlers and their cognitive abilities measured later in life; and genetic influences on individual differences in human cognitive and socio-emotional abilities that are orders of magnitude greater than the effects of family and school environments.</p>
<p>The good-enough model has implications for child rearing across the range of socio-economic advantage. With respect to social programs intended to increase opportunity for children from economically disadvantaged homes, it favors investments in families and communities that open doors to children throughout their dependent years. Examples include programs that increase families’ income, stability, and employment opportunities. For advantaged families, it suggests that intensely programmed and expensive early exposure lessons and experiences intended to &#8220;grow your child&#8217;s neurons&#8221; are unlikely to be productive. For all families, it suggests how critical it is to avoid damaging experiences for young children and, by implication, the importance of attending to conditions that support the overall health of the family unit.</p>
<h2>The standard model of early experience</h2>
<p>Early childhood experts, advocates of investment in early learning programs, and interested members of the public generally believe that early experience is a dominant influence on the development of the cognitive and emotional characteristics of grownups. Specifically, they accept that:</p>
<ol>
<li>programs intended to improve the life outcomes of children should start as early in life as possible (“the evidence points to a high return to early interventions and a low return to … interventions later in the life cycle” – James Heckman);<sup class="endnote-pointer">1</sup></li>
<li>more is better, i.e., every child benefits from as much intensive interaction with caring adults as possible (“children’s … cognitive, social, and emotional development is driven almost entirely by time- and attention-intensive adult nurture and care” – Katherine Stevens);<sup class="endnote-pointer">2</sup></li>
<li>preschool programs can substantially reduce what would otherwise be glaring racial and social class differences in life outcomes. (“high-quality preschool is a sure path to the middle class” – Arne Duncan);<sup class="endnote-pointer">3</sup></li>
<li>these things are true because of the nature of the development of the human brain.</li>
</ol>
<p>Here’s a version of the point on brain science from a report from the U.S. Chamber of Commerce:</p>
<blockquote><p><em>The infant brain has about 100 billion cells at birth—roughly the same number as an adult brain—but with many fewer connections between cells. In the first months of life, the brain’s neural network expands exponentially, from around 2,500 connections per neuron at birth to about 15,000 connections between ages 2 and 3, with rapid growth continuing into the early elementary school years …. Those connections—called synapses—“wire” the structure of a young child’s brain in response to his or her environment and cumulative experiences…. Healthy development at any stage depends on healthy development in previous stages, as more complex neural connections and skills build on earlier ones.<sup class="endnote-pointer">4</sup></em></p></blockquote>
<p>Let’s call these four axioms the standard model of early human experience. They have driven two things: First is a strong and successful push to expand public early childhood programs at the state and local levels. Second is increasing investment in time and money by families on the provision of more “stimulating” environments for their young children.</p>
<h2>Anomalies to the standard model</h2>
<p>The standard model is more of an organizing framework than a formal, straightforwardly testable scientific theory. But to the degree that it fails to organize bodies of reliable observations to which it should be applicable, there is good reason to reconsider it.</p>
<p>There are a number of empirical anomalies to the standard model. I briefly review five: 1) preschool programs that do not improve and sometimes harm children’s later development; 2) normal developmental outcomes in children who have experienced very impoverished early environments; 3) weak correlations between measures of cognitive development in infants and toddlers and their later cognitive abilities; 4) disproportionately larger positive impacts of universal preschool programs on the most disadvantaged children; and 5) the strong genetic influence on many of the characteristics of children that early childhood programs are intended to influence.</p>
<h3><strong>Preschool programs that do not improve and sometimes harm children’s later development</strong></h3>
<p>The Head Start Impact Study, a randomized trial comparing four-year-olds who won vs. lost a lottery to attend oversubscribed Head Start centers across the nation, found positive effects on outcomes such as letter knowledge at the end of the Head Start year for winners of the lottery. However, there were no later differences between the treatment and control group of children as they were followed through the 3rd grade. This was true for cognitive as well as socio-emotional abilities. From the final government report of the study results:</p>
<blockquote><p><em>Looking across the full study period, from the beginning of Head Start through 3rd grade, the evidence is clear that access to Head Start improved children’s preschool outcomes across developmental domains, but had few impacts on children in kindergarten through 3rd grade.<sup class="endnote-pointer">5</sup></em></p></blockquote>
<p>The Tennessee Voluntary Pre-K Evaluation documents a second failed large-scale preschool intervention. It was a randomized trial comparing children who won vs. lost a lottery to attend the public pre-K program in Tennessee. As was the case in the Head Start Impact Study, there were positive impacts of the program at the end of the pre-K year. But as the children moved through elementary school, the treatment group actually did worse than the control group. From the peer-reviewed report of the study results:</p>
<blockquote><p><em>Positive achievement effects at the end of pre-k reversed and began favoring the control children by 2nd and 3rd grade. [Program] participants had more disciplinary infractions and special education placements by 3rd grade than control children…. On the 3rd grade state achievement tests for the full randomized sample – pre-k participants did not perform as well as the control children. Teacher ratings of classroom behavior did not favor either group overall, though some negative treatment effects were seen in 1st and 2nd grade.<sup class="endnote-pointer">6</sup></em></p></blockquote>
<p>A third example comes from evaluations of the Quebec childcare program. In 1997, the Canadian province of Quebec introduced a universal childcare program that provided heavy subsidies to childcare providers and required only a small payment by parents ($5 a day, later raised to $7). Uptake of the program was substantial, and mothers of young children entered the workforce at a substantially higher rate than was the case prior to the program’s existence. Researchers have been evaluating the impacts of the program on children ever since using quasi-experimental methods. The gist of the findings is that the program had little effect on cognitive achievement, but substantial long-term negative impacts on social adjustment. In the words of the authors of a recent evaluation of the impacts of the program on social skills:</p>
<blockquote><p><em>We first confirm earlier findings showing reduced contemporaneous noncognitive development following the program introduction in Quebec, with little impact on cognitive test scores. We then show these non-cognitive deficits persisted to school ages, and also that cohorts with increased child care access subsequently had worse health, lower life satisfaction, and higher crime rates later in life.<sup class="endnote-pointer">7</sup></em></p></blockquote>
<p>There are many other studies that are consistent with the findings of the three described above. For example, there is no association between differences among states in their pre-K enrollment and differences in their NAEP scores in fourth grade.<sup class="endnote-pointer">8</sup> In general, the evidence questioning the power of contemporary public preschool programs in the U.S. to meaningfully enhance later development, much less to “provide a sure path to the middle class”, is far more robust than the science supporting the standard model.</p>
<h3><strong>Normal developmental outcomes in children who have experienced very impoverished early environments</strong></h3>
<p>The empirical underpinnings of the policy prescriptions of the standard model are at their strongest in observations of long-term negative impacts of impoverished early childhoods. Classic studies involve follow-up of children adopted internationally from custodial state orphanages. Children who spend the first year or more of their life in such institutions sometimes display serious behavioral and cognitive deficits as they progress through childhood and into adulthood.</p>
<p>But there is a countervailing story that receives much less attention. It includes the resilience of many children who are adopted after having spent the first part of their lives in a low-quality orphanage. From a recent systematic review of the literature:</p>
<blockquote><p><em>While the rates of problems are higher in PI [post-institutionalized] children than in parent-reared children, most PI children fall in the normal range of adjustment; PI children often show remarkable resiliency, despite the challenges they faced early in life.<sup class="endnote-pointer">9</sup></em></p></blockquote>
<p>Important here is the largely forgotten research of nearly 50 years ago by Harvard developmental psychologist Jerome Kagan on the development of infants in rural Guatemalan villages.</p>
<blockquote><p><em>The observed infants were permitted very little activity in the first 15 months of life, were not allowed outside their family hut, had little to play with, and were seldom played with. Compared to American infants, the Guatemalan children were extremely passive and delayed in measures of attention during infancy. However, by pre-adolescence these children had caught up and performed comparably to American middle class norms on tests of perceptual analysis, perceptual inference, recall, and recognition memory. They smiled, talked, and were intellectually competent. This, even though their environments in early childhood contained only about 20% of the interactions with adults that are typical in American families.<sup class="endnote-pointer">10</sup></em></p></blockquote>
<p>That children’s later development is prototypically in the normal range despite early environments that are shockingly deficient by contemporary standards is difficult to reconcile with the tenets of the standard model.</p>
<h3><strong>Weak correlations between measures of cognitive development in infants and toddlers and their later cognitive abilities</strong></h3>
<p>One of the tenets of the standard model is that earlier intervention is better than later intervention, and that a key to understanding why is the rapid growth of neural synapses during the first years of life. But short of substantial deprivation, it isn’t clear that cognitive growth in later childhood builds on cognitive experience in the first year or two of life.</p>
<p>Relevant here is that the leading test of infant cognitive development, the Bayley Scales of Infant and Toddler Development, generates scores that are only very weakly predictive of cognitive ability at later stages of development. For instance, the Bayley cognitive development scores from infants at one year of age correlate only .20 with their IQ scores at 7 years of age, i.e., they account for less than 5% of the variance (.20 squared = .04) in children’s 2nd grade IQ scores.<sup class="endnote-pointer">11</sup></p>
<p>The previously described orphanage studies are relevant as well to the disconnect between the circumstances and outcomes of development in the first two years of life versus later in the preschool period. Those studies frequently find an inflection point in later outcomes at around 18 months of age at adoption. Children who are younger than 18 months when they are adopted are not likely to experience problems in later years despite the barren environmental circumstances of their first year and a half of life.<sup class="endnote-pointer">12</sup> In the standard model, healthy development as an infant or toddler should not occur in an orphanage that provides little more than nutritional support to its wards. And since healthy development in later stages depends on healthy development in earlier stages, adoption at 18 months should not erase the effects of severe deprivation of stimulation and human interaction in the first year and a half of life.</p>
<h3><strong>Disproportionately larger positive impacts of universal preschool programs on the most disadvantaged children</strong></h3>
<p>One of the tenets of the standard model is equipotentiality (#2 in my initial list), meaning that children at any and all points of the distribution of environmental advantage can benefit from more time, intensity, and quality in their interactions with caring adults. This premise plays out in advocacy for universal vs. targeted early childhood programs. And it is the implicit foundation of the superparent phenomenon, i.e., a parent who lavishes extraordinary time and resources on providing “stimulating” activities and a “healthy” environment for her developing child, doing so in the belief that these investments will pay off in the child’s later success.</p>
<p>But the research is crystal clear that whether effects of preschool programs are measured contemporaneously or later in a child’s life, it is children from the most disadvantaged backgrounds who receive the most benefit. From a recent consensus statement on the subject from established scholars in the field, grounded in a systematic literature review:</p>
<blockquote><p><em>Studies of different groups of preschoolers often find greater improvement in learning at the end of the pre-k year for economically disadvantaged children and dual language learners than for more advantaged and English-proficient children.<sup class="endnote-pointer">13</sup></em></p></blockquote>
<h3><strong>Strong genetic influence on many of the characteristics of children that early childhood programs are intended to influence</strong></h3>
<p>Early model preschool programs, including Perry and Abecedarian, were intended to enhance children’s IQ and school achievement. More recently, the bearers of the standard model have shifted their emphasis to soft skills in conceptualizing and measuring the impacts of programs intended to boost the long-term outcomes of young children. The proposition is that preschool programs improve later outcomes for children by enhancing their personality traits such as agreeableness, conscientiousness, and openness to experience.<sup class="endnote-pointer">14 </sup></p>
<p>But both cognitive abilities as measured by IQ and achievement tests and personality traits such as conscientiousness measured through surveys are heavily influenced by the DNA passed from parents to their biological children. Several independent studies converge on the estimate that a bit more than 40 percent of the variance in major personality traits is due to genes whereas only 7 percent is due to home and school. For conscientiousness, the personality trait that is most often considered a key outcome for preschool intervention, the estimate of heritability from the four most recent studies is 49 percent.<sup class="endnote-pointer">15</sup> Estimates of heritability of IQ are even higher.<sup class="endnote-pointer">16</sup></p>
<p>This is not to conclude that early childhood environments cannot influence personality and IQ, but it does suggest that a year or two in preschool is unlikely to produce impacts on these outcomes large and enduring enough to overcome the influence of genes and family environment.</p>
<h2>An alternative: the good-enough model of early experience</h2>
<p>An alternative to the standard model, the good-enough model, incorporates each of the anomalies described above while leaving a significant role for public programs and parental activities intended to enhance outcomes and opportunities for children from disadvantaged families.</p>
<p>The root tenet of the good-enough model is that the human species has evolved in circumstances that support normal development of brain and behavior in environments that have been typically experienced by our species. These environments are not ones in which parents invest extraordinary time and attention in the rearing of their young children. Rather, these are environments in which there are diminishing returns in terms of evolutionary advantage and survival of the family unit to investments in the care and nurturing of young children. In this framework, our species evolved epigenetic mechanisms of development that require nothing more than adequate, or good-enough, levels of nutrition and critical forms of environmental interaction. These developmental mechanisms are heavily regulated genetically and well-buffered from the variations in environmental experience that are within the range typically encountered by the species.</p>
<p>In contrast to the standard model, the good-enough model suggests that earlier intervention is not necessarily better than later intervention; more intensive social interactions with adults are not necessarily better than less; preschool programs are unlikely to eliminate differences in outcomes and opportunity for children from different family backgrounds; and the dramatic growth in neuronal synapses during the first years of life is going to happen for all biologically normal children without heavy lifting by parents or society.</p>
<p>The following points provide additional background on the good-enough model.</p>
<h3><strong>The human species has an extended period of early development because our fully formed brains are much too large to pass through the birth canal</strong></h3>
<p>Humans differ from other primates in many ways, but most importantly in the size of their brains. This evolutionary path was taken by the human species because big brains were more useful than big muscles in adapting to the varied environments in which humans evolved.</p>
<p>The growth in skull size relative to body size that was necessary to accommodate evolution towards ever larger brains ran head on, so to speak, into the mechanical constraint of the size of the birth canal. Rather than scale up the size of females so that they could give birth to infants with brains as fully formed as in other primates (in that case, human neonates would have weighed more than 20 lbs.), evolution took the path of delaying anatomical development of the brain into the postnatal period.</p>
<p>In humans the brain is 23% of its adult size at birth whereas it is 40% in chimps, and while chimps reach 70% of their final brain capacity within a year of birth, humans take three years.<sup class="endnote-pointer">17</sup> Thus the rapid brain growth that occurs in the intrauterine environment for all primates continues far longer after birth for humans. One way this has been characterized is that with respect to brain development humans are born very prematurely.<sup class="endnote-pointer">18</sup> Thus the first year or so of post-uterine life for a human baby is not primarily about soaking up experience and from that growing synapses, but about continuing the anatomical development of the brain that would occur in an intrauterine environment if only human brains were smaller.</p>
<h3><strong>Darwinian selection for the human species favors large allowances for differences in experience and parental investment in nurturing</strong></h3>
<p>As a species, humans have a low frequency of reproduction, low litter size, and offspring with an extended period of dependency. If the purpose of our large brains is to give us a reproductive advantage, one could question why evolutionary forces would condition the ultimate usefulness of that large brain on individual differences among adults in individual family units in the time and intensiveness of their interactions with their young. In that scenario, children in families in which the adults did not have the capacity to invest their time heavily in nurturing interactions with their young children would not develop normal brains. And failing in that, they and their species would be at a severe disadvantage in the competition to survive to adulthood and reproduce.</p>
<p>It would make more sense from an evolutionary perspective for the normal development of the human brain during the period of early childhood to be well-buffered against the variations and vicissitudes in child-rearing environments that would ordinarily be encountered across family units and circumstances of living. Were this so, it would align with the perspective that what the human brain is doing in the first couple of years of life is completing the growth that would have occurred in utero were it not for humans’ need to grow very large brains. A developmental trajectory that is protected against ordinary and expected variation in environment would be baked into the design.</p>
<p>We understand this for physical development, i.e., we need <strong>adequate</strong> not extraordinary levels of nutrition and experience in movement for our bodies to grow normally. But for social-emotional and cognitive development the standard model requires a superparent and a high-quality preschool center and enrichment classes and an hour a night of shared picture book reading (and more) for a child to realize his or her genetic potential. An adequate environment makes much more evolutionary sense than an extraordinary environment as a requirement for normal development of the human species.</p>
<h3><strong>The good-enough model of early environment aligns with a distinction between the average expectable environment vs. the damaging environment</strong></h3>
<p>Nearly 30 years ago, the distinguished developmental psychologist Sandra Scarr summed up the implications for early intervention of her prior 25 years of research on the genetics of human development. She drew a critical distinction between the average expectable environment and a damaging environment. The former represents the typical environment a child can expect to experience regardless of broad differences in the specifics of culture, class, and family. In contrast, a damaging environment is one that interferes with a genetically programmed developmental progression.</p>
<p>Thus, a biologically normal child reared with virtually no exposure to language experiences a damaging environment because the unfolding of language competences that occurs in all normal members of the human species over the first few years of life depends on some exposure to language. The average expectable environment includes such exposure but tolerates wide variability in how much such exposure occurs and in what form.</p>
<p>In Scarr’s treatment, interventions intended to help children experiencing damaging environments can have lasting effects, whereas changing the trajectory of the lives of children experiencing something within the range of the average expectable environment, the good-enough environment in my terms, is difficult and unlikely to be transformative. As she puts it:</p>
<blockquote><p><em>It is not easy to intervene deliberately in children’s lives to change their development, unless their environments are outside of the normal species range…. Feeding a below-average intellect more and more information will not make her brilliant. Exposing a shy child to socially demanding events will not make him feel less shy. The child with below-average intelligence and the shy child may gain some specific skills and helpful knowledge of how to behave in specific situations, but their enduring intellectual and personality characteristics will not be changed.<sup class="endnote-pointer">19</sup></em></p></blockquote>
<h2>Summing Up</h2>
<p>The standard model of early experience, which presently drives most policy and practice in early childhood, is imperfect, at best. It faces numerous empirical anomalies and ignores alternative ways of organizing the current body of knowledge and observations on early experience. The standard model and the alternatives, such as the good-enough model presented here, have very different implications for policy and practice.</p>
<p>The good-enough model places the emphasis on raising the floor for the environmental interactions of early childhood to a point where it is adequate for as many children as possible. From a reform perspective, it aims to reduce as much as possible the environmental circumstances that are toxic for young children. It postulates a law of rapidly diminishing returns beyond the point of adequacy for investments in preschool interventions intended to enhance children’s cognitive abilities or personality traits. It favors, instead, investments in families and communities that open doors to children throughout their dependent years. Examples include programs and investments that increase family income and stability, as well as programs that improve the rearing circumstances of abused and neglected children.</p>
<p>The nature of the relationships between variables of early experience and later development is an empirical matter, not something to be assumed based on either the standard model or the good-enough model. The good-enough model as described here is intended not as a replacement for the standard model as the lodestar for early childhood policy and practice. Rather it provides a different framework for thinking about early experience, and thereby highlights the many unknowns around the critical and necessary policy decisions on early childhood investments.</p>
<p>There is simply a lot we don’t know and much we need to learn going forward. It will, I conclude, be far more productive to approach the task of improving lives through public investment in children and families if we embrace that uncertainty and proceed in ways that can chip away at it. Dependence on conceptual frameworks that often don’t fit the plain facts in front of our noses carries serious opportunity costs.</p>
<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/12/ES_20181219_Good-Enough-Model.jpg?w=265" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/12/ES_20181219_Good-Enough-Model.jpg?w=265"/></a></div>
]]>
</content:encoded>
					
		
		
		<enclosure url="https://www.brookings.edu/wp-content/uploads/2018/12/ES_20181219_Good-Enough-Model.jpg?w=265" type="image/jpeg" />
		<atom:category term="Early Childhood Education" label="Early Childhood Education" scheme="https://www.brookings.edu/topic/early-childhood-education/" /></item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/a-promising-alternative-to-subsidized-lunch-receipt-as-a-measure-of-student-poverty/</feedburner:origLink>
		<title>A promising alternative to subsidized lunch receipt as a measure of student poverty</title>
		<link>http://webfeeds.brookings.edu/~/564556388/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Matthew M. Chingos]]></dc:creator>
		<pubDate>Thu, 16 Aug 2018 09:00:40 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=532809</guid>
					<description><![CDATA[A central component of federal education law for more than 15 years has been that states must report student achievement for every school both overall and for subgroups of students, including those from economically disadvantaged families. Several states are leading the way in developing and using innovative methods for identifying disadvantaged students, and other states would&hellip;]]>
</description>
										<content:encoded><![CDATA[<p>By Matthew M. Chingos</p><p>A central component of federal education law for more than 15 years has been that states must report student achievement for every school, both overall and for subgroups of students, including those from economically disadvantaged families. Several states are leading the way in developing and using innovative methods for identifying disadvantaged students, and other states would do well to follow them.</p>
<p>Until recently, low-income students have almost always been identified as those eligible for the federal free or reduced-price lunch (FRL) program.<sup class="endnote-pointer">1</sup> But FRL eligibility is quickly becoming useless for both research and policy, as I documented in a 2016 Evidence Speaks <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/evidencespeaks/~https://www.brookings.edu/research/no-more-free-lunch-for-education-policymakers-and-researchers/" target="_blank" rel="noopener">report</a>.</p>
<p>About one in five schools now offer free lunches to all of their students under a “community eligibility” provision.<sup class="endnote-pointer">2</sup> The result is that the share of U.S. students receiving a subsidized lunch has climbed from less than 35 percent in 1990 to more than 50 percent today, even though the share of children who grow up in low-income families has not changed over this period.</p>
<p>This trend presents immediate challenges to states as they implement new school accountability systems under the Every Student Succeeds Act (ESSA).<sup class="endnote-pointer">3</sup> Continuing to use FRL to identify economically disadvantaged students in community eligibility schools means either saying that all students are eligible, which would violate the spirit of ESSA, or surveying families to find out who would be eligible on an individual basis, which would be costly and burdensome. Census data could be used to estimate the level of disadvantage in a school’s surrounding neighborhood, but cannot be linked to achievement data at the student level.</p>
<p>Fortunately, several states are leading the way in adopting new methods for identifying disadvantaged students based on their families’ participation in programs such as the Supplemental Nutrition Assistance Program (SNAP), Temporary Assistance for Needy Families (TANF), Medicaid, and the foster care system.</p>
<p>Districts have been making such linkages to “directly certify” students for FRL without requiring them to complete a form. When states assume responsibility for this linkage, they reduce the burden on districts and ensure more uniformity. Most important for ESSA purposes, it means that states including Delaware, Massachusetts, New Mexico, Tennessee, and Washington, DC will be able to shine a light on the achievement of disadvantaged students even in schools where all students get a free lunch.<sup class="endnote-pointer">4</sup></p>
<p>Washington, DC makes for an instructive case study, as it is an urban school system where two-thirds of students attend schools where free lunch is provided to everyone. DC’s new accountability system will identify economically disadvantaged students as those who are “at-risk” by virtue of participation in SNAP or TANF or being in foster care or homeless.<sup class="endnote-pointer">5</sup></p>
<p>Shifting to this new definition dramatically increases the number of schools for which achievement gaps can be calculated (Figure 1).<sup class="endnote-pointer">6</sup> In 2017, only 26 percent of students attended schools where the achievement of FRL students could be compared to non-FRL students, down from 40 percent two years earlier. But more than 80 percent of students attend schools where the scores of at-risk students can be compared to other students.<sup class="endnote-pointer">7</sup></p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig1-01.png?fit=1000%2C750px&amp;ssl=1" sizes="1379px" srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig1-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig1-01.png?fit=500%2C375px&amp;ssl=1 500w" alt="Figure 1" data-src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig1-01.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig1-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig1-01.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p>School-by-school data, reported in Figure 2, show that the at-risk percentage varies dramatically among the two-thirds of schools where all students receive a free lunch, from 23 percent to 95 percent. By collecting the data underlying the at-risk designation, DC makes it possible to both measure achievement gaps within these schools and understand differences in contexts across these schools.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig2-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1379px" srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig2-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig2-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Figure 2" data-src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig2-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig2-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/08/ES_20180816_ChingosFig2-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p>Transitioning to a new measure of economic disadvantage will entail some challenges. There is surely some cost to making data linkages across systems maintained by different agencies, and it must be done using methods that ensure the privacy and confidentiality of student records. States may need to upgrade their data systems, or amend laws or regulations that restrict how data can be used.</p>
<p>But it is clear that FRL participation is no longer a viable option for identifying economically disadvantaged students, especially in areas with significant low-income populations. All states should follow the lead of DC, Delaware, Massachusetts, New Mexico, and Tennessee by putting in place linked data systems that enable them both to identify students who should get a free lunch—regardless of whether they fill out a form—and to shine a bright light on how much these students are learning.</p>
<hr />
<p><em>The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. He is currently not an officer, director, or board member of any organization with an interest in this article.</em></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="http://webfeeds.brookings.edu/-/564556386/0/brookingsrss/series/evidencespeaks.jpg" type="image/jpeg" />
		<atom:category term="Education" label="Education" scheme="https://www.brookings.edu/topic/education/" />
<feedburner:origEnclosureLink>https://www.brookings.edu/wp-content/uploads/2018/08/ES_20180815_SchoolLunch.jpg?w=265</feedburner:origEnclosureLink>
</item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/accountability-for-early-education-a-different-approach-and-some-positive-signs/</feedburner:origLink>
		<title>Accountability for early education–a different approach and some positive signs</title>
		<link>http://webfeeds.brookings.edu/~/563245776/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Susanna Loeb]]></dc:creator>
		<pubDate>Thu, 09 Aug 2018 09:00:09 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=531864</guid>
					<description><![CDATA[Early childhood education in the United States is tangle of options—varying in quality, price, structure, and a range of other dimensions. In part as a result, children start kindergarten having had very different experiences in care and very different opportunities to develop the skills and dispositions that will serve them well during school. Systematic differences&hellip;<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/08/ES_20180808_EducationPrograms.jpg?w=309" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/08/ES_20180808_EducationPrograms.jpg?w=309"/></a></div>
]]>
</description>
										<content:encoded><![CDATA[<p>By Susanna Loeb</p>
<p>Early childhood education in the United States is a tangle of options—varying in quality, price, structure, and a range of other dimensions. In part as a result, children start kindergarten having had very different experiences in care and very different opportunities to develop the skills and dispositions that will serve them well during school. Systematic differences across groups by income, race, ethnicity, home language, and geographic location are particularly troubling because inequalities that appear early are often sustained through school and affect prospects throughout life.</p>
<p>Convincing research has demonstrated that high-quality early childhood programs can reduce these differences across groups.<sup class="endnote-pointer">1</sup> A few small programs have demonstrated strong positive effects throughout the life cycle, but even some large-scale programs, such as those in Boston and Tulsa, have shown effects on math and reading learning.<sup class="endnote-pointer">2</sup> These positive results, combined with evident need, have led to substantial public investment in early childhood education. State spending on preschool more than doubled between 2002 and 2016, from $3.3 to $7.4 billion (constant 2017 dollars).<sup class="endnote-pointer">3</sup></p>
<p>However, a range of research also shows that many early childhood programs do not have positive long-term effects. For example, as discussed in an earlier Evidence Speaks brief, the Tennessee Voluntary Prekindergarten Program showed some positive effects for children as they finished their pre‐k school year, but these positive effects were largely gone by the end of kindergarten.<sup class="endnote-pointer">4</sup> Program quality likely affects whether programs benefit children, and recent investments have focused heavily on quality improvement. The Federal government, for example, invested $1.75 billion between 2011 and 2016 in Race to the Top—Early Learning Challenge and Preschool Development Grants requiring quality-improvement infrastructures, and the reauthorization of the Child Care and Development Fund included provisions aimed at increasing quality in the child care sector.<sup class="endnote-pointer">5</sup></p>
<p>Governments have several options for trying to improve quality. Regulations—a form of direct control—are one option that has been used widely in both early childhood and K-12 education. Through regulations, governments set limits on class sizes and establish education requirements for teachers and safety requirements for classrooms. Regulations, because they are by nature rigid, tend to set floors on quality rather than pushing toward improvement and making use of opportunities. Other approaches to quality improvement are less consistent but more flexible. For example, local government structures such as school boards are set up to oversee schools with voter accountability, though there is little evidence on how effective they are. School choice is another mechanism aimed at giving families some influence over the quality of their schools.</p>
<p>Starting in the early 1980s, states and the Federal government have used outcomes-based accountability with the aim of quality improvement in primary and secondary education. While the K-12 sector in the US is far more centralized than early childhood, it is still one of the more decentralized elementary and secondary education systems globally. Instead of schooling decisions—such as curriculum, instructional approaches, revenues, and salaries—being the purview of the national government, each state retains the legal right to most education decisions and, even then, turns many schooling decisions over to local authorities at the district level, many of which run only one school or a small group of schools. Such decentralization leads to substantial differences in spending, both between and within states, as well as substantial differences in educational offerings. Outcomes-based accountability aims to improve educational opportunities while retaining some of the advantages of decentralization, particularly the incorporation of local knowledge, preferences, and opportunities into decision making. The results of accountability in K-12 education have been mixed, with some evidence of improvement, especially for programs aimed at accountability at the school level, but also substantial pushback, particularly over the narrowing of educational goals.<sup class="endnote-pointer">6</sup> Because outcomes-based accountability in the US has used test scores in math and English language arts as primary measures, for example, schools may focus on these outcomes at the expense of other valued goals.</p>
<p>More recently, outcomes-based accountability has come to preschools in the form of Quality Rating and Improvement Systems (QRIS). QRIS give ratings to early childhood education and care settings based on a variety of measures. Unlike accountability in K-12, these systems tend not to use measures of student test performance since these are far more costly to collect reliably for young children. They include basic measures of resources such as class size and teachers’ educational attainment, but also often include more nuanced observational measures of classroom quality than are common in K-12. The Environment Rating Scale (ERS), for example, is an observation tool used in 30 QRIS states that includes a variety of elements ranging from space and layout to classroom activities and student-teacher interactions. Unlike accountability in K-12, participation in most QRIS systems is voluntary.</p>
<p>QRIS assign ratings to programs; these ratings give program staff information about their organizations’ own quality and help parents who are choosing programs for their children. Many systems include differential funding reimbursement for programs with higher quality ratings. The first statewide QRIS was implemented by Oklahoma in 1998. By February 2017, 38 states had statewide systems, with nearly all others planning or piloting systems.<sup class="endnote-pointer">7</sup></p>
<p>While Quality Rating and Improvement Systems are the common approach to quality improvement for preschools, we have had very little evidence on their effectiveness. A new study by Daphna Bassok, Thomas Dee and Scott Latham, “The Effects of Accountability Incentives in Early Childhood Education,” provides some of the first such evidence. It does not focus on the long-run effects on children, which ultimately we would like to know; instead, it demonstrates that these systems can produce some of the mechanisms needed for quality improvement.<sup class="endnote-pointer">8</sup></p>
<p>In particular, the new study shows, first, that programs that initially receive lower scores respond by improving in the area that led to the lower score and, second, that parents respond to program scores in choosing care for their children. The study compares programs that scored just over the threshold needed to earn a higher rating to very similar programs that scored just under it. This approach, known as a fuzzy regression discontinuity design, provides convincing causal estimates of the program’s effects, much as a randomized controlled trial would.</p>
<p>The Bassok et al. study assesses the QRIS in North Carolina, one of the oldest programs in the country, which began in 1999 and has been operating in its current form since 2005. North Carolina spends more than any other state on its QRIS, more than $13 million yearly. The system includes well-defined quality standards linked to financial incentives; support for program improvement through technical assistance and local partnerships; regular quality monitoring and accountability; and easily accessible quality information provided to parents.<sup class="endnote-pointer">9</sup> While participation in QRIS is voluntary in most states, in North Carolina all non-religious programs are automatically enrolled at the lowest ranking when they become licensed. Programs can then volunteer to be assessed for higher rankings.</p>
<p>North Carolina’s Division of Child Development and Early Education rates programs on a scale of one to five stars. This rating comes from subscales for “education standards,” which include the education and experience of administrators, lead teachers, and the overall teaching staff; for “program standards,” which include a variety of structural measures such as staff-child ratios and square footage requirements as well as scores on the observational tool, the ERS; and for meeting at least one of a variety of other education or programmatic criteria, such as using a developmentally appropriate curriculum. A program’s scores on each of these measures combine to determine its overall rating—one to five stars.</p>
<p>One mechanism through which QRIS could drive quality is that programs seeing that they got lower ratings work on improving their scores. In North Carolina, ECE programs receive higher per-student reimbursements for subsidy-eligible children for every additional star they earn, in theory creating some incentive for program improvement. These increases vary by county and by the age of children served but, in most cases, they are substantial. The researchers, comparing similar programs that received a full point lower ranking due to a just slightly lower ERS rating, find that these slightly-lower scoring programs later earn ERS quality scores that are even higher than programs that received a higher star ranking due to a slightly higher ERS score initially. This result provides evidence that programs respond to the accountability system by improving their practice as measured by the ERS. Such improvement was quite concentrated. That is, the researchers did not find improvements on other measures such as the education and experience of workers.</p>
<p>A second mechanism through which QRIS could improve the quality of care that young children receive is through parents’ care choices. If parents, when given the opportunity, choose higher-rated programs, then more children will attend higher-quality programs, even if the programs themselves do not improve. Using the same approach, the researchers find that in North Carolina parents do respond to the ratings. Programs that got a lower rating for just marginally lower performance on the ERS saw their enrollments drop relative to similar programs. In some areas parents have few choices among programs, and in those places the researchers did not find these enrollment effects; instead, the effects were concentrated in areas where parents had a choice of care. As such, this mechanism is unlikely to work everywhere, but the results show that it can work in some, more densely populated areas and can be a driving force for improvement.</p>
<p>Overall, we still do not know the effects of QRIS on children’s long-run trajectories and on the substantial differences in early childhood learning opportunities across groups. Nonetheless, this new research demonstrates that a well-designed QRIS can both encourage programs to improve and provide parents with information that they value in making choices for their children’s care. Combined with regulations that set an acceptable floor for quality, these programs can help to create a higher-quality early education system that allows for some diversity of offerings as well as the local control and parental choice that have been hallmarks—whether for better or for worse—of the US education system and, particularly, the US early education system.</p>
<hr />
<p><em>The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. She is currently not an officer, director, or board member of any organization with an interest in this article.</em></p>
<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/08/ES_20180808_EducationPrograms.jpg?w=309" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/08/ES_20180808_EducationPrograms.jpg?w=309"/></a></div>
]]>
</content:encoded>
					
		
		
		<enclosure url="https://www.brookings.edu/wp-content/uploads/2018/08/ES_20180808_EducationPrograms.jpg?w=309" type="image/jpeg" />
		<atom:category term="Early Childhood Education" label="Early Childhood Education" scheme="https://www.brookings.edu/topic/early-childhood-education/" /></item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/school-policies-and-the-success-of-advantaged-and-disadvantaged-students/</feedburner:origLink>
		<title>School policies and the success of advantaged and disadvantaged students</title>
		<link>http://webfeeds.brookings.edu/~/562087999/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[David Figlio, Krzysztof Karbownik]]></dc:creator>
		<pubDate>Thu, 02 Aug 2018 09:00:16 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=530855</guid>
					<description><![CDATA[executive summary We make use of matched birth-school administrative data from Florida, coupled with an extensive survey of instructional policies and practices, to observe which policies and practices are associated with improved test performance for relatively advantaged students in a school, for relatively disadvantaged students in a school, for both, and for neither.  We consider&hellip;]]>
</description>
										<content:encoded><![CDATA[<p>By David Figlio, Krzysztof Karbownik</p><h2>Executive summary</h2>
<p>We make use of matched birth-school administrative data from Florida, coupled with an extensive survey of instructional policies and practices, to observe which policies and practices are associated with improved test performance for relatively advantaged students in a school, for relatively disadvantaged students in a school, for both, and for neither. </p>
<p>We consider twelve policies and practices from this survey that are neither highly common nor challenging to implement, and we find that in seven of twelve cases, the policy/practice is associated with markedly different fifth grade test score outcomes for advantaged versus disadvantaged students. For example, sponsoring Saturday school is associated with significant increases in test performance for disadvantaged students but reductions in test performance for advantaged students. While these are not causal estimates of the relationships – establishing causality would require either an experiment or a natural experiment – they do make clear that school policies and practices that are associated with better outcomes for some students might be associated with worse outcomes for others.</p>
<p>Our bottom line is this: Policies and practices that might be successful overall could actually help one group of students while harming another, so care should be taken when evaluating them to see whether they are benefiting all, some, or no students – and whom they are benefiting. Schools might do a better job of ensuring success for all students the more they investigate how their practices are affecting different groups of students. We hope that this analysis will shed some light on policies and practices worth evaluating more rigorously, and will encourage careful analysis of the heterogeneous effects of policies and practices.</p>
<h2>Introduction</h2>
<p>Socio-economic differences in student performance are well known and extensively documented.<sup class="endnote-pointer">1</sup> As just one example: Nationally, 13-year-old students whose parents are college graduates scored over four-fifths of a standard deviation higher on the mathematics assessment of the National Assessment of Educational Progress (NAEP) in 2012 than did those whose parents did not finish high school.<sup class="endnote-pointer">2</sup> In science in 2015, the same gap was also over four-fifths of a standard deviation.<sup class="endnote-pointer">3</sup> Likewise, the test score gap between children from rich and poor families in the United States has widened over time, and is now over a full standard deviation.<sup class="endnote-pointer">4</sup></p>
<p>Important recent work by Reardon and his collaborators shows that not only test scores<sup class="endnote-pointer">5</sup> but also racial test score gaps<sup class="endnote-pointer">6</sup> vary dramatically across American school districts. In the latter paper, Reardon and coauthors report that while racial/ethnic test score gaps average around 0.6 standard deviations across all school districts, in some districts the gaps are almost nonexistent while in others they exceed 1.2 standard deviations. There are many potential explanations for this cross-district variation in achievement gaps, including racial differences in socio-economic status, differences in racial/ethnic segregation, differences in school and neighborhood quality, and the like, and the evidence to date about the leading causes of this variation is descriptive, rather than causal. Nonetheless, the fact remains that racial/ethnic and socio-economic differences are far larger in some places than in others. These differences also correlate with important long-run economic outcomes, as documented in new work by Chetty and co-authors, who find suggestive evidence that “quality of schools – as judged by outputs rather than inputs – plays a role in upward mobility.”<sup class="endnote-pointer">7</sup></p>
<p>Moreover, there exists tremendous variation in school quality within school districts.<sup class="endnote-pointer">8</sup> And there are some schools where relatively advantaged students do well but relatively disadvantaged students do poorly, other schools where the reverse is true, other schools where both relatively advantaged and relatively disadvantaged students do well, and still other schools where both relatively advantaged and relatively disadvantaged students do poorly.<sup class="endnote-pointer">9</sup> Furthermore, there exist considerable differences in these patterns across schools within the same school district.<sup class="endnote-pointer">10</sup></p>
<p>The next logical question is whether there are any school-level policies or practices that predict whether schools do particularly well with relatively advantaged students, with relatively disadvantaged students, with both, or with neither. To study this question persuasively, one would need either an experiment that randomly assigns students to schools with different sets of policies or practices, or a “natural experiment” caused by policy changes or a policy roll-out that affects some schools or areas differently from others. But a good first step is to correlate these performance measures with a broad and varied list of school policies and practices to observe the emerging patterns. Such an analysis can then help researchers and policymakers shine a light on individual policies and practices using more rigorous empirical methods. This is the purpose of the present report.</p>
<p>In this report, we make use of a remarkable survey carried out during the 1999-2000, 2001-02, and 2003-04 school years by Figlio, Goldhaber, Hannaway, and Rouse. Figlio and colleagues attempted to survey the complete population of school leaders in Florida regarding a wide range of school policies and practices, and achieved response rates between 70 and 80 percent in every survey round.<sup class="endnote-pointer">11</sup> We match these survey responses to a student-level dataset that combines children’s birth certificate data with their educational records. The Florida Departments of Education and Health merged the birth and education records for the purposes of this research agenda.</p>
<p>Being able to match children’s school records to their birth certificates provides new opportunities for a much more detailed measure of socio-economic advantage or disadvantage than can be typically observed from school records. We combine information on parental education levels, maternal age, marital status, and poverty status at the time of birth<sup class="endnote-pointer">12</sup> to construct a continuous index of socio-economic status at the time of birth.<sup class="endnote-pointer">13</sup> Using this information, we calculate school-level performance of relatively advantaged and relatively disadvantaged students.<sup class="endnote-pointer">14</sup> Because the children in the matched dataset were born between 1994 and 2001, the school leader survey response years correspond to when the students in the matched administrative data were either in elementary school or just before they entered elementary school.</p>
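<p>As an illustration only – the report's actual index construction is described in its endnotes, and the equal weighting used here is a hypothetical simplification – a continuous socio-economic index can be built by standardizing each birth-record measure and averaging the resulting z-scores:</p>

```python
import numpy as np

def ses_index(components):
    """Hypothetical sketch (not the authors' method): combine
    birth-record measures -- e.g., parental education, maternal
    age, marital status, non-poverty status -- into a single
    continuous index.

    components: 2-D array, one row per child, one column per
    measure, each coded so that higher values = more advantaged.
    """
    x = np.asarray(components, dtype=float)
    z = (x - x.mean(axis=0)) / x.std(axis=0)  # z-score each measure
    return z.mean(axis=1)                     # one index value per child

# Children can then be split into quartiles of the index, as the
# report does, to define "relatively advantaged" (top quartile)
# and "relatively disadvantaged" (bottom quartile) groups:
# cuts = np.quantile(idx, [0.25, 0.5, 0.75]); np.digitize(idx, cuts)
```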
<p>Using this matched dataset, we investigate the degree to which twelve popular school-level policies and practices correlate with the relative success of disadvantaged students, advantaged students, both, or neither.</p>
<h2>School-level policies and practices considered</h2>
<p>The surveys carried out by Figlio, Goldhaber, Hannaway, and Rouse include dozens of questions. For this initial investigation of the data, we limit ourselves to the twelve questions that have considerable variation in the frequency with which the policy is carried out. Many policies and practices are implemented by almost all schools, and many by very few; we want to look at those more in the middle of the spectrum.<sup class="endnote-pointer">15</sup> Because our outcome of interest is the fifth-grade statewide test score, we limit the analysis to elementary schools.</p>
<p>While the surveys inquired about many policies and practices that were very frequently or rarely cited, the policies and practices in the middle of the frequency spectrum are:</p>
<p>(1) Does this school use monetary rewards (including one-time cash bonus) to reward teacher performance, independent of incentives used by the district?</p>
<p>(2) Does this school use block scheduling?</p>
<p>(3) Does this school make use of subject matter specialist teachers?</p>
<p>(4) Does this school use looping (to keep students with teachers and classmates across years)?</p>
<p>(5) Does this school use multi-age classrooms?</p>
<p>(6) Does this school assign an aide to low-performing teachers to improve their performance?</p>
<p>(7) Does this school provide sponsored summer school?</p>
<p>(8) Does this school extend the school year beyond what the state and district require?</p>
<p>(9) Does this school sponsor Saturday school?</p>
<p>(10) Does this school require summer school for grade advancement of low-performing students?</p>
<p>(11) Does this school require before-school or after-school tutoring of low-performing students?</p>
<p>In addition, we constructed a twelfth school policy/practice regarding the required number of days of teacher professional development; to keep it parallel with these dichotomous measures, we measure whether the school is above or below the median in the number of required professional development days for teachers.</p>
<p>The survey intentionally did not define these terms, but rather left it to respondents to answer the questions as they saw fit.</p>
<h2>Analysis and results</h2>
<p>In this analysis, we look separately at students who are relatively advantaged (top quartile of the socio-economic distribution) and relatively disadvantaged (bottom quartile of the socio-economic distribution), and focus on schools that are reasonably heterogeneous – those with at least ten observed students in each socio-economic quartile. (All told, 1,223 public elementary schools have at least ten observed students in each socio-economic quartile across observed school years.) We first regress fifth grade statewide test scores on a series of background variables (race, ethnicity, country of birth, gender, gestational age, birth weight, and month and year of birth) and then compare these “residualized” test scores across schools that either offer the policy/practice or that do not, and do so separately for relatively disadvantaged and relatively advantaged students. Because test scores differ greatly across race-ethnicity-nativity groups, and these characteristics are permanent for each child, we prefer to “net out” any variation in achievement that does not come from either socio-economic status or school policies. While we recognize that racial and ethnic composition are themselves also indicators of socio-economic status and affluence, we want to try to get at the portion of socio-economic status that is not associated with race and ethnicity. We estimate and present a multivariable analysis, in which we consider a “horse race” between the twelve policies and practices; sometimes schools carry out two or more of these policies and practices, and we want to see which seem to be more strongly associated with test scores for different groups of students.<sup class="endnote-pointer">16</sup></p>
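<p>The residualize-then-compare procedure described above can be sketched in a few lines. This is a simplified illustration rather than the authors' code: the background controls are collapsed into a generic design matrix <em>X</em>, and the function names are hypothetical.</p>

```python
import numpy as np

def residualize(scores, X):
    """Regress standardized test scores on background covariates
    (race, ethnicity, nativity, gender, birth measures, etc.,
    stacked as columns of X) via OLS, and return the residuals:
    the part of each score the covariates do not explain."""
    X1 = np.column_stack([np.ones(len(scores)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X1, scores, rcond=None)
    return scores - X1 @ beta

def policy_gap(resid, has_policy):
    """Mean residualized score among students in schools with the
    policy minus the mean among those without it; with scores
    standardized, this reads as a fraction of a standard deviation."""
    has_policy = np.asarray(has_policy, dtype=bool)
    return resid[has_policy].mean() - resid[~has_policy].mean()

# Running policy_gap separately for bottom-quartile and top-quartile
# students shows whether a practice is associated with gains for one
# group and losses for the other.
```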
<p>The figures below present the fifth grade test score differences between schools that offer the policy/practice and those that do not, estimated separately for relatively disadvantaged and relatively advantaged students. The blue bars (to the left of each pair of bars) present the estimated relationships for the least advantaged students, and the red bars (to the right of each pair of bars) present the estimated relationships for the most advantaged students. We arrange the policies and practices based on the average socio-economic status of the schools that adopt these practices; schools educating the least advantaged students are the most likely to sponsor Saturday school, while schools educating the most advantaged students are the most likely to offer monetary incentives for teachers. To make the graphs more readable, we split the policies and practices into two groups of six, with the policies and practices that tend to be adopted by relatively disadvantaged schools presented in the first graph and the policies and practices that tend to be adopted by relatively advantaged schools presented in the second graph. Test scores are standardized and residualized as noted above, and we present estimated differences in terms of percentage of a standard deviation.</p>
<p><img class="alignnone size-article-inline lazyautosizes lazyload" src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1" sizes="723px" srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1 768w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?fit=600%2C9999px&amp;ssl=1 600w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?fit=400%2C9999px&amp;ssl=1 400w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?fit=512%2C9999px&amp;ssl=1 512w" alt="Figure 1a" data-src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1" data-srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1 768w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?fit=600%2C9999px&amp;ssl=1 600w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?fit=400%2C9999px&amp;ssl=1 400w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1a.png?fit=512%2C9999px&amp;ssl=1 512w" /><a href="#_ednref1" name="_edn1"></a></p>
<p><img class="alignnone size-article-inline lazyautosizes lazyload" src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1" sizes="723px" srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1 768w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?fit=600%2C9999px&amp;ssl=1 600w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?fit=400%2C9999px&amp;ssl=1 400w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?fit=512%2C9999px&amp;ssl=1 512w" alt="Figure 1b" data-src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1" data-srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1 768w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?fit=600%2C9999px&amp;ssl=1 600w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?fit=400%2C9999px&amp;ssl=1 400w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure1b.png?fit=512%2C9999px&amp;ssl=1 512w" /></p>
<p>To help interpret these figures, consider the practice at the very left of the top graph – whether a school sponsors Saturday school, the practice most disproportionately associated with schools educating disadvantaged students. We find that the most disadvantaged students have 5.3 percent of a standard deviation higher test scores in schools that sponsor Saturday school than in schools that do not. But the difference in test scores for the most advantaged students goes the other way: The most advantaged students have 1.7 percent of a standard deviation <em>lower</em> test scores in schools that sponsor Saturday school than in schools that do not. As a consequence, the difference between the estimated relationships for disadvantaged versus advantaged students is 7 percent of a standard deviation.</p>
<p>This comparison makes clear that it might be challenging for a school to achieve high performance for all students – at least with the same set of policies and practices. While we are not estimating a causal relationship, and there are many unobserved reasons why a school might choose to sponsor Saturday school, it’s still the case that we observe that disadvantaged students’ test scores are higher in schools that sponsor Saturday school, while advantaged students’ test scores are lower in these same schools.</p>
<p>Indeed, consider the following scatterplot, in which each point represents a different Florida elementary school. We plot test scores for the most advantaged students on the horizontal axis and those for the least advantaged students on the vertical axis. The blue dots are schools that do not sponsor Saturday school, and the orange dots are schools that do. In general, schools that do better with one group of students do better with the other group of students. But for any given level of advantaged-student test scores, relatively disadvantaged students do better in schools that sponsor Saturday school than in those that do not.</p>
<p><img class="alignnone size-article-inline lazyautosizes lazyload" src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1" sizes="723px" srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1 768w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?fit=600%2C9999px&amp;ssl=1 600w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?fit=400%2C9999px&amp;ssl=1 400w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?fit=512%2C9999px&amp;ssl=1 512w" alt="Figure 2" data-src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1" data-srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?w=768&amp;crop=0%2C0px%2C100%2C9999px&amp;ssl=1 768w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?fit=600%2C9999px&amp;ssl=1 600w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?fit=400%2C9999px&amp;ssl=1 400w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/08/Figure2.png?fit=512%2C9999px&amp;ssl=1 512w" /></p>
<p>Looking more broadly, we observe that the policies and practices we consider show statistically significantly different associations for advantaged and disadvantaged students in seven of the twelve cases. In five of these seven instances, the estimated associations go in opposite directions for advantaged and disadvantaged students, whereas for the sixth and seventh (subject-matter specialist teachers and multi-age classrooms) the estimated associations are negative for both advantaged and disadvantaged students, but much larger (and statistically distinct from zero) for advantaged students in the case of subject-matter specialist teachers, and for disadvantaged students in the case of multi-age classrooms. Other cases show differences as well: Required summer school for low-performers is associated with worse test scores for advantaged students, but not for disadvantaged students; aides for low-performing teachers and more professional development are associated with worse test scores for advantaged students but better test scores for disadvantaged students; and sponsored summer school seems to have a positive relationship for advantaged students and a negative one for disadvantaged students.<sup class="endnote-pointer">17</sup></p>
<p>Occasionally, we do see a practice that is associated with improved (or reduced) test scores for <em>both</em> advantaged and disadvantaged students: In addition to the cases of multi-age classrooms and subject-matter specialist teachers in elementary school, the estimated relationships point in the same direction (but are not statistically distinct from zero) in the case of extended school year (negative association for both). Again, while these are not causal estimates – establishing causality would require either an experiment or a natural experiment, as mentioned above – they do make clear that school policies and practices that are associated with better outcomes for some students might be associated with worse outcomes for others.</p>
<h2>Conclusion</h2>
<p>Our bottom line is this: Policies and practices that might be successful overall could actually help one group of students while harming another, so care should be taken when evaluating them to see whether they are benefiting all, some, or no students – and whom they are benefiting. Schools might do a better job of ensuring success for all students the more they investigate how their practices are affecting different groups of students. We hope that this analysis will shed some light on policies and practices worth evaluating more rigorously, and will encourage careful analysis of the heterogeneous effects of policies and practices.</p>
<hr />
<p><em>The authors did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. Neither author is currently an officer, director, or board member of any organization with an interest in this article.</em></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="http://webfeeds.brookings.edu/-/562352096/0/brookingsrss/series/evidencespeaks.jpg" type="image/jpeg" />
		<atom:category term="K-12 Education" label="K-12 Education" scheme="https://www.brookings.edu/topic/k-12-education/" />
<feedburner:origEnclosureLink>https://www.brookings.edu/wp-content/uploads/2018/08/teacher_0011.jpg?w=248</feedburner:origEnclosureLink>
</item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/can-schools-commit-malpractice-it-depends/</feedburner:origLink>
		<title>Can schools commit malpractice? It depends.</title>
		<link>http://webfeeds.brookings.edu/~/560850768/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Mark Dynarski]]></dc:creator>
		<pubDate>Thu, 26 Jul 2018 09:00:24 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=529787</guid>
					<description><![CDATA[Recently seven students attending public schools in Detroit sued the state of Michigan in a Federal district court. Shortages of materials, not having skilled teachers, and poor conditions of their school buildings had deprived them of access to literacy, which, they argued, is essential in order to enjoy the other rights enumerated in the Constitution. &hellip;<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/07/ES_7.24.2018_Literacy.jpg?w=290" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/07/ES_7.24.2018_Literacy.jpg?w=290"/></a></div>
]]>
</description>
										<content:encoded><![CDATA[<p>By Mark Dynarski</p><p>Recently, seven students attending public schools in Detroit sued the state of Michigan in a Federal district court. They argued that shortages of materials, a lack of skilled teachers, and the poor condition of their school buildings had deprived them of access to literacy, which is essential to enjoying the other rights enumerated in the Constitution. </p>
<p>From a research perspective, the case is interesting because the students (their law firm, at least) argued explicitly that evidence-based literacy programs are available and codified by the Institute of Education Sciences’ ‘What Works Clearinghouse,’ and, indeed, the suit names various reading programs supported by evidence that the state of Michigan did not use in Detroit schools.</p>
<p>The judge agreed that conditions in Detroit schools were ‘nothing short of devastating,’ but he ruled against the students because, he argued, a right of access to literacy is not among the fundamental rights in the Constitution. The case is being appealed to a Federal circuit court.</p>
<p>The students in the Michigan lawsuit were making their case at the Federal level, and for a judge to create a new right at that level is a tough ask. Why not argue at the state level that the school district (or, in this case, the state managing the district) was guilty of malpractice? The answer is that the chances of winning at that level are minuscule. The K-12 education system has been nearly immune from claims of malpractice. A recent scan covering the past 40 years found 80 cases alleging education malpractice, and only one was successful (and its success could be traced to particular wording in the Montana state constitution).<sup class="endnote-pointer">1</sup></p>
<p><a href="#_edn1" name="_ednref1"></a>Doctors, lawyers, accountants, and financial advisers can be and are sued for malpractice. (One study reported that about half of surgical specialists were sued for malpractice at least once within a six-year period.<sup class="endnote-pointer">2</sup>) Unlike doctors, lawyers, and other service providers, individual teachers are hired and monitored by districts, and most decisions about curricula and materials are made at the district level. But why not sue districts? The answer is that such suits will probably lose. The reason is succinctly put by DeMitchell and DeMitchell: “While educators can be held liable for infringing on students’ rights and for negligence that causes students physical harm, <em>educators do not have a legal responsibility to educate students</em>.”<sup class="endnote-pointer">3</sup> (Emphasis added.)</p>
<p>That districts and their schools do not have a legal responsibility to educate students might come as a surprise to parents and taxpayers who gave K-12 public schools $634 billion a year as recently as 2014, a sum of money that might lead them to believe that the responsibility to educate students is at least implicit if not obvious.<sup class="endnote-pointer">4</sup> It raises the question of how districts came to be immune.</p>
<h2>The power of precedent</h2>
<p>Part of the answer is that courts follow precedent, the doctrine of <em>stare decisis</em>. Courts have a logical structure for assessing whether malpractice occurred: did the provider have a duty to deliver a service (a ‘duty of care’), did the provider breach that duty, and was the breach the ‘proximate cause’ of injury or damages? Answering these questions is straightforward for simple cases in medicine. For example, a doctor prescribes a drug, that drug injures a patient, and the information about the drug specifically notes that it should not be used for patients with that condition. The doctor had a duty to the patient, breached it, and an injury resulted.</p>
<blockquote class="right-pullquote"><p>Malpractice is a failure to deliver what a reasonable professional would consider an appropriate service.</p></blockquote>
<p>Now consider a student who recently graduated from high school with no condition, such as dyslexia, that would have hampered his ability to read – yet he cannot read. Did his schools commit malpractice? Two landmark cases in the seventies focused on nearly exactly this situation, one in California and the other in New York. Together, the two cases have provided a basis for courts to reject education-malpractice claims for decades.</p>
<p>In 1976, a student, Peter W, brought a lawsuit against the San Francisco school district, alleging that the district committed malpractice because it graduated him from high school when he could only read at a fifth-grade level. A California appeals court ultimately ruled against Peter W. The court concluded it could find no workable basis for imposing a ‘duty of care’ on the school district.<sup class="endnote-pointer">5</sup> The court also expressed reluctance to identify schools as a proximate cause of Peter W’s poor education outcomes. The court said that an education is the product of a host of factors, and it was not possible to identify how much each contributed. Even if the court had found that schools had a duty of care, Peter’s malpractice claim would have failed because his poor reading ability could not be causally linked to ineffective teaching.</p>
<p>The court in the 1979 New York case, Donohue vs. Copiague School District, also declined to find malpractice, but for a different reason. The court first noted that “If doctors, lawyers, architects, engineers and other professionals are charged with a duty owing to the public whom they serve, it could be said that nothing in the law precludes similar treatment of professional educators.” Having found that a duty of care exists for schools, the court then argued that it was not up to the court to decide on these issues. The court said that finding that schools committed malpractice “would constitute blatant interference with the responsibility for the administration of the public school system lodged by Constitution and statute in school administrative agencies.”<sup class="endnote-pointer">6</sup></p>
<p>The implications of the two cases are at odds. The California case did not find a duty of care but did not express reluctance to ‘interfere’ with school administration. The New York case did find a duty of care but expressed reluctance to interfere with school administration. However, between these two cases, future courts had plenty of basis to dismiss malpractice claims&#8211;either there’s no duty of care (citing California) or courts should not get involved in these kinds of claims (citing New York).</p>
<p>But from a perspective of forty years of hindsight, courts now have ‘interfered’ on a wide range of school-related issues. As of this writing, 46 states have had court suits about education funding.<sup class="endnote-pointer">7</sup> In one of those suits, for example, the New Jersey Supreme Court ordered the state to ensure schools in disadvantaged districts operated full-day kindergartens, provided support to develop literacy in early readers, encouraged parent involvement, limited class sizes in elementary schools, and provided a range of social and support services.<sup class="endnote-pointer">8</sup> So much for not ‘interfering’ with school administration. Special education, accommodations for students with disabilities, drug testing in schools, vouchers for private schools—courts have tackled all these topics.</p>
<h2>Teachers can be linked to education outcomes</h2>
<p>The past two decades also have seen the emergence of a broad array of studies that are able to identify causal links between schools and teachers on the one hand and education outcomes on the other. I wrote previously about studies that identified causal links between education spending and education outcomes.<sup class="endnote-pointer">9</sup> More recently, several articles have argued that the student test scores required by No Child Left Behind and the Every Student Succeeds Act, coupled with their use to rank teachers through ‘value-added models,’ can be a basis for proving malpractice.<sup class="endnote-pointer">10</sup> What these scholars emphasize is that so-called ‘value-added’ models yield causal estimates of teacher contributions to a child’s education, and therefore can be evidence that some teachers are ineffective at educating children.</p>
<p>A value-added model uses data on test scores, student characteristics, and classroom characteristics, and estimates how much individual teachers contribute to the increase in test scores from one year to the next. Because the models account for student and classroom characteristics, such as whether students are from disadvantaged households or are learning English as a second language, in principle they create a level playing field for each teacher. If a school district is aware that a teacher is ineffective based on estimates from these models, yet does not remove the teacher from the classroom, and indeed continues to assign students to the teacher, the argument is that the district can be found to have committed malpractice and be required to pay damages for deficient learning outcomes.</p>
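<p>To make the mechanics concrete, here is a minimal, purely illustrative sketch of the idea: regress current-year scores on prior-year scores plus one indicator variable per teacher, so each teacher's coefficient becomes a "value-added" estimate. All numbers below are simulated, and real district models adjust for many more student and classroom characteristics than this.</p>

```python
# Illustrative value-added sketch with simulated data (not any district's
# actual model). Each teacher gets a dummy variable; its estimated
# coefficient is that teacher's contribution to score growth.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, per_class = 20, 25
true_effect = rng.normal(0, 0.2, n_teachers)           # latent teacher effects
teacher = np.repeat(np.arange(n_teachers), per_class)  # student -> teacher map
prior = rng.normal(0, 1, teacher.size)                 # prior-year score
score = 0.8 * prior + true_effect[teacher] + rng.normal(0, 0.5, teacher.size)

# Design matrix: prior score plus one dummy column per teacher
dummies = (teacher[:, None] == np.arange(n_teachers)).astype(float)
X = np.column_stack([prior, dummies])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
value_added = beta[1:]  # one estimate per teacher

# Estimates track the true effects, but with noise; rankings are imperfect
print(np.corrcoef(true_effect, value_added)[0, 1])
```

<p>Even in this clean simulation, with only 25 students per teacher, the correlation between true and estimated effects is well below one, which previews the role of statistical uncertainty in any courtroom use of such rankings.</p>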
<p>Hutt and Tang (2013) consider and reject arguments districts might advance about using value-added models to identify ineffective teachers. For example, they note that many states and districts count only a fraction of the value-added score in rating teachers (for example, currently in New Jersey, 35 percent of a teacher’s evaluation depends on test scores). But not basing the entire ranking on the value-added score does not rule out using the value-added score as a causal estimate of teacher effectiveness. It means only that states and districts can admit other factors into teacher rankings. More generally, districts could defend their continued employment of ineffective teachers by arguing that it is customary (‘everyone does it’), or that ineffective teachers need to be tolerated because replacing them is costly and burdensome. To the first point, Hutt and Tang note that districts essentially would be arguing that they are not committing malpractice because lots of districts commit malpractice. To the second point, the district, as the employer, would be blaming its own hiring and firing processes for the ineffective teachers it hires and does not fire.</p>
<p>Technical arguments that value-added models do not represent <em>causal</em> effects of teachers ultimately may be where battles are fought. Reardon and Raudenbush point out that in the formal science of causal inference, estimates of teacher effectiveness from value-added models are causal only if it is assumed that students essentially are randomly assigned to teachers.<sup class="endnote-pointer">11</sup> Violations of the assumption undermine the claim of causality, and it is easy to imagine, for example, that principals do not randomly assign students to teachers.</p>
<blockquote class="pullquote"><p>A recent scan covering the past 40 years found 80 cases alleging education malpractice, and only 1 was successful.</p></blockquote>
<p>Also, value-added models are statistical constructs. As the public saw before with the tobacco industry and lung cancer, and as they are seeing now with debates about whether human activity is contributing to global warming, estimates from statistical models are always subject to uncertainty. The statement ‘X causes Y’ is really a statement that ‘the evidence is strong that X causes Y,’ and observers can attach different meanings to ‘strong.’ If a district uses a value-added model to rank its teachers, that model will yield estimates for each teacher that have uncertainty attached to them. How much uncertainty depends on factors such as classroom sizes, characteristics of students, the number of available years of data, and the kind of statistical model that is estimated.</p>
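<p>As a toy illustration of that last point, consider how the spread of a teacher's estimated effect shrinks as more student data accumulate. The effect size and noise level below are hypothetical and simulated, not drawn from any district's model.</p>

```python
# Toy simulation: the spread (standard error) of a teacher's estimated
# effect shrinks roughly with the square root of the number of students.
import numpy as np

rng = np.random.default_rng(1)
true_effect, noise_sd = 0.10, 0.5   # hypothetical effect size and score noise

def spread_of_estimate(n_students, n_sims=2000):
    # Each simulation: average score gain across one teacher's students
    draws = true_effect + rng.normal(0, noise_sd, (n_sims, n_students)).mean(axis=1)
    return draws.std()

one_year = spread_of_estimate(25)     # one class of 25 students
four_years = spread_of_estimate(100)  # four years of such classes
print(one_year, four_years)           # the second spread is about half the first
```

<p>Quadrupling the data only halves the uncertainty, which is why the number of available years of data matters so much to how confidently a low ranking can be interpreted.</p>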
<p>But courts applying a ‘reasonable person’ standard might conclude that a teacher who has been among the lowest-ranked teachers in each of the preceding years based on a value-added model is indeed ineffective, especially if that finding is corroborated by other evidence such as classroom observations and the quality of the teacher’s lesson plans. Courts deal with uncertainty all the time, and a preponderance of evidence means only that the evidence points to the same conclusion, not that it is perfect.<sup class="endnote-pointer">12</sup></p>
<h2>Looking ahead</h2>
<p>Malpractice suits serve a dual purpose of providing remedies to persons that have been harmed and deterring the harm itself. If more education malpractice cases were to be judged in favor of plaintiffs (students), the remedy presumably would be the value of lost earnings arising from weaker academic skills. Chetty <em>et al. </em>studied relationships between teacher value-added and adult outcomes and found clear evidence that teacher value-added was related to whether students attended college, the quality of colleges attended, and future earnings. Their estimates and estimates from the burgeoning literature on value-added seem like a reasonable basis for courts to set remedies.</p>
<p>In addition to the direct costs of litigation and paying damages, the threat of malpractice may heighten scrutiny of value-added scores <em>per se</em>. Districts could take defensive actions such as not making value-added scores public or otherwise making the scores difficult to obtain. As public organizations, however, states and districts are under pressure from various ‘freedom of information’ laws to make scores public, and appearing to dodge scrutiny may be unappealing.</p>
<p>Scrutiny also might generate defensive reactions already alleged to have happened when test scores began to be used for school and teacher accountability. For example, districts might encourage teachers to narrow curricula to focus on reading and math, spend more time ‘teaching to the test,’ and be reluctant to test innovative methods because of their potential downside. Whether and how much this happens are empirical questions. A recent rigorous analysis of medical-malpractice reforms found that doctors improved their quality of care when the standards of care were raised but did not later reduce quality of care if the legal threat associated with malpractice was lowered by a reform like capped damages.<sup class="endnote-pointer">13</sup> If education followed the same pattern, districts and teachers would put more attention on teaching quality if malpractice risks associated with employing ineffective teachers escalated.</p>
<p>How might districts and schools take positive actions to deter being found to have committed malpractice? Obviously, focusing on ineffective teachers is part of deterrence. Districts could tighten their criteria for dismissing ineffective teachers (as the District of Columbia Public Schools has done with its IMPACT teacher-rating system), provide high-quality professional development for their teachers, and promote the use of sound instructional approaches in classrooms. Research plays an important role here. Malpractice is a failure to deliver what a reasonable professional would consider an appropriate service. Research showing various practices are more effective than other practices is part of the definition of the ‘appropriate service.’</p>
<p>The seven students in Detroit were arguing that their schools should have grounded literacy instruction in evidence-based practices—what schools delivered was not an appropriate service. Even if courts ultimately decide there is no fundamental right to literacy, the idea that teaching and instruction should be guided by research and evidence is sound.</p>
<hr />
<p><em>The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. He is currently not an officer, director, or board member of any organization with an interest in this article.</em></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="https://www.brookings.edu/wp-content/uploads/2018/07/ES_7.24.2018_Literacy.jpg?w=290" type="image/jpeg" />
		<atom:category term="K-12 Education" label="K-12 Education" scheme="https://www.brookings.edu/topic/k-12-education/" /></item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/evidence-on-new-york-city-and-boston-exam-schools/</feedburner:origLink>
		<title>Evidence on New York City and Boston exam schools</title>
		<link>http://webfeeds.brookings.edu/~/559466156/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Susan M. Dynarski]]></dc:creator>
		<pubDate>Thu, 19 Jul 2018 09:00:39 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=528818</guid>
					<description><![CDATA[New York City is wrestling with what to do with its exam schools. Students at Stuyvesant, Bronx Science, and Brooklyn Tech (the oldest exam schools) perform brilliantly and attend the best colleges. Their students score at the 99th percentile of the state SAT distribution (with Stuyvesant at the 99.9th percentile) and they account for the&hellip;<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/07/ES_7.18.18_Exam-School.jpg?w=305" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/07/ES_7.18.18_Exam-School.jpg?w=305"/></a></div>
]]>
</description>
										<content:encoded><![CDATA[<p>By Susan M. Dynarski</p><p>New York City is wrestling with what to do with its exam schools. Students at Stuyvesant, Bronx Science, and Brooklyn Tech (the oldest exam schools) perform brilliantly and attend the best colleges. Their students score at the 99<sup>th</sup> percentile of the state SAT distribution (with Stuyvesant at the 99.9<sup>th</sup> percentile) and they account for the majority of New York City students attending Harvard, Princeton and Yale.<sup class="endnote-pointer">1</sup> These are by any measure elite schools and are revered as jewels of the city school system. </p>
<p>But of the 900 freshmen who enrolled at Stuyvesant this past fall, just 10 were black.<sup class="endnote-pointer">2</sup> By state law, admission to these schools is via a specialized, voluntary, admissions test. Mayor Bill de Blasio and others complain that this admissions system perpetuates inequality in opportunity to an excellent education.</p>
<p>A lot of ink has been spilled over the exam schools, in popular news outlets as well as in academic journals. In this piece, I address a narrow but relevant question: the causal impact of these schools on the students who attend them.  Do the exam schools produce academically outstanding graduates, or do they simply admit stellar students and enjoy credit for their successes? I also briefly discuss alternative methods the city could use to dole out scarce seats at these over-subscribed schools.</p>
<p>Understanding the effectiveness of any school is a challenge because parents choose their children’s schools. In many cases, the school a child attends is tied to her address, so a parent effectively chooses a school when she picks a residence. In places like New York and Boston, which have district-wide choice, families can choose from dozens of public schools, including charters, magnets and exam schools. And there are private schools for those who can afford them or who have vouchers to subsidize the cost.</p>
<p>Because parents have choices, some schools are filled with students (say, the children of well-educated, highly-motivated parents) who would perform well in almost any setting. This pattern could mislead us into thinking such schools provide an exemplary education, when the truth is they simply attract strong students.</p>
<p>This is selection bias, the greatest challenge in evaluating the effectiveness of schools. Stuyvesant High School is filled with smart students who might succeed anywhere. When those students do well, is it because of the school or the students or both?</p>
<p>In the case of exam schools, we have selection bias on steroids. Students who enter Stuyvesant have middle-school test scores a full two standard deviations above the city mean &#8211; that is, they score higher than 95% of the students in the city’s public schools. How can we possibly disentangle the effect of the exam schools in the face of such massive differences in baseline achievement?</p>
<p>To overcome this challenge, researchers have made use of the tests that make these <em>exam</em> schools. By state law, entrance to the exam schools in New York is determined by a student’s score on the Specialized High School Admissions Test (SHSAT). A student who scores high enough can win admission to Stuyvesant. A slightly lower score will get her into Bronx Science, and so on.<sup class="endnote-pointer">3</sup> Researchers have exploited these cutoffs to estimate the causal impact of exam schools on students’ academic achievement and college attendance.</p>
<p>The research method is called “regression-discontinuity” design. The key to this approach is that it’s essentially random whether someone ends up right above or right below the cutoff. By comparing students just on each side of the cutoff, we can capture the causal impact of the school on student outcomes.</p>
<p>Of course, it’s not at all random that some students have very high scores and others very low scores, and of course more of those with high scores will get into the exam schools. That’s exactly what we see in New York. What regression-discontinuity analysis relies on is the large, discontinuous <em>jump</em> in exam-school attendance right at the cutoff scores. A score a smidgen above the cutoff guarantees admission, while a score a smidgen below yields rejection. These smidgens could be the result of random variation in the test or in how a student is feeling on testing day.</p>
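<p>A minimal simulation can show how the design works. Everything below is synthetic and illustrative only: a made-up admissions score centered at a cutoff of zero, and an outcome with a built-in jump of 0.5 at the cutoff that the regression-discontinuity comparison recovers.</p>

```python
# Illustrative regression-discontinuity sketch on simulated data: fit
# separate lines on each side of the cutoff within a bandwidth, then
# compare their values at the cutoff to estimate the jump.
import numpy as np

rng = np.random.default_rng(2)
score = rng.uniform(-50, 50, 5000)   # centered admissions-test score
admitted = score >= 0                # deterministic cutoff rule
outcome = 0.02 * score + 0.5 * admitted + rng.normal(0, 1, score.size)

bandwidth = 15
win = np.abs(score) <= bandwidth     # keep only students near the cutoff
x, y, d = score[win], outcome[win], admitted[win]

def value_at_cutoff(xs, ys):
    slope, intercept = np.polyfit(xs, ys, 1)
    return intercept                 # fitted line evaluated at score == 0

jump = value_at_cutoff(x[d], y[d]) - value_at_cutoff(x[~d], y[~d])
print(jump)                          # close to the built-in effect of 0.5
```

<p>Comparing the two fitted lines at the cutoff is the simplest version of the estimator; the published studies use more careful variants, but the logic is the same.</p>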
<p>Two sets of economists applied the regression-discontinuity methodology to the study of New York City’s exam schools. Atila Abdulkadiroğlu (Duke University), Joshua Angrist and Parag Pathak (both of Massachusetts Institute of Technology) published “The Elite Illusion: Achievement Effects at Boston and New York Exam Schools” in <em>Econometrica</em>, while Will Dobbie (Princeton) and Roland Fryer (Harvard) published “The Impact of Attending a School with High-Achieving Peers: Evidence from the New York City Exam Schools” in <em>American Economic Journal: Applied Economics</em>.<sup class="endnote-pointer">4, 5</sup></p>
<p>What do the researchers conclude? They find a precisely <em>zero effect</em> of the exam schools on college attendance, college selectivity, and college graduation. They put the data through the grinder, and that’s the unexciting result. Findings for Boston’s exam schools are the same, with a bonus finding of zero effect on test scores, including the SAT and PSAT. The authors note that it is still possible that the schools affect outcomes later in life, such as employment or wealth. But, if so, any such effect does not operate through attendance at an elite college.</p>
<blockquote class="right-pullquote"><p>What do the researchers conclude? They find a precisely <em>zero effect</em> of the exam schools on college attendance, college selectivity, and college graduation.</p></blockquote>
<p>These null results take a lot of the air out of the fraught discussions about the exam schools as gateways to economic opportunity. At least for the students just on the margin of admission to exam schools, the schools have no measurable effect on academic achievement or postsecondary outcomes. These students may well be happier, more engaged, or safer at these schools. But it is surprising we don’t see effects where so many expected them.</p>
<p>While a strength of the regression-discontinuity design is that we obtain causal effects for students who are just on the margin of admission, a weakness is that we can’t estimate effects for students who were certain of admission (the very top students) or those who don’t bother to apply under the current admissions regime.</p>
<p>The city, or at least the mayor, would like to diversify the exam schools. How can schools for gifted students be diversified? Fortunately, we have a lot of excellent research on this question.<sup class="endnote-pointer">6, 7</sup></p>
<p>The current admissions approach almost certainly shuts out many gifted, disadvantaged students. When we rely on parents, teachers, or students to make the decision to apply to a program for gifted students (by, for example, voluntarily signing up for a test), evidence indicates it is disadvantaged students who disproportionately get shut out.</p>
<p>But getting rid of the test is <em>not</em> the answer. Well-educated, high-income parents work the system to get their kids into these programs. The less transparent the approach (e.g., portfolios or teacher recommendations instead of a standardized test), the greater the advantage these savvy, connected parents have in winning the game.</p>
<p>An important step is to make the test <em>universal</em>, rather than one that students <em>choose</em> to take. In the dozen states where college admissions tests are universal (free, required, and given during school hours), many more students take the test and go on to college.<sup class="endnote-pointer">8</sup> The democratizing effect is strongest among low-income and nonwhite students. The same dynamic holds among young children: when testing for giftedness is universal, poor, Black and Hispanic children are far more likely to end up in gifted classes.<sup class="endnote-pointer">9</sup> A school district in Florida showed huge increases in the diversity of its gifted programs when it shifted to using a universal test, rather than recommendations from parents and teachers, to identify gifted students.</p>
<p>Rather than force students to take yet another test, New York could use its existing 7<sup>th</sup>- and 8<sup>th</sup>-grade tests to determine admission to the exam schools. These tests are, in principle, aligned to what is taught in the schools and so are an appropriate metric by which to judge student achievement. When so many are complaining about over-testing, why have yet another test for students to cram and sit for?</p>
<p>The city could go further toward diversifying the student body by admitting the top scorers <em>at each middle school</em> to the exam schools. Texas uses this approach to determine admission to the University of Texas flagships: the top slice (originally 10%, now lower) of students in each high school is automatically admitted to these selective colleges. This ensures that Texas’s elite colleges at least partially reflect the economic, ethnic and racial diversity of the state’s (highly segregated) school system. </p>
<p>This “top 10%” approach could lead some parents to scramble to enroll their children at lower-performing schools, where their kids are more likely to score at the top. This effect was indeed observed in Texas. This wouldn’t necessarily be a bad outcome, since it helps to integrate the system racially and economically.</p>
<p>Some might object that the standardized tests given to all students are insufficiently challenging to pick out the academic elite suited for the exam schools. This is called a “ceiling effect,” where a test can’t distinguish between high achievers and super-high achievers. The theory is plausible, but the data don’t support it in New York. According to the teams who conducted the analyses discussed earlier, students at Brooklyn Tech score about 1.5 standard deviations above the rest of the city, which is within the normal, measurable variation of the city’s standardized test. Even at Stuyvesant, students are within two standard deviations of the city on middle-school tests.</p>
<p>If the schools and city are intent upon keeping the specialized admissions test, they could administer it on a school day to <em>all</em> students who score above a given threshold on the universal middle-school tests.</p>
<p>New York City has a lot to grapple with in deciding the fate of its exam schools. Taking into account the scientific evidence on their performance would be a terrific way forward.</p>
<hr />
<p><em>The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. She is currently not an officer, director, or board member of any organization with an interest in this article.</em></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="https://www.brookings.edu/wp-content/uploads/2018/07/ES_7.18.18_Exam-School.jpg?w=305" type="image/jpeg" />
		<atom:category term="Education" label="Education" scheme="https://www.brookings.edu/topic/education/" /></item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/does-state-pre-k-improve-childrens-achievement/</feedburner:origLink>
		<title>Does state pre-K improve children’s achievement?</title>
		<link>http://webfeeds.brookings.edu/~/557996472/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Grover J. "Russ" Whitehurst]]></dc:creator>
		<pubDate>Thu, 12 Jul 2018 09:00:39 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=527439</guid>
<description><![CDATA[Executive Summary There is a strong and politically bipartisan push to increase access to government-funded pre-K. This is based on a premise that free and available pre-K is the surest way to provide the opportunity for all children to succeed in school and life, and that it has predictable and cost-effective positive impacts on children’s&hellip;]]>
</description>
										<content:encoded><![CDATA[<p>By Grover J. &quot;Russ&quot; Whitehurst</p><h2>Executive Summary</h2>
<p>There is a strong and politically bipartisan push to increase access to government-funded pre-K. This is based on a premise that free and available pre-K is the surest way to provide the opportunity for all children to succeed in school and life, and that it has predictable and cost-effective positive impacts on children’s academic success. </p>
<p>The evidence to support this premise is weak. There is only one randomized trial of a scaled-up state pre-K program with follow-up into elementary school. Rather than providing an academic boost to its participants as expected by pre-K advocates, achievement favored the control group by 2<sup>nd</sup> and 3<sup>rd</sup> grade. It is, however, only one study of one state program at one point in time. Do the findings generalize? The present study provides new correlational analyses that are relevant to the possible impact of state pre-K on later academic achievement. Findings include:</p>
<ul>
<li style="margin-bottom: 20px">no association between states’ federally reported scores on the fourth grade National Assessment of Educational Progress (NAEP) in various years and differences among states in levels of enrollment in their state’s pre-K program five years earlier than each of those years (when the fourth-graders taking NAEP would have been preschoolers);</li>
<li style="margin-bottom: 20px">positive associations (small and typically not statistically significant) between NAEP scores and earlier pre-K enrollment, when the previous analysis is conducted using NAEP scores that are statistically adjusted to account for differences between the states in the demographic characteristics of students taking NAEP; and</li>
<li style="margin-bottom: 20px">no association between differences among states in their gains in state pre-K enrollment and their gains in adjusted NAEP scores.</li>
</ul>
<p>Under the most favorable scenario for state pre-K that can be constructed from these data, increasing pre-K enrollment by 10 percent would raise a state’s adjusted NAEP scores by a little less than one point five years later and have no influence on the unadjusted NAEP scores.</p>
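<p>For readers who want to see the shape of such a calculation, here is a hypothetical sketch of the state-level regression behind a claim like this. The enrollment figures and scores below are simulated, with a slope built in to roughly mimic the most favorable scenario just described; they are not the study's actual data.</p>

```python
# Hypothetical illustration of the correlational design: regress states'
# adjusted NAEP scores on pre-K enrollment five years earlier. Simulated
# data with a built-in slope of ~0.09 NAEP points per enrollment point.
import numpy as np

rng = np.random.default_rng(3)
enrollment = rng.uniform(0, 80, 50)                    # % of 4-year-olds enrolled
naep = 240 + 0.09 * enrollment + rng.normal(0, 5, 50)  # adjusted grade-4 NAEP

slope, intercept = np.polyfit(enrollment, naep, 1)
print(10 * slope)   # predicted NAEP change for a 10-point enrollment increase
```

<p>The headline figure is just the fitted slope scaled to a 10-point enrollment change; because the analysis is correlational, the slope summarizes an association, not a causal effect.</p>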
<p>Unabashed enthusiasts for increased investments in state pre-K need to confront the evidence that it does not enhance student achievement meaningfully, if at all. It may, of course, have positive impacts on other outcomes, although these have not yet been demonstrated. It is time for policymakers and advocates to consider and test potentially more powerful forms of investment in better futures for children.</p>
<h2>Background</h2>
<p>States vary considerably in the percentage of their four-year-olds that enroll in the state’s pre-K program. In the 2011-2012 school year, for example, there were 10 states without a state pre-K program at all whereas the average enrollment among the 10 states with the largest programs was 52 percent. The state that led the list that year, Florida, enrolled 79 percent of its four-year-olds.</p>
<p>With an occasional stutter, state pre-K enrollments have increased over time. From 2002 to 2017, the percentage of four-year-olds enrolled in state pre-K rose from 14 percent to 33 percent.<sup class="endnote-pointer">1</sup> A few states expanded dramatically during this time frame. Florida, the leader in enrollment by 2011-2012, had no state pre-K program in 2003-2004.</p>
<p>Advocates for government-funded pre-K argue that it is the surest way to provide the opportunity for all children to succeed in school and life. The buy-in by politicians is impressive. President Obama articulated this viewpoint in his 2013 State of the Union address:</p>
<p>Tonight, I propose working with states to make high-quality preschool available to every child in America. Every dollar we invest in high-quality early education can save more than seven dollars later on – by boosting graduation rates, reducing teen pregnancy, even reducing violent crime. In states that make it a priority to educate our youngest children, like Georgia or Oklahoma, studies show students grow up more likely to read and do math at grade level, graduate high school, hold a job, and form more stable families of their own. So let’s do what works, and make sure none of our children start the race of life already behind. Let’s give our kids that chance.<sup class="endnote-pointer">2</sup></p>
<p>The push for expansion of state pre-K is bipartisan. About a third of U.S. governors who delivered state of the state addresses in 2018 highlighted early learning initiatives. More than half were Republicans.<sup class="endnote-pointer">3</sup></p>
<p>Leaving aside the positions taken by politicians and pre-K advocates, is there good reason to believe that state pre-K is effective? Or is it another one of the periodic crazes that grip education reform in America, in the absence of or despite available evidence?</p>
<h2>Does state pre-K raise student achievement?</h2>
<p>Here I address the question of whether state pre-K improves students’ academic achievement in elementary school. This is surely not the only valuable outcome that is posited by pre-K advocates; e.g., noncognitive effects that play out in later life are increasingly part of the popular model of why preschool is valuable. But the goal of increasing school readiness and thereby later academic success is at the core of the preschool movement. For example, the statutory mission of Head Start, the federal preschool program founded in 1965, is “to promote the school readiness of low-income children.”<sup class="endnote-pointer">4</sup></p>
<p>The strongest evidence on elementary school impacts of state pre-K would come in the form of randomized trials of scaled-up state pre-K programs with follow-up of children in the treatment and control groups as they progress through elementary school. There is only one such study: Children whose parents sought enrollment in the Tennessee Voluntary Pre-K Program (TVPK) were randomly assigned to be admitted to the program or not. Outcomes have been tracked through third grade. The <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/evidencespeaks/~https://www.sciencedirect.com/science/article/pii/S0885200618300279">findings</a>, as described by the authors in their peer-reviewed report of the study, are that:<sup class="endnote-pointer">5</sup></p>
<ul>
<li style="margin-bottom: 20px">positive achievement effects at the end of pre-K reversed and began favoring the control children by 2nd and 3rd grade;</li>
<li style="margin-bottom: 20px">TVPK participants had more disciplinary infractions and special education placements by 3rd grade than control children; and</li>
<li style="margin-bottom: 20px">no effects of TVPK were found on attendance or retention in the later grades.</li>
</ul>
<p>As critics have pointed out, this is only one study of one state pre-K program at one point in time. There may be something anomalous about the TVPK program itself that caused the surprising negative impacts of pre-K participation on academic achievement and socio-emotional outcomes in later grades.</p>
<p>Who knows how long we will have to wait for another randomized trial of a state pre-K program with follow-up of participants through the school years? In the meantime, it may be informative to examine other types of evidence to determine whether there are patterns of data that would strengthen confidence that the TVPK findings are generalizable, or call those findings into question.</p>
<p>I explore the association between different levels among the states of enrollment of four-year-olds in state pre-K and differences in the performance of students in those states on the National Assessment of Educational Progress (NAEP) five years later. Do states that enroll more of their four-year-olds in state pre-K in a given year have higher scores on NAEP when those children reach fourth grade than states with lower levels of pre-K enrollment?</p>
<p>Only a randomized trial or something similar would assure that there are no differences among states being compared that would affect NAEP scores other than their dosage of state pre-K. The opportunity to carry out such a causally rigorous study through planned variation in levels of pre-K provision is long gone.</p>
<p>We can leave it at that and accept the TVPK results as definitive. Or we can carry out epidemiological analyses that fall considerably short of supporting causal certainty but that have the potential of reducing the degree of confusion about whether state pre-K impacts later academic achievement. I follow the latter path. Others have as well, both with studies of individual states that have ramped up their pre-K programs<sup class="endnote-pointer">6</sup> and by using, as I do, variation among all states in pre-K access.<sup class="endnote-pointer">7</sup></p>
<p>The analyses I carry out are simple, descriptive, and rely entirely on publicly available data. I do not apply the usual array of statistical tools for analyzing panel data because the assumptions those techniques require are not well met with the data at hand, the presentation of their results would interfere with my effort to be transparent to a general audience about the logic of the analysis, and I do not require precise estimates to draw conclusions.<sup class="endnote-pointer">8</sup></p>
<p>My approach involves reducing, through statistical adjustments, the differences among states in the background characteristics of their fourth graders taking NAEP. Family background is the strongest predictor of school achievement, and states vary considerably in the demographics of their school-age populations. If the effects of family background are wrung out of state-level NAEP scores, the influence of access to state pre-K is more likely to be visible.</p>
<p>In that context, I carry out an analysis of the association between pre-K enrollment at the state level and state NAEP scores five years later for five separate cohorts of four-year-olds. These are cohorts that participated in NAEP as fourth graders in the spring of 2009, 2011, 2013, 2015, or 2017 (NAEP is administered in the spring of every other school year). These five cohorts were four-year-olds and eligible for whatever pre-K program was offered in their states in the 2003-2004, 2005-2006, 2007-2008, 2009-2010, or 2011-2012 school years. I report the correlation between pre-K enrollment levels in each of five relevant years and NAEP scores five years later.</p>
<p>Data on the percentage of the population of four-year-olds in each state enrolled in state pre-K for each of the five cohorts were transcribed for the present analysis from the relevant annual <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/evidencespeaks/~nieer.org/state-preschool-yearbooks">State of Preschool Yearbook</a> published by the National Institute for Early Education Research.<sup class="endnote-pointer">9</sup></p>
<p>The analyses are reported separately for unadjusted NAEP scores as reported in the federal government’s public release of NAEP, as well as for NAEP scores adjusted for six student background variables (age, race/ethnicity, frequency of English spoken at home, special education status, free or reduced-price lunch eligibility, and English language learner status). The adjusted NAEP scores were calculated for <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/evidencespeaks/~apps.urban.org/features/naep/">America’s Gradebook</a>, produced by the Urban Institute, and are publicly available for download.<sup class="endnote-pointer">10</sup></p>
<p>Hawaii is excluded from all analyses reported below because the technical appendix for America’s Gradebook cautions that the adjusted scores for Hawaii may be misleading due to the very high proportion of students in that state who are Native Hawaiian or other Pacific Islander.<sup class="endnote-pointer">11</sup> Consistent with that red flag, Hawaii is an extreme outlier if included in the analyses reported below and, thus, is excluded.</p>
<p>The pattern of data that would provide the strongest support for pre-K impact would be: a) positive correlations between levels of state pre-K enrollment and NAEP scores five years later for specific NAEP cohorts; b) larger correlations for adjusted than for unadjusted NAEP scores; c) replications of the pattern of correlations across cohorts; d) increases in pre-K enrollment within states being associated with increases in NAEP scores in those same states; and e) correlations between pre-K enrollment and NAEP scores large enough to suggest that meaningful increases in student achievement could be a consequence of expansion of enrollment in state pre-K.</p>
<p>Correlations between reading and math NAEP scores and state pre-K enrollment five years prior are presented in Figure 1. Reading and math scores are presented separately for each of five years of NAEP testing. Solid bars represent the correlations between pre-K enrollment and adjusted NAEP scores. Patterned bars represent the correlations for unadjusted NAEP scores. Blue bars are for reading whereas orange bars are for math.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-1-01.png?fit=1000%2C750px&amp;ssl=1" sizes="1379px" srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-1-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-1-01.png?fit=500%2C375px&amp;ssl=1 500w" alt="Correlations between state NAEP scores in 4th grade in five separate years and state pre-K enrollment five years prior" data-src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-1-01.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-1-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-1-01.png?fit=500%2C375px&amp;ssl=1 500w" />The correlations for unadjusted NAEP scores (patterned bars) are close to zero for reading and slightly negative for math. In other words, for five different years over the most recent 10-year period, the level of enrollment in state pre-K in a given year is not associated with that cohort of students’ unadjusted (raw) NAEP scores as fourth graders. Tell me whether a state has a state pre-K program in a given year and how many children it enrolls, and I can tell you nothing about how that state performs on its federally released NAEP scores in the year that cohort of children reaches fourth grade.</p>
<p>In contrast, the correlations between pre-K enrollment and adjusted NAEP scores are consistently positive for both reading and math, consistently higher for reading than for math, and statistically significant for reading for the 2013 and 2015 testing years. Thus, once we adjust NAEP scores across states so that differences between states in the demographics of their students are neutralized, states with larger state pre-K programs in a given year have fourth graders who do better on NAEP five years later.</p>
<p>Leave aside for the moment the crucial question of whether these positive associations appear to reflect a causal influence of pre-K access on later achievement. Are the correlations large enough, if causal, to suggest that new investments in state pre-K expansion could lead to meaningful improvements in student achievement? The strongest cross-sectional correlation in the data (the <em>r </em>= 0.348 between adjusted NAEP reading scores in 2013 and pre-K enrollment five years prior), if interpreted causally, indicates that a 10 percent increase in state pre-K enrollment would result in less than a one-point increase in a state’s adjusted NAEP reading scores five years later. To put this in context, the standard deviation on NAEP reading at fourth grade for individual students is 38 points and the white-black achievement gap is 26 points.<sup class="endnote-pointer">12</sup> A one-point increase on NAEP at the state level would not make a meaningful contribution to the sizable challenge of reducing the large differences in education outcomes for students from different backgrounds.</p>
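<p>The back-of-the-envelope translation from a correlation to an expected score change uses the standard least-squares slope identity, b = r &#215; (s<sub>y</sub>/s<sub>x</sub>). The standard deviations below are hypothetical round numbers chosen only to illustrate the arithmetic; they are not figures from the study.</p>

```python
# Convert a correlation into a predicted NAEP change via the
# ordinary-least-squares slope identity: b = r * (sd_y / sd_x).
# The standard deviations are hypothetical illustrative values,
# NOT taken from the study.
r = 0.348             # strongest cross-sectional correlation reported in the text
sd_naep = 4.0         # hypothetical SD of state-level adjusted NAEP scores
sd_enroll = 20.0      # hypothetical SD of state pre-K enrollment (percentage points)

slope = r * sd_naep / sd_enroll     # NAEP points per point of enrollment
predicted_gain = slope * 10         # effect of a 10-point rise in enrollment
print(round(predicted_gain, 2))     # under these assumptions, well under one point
```

With any plausible state-level spreads, a correlation of this size implies a predicted gain of less than one NAEP point per 10-point enrollment increase, which is the comparison made above against the 38-point student standard deviation and the 26-point white-black gap.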
<p>A causal interpretation of the positive cross-sectional correlations in Figure 1 would be strengthened if the positive association of pre-K enrollment and adjusted NAEP scores held for longitudinal observations within the states. If level of enrollment in state pre-K causes later improvements in school achievement, states that increase their state pre-K enrollment more over time should show larger increases in adjusted NAEP scores than states that increase their pre-K enrollment less (or not at all).</p>
<p>Further, the timing should line up such that a step-up for pre-K enrollment for a state in a given year should be followed in five years by a step-up in adjusted NAEP scores.</p>
<p>Figure 2 addresses the first of these issues, whether states that increase their state pre-K enrollment more over time show larger increases in adjusted NAEP scores. It is a scatterplot of change scores for each state on adjusted NAEP reading between 2009 and 2015 against the change scores for state pre-K enrollment between 2004 and 2010.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-2-01.png?fit=1000%2C750px&amp;ssl=1" sizes="1379px" srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-2-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-2-01.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scatter plot of change over time in state pre-K enrollment against growth in adjusted NAEP reading scores" data-src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-2-01.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-2-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-2-01.png?fit=500%2C375px&amp;ssl=1 500w" />A trend line through these points is close to flat, reflecting the small and statistically insignificant correlation of <em>r</em> = 0.078 between the two variables. As expected with essentially a zero correlation, states are equally represented in all four quadrants of pre-K expansion and growth on adjusted NAEP. Lots of states experienced substantial changes in their adjusted NAEP scores with very small to nonexistent changes in their state pre-K enrollment, e.g., Utah, Nevada, Indiana. Others had large increases in pre-K enrollment while being unexceptional in improvements in adjusted NAEP, e.g., Vermont, Iowa.</p>
<p>What about a longitudinal pattern within states in which increases in enrollment in state pre-K are followed in exactly five years by increases in adjusted NAEP scores? The lack of a correlation between growth in enrollment and increases in NAEP disregarding the timing of either, per Figure 2, suggests the futility of looking for a positive correlation that imposes additional temporal requirements. A detailed examination of exactly that relationship by Bartik and Hershbein using a longer series of data and the application of a formal econometric model finds no relationship: “We find no evidence that the average state program affects the average student’s test scores.”<sup class="endnote-pointer">13</sup></p>
<p>Florida is an example of a state having a strong cross-sectional association between state pre-K enrollment and later NAEP scores but not showing the temporal sequence between rising pre-K enrollment and rising NAEP scores that would be expected if pre-K were having a causal effect on later reading achievement. Figure 3 displays the trend line for adjusted NAEP reading scores for Florida, including every testing year for which test scores are available, along with the trend line for the state’s pre-K enrollment as measured five years prior to each NAEP testing.</p>
<p>There is no upward movement in the trend line for NAEP that corresponds with increases in state pre-K enrollment five years prior. If anything, progress in adjusted NAEP reading scores, which had been large in the years before Florida instituted a voluntary state pre-K program, tapered off five years after state pre-K enrollment began to increase dramatically. Florida is not simply an example. Rather, it exerts an outsized influence on the correlations between pre-K enrollment and lagged NAEP reading scores shown in Figure 1: On average across the testing years, each of the positive correlations in Figure 1 would drop by .07, and the two statistically significant correlations would disappear, if Florida were excluded from the data.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-3-01.png?fit=1000%2C750px&amp;ssl=1" sizes="1379px" srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-3-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-3-01.png?fit=500%2C375px&amp;ssl=1 500w" alt="Trend lines for Florida adjusted NAEP reading and for state pre-K enrollment 5 years prior to each NAEP testing year" data-src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-3-01.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-3-01.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/07/Figure-3-01.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<h2>What does it mean?</h2>
<p>The correlational evidence depicted in Figure 1 is consistent with state pre-K enrollment having a small influence on later reading achievement. However, the evidence presented in Figures 2 and 3 is inconsistent with a direct causal impact of state pre-K enrollment on later reading achievement: Where a causal relationship would lead to the expectation that states that are increasing pre-K access show increasing academic achievement, there is no consistent relationship between increases over time in enrollment and increases over time in adjusted NAEP scores.</p>
<p>The most parsimonious explanation of the disharmony between the cross-sectional data (Figure 1) and the longitudinal data (Figures 2 &amp; 3) is that states that have invested in larger state pre-K programs are also engaged in other education reforms that affect NAEP scores independent of pre-K.</p>
<p>Again, Florida can serve as an example. It has the largest state pre-K program and excellent adjusted NAEP scores, but it has also invested heavily in other state education reforms, including a reading initiative that could have affected NAEP scores during the testing periods covered in the present analysis.<sup class="endnote-pointer">14</sup> The longitudinal relationship shown in Figure 3 between rising pre-K enrollment and rising adjusted NAEP reading scores is more consistent with a causative influence of other reforms such as the reading initiative than it is with the influence of pre-K.</p>
<p>What do the present results imply with respect to the generalizability of the findings from the only existing large-scale randomized trial of a state pre-K program? There is nothing here that calls the TVPK findings into serious question. Specifically, there are no findings in the present data of substantive positive changes in student achievement that can reasonably be attributed to increases in access to state pre-K programs. Such relationships as are found between pre-K enrollment and NAEP achievement are small and not causally persuasive.</p>
<p>How about the consistency, or lack thereof, between the present results, the TVPK findings, and the much larger literature on the effects of preschool on later achievement? I have written extensively about that broader literature and its limitations. I do not have the space here to do much more than point to some of those papers.<sup class="endnote-pointer">15</sup> Suffice it to say that the presence of “fadeout” during the school years of the academic effects of pre-K programs is well-documented, pervasive across dozens and dozens of studies, and not in dispute among scholars in the field.<sup class="endnote-pointer">16</sup> The results of the present study add information specific to state pre-K programs but should be unsurprising with regard to the general finding of little to no measurable influence of pre-K on fourth grade achievement.</p>
<p>It is important to stress that neither the broader literature nor the present data foreclose the possibility that some state pre-K programs have positive long-term impacts on the achievement of some children; that the positive effects of state pre-K programs “sleep” during the school years but emerge in later life; that differently designed and delivered state pre-K programs or better alignment between state pre-K programs and the public schools could lead to substantive impacts; or that positive effects of state pre-K play out primarily through pathways of family financial support rather than children’s early learning in center-based care.<sup class="endnote-pointer">17</sup> These are all hypotheses that can be pursued.</p>
<p>I have argued elsewhere that the policy path forward for the center-based care and education of young children is muddled.<sup class="endnote-pointer">18</sup> The present analysis reinforces that judgment. Putting nearly all our eggs in the same basket &#8212; enhancing access to state pre-K for four-year-olds &#8212; shows little evidence to date of having a substantive payoff in later school achievement. It is time for enthusiasts for increased investments in state pre-K to confront the evidence that it does not enhance student achievement meaningfully. They need to temper their enthusiasm for more of the same and, instead, support testing of other approaches that appear promising.<sup class="endnote-pointer">19</sup></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="http://webfeeds.brookings.edu/-/557912638/0/brookingsrss/series/evidencespeaks.jpg" type="image/jpeg" />
		<atom:category term="Early Childhood Education" label="Early Childhood Education" scheme="https://www.brookings.edu/topic/early-childhood-education/" />
<feedburner:origEnclosureLink>https://www.brookings.edu/wp-content/uploads/2018/07/RTR3EMA0.jpg?w=270</feedburner:origEnclosureLink>
</item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/fafsa-completion-rates-matter-but-mind-the-data/</feedburner:origLink>
		<title>FAFSA completion rates matter: But mind the data</title>
		<link>http://webfeeds.brookings.edu/~/556512388/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Richard V. Reeves, Katherine Guyot]]></dc:creator>
		<pubDate>Thu, 05 Jul 2018 09:00:06 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=525418</guid>
					<description><![CDATA[FAFSA season has just ended -- the final deadline to fill out the 2017-18 Free Application for Federal Student Aid (FAFSA) was June 30. This year, as every year, many students who are eligible for aid will have failed to complete the form.1 This means many miss out on financial aid, which can have a&hellip;]]>
</description>
										<content:encoded><![CDATA[<p>By Richard V. Reeves, Katherine Guyot</p><p>FAFSA season has just ended &#8212; the final deadline to fill out the 2017-18 Free Application for Federal Student Aid (FAFSA) was June 30. This year, as every year, many students who are eligible for aid will have failed to complete the form.<sup class="endnote-pointer">1</sup> This means many miss out on financial aid, which can have a serious impact on postsecondary enrollment, persistence, and completion.<sup class="endnote-pointer">2</sup> As many as one in seven students eligible for financial aid who enroll in college do not complete the FAFSA.<sup class="endnote-pointer">3</sup></p>
<p>FAFSA completion is positively associated with college enrollment, and FAFSA completion rates can be important early indicators of postsecondary access and success. But FAFSA completion rates can be tricky to track and to compare across time or between places, and are prone to misinterpretation. Overall completion rates in particular schools or districts may also disguise important divides by socioeconomic background.  Differences in filing rates will almost by definition be related to differences in financial need (though, as we will discuss, the correlation may not be in the expected direction). </p>
<p>In theory, calculating FAFSA completion rates should be child’s play: after all, you just need a numerator (FAFSA completions) and a denominator (12<sup>th</sup>-grade enrollment). As we show below, however, neither piece of that equation is as straightforward as it seems, which can lead to inconsistent or incomparable estimates. These measurement challenges are important to address: to the extent that FAFSA completion rates matter, measuring them as accurately as possible matters too.</p>
<p>Specifically, anyone trying to calculate FAFSA completion rates must address the following questions:</p>
<h3><strong>The Numerator: Which FAFSAs count?</strong></h3>
<p>FAFSA completions were once measured using only self-reported survey data, which are likely inaccurate. Federal Student Aid (FSA), an office of the U.S. Department of Education, now provides tallies from actual FAFSA submissions. This is a marked improvement over student self-reports. But a few special considerations are in order when using the public use (school- or district-level) dataset to calculate completion rates.</p>
<p>For one thing, the FAFSA does not ask applicants to report whether they are seniors in high school, so calculating the completion rate among high school seniors requires some assumptions about who counts as a senior. Until mid-April of 2017, FSA identified high school seniors as first-time filing applicants no older than 18, but this age limit has since been changed to 19. Researchers must be clear and consistent about which data series they are using. Additionally, FSA suppresses data for schools with under 5 applications. In aggregate, it does not matter much if we exclude these schools from the dataset or assume that each has, say, 2.5 completions. At a more granular level, though, this choice could have a nontrivial impact on FAFSA completion rates, particularly in districts with many small schools.</p>
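<p>The suppression issue just described can be made concrete with a short sketch. The school names and counts below are hypothetical; the 2.5 imputation is the midpoint assumption discussed above for schools whose counts FSA suppresses.</p>

```python
# Sketch of a district FAFSA completion rate that handles FSA's suppression
# of schools with fewer than 5 applications by imputing a midpoint value.
# School names and counts are hypothetical.
SUPPRESSED = None  # FSA reports no count for schools below the threshold

school_completions = {"North HS": 180, "South HS": 95, "Tiny Charter": SUPPRESSED}
school_seniors = {"North HS": 240, "South HS": 150, "Tiny Charter": 8}

IMPUTED = 2.5  # midpoint assumption for suppressed schools, per the discussion above

total_completions = sum(
    IMPUTED if c is SUPPRESSED else c for c in school_completions.values()
)
total_seniors = sum(school_seniors.values())
rate = total_completions / total_seniors
print(f"{rate:.1%}")
```

Whether suppressed schools are imputed or dropped barely moves a district-wide rate like this one, but in a district made up of many small schools the choice can shift the rate noticeably, which is the point made above.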
<p>It’s important not to mix up artificial differences created by the definitional change or by the handling of small schools with actual changes in FAFSA completions. Actual changes may also be confused with concurrent changes in the <em>timing </em>of FAFSA filing in 2016-17 due to an executive order that (1) allowed students to start applying for aid in October rather than January, and (2) allowed students to use tax information from an earlier year so they do not have to wait until tax filing season to apply for aid.<sup class="endnote-pointer">4</sup> Identifying true increases in FAFSA completions is key to understanding the extent to which policy changes like this one can improve access to financial aid.</p>
<h3><strong>The Denominator: How many students are enrolled?</strong></h3>
<p>This may sound like the easy part, since the National Center for Education Statistics (NCES) publishes official school-level enrollment data in the Common Core of Data. But as of June 2018, the most recent enrollment data is for the 2015-16 school year. By contrast, FAFSA numbers are updated monthly.</p>
<p>One option for getting up-to-date completion rates is to use enrollment data for a previous year as a proxy for current enrollment. Another is to get more recent enrollment data from state statistical agencies. But not all states measure enrollment the same way. Some report membership on a given day, or average daily membership over a month, or cumulative enrollment over the year (not subtracting those who leave the rolls). Further, since enrollment fluctuates over the school year, it is important to measure enrollment on a reasonably consistent date, so as to enable comparisons across place and time. The Common Core of Data reports membership on October 1.</p>
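<p>The denominator choice alone can move the headline rate. A hypothetical example in Python (the numbers are invented for illustration):</p>

```python
# One school, one count of completions, two enrollment measures.
completions = 180

membership_oct1 = 300        # point-in-time membership, as in the Common Core of Data
cumulative_enrollment = 360  # counts every student ever on the rolls during the year

rate_point_in_time = completions / membership_oct1     # 0.60
rate_cumulative = completions / cumulative_enrollment  # 0.50
```

A school reporting cumulative enrollment would look ten percentage points worse than an identical school reporting October 1 membership.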
<p>What matters most, of course, is comparing apples to apples. The results of a FAFSA Completion Challenge run by the National College Access Network (NCAN), an organization experienced in calculating accurate FAFSA completion rates,<sup class="endnote-pointer">5</sup> were distorted because a grantee in Greensboro, North Carolina (Say Yes to Education) reported <em>cumulative</em> enrollment in the <em>ninth month</em> of the school year for the first year of the challenge, rather than the numbers NCAN asked for: the enrollment reported to the Common Core of Data, that is, membership on October 1.<sup class="endnote-pointer">6</sup></p>
<p>Greensboro took the prize for both the highest overall FAFSA completion rate and the biggest increase in FAFSA completion, when in fact (by our calculations) the award for the biggest increase should have gone to Cheyenne, Wyoming.<sup class="endnote-pointer">7</sup> The first-place prize was $75,000, and Cheyenne still received $50,000 for coming in second. Small beer, financially speaking, but the warning is clear: FAFSA completion rates have to be calculated very carefully and consistently, especially if they are used to compare different places, or changes over time.</p>
<p>For its 2018-19 challenge, NCAN is requiring districts to &#8220;have access to weekly student-level FAFSA completion data provided by the Office of Federal Student Aid&#8217;s FAFSA Completion website, and the ability to match that with student-level demographic data from the district&#8217;s student information system.&#8221;<sup class="endnote-pointer">8</sup></p>
<h3><strong>Distributional Impact: Who is completing FAFSAs?</strong></h3>
<p>Pushing up FAFSA completion rates is an important policy goal, in order to maximize the number of eligible students who receive support. As things stand, of the 30 percent of undergraduate students who did not apply for federal student aid in 2011-12, roughly a third were likely eligible for Pell Grants (though we should note that Pell Grant eligibility is difficult to estimate from survey data).<sup class="endnote-pointer">9</sup> For the purpose of awarding need-based aid, what matters most is increasing financial aid applications among those most likely to be eligible for financial aid. Driving up completion rates by inducing more students from affluent families to fill out the FAFSA is close to pointless unless we are primarily interested in raising the number of students who apply for student loans.</p>
<p>We might think (or hope) that FAFSA completion rates would be highest in school districts with the greatest need. But students in relatively affluent districts are probably more likely to have access to the kind of one-on-one assistance that is key to getting more students to submit the FAFSA, enroll in college, and receive more financial aid.<sup class="endnote-pointer">10</sup></p>
<p>A recent NCAN study found that school districts with higher child poverty levels have lower FAFSA completion rates&#8212;on the order of 3 percentage points for every 10-percentage-point difference in the child poverty rate.<sup class="endnote-pointer">11</sup> This relationship varies across states: four states (Alabama, California, Minnesota, and Montana) see slightly higher rates of FAFSA completion in low-income districts (those at the 90<sup>th</sup> percentile of the national district-level poverty distribution) than in high-income districts (those at the 10<sup>th</sup> percentile):<img class="alignnone size-article-outset" src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/07/ES_20180705_Reeves-FAFSA.png?fit=1000%2C750px&amp;ssl=1" alt="FAFSA completion rates by district child poverty rate, by state" /></p>
<p>We should note that despite within-state gaps in FAFSA completion rates between high- and low-poverty districts, high-poverty districts in Tennessee and Maine still have higher completion rates than many of the low-poverty districts in other states. In fact, Tennessee regularly has the highest overall rate of FAFSA completion of any state. This is no accident: students must complete the FAFSA to apply for Tennessee’s HOPE and Promise scholarships.<sup class="endnote-pointer">12</sup></p>
<p>Ideally, we would like to know how many of the students who file the FAFSA as a result of these place-based scholarships are eligible for aid, and how many fill out the form simply to “check a box” on the path to obtaining a non-need-based scholarship. Wider access to data on FAFSA completion among demographic subgroups could help us to determine if these programs are increasing FAFSA completion among those who are most in need of aid. Another potential approach is to compare completion rates in schools with students with similar socioeconomic characteristics (say, majority low-income).</p>
<p>The good news is that FAFSA completion rates can be raised by providing one-on-one personal assistance,<sup class="endnote-pointer">13</sup> holding “FAFSA completion nights,” or just by making the application process simpler.<sup class="endnote-pointer">14</sup> But it is important for policymakers, scholars, and practitioners to ensure they are working with consistent and comparable data sources. As in so many areas, data really matters here.</p>
]]>
</content:encoded>
					
		
		
		<enclosure url="http://webfeeds.brookings.edu/-/556512386/0/brookingsrss/series/evidencespeaks.jpg" type="image/jpeg" />
		<atom:category term="Higher Education" label="Higher Education" scheme="https://www.brookings.edu/topic/higher-education/" />
<feedburner:origEnclosureLink>https://www.brookings.edu/wp-content/uploads/2018/07/ES_07-02-2018_FAFSA.jpg?w=270</feedburner:origEnclosureLink>
</item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/the-challenges-of-curriculum-materials-as-a-reform-lever/</feedburner:origLink>
		<title>The challenges of curriculum materials as a reform lever</title>
		<link>http://webfeeds.brookings.edu/~/555070276/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Morgan Polikoff]]></dc:creator>
		<pubDate>Thu, 28 Jun 2018 09:00:21 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=524352</guid>
					<description><![CDATA[Executive Summary There is increasing momentum behind the idea that curriculum materials, including textbooks, represent a powerful lever for education reform. As funders are lining up and state leaders are increasing their policy attention on curriculum materials, this report discusses the very real challenges of this effort. The report draws on my experience over the&hellip;<div class="fbz_enclosure" style="clear:left"><a href="https://www.brookings.edu/wp-content/uploads/2018/06/ES_20180627_Classroom.jpg?w=270" title="View image"><img border="0" style="max-width:100%" src="https://www.brookings.edu/wp-content/uploads/2018/06/ES_20180627_Classroom.jpg?w=270"/></a></div>
]]>
</description>
										<content:encoded><![CDATA[<p>By Morgan Polikoff</p><h2>Executive Summary</h2>
<p>There is increasing momentum behind the idea that curriculum materials, including textbooks, represent a powerful lever for education reform. As funders are lining up and state leaders are increasing their policy attention on curriculum materials, this report discusses the very real challenges of this effort. The report draws on my experience over the last several years collecting and analyzing textbook adoption data, as well as qualitative interviews of school district leaders and teachers. It identifies challenges in three main areas: collecting and analyzing textbook adoption data; encouraging districts to make different adoption decisions; and encouraging teachers to make different use decisions. The report concludes with specific recommendations, which are aimed primarily at state policymakers who seek to use curriculum materials as a policy reform. </p>
<h2>INTRODUCTION</h2>
<p>In widely read Brookings reports, Whitehurst in 2009<sup class="endnote-pointer">1</sup> and Chingos and Whitehurst in 2012<sup class="endnote-pointer">2</sup> wrote about the impact of curriculum and its potentially transformative power as a lever for reform. Their arguments were straightforward. First, citing recent experimental studies, they documented that curriculum materials can have large direct effects on student learning<sup class="endnote-pointer">3</sup>. Second, they noted that school and district leaders could not make textbook adoption decisions on the basis of textbook quality, because such evidence did not widely exist. Third, they claimed that there was little data of even a descriptive nature on textbook adoption patterns and practices, but that this would be relatively easy to collect. And finally, they argued that if the above issues were solved, textbooks could be an inexpensive (both politically and in dollars and cents) reform strategy.</p>
<p>The Chingos and Whitehurst report ended with specific recommendations, including the following: a) State education agencies should collect data from districts on the instructional materials in use in their schools; b) the NGA and CCSSO should put their weight behind the effort to improve the collection of information on instructional materials; and c) foundations could provide the start-up funding needed to collect data on instructional materials and support the research that would put those data to use.</p>
<p>While it is of course not known whether their report is the direct impetus, it is clear that some of the recommendations they made are coming to fruition. For example, the Gates Foundation is moving into the area of curriculum materials,<sup class="endnote-pointer">4</sup> and other funders appear interested as well. Chiefs for Change recently released a statement on the importance of curriculum materials and the role of state departments of education in collecting better data on the topic.<sup class="endnote-pointer">5</sup> And a number of researchers, myself included, have begun paying attention to curriculum as a reform lever and collecting and using data to analyze the impact of materials on student achievement.<sup class="endnote-pointer">6</sup></p>
<p>But how good are the prospects for this as a serious reform effort? And what are the potential barriers? The purpose of this report is to take stock of where we are and to offer suggestions for this effort moving forward.<sup class="endnote-pointer">7</sup> To answer these questions, I draw on three main sources. The first is my recent effort to collect textbook adoption data in the five largest U.S. states; I draw on my experience doing this work as well as on the data we ultimately collected and analyzed.<sup class="endnote-pointer">8</sup> The second is a set of interviews of school district leaders in the state of California focusing on their districts&#8217; textbook adoption policies and practices. And the third is evidence on teachers&#8217; use of textbooks, drawn from both my own interviews and others&#8217; surveys.</p>
<p>In what follows, I organize my discussion around what I view as the three main areas of challenge:</p>
<ul>
<li>The challenge of collecting and analyzing textbook adoption data to determine which books are most effective.</li>
<li>The challenge of getting the most effective books in teachers’ hands (i.e., through school and district textbook adoptions).</li>
<li>The challenge of getting teachers to use these books once they have them.</li>
</ul>
<p>I conclude with specific recommendations for how to overcome these challenges. While I am optimistic about this reform strategy, textbooks will not be a successful reform without serious, sustained engagement along these dimensions. </p>
<h2>The challenges of collecting and analyzing textbook adoption data</h2>
<p>My team’s experiences in collecting and analyzing textbook adoption data suggest that there will be a number of hurdles if states seek to undertake this kind of effort.</p>
<h3>Collecting the Data</h3>
<p>Textbook titles seem like straightforward pieces of data to collect, but in fact the issue is more complicated than it may seem. Even well-resourced state departments of education may struggle to collect the data in ways that make it usable for the kinds of research Chingos and Whitehurst recommend.</p>
<p>First, there is the simple fact that even a piece of information as seemingly innocuous as textbook titles may be seen as having political implications. And these implications may lead to resistance to sharing the data. For instance, teachers or district leaders may worry that collecting data on textbook adoptions is the camel’s nose under the tent that may lead to more prescriptive state control over curriculum issues (which are historically the bailiwick of local authorities). Unless the collection is made mandatory and routine, then, there will likely be some resistance to sharing the data. But the more prescriptive the effort is, the more educators’ hackles may be raised.</p>
<p>Second, there is the complication of whom to ask for the data. Setting aside schools of choice, districts are very likely the units responsible for making the actual purchases in most or all states. But in some states, districts are typically “uniform adopters” (all schools in the district use each adopted book), and in other states they are not. Will states really get in the business of surveying every school in a state to gather this information? If they survey districts, will districts actually know what books are used in schools? Respondents—either district or school—may also lack key information such as adoption years, which are necessary for the most sophisticated analytic approaches.</p>
<p>Third, there are many complications in identifying books that make this task more difficult than it may seem at first blush. Many book series have multiple editions&#8212;Pearson&#8217;s enVision Math had state-specific versions, then a Common Core version, and now an enVision 2.0 version&#8212;it is easy to confuse these in data entry. Districts/schools may differ in the type of license they select&#8212;digital materials, consumable books, multi-year licenses, etc.&#8212;will this kind of information be collected? Some of these problems could be solved by collecting ISBNs, as suggested by Whitehurst and Chingos, but there are challenges with that approach as well (people may be less willing to fill out surveys if they have to go find ISBNs, for instance). Then of course schools and districts have up to 13 grades, multiple subjects, and sometimes four or more academic tracks&#8212;will textbook information be collected on all of these, or just some? How will titles be linked to courses?</p>
<p>Fourth, while some of the complications just mentioned could be avoided if data were collected each time materials were purchased, many districts do not purchase books. Some districts use materials like EngageNY—full-year materials that they obtain for free online. Other districts assemble or develop their own materials. How would a state data collection account for these eventualities?</p>
<h3>Analyzing the data</h3>
<p>Once the data are collected, there are multiple analytic strategies that can be used to determine which textbooks work best, and there is not consensus on the best approach. Some researchers have used matching or other regression-based approaches with school-level achievement data, while others have used student-level data and value-added analyses. Regardless, the goal is to identify the causal effect of districts’ choice of one textbook over another.</p>
<p>In general terms, the main methodological concern is to eliminate selection bias so that the identified “impacts” of a textbook are not actually attributable to some other preexisting difference among districts choosing one textbook over another. Koedel and his coauthors have used various matching approaches, presenting evidence that districts’ textbook adoption choices are not strongly related to observable school and district characteristics.<sup class="endnote-pointer">9</sup> They also conduct a series of falsification tests that provide convincing evidence that selection is not at play. However, in unpublished analyses, my colleagues and I have investigated textbook effects in other subjects (science), other grades (middle school mathematics), and other time periods (post-Common Core) and found that there seems to be more evidence of selection bias in those areas than in prior studies. Specifically, we have found sometimes large differences among schools adopting particular textbooks in terms of prior achievement or other demographic variables. This could be evidence that something is changing in textbook adoptions to make selection bias more of a concern. Regardless of the specific technical concern, the point is that the science on using observational data to estimate textbook impacts is far from settled, and the methods that work in one instance may not work in all others. Furthermore, the time and resources to get this analytic work done may be substantial.</p>
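<p>A synthetic example helps show why these falsification checks matter. In the sketch below (Python with NumPy; the data-generating process is invented, not taken from the studies cited), districts with higher prior achievement are more likely to adopt &#8220;book A,&#8221; so a naive comparison of mean scores badly overstates the book&#8217;s effect, while a regression that adjusts for prior achievement lands near the true value.</p>

```python
# Synthetic illustration of selection bias in textbook-effect estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 500

prior = rng.normal(0.0, 1.0, n)  # prior district achievement
# Selection: higher-prior districts are more likely to adopt book A.
adopt_a = (prior + rng.normal(0.0, 1.0, n) > 0).astype(float)

true_effect = 0.10
score = 0.8 * prior + true_effect * adopt_a + rng.normal(0.0, 0.5, n)

# Naive difference in means confounds the book with preexisting differences.
naive = score[adopt_a == 1].mean() - score[adopt_a == 0].mean()

# OLS with a control for prior achievement recovers roughly the true effect.
X = np.column_stack([np.ones(n), adopt_a, prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
adjusted = beta[1]
```

Here <code>naive</code> is several times larger than <code>adjusted</code>. If adopters and non-adopters differ on observables, as we found in some subjects and grades, matching or regression adjustment becomes essential, and falsification tests help confirm that unobserved selection is not driving the results.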
<h2>Trying to get schools or districts to make better adoptions</h2>
<p>Suppose we were able to collect good enough textbook adoption data from large numbers of schools and districts and use it to calculate impact estimates for each book. Would districts make use of these data in their adoption decisions? My research team’s interviews with school district leaders suggest many reasons why they may not. Of course, it is possible that if the data were better we might have found different things in our interviews, but there are likely some real barriers to getting districts to make different/better adoption decisions.</p>
<p>First, the very decentralized nature of educational governance makes getting virtually any reform adopted at scale a real challenge. In some states (approximately half) the state is involved in school and district textbook adoptions by putting out a formally approved list of materials in certain grades and subjects. California does this, though California’s textbook adoption list is advisory—districts are not required to purchase off the list. In states with these kinds of lists, getting the most effective books to appear on the state list would go a long way toward getting the best books in the most schools. But in California we found around a quarter of schools used books from off the state-approved list. And many other interviewees, even in districts that adopted from the state list, expressed concern about the quality of the state’s review process. In states without a list, schools and districts are generally on their own to figure out which books to adopt.<sup class="endnote-pointer">10</sup> In short, changes to state laws or policies that strengthen the role of the state in textbook adoptions would probably be helpful if the goal was widespread adoption of the most effective books.<sup class="endnote-pointer">11</sup></p>
<p>Second, school districts have complex, highly ceremonial practices when it comes to textbook adoptions, which would likely be a barrier to more streamlined forms of decision making. We found in our interviews that virtually all districts have processes that involve a) one or more committees of teachers, b) evaluation of textbooks against complex rubrics (even in the case where the books were on the state list and thus had already been evaluated), c) multi-week pilots, and d) one or more formal votes before reaching a final decision. While better evidence could certainly be fed into this process along the way, it is far from guaranteed that the process would result in the best books being chosen.</p>
<p>Third, the timeline for getting evidence of effectiveness into the hands of district leaders to inform their decisions is challenging at best, and impossible at worst (though this depends at least in part on how states handle revisions to their standards over time). In the core subjects, most states have standards adoption and revision cycles every 7 to 10 years. Publishers put out new versions of books perhaps two years after a new set of standards is adopted, and states put out their lists thereafter (for instance, California put out its math textbook list during the 2013-14 school year, approximately three years after the adoption of Common Core). In order for impact estimates to be calculated, there needs to be a reasonably large number of districts adopting books and using them for at least a couple of years. This means the earliest that post-Common Core impact estimates could have been obtained would have been perhaps 2016. By that point, almost every district in the state had already made an adoption purchase, meaning they are not looking to make another purchase soon. By the time the next textbook adoption cycle happens at the district level, the standards will have been in place for approximately a decade (assuming they are not dramatically changed in the interim), and it is not even clear that publishers will be publishing the same versions of their books. Of course, if the standards stay stable and the published books stay mostly unchanged, then the results could be useful to districts making another adoption at that time, but this is a large number of contingencies given the transient nature of education policy.</p>
<h2>Encouraging teachers to make better textbook use decisions</h2>
<p>The fact is that, while many teachers still use textbooks, large proportions of teachers use them as simply one resource among many. This finding is confirmed both in large, state- and nationally-representative surveys and in our interviews of California teachers. In our 67 interviews, no teachers said they used only the district-adopted textbook for their 8<sup>th</sup> grade mathematics instruction. Most teachers reported that the adopted book was inadequate in one of two ways&#8212;it lacked sufficient opportunity for students to practice foundational skills, or it lacked sufficient enrichment exercises to cover the more conceptual content in the standards. Whatever the gap in the materials, teachers reported supplementing with lessons from old books or with materials sourced from various websites. An illustrative quote from one of our teachers: &#8220;We have had to use additional resources. We can&#8217;t just settle on just using the [Textbook Title]. There isn&#8217;t enough quality in it in order to make it a full, 100 percent program. If you just used the book itself and nothing else, it wouldn&#8217;t be enough for them to learn the entire curriculum.&#8221; Given this view of textbooks, getting even the best-quality materials to be used with fidelity by teachers may be a challenge.</p>
<p>Teacher surveys suggest that textbooks may not be the main source of lessons for large proportions of teachers. For instance, a five-state study found that 72-80 percent of teachers (depending on subject) reported using instructional materials developed by them or their colleagues at their school at least once a week, as compared to 43-53 percent for materials created by external organizations such as publishers.<sup class="endnote-pointer">12</sup> Another national survey pegged the proportion using district-adopted textbooks once or more a week at about 62 percent.<sup class="endnote-pointer">13</sup> National data from the American Teacher Panel found greater than 90 percent of teachers reported using Google, and more than 70 percent reported using TeachersPayTeachers and Pinterest, to find lessons.<sup class="endnote-pointer">14</sup> Regardless of the data source, it is clear that textbooks are widely used but are far from the only source of curriculum in typical American classrooms. Furthermore, these numbers are quite a bit lower than those cited in Chingos and Whitehurst’s report,<sup class="endnote-pointer">15</sup> suggesting that the use of textbooks has declined over time. Certainly it is possible that this could change if teachers had better books available, but textbook reform would likely affect a modest proportion of the curriculum of the typical classroom.</p>
<p>To be sure, our teacher interviews did find certain district-level policies that seemed to be associated with better implementation of standards. For example, we found that teachers did need some sort of backbone for their curriculum, and having a formal textbook adoption provided that. Teachers in districts that did not formally adopt a curriculum, or that took a very long time after the standards were written to do so, complained about the lack of support and their concomitant inability to fully implement the standards. So districts should adopt something, and it’s possible that a stronger backbone—offered by a more effective textbook—would be used even more. In addition, teachers said they needed specific kinds of professional development focused on both the textbook itself and the standards more generally. They were critical of publisher-provided professional development, which they said often focused on surface elements of the materials. And they often were unable to state specific changes that were called for by the standards, perhaps reflecting a lack of deep knowledge of the standards. In short, teachers in general would like to have both a formally adopted material and support to understand and implement the standards through professional learning opportunities.</p>
<h2>Recommendations</h2>
<p>There are good reasons to believe that curriculum materials can serve as an important reform lever. But this report has laid out some of the challenges in getting this reform to achieve its desired impact. Based on these issues, I make the following recommendations.</p>
<p>In terms of data and analysis:</p>
<ol>
<li>The best approach will be to routinize data collection, perhaps at the time of purchase, for each district in the state. If this is not possible, embedding annual data collections in other data collection activities could also work, but recall-based reporting will always have more error&#8212;and probably more burden&#8212;than more automated approaches.</li>
<li>The state should decide which subjects, grades, and courses will be the target of its collection efforts. This decision might be informed by surveying educators to understand where textbooks are currently most used.</li>
<li>At a minimum, the state should collect titles/publishers, editions, and adoption years for any book on which it gathers information. Again, if this is done routinely at purchase, it would be straightforward.</li>
<li>The state should consider what it wants to collect from districts or schools that do not claim to use any formal textbook. Short surveys or audits of curriculum materials from samples of teachers in those sites may be the best approach. It would not be appropriate to collect no data simply because the district does not use textbooks—access to quality curriculum is an equity issue that is under the state’s purview.</li>
<li>Rather than merely collecting the data and hoping someone analyzes them, the state should have in place plans or a relationship that ensures the data get routinely analyzed by qualified researchers or staff. Otherwise this is unlikely to happen.</li>
</ol>
<p>While these recommendations will not ensure that trustworthy impact estimates will be created, they will go a long way toward ensuring that the conditions at least exist.</p>
<p>In terms of district adoption and teacher use:</p>
<ol>
<li>There appears to be little reason for states not to put out lists of quality materials. These lists can drive adoption decisions and can simplify the task of adopting for schools and districts.<sup class="endnote-pointer">16</sup> That said, states should be sure that their adoption processes are transparent and high quality so educators can trust the results. In states where such a move would be politically feasible, they should consider incentivizing or requiring districts to purchase from the state-approved list.</li>
<li>Intermediary organizations, like California’s County Offices or New York’s BOCES, can serve an important role in helping smaller districts collaborate on, select, and implement materials. States should consider supporting these organizations directly for this purpose.</li>
<li>Because the use of non-textbook resources is large and growing, states should consider evaluating these for quality and creating curated lists of approved supplementary resources. They could also work with districts or intermediary organizations for this effort. Doing this might ease the curriculum selection burden for teachers and result in better quality materials in teachers’ hands.</li>
<li>Similarly, states should consider getting into the business of supporting quality professional development aligned with the standards and with the implementation of the top-rated curriculum materials. This could ease the burden on schools and districts and prevent them from having to find or create their own learning opportunities.</li>
<li>Finally, states should plan regular data collection and analysis related to teacher adoption and use of curriculum materials. They might specifically work with districts that are adopting new materials to use those opportunities to research implementation and effects.</li>
</ol>
<p>Together with the data collection and analysis activities described above, these efforts are likely to help ensure public school students have access to a high-quality curriculum in all of the state’s schools. Without these kinds of sustained efforts, the strategy of using curriculum materials to leverage educational improvements may be unlikely to succeed in the long term.</p>
<hr />
<p><em>The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. He is currently not an officer, director, or board member of any organization with an interest in this article.</em></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="https://www.brookings.edu/wp-content/uploads/2018/06/ES_20180627_Classroom.jpg?w=270" type="image/jpeg" />
		<atom:category term="Education" label="Education" scheme="https://www.brookings.edu/topic/education/" /></item>
<item>
<feedburner:origLink>https://www.brookings.edu/research/what-accounts-for-gaps-in-student-loan-default-and-what-happens-after/</feedburner:origLink>
		<title>What accounts for gaps in student loan default, and what happens after</title>
		<link>http://webfeeds.brookings.edu/~/553612588/0/brookingsrss/series/evidencespeaks/</link>
		
		<dc:creator><![CDATA[Judith Scott-Clayton]]></dc:creator>
		<pubDate>Thu, 21 Jun 2018 09:00:17 +0000</pubDate>
				<guid isPermaLink="false">https://www.brookings.edu/?post_type=research&#038;p=523250</guid>
					<description><![CDATA[Executive summary In a previous Evidence Speaks report, I described the high rates at which student loan borrowers default on their repayment within 12 years of initial college entry, often on relatively modest amounts of debt. One of the most striking patterns emerging from that report and other prior work is how dramatically default rates&hellip;]]>
</description>
										<content:encoded><![CDATA[<p>By Judith Scott-Clayton</p><h2>Executive summary</h2>
<p>In a previous Evidence Speaks <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/evidencespeaks/~https://www.brookings.edu/research/the-looming-student-loan-default-crisis-is-worse-than-we-thought/">report</a>, I described the high rates at which student loan borrowers default on their repayment within 12 years of initial college entry, often on relatively modest amounts of debt. One of the most striking patterns emerging from that report and other prior work is how dramatically default rates vary by institution sector and by race/ethnicity: black, non-Hispanic entrants and for-profit entrants experience default at much higher rates than other students. In this report, I use the same source of data to examine whether these disparities in default rates can be explained by other factors. I also examine what happens after a default, and whether this also varies by race or institution sector. </p>
<p>I find that differences in student and family background characteristics, including measures of family income and wealth, can account for about half of the black-white gap in default (reducing it from 28 to 14 percentage points). But even accounting for differences in degree attainment, college GPA, and post-college income and employment cannot fully explain the black-white difference in default rates, which remains large and statistically significant at 11 percentage points in the most complete model.</p>
<p>Similarly, differences in student and family background characteristics can account for slightly less than half of the gap in default rates between for-profit borrowers and public two-year college borrowers (reducing it from 25 to 14 percentage points). Somewhat surprisingly, the gap across sectors is not fully explained by differences in attainment, or by measures of employment and earnings. Entering a for-profit is associated with a 10-point higher rate of default even after accounting for everything else in the model.</p>
<p>Adjusted and unadjusted gaps both provide important information; one is not more “correct” than the other. The adjustments are only as good as the measures included, and better data on earnings, employment, and other post-college circumstances might explain more of the gap. Differences in loan counseling or loan servicing might also play a role. The better we can understand what drives these stark gaps, the better policymakers can target their efforts to reduce defaults.</p>
<p>An additional analysis of what happens post-default shows that more than half of all defaulters (54 percent) were able to successfully resolve at least one of their defaulted loans via rehabilitation, consolidation, paying in full, or having a loan discharged. At least 14 percent of defaulted borrowers managed to emerge from default and re-enroll in school. While there is no black-white difference in resolution rates conditional on default, white defaulters are more likely to rehabilitate defaulted loans while black defaulters are more likely to consolidate. Similarly, defaulters from for-profit institutions were more likely to consolidate and less likely to rehabilitate a defaulted loan than defaulters from public two-year institutions.</p>
<h2>Background and data</h2>
<p>This report utilizes data released by the U.S. Department of Education in October 2017, linking survey and administrative data from the Beginning Postsecondary Student (BPS) surveys to administrative data on debt and defaults from the National Student Loan Data System (NSLDS). I focus on the BPS 2003-04 survey sample, which is nationally representative of college entrants who enrolled for the first time in 2003-04.<sup class="endnote-pointer">1</sup> Respondents were re-surveyed in 2006 and 2009, and the NSLDS data are available through 2015, enabling certain outcomes to be measured up to 12 years after initial college entry. While some of the statistics reported below are publicly accessible from the National Center for Education Statistics (NCES) using the online Power Stats tool, I have computed others using the individual-level data which can only be obtained via a restricted-use data license. Where possible, I have validated my calculations using the restricted data against publicly available measures.</p>
<p>Figure 1 below summarizes previously reported rates at which students experience a default within 12 years of entry, by sector and by race for the BPS-2004 cohort. Figure 2 provides the same information, but limited to undergraduate borrowers only.<sup class="endnote-pointer">2</sup> The figures show that 17 percent of all entrants (28 percent of undergraduate borrowers) experienced a default within 12 years of entry. The figures also highlight the stark disparities in default by sector and race/ethnicity. For-profit entrants are nearly four times as likely to experience a default as public two-year entrants (47 percent versus 13 percent), while black non-Hispanic entrants are more than three times as likely as white non-Hispanic entrants to experience a default (38 percent versus 12 percent).<img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-1-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1363px" srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-1-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-1-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scott Clayton Defaults Figure 1" data-src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-1-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-1-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-1-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-2-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1363px" srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-2-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-2-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scott Clayton Defaults Figure 2" data-src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-2-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-2-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-2-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<h2>What accounts for patterns of student loan default by sector and race?</h2>
<p>Institution sector and race/ethnicity are clearly important correlates of student loan default. But to what extent might these differences be explained by other student characteristics? And since these two factors are clearly not determinative, what other characteristics or experiences might help explain patterns of default, even for students within a given sector or of a given race/ethnicity? The goal of the analyses conducted below is not to attempt to identify “causal impacts” of given factors on default, but rather to better understand the constellation of factors that can or cannot explain the stark gaps across race and sector. For example, if racial or sectoral gaps could be explained fully by differences in degree attainment, policy attention might be better directed toward what happens during college than toward what happens after.</p>
<p>In order for a given factor to explain these gaps, two things must be true: the factor must be associated with likelihood of default, and the prevalence of the factor must differ across groups. Prior work has identified a range of factors predicting default, many of which are not terribly surprising. In addition to institutional sector and race, students’ age and gender, parental income and education, degree attainment, prior credit scores, and labor market outcomes are all related to default.<sup class="endnote-pointer">3</sup></p>
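<p>A quick way to see this two-condition logic is a toy simulation. The sketch below is hypothetical: the variable names and effect sizes are invented, and a linear probability model stands in for the probit used in the report so that the example needs only NumPy.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Invented data: a group indicator, a control that BOTH differs by group
# and predicts default, and a control that predicts default but is
# identically distributed across groups. Magnitudes are illustrative only.
group = rng.binomial(1, 0.2, n).astype(float)
low_wealth = rng.binomial(1, np.where(group == 1, 0.6, 0.3)).astype(float)
balanced = rng.binomial(1, 0.4, n).astype(float)
default = rng.binomial(
    1, 0.05 + 0.10 * group + 0.15 * low_wealth + 0.15 * balanced
).astype(float)

def gap(*controls):
    """Coefficient on the group indicator from a linear probability model
    of default on the indicator plus any supplied controls."""
    X = np.column_stack([np.ones(n), group, *controls])
    return np.linalg.lstsq(X, default, rcond=None)[0][1]

print(f"raw gap:                 {gap():.3f}")
print(f"+ differs-and-predicts:  {gap(low_wealth):.3f}")  # gap shrinks
print(f"+ predicts-only control: {gap(balanced):.3f}")    # gap ~unchanged
```

<p>Only the control satisfying both conditions moves the measured gap; the balanced control, though strongly predictive of default, leaves the gap essentially unchanged.</p>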
<p>One well-documented result that many <em>do</em> find surprising is that the amount of debt students hold is if anything <em>inversely</em> related to default rates—that is, those with more debt are significantly less likely to default.<sup class="endnote-pointer">4</sup> This pattern is driven by the fact that students with larger balances also tend to have much higher levels of attainment and earnings.<sup class="endnote-pointer">5</sup> After controlling for attainment, prior work has found that the inverse relationship goes away; the remaining correlation between debt size and default is small and only weakly positive.<sup class="endnote-pointer">6</sup></p>
<p>Deming, Goldin, and Katz (2012) perform a similar analysis of sectoral gaps in three-year cohort default rates using institution-level data, and find that the gap between for-profits and other sectors cannot be explained by differences in student composition and other institution-level characteristics.<sup class="endnote-pointer">7</sup> The new linkage of the student-level BPS data with the NSLDS provides the opportunity to examine the drivers of default for a relatively recent college entry cohort, over an extended period of time, and with the ability to consider an unusually rich set of survey and administrative variables as potential explanatory factors. Using the same data employed here, Kelchen (2018) finds that racial gaps in default cannot be fully explained by other factors, though I will include a more comprehensive set of measures.<sup class="endnote-pointer">8</sup></p>
<p>In order to understand what is driving sectoral and racial gaps in default rates, I first run a regression predicting the likelihood of ever experiencing a default within 12 years as a function of the richest set of predictors available.<sup class="endnote-pointer">9</sup> I limit the sample to students who ever borrowed for undergraduate education. The full set of predictors included, along with their relationship to the likelihood of default, can be found in Appendix Table A1. In brief, the analysis includes:</p>
<ul>
<li style="margin-bottom: 20px"><em>Student and family background characteristics</em>. These characteristics, measured in the first year of enrollment, include race/ethnicity, gender, age and age-squared, whether the student was classified as dependent, EFC (this is a summary measure of financial need driven primarily by family income)<sup class="endnote-pointer">10</sup>, whether or not parents owned a home, parents’ highest level of education, whether parents provided financial support, SAT scores or equivalent when available, and an indicator for whether or not the student had a credit card in the first year of college.</li>
<li style="margin-bottom: 20px"><em>Undergraduate borrowing</em>. The regression includes the total amount borrowed for undergraduate education, as well as this amount squared to allow for the relationship to be non-linear.</li>
<li style="margin-bottom: 20px"><em>Institution sector and selectivity</em>. The regression includes indicators for whether the first institution was a for-profit, public four-year, or private not-for-profit institution, with public two-year entrants as the reference group. Four-year institutions are additionally distinguished by level of selectivity.</li>
<li style="margin-bottom: 20px"><em>College performance and attainment</em>. The regression includes indicators for the highest level of attainment at the time of the six-year follow-up survey (2009), including whether the respondent was still enrolled, and with BA/BS attainment as the reference group. I also include last known GPA as of the six-year follow-up survey (this variable is primarily derived from student transcripts, not self-reports).<sup class="endnote-pointer">11</sup></li>
<li style="margin-bottom: 20px"><em>Measures of employment, earnings, and debt-to-income ratios.</em> The regression includes self-reported employment and earnings (for those not still enrolled) at the time of the six-year follow-up (2009), as well as measures of monthly loan repayment amounts and debt-to-income ratios. Unfortunately, the data do not include measures of employment or earnings beyond 2009.</li>
</ul>
<p>Even as measures of correlation rather than causation, individual coefficients from these regressions should be interpreted cautiously, because some factors in the model are closely related to each other. When this happens, the model cannot always distinguish which of the related factors is driving the association.</p>
<p>The results confirm previously established patterns by race, institution sector, and attainment, as well as by measures of financial need (EFC), but also add some new details. For those with SAT or ACT score data, scores are not significantly related to default holding all else constant, but last known college GPA is, with each GPA point associated with an 8-percentage-point lower rate of default. Proxies for parental wealth—including parental homeownership, parental education, and how much financial help parents provided to students while enrolled—are significantly negatively related to likelihood of default, even after controlling for everything else in the model. For example, students whose parents owned their home at college entry are 3 percentage points less likely to experience a default holding all else constant.</p>
<p>Finally, the full model indicates that default is still significantly <em>negatively</em> correlated with undergraduate borrowing (an additional $10,000 of debt is associated with a 4-percentage-point lower rate of default), even after controlling for other factors including attainment.<sup class="endnote-pointer">12</sup> However, default is significantly <em>positively</em> correlated with debt-to-income ratios, highlighting the role of capacity to repay: a 10-point increase in this ratio is associated with a 2-point higher rate of default.<sup class="endnote-pointer">13</sup> One surprising result is that being employed in 2009 is positively associated with defaulting within 12 years. This could be because those not employed in 2009 are more likely to acquire further education and have less time in repayment.</p>
<h2>Can these factors explain institutional and racial/ethnic gaps in student loan default?</h2>
<p>I next examine the extent to which the dramatic disparities in default rates by sector and race can be explained by differences in student/family background, amounts borrowed, college achievement and attainment, and post-college earnings and employment. To do this, I run a series of regressions similar to above, but adding predictors step-by-step in groups. For example, to examine disparities in default by sector, I first run a probit regression including only a set of indicators for institution type. The resulting coefficients describe the unadjusted differences in default rates by sector, as compared with the default rate in the reference group (in this case public two-year institutions). I then add additional predictors in the groups described above and evaluate how much the coefficients on the sector indicators change.</p>
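<p>The stepwise procedure can be sketched in a few lines on simulated data. Everything below is hypothetical: the variable names and effect sizes are invented, and a linear probability model stands in for the probit so the example needs only NumPy (its coefficient on the sector indicator reads directly as a percentage-point gap).</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Invented data: sector shifts background, sector and background shift
# attainment, and all three raise default risk (illustrative magnitudes).
for_profit = rng.binomial(1, 0.25, n).astype(float)
low_income = rng.binomial(1, np.where(for_profit == 1, 0.6, 0.35)).astype(float)
no_degree = rng.binomial(1, 0.3 + 0.2 * for_profit + 0.1 * low_income).astype(float)
default = rng.binomial(
    1, 0.08 + 0.10 * for_profit + 0.10 * low_income + 0.12 * no_degree
).astype(float)

def sector_gap(controls):
    """Adjusted sector gap: the coefficient on the sector indicator after
    the listed controls are included in the regression."""
    X = np.column_stack([np.ones(n), for_profit, *controls])
    return np.linalg.lstsq(X, default, rcond=None)[0][1]

# Re-estimate the model as each predictor group is added, mirroring the
# step-by-step columns of Figures 3 and 4: the measured gap shrinks as
# groups that both differ by sector and predict default enter the model.
steps = [("unadjusted", []),
         ("+ background", [low_income]),
         ("+ attainment", [low_income, no_degree])]
for label, controls in steps:
    print(f"{label:13s} gap = {sector_gap(controls):.3f}")
```

<p>Whatever gap survives the richest specification is the unexplained portion reported in the figures.</p>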
<p>The results for institution sector are summarized in Figure 3 (full regression results are available in Appendix Table A2). The first set of columns shows the unadjusted gaps in default rates for undergraduate borrowers from each sector, as compared with the rate for borrowers who entered public two-year colleges (26 percent). The second set of columns shows how the gaps change after adding student and family background characteristics. Interestingly, while four-year college borrowers have lower unadjusted default rates than public two-year college borrowers, this advantage is completely eliminated after accounting for differences in student and family background across sectors. The for-profit disadvantage shrinks, but at 14 percentage points still remains large and statistically significant.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-3-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1363px" srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-3-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-3-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scott Clayton Defaults Figure 3" data-src="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-3-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-3-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i2.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-3-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-4-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1363px" srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-4-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-4-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scott Clayton Defaults Figure 4" data-src="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-4-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-4-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i0.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-4-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p>Adding additional controls for amounts borrowed, attainment, and GPA does little to further explain the for-profit disadvantage.<sup class="endnote-pointer">14</sup> The richest model, including controls for employment in 2009 and debt-to-income ratios, shrinks the gap modestly to 11 percentage points, but if for-profit entrants have lower employment and earnings than other borrowers with similar characteristics, this could well be a consequence of for-profit enrollment rather than a mitigating explanatory factor.</p>
<p>In Figure 4, I repeat the same exercise to examine racial disparities. The first set of columns shows the differences in default rates by race/ethnicity, as compared with the rate for white non-Hispanic borrowers (21 percent).<sup class="endnote-pointer">15</sup> The second set of columns accounts for additional student and family background measures that may differ by race. Adding these measures explains about half of the black-white gap and more than 80 percent of the Hispanic-white gap, but none of the white-Asian gap. Accounting for differences in amounts borrowed has little additional effect. Accounting for sector, selectivity, attainment, and GPA reduces the measured black-white gap a bit further. Interestingly, accounting for job status and debt-to-income ratios hardly changes the black-white gap at all after everything else is included. The richest model still leaves a large, statistically significant 11-percentage-point black-white gap in likelihood of default, while the adjusted gap between white borrowers and those of Asian or Pacific Islander descent is 9 percentage points.</p>
<p>Some important caveats are required for interpretation. First, because many predictors are correlated with each other, the order in which predictors are added matters. Attainment and earnings may have relatively little additional explanatory power, not because they don’t matter, but simply because their effect has already been captured by other variables. In fact, in results not shown, I find that differences in sector, selectivity, and attainment, if added on their own, can explain almost half the black-white gap.<sup class="endnote-pointer">16</sup>  Second, predictive models are only as good as the measures that are included, and additional or more precise measures might reduce gaps further.<sup class="endnote-pointer">17</sup> The 2009 measures of employment and income, in particular, are less than ideal because they are self-reported at a time when many in the sample have not yet entered repayment, and many are still enrolled in school.<sup class="endnote-pointer">18</sup></p>
<p>Finally, while the adjusted and unadjusted gaps presented here provide distinct information, one is not necessarily more correct or more useful than the other. For example, even if the black-white gap in default could be fully explained by family income and wealth, this would not make it any less problematic for black borrowers who cannot change their family background. Moreover, borrowing, degree attainment and earnings are themselves potential functions of race and/or institution sector. To the extent that controlling for these factors reduces the gap in default, it simply shifts the question to why there are gaps in these predictors.</p>
<h2>What happens to defaulters after a default?<a href="#_ednref1" name="_edn1"></a></h2>
<p>The high rates of default among black borrowers and those attending for-profit colleges are cause for concern due to the potential financial ramifications of default. When a student loan enters default, the entire balance becomes immediately due, and borrowers lose access to options that might otherwise have applied, such as deferment and forbearance.<sup class="endnote-pointer">19</sup> If the borrower does not make arrangements with their servicer to get out of default, the loan may go to collections. Fees of up to 25 percent of the balance due may be added as a result.<sup class="endnote-pointer">20</sup> Defaulting on a student loan can also lower credit scores, making it harder to access credit or even to rent an apartment in the future. In some states, default can lead to revocation of professional licenses, and credit histories may be evaluated as part of employment applications, making it harder to find or keep a job. Also, students cannot receive any additional federal student aid while they are in default, making it more difficult to return to school.</p>
<p>Still, default is a status, not a permanent characteristic, and many students who experience a default do eventually emerge from it. In fact, more than half of those who ever defaulted (54 percent) were able to resolve at least one of those defaults by the end of the 12-year follow-up, and at least 14 percent returned to school after a default.<sup class="endnote-pointer">21</sup> There are four ways to get out of default: rehabilitation, consolidation, paying in full, or having a loan discharged.</p>
<p>Rehabilitation offers the advantage of having the default removed from the borrower’s credit record, but it requires successfully making 9 payments over 10 months, and can only be used once. Consolidating defaulted loans into a new loan can get a borrower out of default more quickly and may be the only feasible option for those with multiple defaulted loans, but the default remains on the credit record for up to 7 years.</p>
<p>Figure 5 shows the percentage of defaulted students who were ever able to successfully resolve a defaulted loan by the end of the 12-year follow-up, as well as the percentage ever emerging from default via one of these pathways, by race/ethnicity. Though black borrowers have a much higher rate of default in the first place, black and white defaulters emerge from default at similar rates, while Hispanic defaulters were slightly more likely to resolve a default.<sup class="endnote-pointer">22</sup> At the end of the follow-up period, about 54 percent of white defaulters had resolved at least one defaulted loan, compared to 53 percent of black defaulters.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-5-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1363px" srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-5-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-5-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scott Clayton Defaults Figure 5" data-src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-5-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-5-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-5-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p>Black and white defaulters differ, however, in how they emerge from default: black defaulters are more likely to get out of default via consolidation (23 versus 15 percent), while white defaulters are more likely to rehabilitate (32 versus 26 percent) or pay in full (34 versus 30 percent).<sup class="endnote-pointer">23</sup> Since rehabilitation can only be used once, I also examine patterns of resolution for the first defaulted loan (not shown), and find that the same general pattern holds.</p>
<p>Figure 6 shows the same statistics for defaulters by first institution sector. Defaulters from private institutions—whether for-profit or not-for-profit—were more likely to resolve a default than defaulters from public institutions. These defaulters were also more likely than those from public institutions to resolve via consolidation. Again, this pattern also holds if I examine only the first defaulted loan.</p>
<p><img class="alignnone size-article-outset lazyautosizes lazyload" src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-6-011.png?fit=1000%2C750px&amp;ssl=1" sizes="1363px" srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-6-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-6-011.png?fit=500%2C375px&amp;ssl=1 500w" alt="Scott Clayton Defaults Figure 6" data-src="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-6-011.png?fit=1000%2C750px&amp;ssl=1" data-srcset="https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-6-011.png?fit=1000%2C750px&amp;ssl=1 1000w,https://i1.wp.com/www.brookings.edu/wp-content/uploads/2018/06/Figure-6-011.png?fit=500%2C375px&amp;ssl=1 500w" /></p>
<p>Future work could apply methods similar to those used above to better understand the predictors and consequences of consolidation versus rehabilitation among defaulted borrowers. Preliminary analysis (not shown) indicates that defaulters who resolve their first defaulted loan via consolidation have larger total balances at the time of default than those who rehabilitate ($19,185 versus $17,124), are more likely to have experienced multiple instances of default (56 percent versus 41 percent), and are more likely to receive federal student aid post-default (26 percent versus 14 percent).<sup class="endnote-pointer">24</sup> While the interpretation of these findings is not fully clear, the pattern is consistent with consolidation being the more appealing option for defaulted borrowers with multiple defaulted loans, as well as for defaulters who seek to re-enroll in college (since consolidation can happen more quickly than rehabilitation).</p>
<h2>Take-away findings and implications</h2>
<p>A number of key findings emerge from this analysis. First, about half of the total black-white gap in default rates, and just under half of the gap between for-profits and public two-year colleges, can be explained by student and family background, including measures of parental wealth and support. Second, adding additional controls reduces both gaps further; yet even controlling for degree attainment, GPA, and measures of 2009 employment, earnings, and debt-to-income ratios cannot fully explain either gap. Finally, more than half of defaulted borrowers are able to resolve at least one of their defaulted loans within the 12-year follow-up window, with black defaulters and those from private institutions more likely than other groups to resolve via consolidation.</p>
<p>Adjusted and unadjusted gaps both provide important information; one is not more “correct” than the other. The adjustments are only as good as the measures included, and because some of the predictors are correlated with each other, the order in which groups of predictors are added can matter. For example, differences in college sector, selectivity, and attainment explain more of the black-white gap in default when these predictors are added prior to adding student/family background characteristics.</p>
<p>What could explain the remaining gaps in default? Better measures of income and other post-college financial factors might further explain the gap, as might more information about the timing of when students left school and when they entered repayment. Some of the remaining gap may relate to the quality of loan exit counseling or loan servicing, which could vary by race or sector. Indeed, other research has found significant variation in repayment outcomes across the individual loan servicing agents that communicate with borrowers.<sup class="endnote-pointer">25</sup></p>
<p>This report also shows that more than half of defaulted borrowers are able to resolve at least one of their defaulted loans, though resolution does not necessarily erase the consequences of default. Conditional on experiencing a default, the likelihood of resolution does not vary by race, but those who attended private institutions (whether for-profit or not-for-profit) are more likely to resolve a defaulted loan. The pathway to resolution varies both by race and sector: compared with other students, consolidation is more common for black defaulters and those from private institutions.</p>
<p>A final caveat is that this report has focused on default rather than repayment. Just because a student is not in default does not necessarily mean they are paying down their loan. And while defaults may be of greatest consequence to borrowers, repayment rates are a legitimate concern for policymakers and taxpayers. A similar analysis of predictors of successful repayment would further enrich our understanding of student loan outcomes. Qualitative research to illuminate how students transition from school into repayment, and so often into default and then back out again, would also be very valuable. The better we can understand what drives these patterns, the better policymakers can target their efforts to improve student loan outcomes.</p>
<hr />
<p><em>The author did not receive any financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. She is currently not an officer, director, or board member of any organization with an interest in this article.</em></p>
]]>
</content:encoded>
					
		
		
		<enclosure url="http://webfeeds.brookings.edu/-/553687100/0/brookingsrss/series/evidencespeaks.jpg" type="image/jpeg" />
		<atom:category term="Higher Education" label="Higher Education" scheme="https://www.brookings.edu/topic/higher-education/" />
<feedburner:origEnclosureLink>https://www.brookings.edu/wp-content/uploads/2018/06/ES_20180621_CollegeClassroom.jpg?w=270</feedburner:origEnclosureLink>
</item>
</channel></rss>

