Eighth-Grade Algebra for All?


What Do the California Standards Test Results Reveal About the Movement Toward Eighth-Grade Algebra for All?

In California, an increasing number of 8th graders have taken algebra courses since 2003. This study examines students' California Standards Test (CST) results in grades 7 through 11 to identify who took the CST for Algebra I in 8th grade and whether the increase has led to more students taking higher-level mathematics CSTs and to improved performance in subsequent years. Results show that the pipeline from 8th-grade algebra to higher-level mathematics CSTs in subsequent years has a significant leak. Furthermore, the longitudinal analysis reveals that 9th graders have a 69% greater chance of succeeding in algebra if they passed the CST for General Mathematics in 8th grade than if they failed the CST for Algebra I.

The Unintended Consequences of an Algebra-for-All Policy on High-Skill Students: Effects on Instructional Organization and Students’ Academic Outcomes

In 1997, Chicago implemented a policy requiring algebra for all ninth-grade students and eliminating all remedial coursework. The policy increased opportunities to take algebra for low-skill students who had previously enrolled in remedial math, but little is known about how schools responded to it when organizing math classrooms to accommodate the curricular change. The policy also unintentionally affected high-skill students who were not its target: those who would have enrolled in algebra in its absence. Using an interrupted time-series design combined with within-cohort comparisons, this study shows that schools created more mixed-ability classrooms when they eliminated remedial math classes, and that peer skill levels declined for high-skill students. Consequently, their test scores also declined.


Heterogeneity in High Math Achievement Across Schools


Related article

Complete paper

This paper explores differences in the frequency with which students from different schools reach high levels of math achievement. Data from the American Mathematics Competitions are used to produce counts of high-scoring students from more than two thousand public, coeducational, non-magnet, non-charter U.S. high schools. High-achieving students are found to be very far from evenly distributed: there are strong demographic predictors of high achievement, and there are also large differences among seemingly similar schools. The unobserved heterogeneity across schools includes a thick tail of schools that produce many more high-achieving students than the average school. Gender-related differences and other breakdowns are also discussed.


New Research on Text Complexity


Full report: Supplemental Information for Appendix A of the Common Core State Standards for English Language Arts and Literacy: New Research on Text Complexity

Related article

Appendix A of the Common Core State Standards (hereafter CCSS) contains a review of the research stressing the importance of being able to read complex text for success in college and career. The research shows that while the complexity of reading demands for college, career, and citizenship has held steady or risen over the past half century, the complexity of the texts students are exposed to has steadily decreased over that same interval. To address this gap, the CCSS emphasize increasing the complexity of texts students read as a key element in improving reading comprehension.

The importance of text complexity to student success had been known for many years prior to the release of the CCSS, but the standards' release spurred subsequent research that holds implications for how the CCSS define and measure text complexity. As a result of new research on the quantitative dimensions of text complexity called for at the time of the standards' release, this report expands upon the three-part model outlined in Appendix A of the CCSS in ELA/Literacy, which blends quantitative and qualitative measures of text complexity with reader and task considerations. It also presents new field-tested tools for helping educators assess the qualitative features of text complexity.

The quantitative dimension of text complexity refers to those aspects—such as word frequency, sentence length, and text cohesion (to name just three)—that are difficult for a human reader to evaluate when examining a text and are more efficiently measured by computer programs. The creators of several of these quantitative measures volunteered to take part in a research study comparing the different measurement systems against one another. The goal of the study was to provide state-of-the-science information about the variety of ways text complexity can be measured quantitatively and to encourage the development of text complexity tools that are valid, transparent, user friendly, and reliable. The six computer programs included in the research study are briefly described below:

ATOS by Renaissance Learning

ATOS incorporates two formulas: ATOS for Text (which can be applied to virtually any text sample, including speeches, plays, and articles) and ATOS for Books. Both formulas take into account three variables: words per sentence, average grade level of words (established via the Graded Vocabulary List), and characters per word.

Degrees of Reading Power® (DRP®) by Questar Assessment, Inc.

The DRP Analyzer employs a derivation of a Bormuth mean cloze readability formula based on three measurable features of text: word length, sentence length, and word familiarity. DRP text difficulty is expressed in DRP units on a continuous scale with a theoretical range from 0 to 100. In practice, commonly encountered English text ranges from about 25 to 85 DRP units, with higher values representing more difficult text. Both the measurement of students' reading ability and the readability of instructional materials are reported on the same DRP scale.

Flesch-Kincaid (public domain)

Like many of the non-proprietary formulas for measuring the readability of various types of texts, the widely used Flesch-Kincaid Grade Level test considers two factors: words and sentences. In this case, Flesch-Kincaid uses word and sentence length as proxies for semantic and syntactic complexity respectively (i.e., proxies for vocabulary difficulty and sentence structure).
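
Because the Flesch-Kincaid Grade Level formula is in the public domain, it is straightforward to show concretely. The minimal Python sketch below implements the standard formula, 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59; the syllable counter is a crude vowel-group heuristic assumed here purely for illustration (production tools use pronunciation dictionaries), so treat this as a sketch rather than a reference implementation.

```python
import re

def count_syllables(word):
    # Crude vowel-group heuristic, assumed here for illustration only;
    # real analyzers use pronunciation dictionaries such as CMUdict.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # rough silent-e adjustment
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Public-domain formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Very simple text scores near (or even below) the earliest grades.
print(round(flesch_kincaid_grade(
    "The cat sat on the mat. It was a warm day."), 1))
```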

The Lexile® Framework For Reading by MetaMetrics

A Lexile measure represents both the complexity of a text, such as a book or article, and an individual’s reading ability. Lexile® measures include the variables of word frequency and sentence length. Lexile® measures are expressed as numeric measures followed by an “L” (for example, 850L), which are then placed on the Lexile® scale for measuring reader ability and text complexity (ranging from below 200L for beginning readers and beginning-reader materials to above 1600L for advanced readers and materials).

Reading Maturity by Pearson Education

The Pearson Reading Maturity Metric uses the computational language model Latent Semantic Analysis (LSA) to estimate how much language experience is required to achieve adult knowledge of the meaning of each word, sentence, and paragraph in a text. It combines the Word Maturity measure with other computational linguistic variables such as perplexity, sentence length, and semantic coherence metrics to determine the overall difficulty and complexity of the language used in the text.

SourceRater by Educational Testing Service

SourceRater employs a variety of natural language processing techniques to extract evidence of text standing relative to eight construct-relevant dimensions of text variation: syntactic complexity, vocabulary difficulty, level of abstractness, referential cohesion, connective cohesion, degree of academic orientation, degree of narrative orientation, and paragraph structure. Resulting evidence about text complexity is accumulated via three separate regression models: one optimized for application to informational texts, one optimized for application to literary texts, and one optimized for application to mixed texts.
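
To make this genre-routed design concrete, here is a minimal sketch of the architecture described above: one regression model per text type, each applied to the same eight-dimensional feature vector. The feature scales, weights, and intercepts are hypothetical placeholders invented for illustration, not ETS's actual models.

```python
# Hypothetical per-genre linear regression scoring, sketched after the
# SourceRater description; none of these numbers come from ETS.
FEATURES = [
    "syntactic_complexity", "vocabulary_difficulty", "abstractness",
    "referential_cohesion", "connective_cohesion",
    "academic_orientation", "narrative_orientation", "paragraph_structure",
]

MODELS = {  # (intercept, weights) -- illustrative placeholders
    "informational": (1.0, [0.9, 1.1, 0.6, -0.4, -0.3, 0.8, -0.2, 0.3]),
    "literary":      (0.5, [0.7, 0.9, 0.8, -0.5, -0.2, 0.1, 0.6, 0.2]),
    "mixed":         (0.8, [0.8, 1.0, 0.7, -0.45, -0.25, 0.45, 0.2, 0.25]),
}

def score_text(feature_values, genre):
    # Route the feature vector to the regression model for its genre.
    intercept, weights = MODELS[genre]
    return intercept + sum(w * x for w, x in zip(weights, feature_values))

# Example: a moderately demanding informational passage (made-up features).
print(round(score_text([2.1, 3.0, 1.4, 0.9, 1.1, 2.5, 0.3, 1.2],
                       "informational"), 2))
```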

Easability Indicator by Coh-Metrix

One additional program—the Coh-Metrix Easability Assessor, developed at the University of Memphis and Arizona State University—was part of the research study but was not included in the cross-analysis. It analyzes the ease or difficulty of texts on five different dimensions: narrativity, syntactic simplicity, word concreteness, referential cohesion, and deep cohesion. This measure was not included in the cross-analysis because it does not generate a single quantitative determination of text complexity, but it is useful as a tool for evaluating text systematically. The Coh-Metrix Easability Assessor creates a profile that offers information about the aforementioned features of a text and analyzes how challenging or supportive those features might be for student comprehension of the material.



The research that yielded additional information and validated these text measurement tools was led by Jessica Nelson of Carnegie Mellon University, Charles Perfetti of the University of Pittsburgh, and David and Meredith Liben of Student Achievement Partners (in association with Susan Pimentel, lead author of the CCSS for ELA). It had two components: first, all the developers of quantitative tools agreed to compare the ability of each text analyzer to predict the difficulty of text passages as measured by student performance on standardized tests; second, they agreed to test the tools' ability to predict expert judgment regarding grade placement of texts and educator evaluations of text complexity by examining a wide variety of text types selected for a wide variety of purposes. The first was measured by comparing student results in norming data on two national standardized reading assessments to the difficulty predicted by the text-analyzer measures. The second set of data evaluated how well each text analyzer predicted educator judgment of grade-level placement and how well each matched the complexity-band placements used for the Appendix B texts of the CCSS. In the final phase of the work, the developers agreed to place their tools on a common scale aligned with the demands of college readiness. This allows these measures to be used with confidence when placing texts within grade bands, since the common scale ensures that each will yield an equivalent complexity staircase up to college and career readiness levels of text complexity.

The major comparability finding of the research was that all of the quantitative metrics were reliably, and often highly, correlated with grade-level and student-performance-based measures of text difficulty across a variety of text sets and reference measures. None of the quantitative measures performed significantly differently from the others in predicting student outcomes. While the measures vary in where they place any single text, they all climb reliably—though differently—up the text complexity ladder to college and career readiness. Choosing any one of the text-analyzer tools from second grade through high school will therefore provide a scale by which to rate text complexity over a student's career, culminating in levels that match college and career readiness.
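
As a rough illustration of the kind of check behind this finding, the sketch below computes rank correlations between each metric's passage scores and a grade-level reference using SciPy. All numbers are invented placeholders, not the study's data; the point is only the shape of the analysis.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical passage set: one reference grade level per passage and
# two metrics' scores on their own (different) scales.
grade_level = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
metric_a = np.array([310, 420, 540, 600, 690, 760, 830, 900, 950, 1010])
metric_b = np.array([2.3, 3.1, 3.8, 5.2, 5.9, 7.4, 8.1, 8.8, 10.2, 11.0])

# Rank correlation sidesteps the fact that the metrics use different scales.
for name, scores in [("metric_a", metric_a), ("metric_b", metric_b)]:
    rho, p = spearmanr(scores, grade_level)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.4f})")
```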

In addition, the research produced a new common scale for cross-comparison of the quantitative tools that were part of the study, allowing users to choose one measure or another to generate parallel complexity readings for texts as students move through their K-12 school careers. This common scale is anchored by the complexity of texts representative of those required in typical first-year credit-bearing college courses and in workforce training programs. Each of the measures has realigned its ranges to match the Standards' text complexity grade bands and has adjusted its trajectory of reading comprehension development upward through the grades to indicate that all students should be reading at the college and career readiness level by no later than the end of high school.
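
A minimal sketch of what placing a tool on a common scale can look like, assuming the simplest case: a linear realignment anchored at a beginning-reader point and a college-and-career-readiness point. The anchor values and the 0-100 common scale used here are hypothetical; each vendor's actual realignment is its own.

```python
def to_common_scale(raw, anchor_low, anchor_college,
                    common_low=0.0, common_college=100.0):
    # Linearly map a tool's raw score onto a shared scale whose top
    # anchor represents college-and-career-ready text. The anchors and
    # the 0-100 range are illustrative assumptions, not published values.
    t = (raw - anchor_low) / (anchor_college - anchor_low)
    return common_low + t * (common_college - common_low)

# Hypothetical tool whose raw scores run from 100 (beginning-reader
# material) to 1400 (college-ready text).
print(round(to_common_scale(980, anchor_low=100, anchor_college=1400), 1))
```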

