
All of us do not have equal talent, but all of us should have an equal opportunity to develop our talent.

— John F. Kennedy

Education Reform Now, a non-partisan 501(c)(3) organization, is committed to
ensuring all children can access a high-quality public education
regardless of race, gender, geography, or socio-economic status.

Please take a moment to watch the video above from our 2014 "Camp Philos" where the nation's top progressive thought leaders came together to discuss innovative solutions to improve our public education system.


Our Blog

The Soft Bigotry of Low Expectations Has No Place in K-12 or Higher Education (Part 2)

By Mary Nguyen Barry, Policy Analyst, and
Michael Dannenberg, Director of Strategic Initiatives for Policy

We've been firm in our opposition to recommendations by organizations calling on the Department of Education (ED) to adjust the results of its planned college ratings system based on immutable student characteristics. So-called "risk adjustment" proposals effectively suggest there be different expectations for different groups of students based on demographics alone. It's what former President George W. Bush decried as the "soft bigotry of low expectations."

But it's not enough to be opposed to those who call for risk adjustment.  Instead, we offer an alternative for ED to consider: compare unadjusted outcomes of similar colleges serving students with similar levels of academic preparation.  That's different from demographic characteristics per se.

From an accountability perspective, it makes very little sense to compare an outcome, like graduation rates, at a college like Hofstra University in New York with the same outcome at Harvard University. Those two schools enroll students with completely different levels of academic preparation, to say nothing of vast differences in institutional size and wealth.

Hofstra University, like all colleges, should be compared to similar colleges that serve similarly prepared students. When one makes this "peer institution" comparison, one sees that while Hofstra graduates its first-time, full-time students at a rate close to the national average (61 percent versus 59 percent), it underperforms almost all of its peers in educating its students. Hofstra does an even worse job educating its underrepresented minority students compared to its peers: just over half (54 percent) graduate within six years.

A peer comparison analysis would ask why similar colleges serving students with similar levels of academic preparation - such as Syracuse University and Fordham University, both also in New York - graduate their students at much higher rates.

The same peer comparison analysis can also identify extremely poor performers. We've found that 9 times out of 10, a college with a graduation rate below 15 percent falls in the bottom of its peer group. These are the colleges - and there are over 100 of them - that ED's rating system should identify and warn students and families against.

In short, whereas a risk adjustment model embraces different and lower expected outcomes for some students, based on race for example, a peer institution comparison technique avoids the embrace of artificially deflated expectations. 

The trick is identifying peer groups of similar institutions. For the above analysis, we used the College Results Online (CRO) algorithm for identifying peer groups; it has been peer-reviewed and in use for 10 years. We suggest ED use a similar algorithm, with a slight modification to remove the consideration of student wealth. We submit that when constructing peer groups for accountability purposes, ED should consider only key institutional characteristics such as students' academic preparation, as measured by entering freshmen's high school GPAs and SAT/ACT scores, and institutions' size, sector, admissions selectivity, and funding levels. Once peer groups are created, ED should:

  • Identify high, middle, and low performers on the ultimate outcomes it chooses for access, affordability, and success;
  • Measure an institution's improvement over time by examining changes in its position within its peer group. Consider San Diego State University, for example, which steadily rose from the bottom third of its CRO peer group in 2002, with a 38 percent six-year graduation rate, to the top third of its peer group since 2005, with a graduation rate now at 66 percent; and
  • Guard against perverse incentives by rewarding successful access, affordability, and success outcomes among disaggregated groups of underrepresented students, such as racial minorities, low-income students, adult students, and upward transfer students. That's what many states' performance-based funding systems - like those in Tennessee, Ohio, and Indiana - do.
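To make the peer-group idea concrete, here is a minimal sketch in code. It is an illustration only: the college names, numbers, and matching thresholds below are hypothetical, and the actual CRO algorithm weights more characteristics (selectivity, funding, and others) than this toy version does.

```python
# Illustrative sketch of peer-group comparison. All data and thresholds
# are hypothetical; the real CRO algorithm considers more characteristics.

def peer_group(target, colleges, max_sat_gap=60, max_size_ratio=2.0):
    """Select peers in the same sector with similar entering-student
    SAT scores and comparable enrollment size."""
    return [
        c for c in colleges
        if c["name"] != target["name"]
        and c["sector"] == target["sector"]
        and abs(c["median_sat"] - target["median_sat"]) <= max_sat_gap
        and 1 / max_size_ratio
            <= c["enrollment"] / target["enrollment"]
            <= max_size_ratio
    ]

def rate_within_peers(target, peers):
    """Classify a college as high/middle/low by where its unadjusted
    graduation rate falls among its peers (top/middle/bottom third)."""
    rates = sorted(p["grad_rate"] for p in peers)
    share_beaten = sum(r < target["grad_rate"] for r in rates) / len(rates)
    if share_beaten >= 2 / 3:
        return "high"
    if share_beaten < 1 / 3:
        return "low"
    return "middle"

# Hypothetical data: four private colleges and one public one.
colleges = [
    {"name": "A", "sector": "private", "median_sat": 1200, "enrollment": 7000, "grad_rate": 61},
    {"name": "B", "sector": "private", "median_sat": 1220, "enrollment": 9000, "grad_rate": 70},
    {"name": "C", "sector": "private", "median_sat": 1180, "enrollment": 8000, "grad_rate": 75},
    {"name": "D", "sector": "private", "median_sat": 1210, "enrollment": 6000, "grad_rate": 72},
    {"name": "E", "sector": "public", "median_sat": 1200, "enrollment": 30000, "grad_rate": 66},
]

peers = peer_group(colleges[0], colleges)     # B, C, and D; E differs in sector
print(rate_within_peers(colleges[0], peers))  # prints "low": A trails all three peers
```

Note that the comparison never adjusts the graduation rate itself; it only changes who a college is measured against, which is the crux of the distinction from "risk adjustment."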

Finally, to encourage positive decision-making among students and families, we recommend that ED create a second, tailored peer group for presentation to consumers. This second peer group would compare a school to the other colleges a student is likely to apply to. That's because a student's choice set - which may be driven by factors like geography or reputation - likely differs greatly from a national peer group of colleges that serve similarly academically prepared students. These informative consumer-selection peer groups can be constructed from the groups of colleges that students list on their FAFSA applications and/or their ACT and SAT score submissions.

By following this two-level institutional peer group approach, ED can ensure its ratings system is both fair to institutions and helpful to students and families.

College Ratings & Higher Ed Accountability (Part 1)

By Mary Nguyen Barry, Policy Analyst, and
Michael Dannenberg, Director of Strategic Initiatives for Policy

This past Monday, Education Reform Now submitted recommendations to the Department of Education (ED) regarding the President's proposed college ratings system. In case you don't want to dive into our 12 pages of comments, we'll give you some highlights over the next two blog posts. First up: A general overview.

The Good

Overall, we support the concept and need for a federal college ratings system.

Despite improvements over the past 50 years, the American higher education system still calcifies economic inequality rather than acting as an engine of socioeconomic opportunity. College access for students from low-income families has improved, but the gap in degree completion rates between those from low- and upper-income families has grown. Rising net prices, driven by state higher education funding cuts, have outstripped wage growth for poor, working-class, and even middle-income families. The result is heavier debt burdens, especially among low-income families, exacerbated by low completion rates and long times to degree even among those who do complete.

We support rating colleges as "high-performing," "low-performing," and "middle."

Not all colleges contribute equally, nor solely, to overall postsecondary education underperformance. There are high-performers - colleges that buck the trend and enroll and serve students from low-income families well - and low-performers - colleges that act as "engines of inequality," "college dropout factories," or "diploma mills." It's much easier, in the initial rounds, for ED to identify the "best and worst" colleges and to leave more nuanced gradations for later iterations of the ratings system.

We support using the ratings system to drive improved accountability and information to consumers.

The three-tiered rating system lends itself to rapid accountability provisions. We've suggested previously that the federal government at least begin the accountability process by identifying the "worst of the worst" colleges on a variety of access, success, and post-enrollment success metrics. These colleges should lose access to certain federal grant, loan, and tax benefits. Or, at the very least, they should suffer a loss in competitive standing when pursuing non-formula discretionary grant funding and, separately, be subject to heightened scrutiny, including Department "program reviews" of regulatory compliance.

On the consumer front, students and families need clear indications of a college's performance along a streamlined set of outcome measures. Identifying the "worst of the worst" colleges would also send a bright signal to consumers that these are institutions to avoid. In the next post, we'll discuss in more detail how the Department's ratings can serve both accountability and consumer purposes.

The Bad

We do not support ED's consideration of proposals to adjust outcomes for student characteristics or institutional mission.

We cannot stress enough our philosophical opposition to proposals (like the Association of Public and Land-Grant Universities') that call for adjusting institutional outcomes based on personal student characteristics. Such "risk adjustment" consecrates a different set of expectations for different groups of students based on immutable characteristics, such as race and gender. It could also allow colleges to escape responsibility for providing quality service to every student they voluntarily enroll. It's what former President George W. Bush referred to as "the soft bigotry of low expectations."

Never before has there been any outcome adjustment in federal higher education policy.  In fact, the Obama administration firmly rejected this approach in the past during the gainful employment debates. ED insisted back then that it was appropriate to hold all institutions to certain minimum standards irrespective of student demographics. ED should apply that same principle in the context of a ratings system applicable to all degree-granting institutions of higher education.

What would an alternative be? Unadjusted outcomes should be compared among similar colleges serving similarly academically prepared students. This can be accomplished by creating "institutional peer groups." We'll discuss in more detail how that would work and highlight individual performers in our next post.

Over 25 Groups Back Obama Teacher Prep Reg:
"An Education Equity Mandate"

By Hajar Ahmed

The Obama administration's teacher education reform plan won an influx of support from more than 25 education advocacy and service organizations this week. Advocacy groups that haven't been aligned of late on K-12 school accountability issues all echoed the same theme in comments submitted to the U.S. Department of Education.

Groups including Teach For America, the Center for American Progress, Deans for Impact, Education Trust, and the California Business Roundtable all called for the Obama administration's teacher preparation rule-making plan to go forward and for states to rate teacher education programs based first and foremost on teacher candidate outcomes, including candidate performance in PK-12 classrooms. The largest coalition, led by Democrats for Education Reform, called for the Education Department's final rule to:

  • Ensure use of multiple measures by states in rating traditional and alternative route teacher preparation program effectiveness;
  • Ensure that no state rate a program as effective or higher absent evidence that teacher candidates go on to generate satisfactory student learning outcomes in K-12 classrooms;
  • Encourage states to create at least four teacher preparation program evaluation performance categories (i.e. low-performing, at-risk, effective, and highly effective) that meaningfully differentiate preparation programs; 
  • Establish a link between state program evaluation results and institutional eligibility to participate in the TEACH "grant/loan" program; and 
  • Require states to publicly report teacher preparation program evaluation results to prospective teacher candidates, employers, and others.

Notably, supportive groups still called for a number of improvements in the Education Department's regulatory effort, including ensuring that state teacher preparation program evaluation efforts be driven by a "rigorous and streamlined" set of requirements. Advocates called for an assurance that all teacher preparation programs be evaluated on an equal basis and for removal of carve-outs based on subject matter (the NPRM exempts STEM programs). The DFER-led group also called on the administration to drop the requirement that states survey program graduates and incorporate student growth data in grades and subjects where ESEA does not require testing.

Below is a list of groups that submitted supportive comments to the Department of Education.  At least 200 individuals submitted similar supportive letters as well. 

A+ Denver
Aspire Public Schools
Association of American Educators
California Business Roundtable
Center for American Progress
Civic Builders
Deans for Impact
Democrats for Education Reform
Education Reform Now
Ed Trust
Educators 4 Excellence
Great Oakland Public Schools
Green Dot Public Schools National
Kevin Carey, New America Education Policy Program
National Council on Teacher Quality
Reading and Beyond
Relay GSE
Students Matter
Success Academy Charter Schools
Teach for America
Teach Plus
The Mind Trust
Third Way
Urban Teacher Center

Teacher Preparation Coalition Comments Sign On Letter