We’ve Got Numerical Outcomes, Now What?

Author: Maxine Roberts, Director, Strong Start to Finish

Introduction

Developmental education (DE) reforms are addressing some of the barriers to college progression that students in traditional prerequisite course structures face, especially students who are racially minoritized, students with low incomes and adults returning to college. In fact, these structural changes often produce better aggregate outcome data than traditional placement and course models do. However, after disaggregating these data, leaders who notice outcome differences between student groups have asked us, “How can I address this disparity, and what should I do with these data?” In this blog, I share outcomes from DE reforms and offer a tool that helps refine the approach to DE reform by using outcome data to address ineffective institutional policies and practices.

Outcomes from Corequisite Courses

Results from corequisite courses show that this model works. In this structure, students enroll in a gateway course along with a support course to help them succeed in college-level coursework. Through corequisite courses, all students benefit from improved course pass rates and, in many cases, outcome differences shrink—particularly between student groups that differ by race/ethnicity and socioeconomic status[1].

Complete College America (CCA) surveyed their Alliance states where corequisite courses were implemented, and shared completion rates from AY 2019–2020. These data represent outcomes for full-time students assessed as “unprepared for college math” at four-year colleges. While some students were placed into prerequisite courses, others were placed into corequisite math.

Average of Percentage Completed in Math for Pell Recipients in 4-Year Colleges (by race/ethnicity)

Student Group                         Prerequisite Math    Corequisite Math
All                                   19.6%                67.4%
American Indian or Alaskan Native     6.3%                 60.0%*
Asian                                 22.2%*[2]            100.0%*
Black or African American             16.1%                60.9%
Hispanic                              25.3%                72.4%
White                                 19.7%                68.0%

Across the colleges surveyed, students who enrolled in corequisite math completed at higher rates than peers enrolled in prerequisite math. These data are compelling for higher education leaders making the case to replace prerequisite courses with corequisite courses at four-year institutions. The outcomes begin to provide answers about what has happened in CCA’s Alliance states after the implementation of these reform efforts.

Using Quantitative Data to Generate Qualitative Inquiry

Addressing inequities in higher education calls for using quantitative data to identify where outcome differences occur, and then considering the policies and practices that could be reinforcing those inequities. This process is valuable because, as the national data on corequisite math show, the resulting information can reveal improvements produced after reforms are enacted while still leaving some questions unanswered. Our review of practices in the field shows that suggestions to use disaggregated data as part of a process to address inequities often start and stop with data collection.

When these data are reviewed without attention to why outcome differences exist, the numbers provide only a glimpse into what’s happening in any classroom, institution or system. They also leave leaders, practitioners and researchers on their own to make sense of the inequalities between student groups, and that sensemaking can be harmful when answers to the question of why rest on deficit-focused perspectives of students. Addressing inequities in educational settings requires that we consider how we use disaggregated data to make sense of these outcomes. To support state, system and institutional leaders as they begin to answer the why, we can turn to the Equity-minded inquiry series: Data Tools from the Center for Urban Education.

Employing one of the data tools, Making Sense of Equity Gaps, involves disaggregating data and analyzing the outcomes. Instead of stopping there if outcomes between groups differ, the authors ask readers to develop inquiry questions that help them consider which policies (rules and systems for accountability) and practices (implementation of the rules) might be contributing to the inequities that the data reflect. Practitioners can use this learning to identify and act on the policies and practices that may be impeding student success.
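The disaggregate-then-question step that the tool describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the group labels, records and threshold below are invented for the example, not drawn from CCA’s data, and the flag it produces is merely the starting point for the inquiry questions the authors recommend.

```python
# Hypothetical sketch of disaggregating completion outcomes by student group.
# The records below are invented for illustration; they are not CCA's data.
from collections import defaultdict

# Each record: (student_group, completed_gateway_math)
records = [
    ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", True), ("Group B", True), ("Group B", True),
    ("Group C", False), ("Group C", True), ("Group C", False),
]

def completion_rates(rows):
    """Return the overall completion rate and a per-group breakdown."""
    totals, completes = defaultdict(int), defaultdict(int)
    for group, completed in rows:
        totals[group] += 1
        completes[group] += completed  # True counts as 1
    overall = sum(completes.values()) / sum(totals.values())
    by_group = {g: completes[g] / totals[g] for g in totals}
    return overall, by_group

overall, by_group = completion_rates(records)

# Flag groups whose completion rate trails the overall rate. The flag is
# not the end of the analysis; it marks where inquiry questions about
# policies and practices should begin.
gaps = {g: overall - r for g, r in by_group.items() if r < overall}
```

The point of the sketch is that the computation is the easy part; the equity-minded work starts after `gaps` is produced, when practitioners ask which policies and practices lie behind each flagged difference.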

Reflecting on the ways that individuals use their beliefs and perceptions to explain data outcomes, the authors present five categories of deficit-minded practices that some may use to rationalize the outcome differences they’re seeing:

  • Focus on student behavior.
  • Emphasize that students are not college-ready.
  • Fixate on students’ socioeconomic backgrounds.
  • Rely on stereotypes.
  • Concentrate on fixing what students lack.

These practices are problematic when they serve as the primary rationale for outcomes, because they keep leaders and practitioners from considering other issues that may be at play. For instance, the corequisite math outcome data shared earlier reveal that all student groups of Pell recipients benefitted from the implementation of corequisite math. However, a closer look at the numbers shows that American Indian or Alaskan Native students and Black students had the lowest completion rates after corequisite math was implemented. A deficit-minded explanation for these outcomes combines a focus on student behaviors with beliefs that students are not ready for college, such as, “Despite changes to assessment and placement, Indigenous students and Black students do not have the academic behaviors and habits that make them college ready.” Individuals who hold this belief can engage with students in ways that negatively affect their learning.

Rather than settling on deficit-minded rationales for outcomes, the authors offer these equity-minded practices for understanding and interrogating outcomes:

  • Clarify and unpack processes and structures.
  • Identify institutional actors and their roles.
  • Understand why some student groups are better served by a policy, practice or structure.
  • Gather or analyze data that’s close(r) to practice.
  • Understand existing data practices.
  • Unpack institutional values and beliefs.

Following through with this example from the data, practitioners who take an equity-minded approach to understanding these outcomes might ask, “What classroom or institutional factors might contribute to Asian, Hispanic and White students having higher completion rates?” Inquiries of this type, paired with a process for discovering the answers, encourage practitioners to consider the classroom and institutional practices that may be at play and how they affect students’ experiences in the classroom and their outcomes.

Conclusion

Efforts to reform DE continue to improve quantitative results for students in the aggregate and for minoritized students. However, when reforms do not close outcome differences between student groups (e.g., by race or socioeconomic status), understanding why those differences persist is our next step in this reform. One way forward involves using a set of tools that help us understand the policies and practices in the system, institution and classroom that could be negatively impacting outcomes. Pairing quantitative and qualitative data helps us make sense of and address outcome differences, and it provides a comprehensive process for understanding the policies and practices related to reform activities.

 

References

Center for Urban Education. (2020). Equity-minded inquiry series: Data Tools. Rossier School of Education, University of Southern California.

Complete College America. (2021). No Room for Doubt: Moving Corequisite Support from Idea to Imperative.

 

[1] See page 9 in CCA’s report, No Room for Doubt.

[2] Asterisk (*) indicates populations with a smaller sample size. It should be noted these samples are a subset of a larger data set.