Department of Education Definition of Exemplary Institutions Uses Metrics LOWER Than GE

In the Methodology Used To Develop a List of 200 Exemplary Public and Private Non-Profit Institutions, the Department Uses Metrics It Repeatedly Refused to Include in the Calculation of the Gainful Employment Regulations

Overview
If you attended last week’s CSPEN Federal Legislative & Regulatory Update, you would have heard about an announcement made last Tuesday (October 8th) highlighting Secretary Cardona’s “Raise the Bar (www.ed.gov/about/ed-initiatives/raise-the-bar/raise-the-bar-college-excellence-and-equity#:~:text=Building%20a%20higher%20education%20system%20that%20is%20inclusive,) ” initiative and the April 24, 2024 launch of the Postsecondary Success Recognition Program (www.ed.gov/about/news/press-release/biden-harris-administration-create-new-national-recognition-program), the latter of which is “a program that uses data and evidence to identify exemplars (emphasis added) across the country and uplift institutions of higher education that serve as engines for economic mobility by supporting all students to complete affordable credentials of value that prepare them well to participate in the workforce, their communities, and our democracy.”

The release went on to state, “after receiving feedback from the public on program design,” including the three metrics that would be used to assess institutional performance – Access, Success, and Post-College Outcomes – the Department was “publishing the list of 200 institutions that clearly have a story to tell that demonstrates what institutions do matters to ensure student success – and inviting those institutions to submit an application to be recognized for their efforts. The list includes 100 predominantly bachelor’s degree granting institutions (www.ed.gov/media/document/postsecondary-success-program-eligibility-list-bachelors-institutions-2024) and 100 predominantly associate’s or certificate granting institutions (www.ed.gov/media/document/postsecondary-success-program-eligibility-list-associate-certificate-institutions).”

The release goes on to explain that:

“The Department identified these institutions using publicly available data to measure performance on key indicators related to equitable access, success, and post-college outcomes at public and non-profit institutions across the country. These metrics consider the extent to which institutions:

  • Enroll low-income students and students of color in their state
  • Help those students graduate or transfer successfully; and
  • Prepare them to enter the career of their choice and enjoy upward mobility, including recouping their costs and earning a premium over high school graduates in their state.”

And that, within the group of 200 institutions, “75 percent of predominantly bachelor’s institutions and nearly 90 percent of predominantly associate or certificate institutions on the list are Minority-Serving Institutions or eligible for the Strengthening Institutions Program, which focuses on institutions serving large proportions of low-income students.”

The Institutional Performance Metrics
The Department explains how it derived two newly defined categories of institutions of higher education – Predominantly Bachelor’s Degree Granting and Predominantly Associate or Certificate Granting – that reflect the purpose of the program, and how, from a universe of over 982 bachelor’s-granting and 820 associate’s/certificate-granting institutions that satisfied these requirements, it narrowed the field to the two lists of 100.

The sixteen metrics were grouped into the three categories noted above.

Within the three categories the Department explains how the data sets were determined, and this is where the calculations behind this list of exemplary institutions – “institutions of higher education that serve as engines for economic mobility by supporting all students to complete affordable credentials of value that prepare them well to participate in the workforce, their communities, and our democracy” – vary substantially from the way the same metrics are treated under Gainful Employment.

The Access metrics “take into account both the number and proportion of low-income students and underrepresented students of color (Black, Hispanic, Native American, Pacific Islander, Two or More Races) enrolled by institutions. The race/ethnicity metrics are based on 12-month enrollment data and the income metrics are based on Pell Grant recipients among all undergraduates using IPEDS.”
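As a rough sketch of how access-style metrics of this kind can be computed from IPEDS-like counts – the function, field names, and all figures below are hypothetical illustrations, not the Department’s actual formula:

```python
# Illustrative access-style metrics from IPEDS-like enrollment counts.
# All names and numbers are hypothetical examples.

def access_metrics(total_undergrads, pell_recipients, underrep_enrollment):
    """Return both the counts and the proportions the release describes."""
    return {
        "pell_count": pell_recipients,
        "pell_share": pell_recipients / total_undergrads,
        "underrep_count": underrep_enrollment,
        "underrep_share": underrep_enrollment / total_undergrads,
    }

# A hypothetical institution enrolling 8,000 undergraduates:
m = access_metrics(total_undergrads=8_000,
                   pell_recipients=3_600,
                   underrep_enrollment=4_400)
print(m["pell_share"])      # 0.45
print(m["underrep_share"])  # 0.55
```

Note that the program credits both the raw number and the share, so a large institution enrolling many Pell students is not penalized for also enrolling many non-Pell students.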

The Success metrics “leverage the best publicly available data from IPEDS, though there are known limitations. First-year retention was included as a near-term or leading metric of success, though there are no disaggregations available in IPEDS and the metric is not available for some institutions based on current IPEDS reporting conventions (e.g., predominantly associate’s or certificate granting institutions classified as four-year institutions in IPEDS only report retention on their bachelor’s students). Graduation and success rates were generated using the IPEDS Outcome Measures (OM) Survey, which improves upon the IPEDS Graduation Rate Survey (GRS) by including all students – first-time and transfer students and full-time and part-time students. Graduation rates after 4 years were used to measure timely completion for all institutions. Additionally, 8 year graduation rates were used for predominantly bachelor’s institutions and 8 year success rates (graduation + transfer) were used for predominantly associate’s or certificate institutions.”
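A minimal sketch of the graduation/success-rate arithmetic described above, assuming a single Outcome Measures-style cohort; the cohort composition and all numbers are hypothetical:

```python
# Hedged sketch of an Outcome Measures-style rate. The OM survey tracks
# ALL entering students (first-time and transfer, full-time and part-time),
# unlike the older GRS first-time full-time cohort. Figures are hypothetical.

def success_rate(cohort, completers, transfers=0):
    """Graduation rate when transfers == 0; the combined
    'success rate' (graduation + transfer) otherwise."""
    return (completers + transfers) / cohort

# Hypothetical predominantly associate's/certificate institution, 8-year window:
print(success_rate(cohort=1_000, completers=380, transfers=220))  # 0.6

# Same cohort scored as a bare graduation rate (bachelor's-style metric):
print(success_rate(cohort=1_000, completers=380))  # 0.38
```

The design choice matters: counting successful transfers nearly doubles this hypothetical institution’s score, which is why the transfer-inclusive version is reserved for associate’s/certificate institutions, where transfer is a common successful outcome.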

For the Post-College Success metric, the Department explains: “Given concerns raised by the field about the use of raw earnings metrics for this program, which might undervalue lower-paying professions with high social value among other issues, we used employment rates and earnings thresholds related to earning a premium over high school graduates and earning enough to recoup costs instead. College Scorecard data were used to calculate an employment rate for students working and not enrolled 10 years after initial entry. This rate was compared to employment rates for 25-34 year olds working in the state the institution is located in by credential level using American Community Survey data obtained via IPUMS (e.g., bachelor’s or higher for BA institutions and some college or an associate’s degree for AA/Cert institutions). The relevant denominator was not available at this time from the College Scorecard dataset to construct this metric disaggregated for low-income students. It is important to note that the College Scorecard data cannot be used to construct an accurate unemployment rate, since we cannot ascertain whether former students who are not working intend to do so. College Scorecard employment data are not yet disaggregated by race/ethnicity.”
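The employment-rate comparison can be sketched as follows; the inputs, variable names, and the ACS benchmark figure are hypothetical stand-ins, not actual College Scorecard or ACS values:

```python
# Sketch of the employment-rate comparison described above.
# Scorecard-style input: students working and not enrolled 10 years after
# initial entry, out of those with a known status. The benchmark is a
# state-level ACS employment rate for 25-34 year olds at the matching
# credential level. All figures below are hypothetical.

def employment_rate(working_not_enrolled, known_status_total):
    return working_not_enrolled / known_status_total

institution_rate = employment_rate(working_not_enrolled=1_530,
                                   known_status_total=1_800)
state_acs_benchmark = 0.82  # hypothetical ACS rate, same state and credential level

print(round(institution_rate, 2))               # 0.85
print(institution_rate >= state_acs_benchmark)  # True
```

As the quoted passage cautions, this is an employment rate, not an unemployment rate: the denominator cannot distinguish former students who are out of the labor force by choice from those seeking work.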

What This Really Means
The bottom line is this – in all three metrics the Department took into consideration, and adopted, methodologies based upon input from the other sectors of the higher education community – the very input that was openly opposed throughout every federal negotiated rulemaking on Gainful Employment and in the most recent Financial Value Transparency final rule.

And the conclusion is this – these exemplary schools are being held to a lower standard than GE for, at a minimum, the following reasons:

The metrics take Federal Pell Grant enrollment and the ethnicity of the populations served into account – the larger the share of these cohorts an institution serves, the better it performs in the assessment. GE provides no such consideration in the assessments and calculations used for eligibility determinations, despite repeated pleas for recognition of these very factors in every negotiated rulemaking.

Earnings are measured for working individuals ONLY (excluding all nonworking individuals) and are based on earnings data 10 years after initial entry. GE assesses the earnings of all individuals after only 3 years. Combined with the comparison against the employment rates of 25-34 year olds, this makes the high school benchmark much easier to clear.
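To isolate the effect of measuring workers only, here is a simplified, purely illustrative contrast of the two earnings tests. The earnings figures and the high-school benchmark are hypothetical, and the sketch deliberately ignores the 3-year vs. 10-year timing difference by reusing one cohort for both tests:

```python
# Simplified, illustrative contrast of the two earnings tests.
# Hypothetical annual earnings; 0 marks a non-working individual.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

hs_benchmark = 30_000  # hypothetical state median for high school graduates

cohort = [0, 0, 24_000, 28_000, 33_000, 41_000]  # two non-working individuals

# GE-style earnings premium test: ALL individuals counted.
ge_median = median(cohort)                             # 26_000 -> below benchmark

# Recognition-program-style test: workers only.
workers_median = median([e for e in cohort if e > 0])  # 30_500 -> clears benchmark

print(ge_median < hs_benchmark <= workers_median)      # True
```

With identical underlying outcomes, the same hypothetical cohort fails the all-individuals test and clears the workers-only test – which is the asymmetry this section is objecting to.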

And that’s just the tip of the proverbial iceberg. Couldn’t you envision the Department using this methodology for the 2026 presentation of Financial Value Transparency reporting – while forcing institutions subject to the GE eligibility criteria to use the methodology contained within the regulations?