December 19, 2019 | Report by Nathan Arnold, Jessica Morales, Bethany Little

How Federal Higher-Education Policy Can Safely Support Innovation

Amid rising college tuition, low college-completion rates, and ever-changing employer demands for specialized skills, new forms of postsecondary education are gaining prominence. These “nontraditional” programs (such as coding boot camps) are typically short in duration and designed to prepare students for a particular occupation or career—ideally, at costs lower than at a traditional college. Some programs account for prior learning and job experience, some are offered by noninstitutional providers, and others are self-directed.

Students in many of these programs are not eligible to receive federal loans or Pell grants. As Congress works toward reauthorizing the Higher Education Act (HEA), however, policymakers are beginning to more actively consider whether, and under what circumstances, shorter educational programs aimed at imparting specific and employable skills should be eligible for federal student aid.

While jobs vary in pay, stability, and perceived social value, any educational program receiving federal aid money should prepare students for employment that enables them to afford monthly payments on student loans. While such positive student outcomes should be the standard in higher education generally, they are especially relevant for innovative educational delivery methods with an untested track record. High-quality postsecondary education imparts critical skills and knowledge but ultimately must provide students value for their time and money.

There are many ways to measure the value and quality of postsecondary educational programs to determine their eligibility for federal student aid. But as this report explains, the process by which eligibility is currently determined—by the U.S. Department of Education, a federally recognized accreditor, and the state or states in which the institution operates—largely limits the pool of educational programs to traditional colleges and is poorly suited for nontraditional educational programs and models. Recent examples in federal policy show us that new, student-outcomes-focused methods may be needed to determine eligibility, especially for these nontraditional types of programs.

If federal policymakers want to open the federal student aid programs like Pell grants and student loans to such programs, they must first establish sufficient safeguards and valid determinants of value to ensure that taxpayers are not funding ineffective programs and that students are not using their limited federal aid only to be left worse off than when they started.


Introduction

Educational innovation can appear at any time and not necessarily as the result of federal support. This is the case with the many new “competency-based” educational programs,[1] which can include boot camps, on-the-job training, and subscription-based learning programs that have gained prominence in recent years. The growth of these programs[2] raises the question of whether the students enrolled in them should have access to federal student aid[3]—and, if so, under what conditions. Federal support can help promising educational practices to scale up; but without sufficient quality controls, such support may waste taxpayer money and leave students worse off. Policymakers must therefore carefully decide when it is appropriate to scale innovation through access to student aid funds, ensuring that new delivery models demonstrate “proof of concept” before they are able to access federal dollars.

Quality can be measured in different ways—including job placement, completion rates, post-program income growth, licensure passage rates, and student and employer satisfaction, among others. Though it is true that the market wage for a job is not always aligned with its social value and that wages for similar jobs can vary geographically, a quality program should minimally leave students capable of securing jobs with wages sufficient to cover the cost of completing the program. If a program does not meet this requirement, including by failing to lower its price to meet market demand, it is not leaving students better off and taxpayers cannot fairly be asked to support it.

The problem today is that accreditation and U.S. Department of Education (ED) requirements in many cases effectively bar such innovative educational programs from receiving federal aid funds, regardless of their effectiveness. For example, “seat-time” requirements mandate that students log a certain amount of class time before they can earn credit toward a course or credential. Federal law generally requires that institutions base the number of credits—and thus the amount of aid for which a student is eligible—on the amount of time a student spends sitting in class (or at a computer) learning the material.

Such requirements exist for good reason. For many years, predatory institutions would inflate the number of credits that were awarded for a particular class to maximize the Pell grant funds that the institution could capture, even for students not enrolled on a full-time basis. However, seat-time requirements also make it much more difficult for institutions to award credits to students on the basis of their demonstrated skills and abilities rather than the time they take to complete a program. For these reasons, an outcomes-focused approach to evaluating the effectiveness of such programs could allow more people to gain effective skills while protecting students and taxpayers from programs that leave graduates unable to find a job or repay their loans.

The Current State of Federal Policy

The federal government’s primary focus in higher education has historically been one of increasing access: getting more students enrolled in educational programs. Current federal policy does this through voucher-based funding (most commonly in the form of Pell grants, student loans, work-study, and GI Bill benefits) made available to students who are then free to enroll in any program at an eligible institution of higher education.

The way institutions are deemed eligible for federal student aid largely limits the pool to traditional colleges, even if they deliver the education largely through electronic means. All institutions must be approved by the “triad”: ED; a federally recognized accreditor; and the state(s) in which the institution is operating. Each of these entities reviews institutions to ensure that they are complying with oversight requirements, be it financial stability, deadlines for disbursement of student aid, or standards of quality for student learning. Additionally, in order to maintain their eligibility, all institutions must comply with limits on the percentage of students who have not made a single payment on their federal loans within the last year.[4] For-profit institutions must also derive at least 10% of their revenue from private dollars and/or GI Bill benefits rather than from ED sources (the so-called 90/10 rule).[5]

These requirements are intended to protect students and taxpayers, given widespread institutional abuses of the federal student aid programs.[6] The precipitous closures of Corinthian Colleges[7] and ITT Technical Institute are well-known,[8] but more recent closures, at Argosy University[9] and Dream Center Educational Holdings[10] (which owned the Art Institutes), left tens of thousands of students with debt and degrees of questionable value, at best—often with little warning or ability to recoup their lost time and money. Closures and poor outcomes serve as a warning that federal policy continues to lack sufficient incentives, positive or negative, for institutions to prioritize student success.

There is, moreover, growing concern about accountability in higher education generally—including the value of degrees earned by traditional college graduates. The concerns are motivated in part by research indicating the importance of completion rates and quality to the overall value of investment in higher education (for students as well as taxpayers).[11] One result is that the chairmen of the Senate and House education committees, Lamar Alexander (R., Tenn.) and Bobby Scott (D., Va.), have each offered public proposals to introduce new measures of student success, such as whether students are successfully paying down their loans, or to require an evaluation of college-completion rates and the workforce participation of graduates as part of the accreditation process. Christopher Murphy (D., Conn.), a prominent member of the Senate education committee, recently argued in a speech that an outcomes-based approach could reduce regulation while sharpening the focus on student success, by giving educational programs the incentives and the freedom to serve students in lower-cost ways that still lead to good outcomes.[12]

For now, there are relatively few ways the federal government funds or approves innovative practices, since the determination of eligibility or ineligibility for federal student aid by the “triad” is the primary lever by which federal policy interacts with higher education. The Obama administration attempted to fund promising institutional practices that would encourage completion through a competitive grant program called First in the World (FITW).[13] However, FITW was funded for only two years and has not received funding since 2015.[14]

Another attempted avenue for change is the Fund for the Improvement of Postsecondary Education, a program office and grant program within ED that is responsible for funding innovative postsecondary practices that will lead to improved institutional quality and lower student costs.[15] Congress has appropriated only limited funding for this office, however, and recent efforts have been relatively small; the primary example is a $5 million pilot to lower textbook costs.[16]

ED has also attempted to encourage innovation through regulation and sub-regulatory guidance. In 2013, the Obama administration provided detailed guidance to institutions seeking to gain federal student aid eligibility to offer so-called direct assessment or competency-based course work.[17] Rather than relying on seat time as a measure of learning, such programs are designed to shorten an educational program by evaluating (typically, mid-career, returning) students’ knowledge, skills, and abilities with respect to the program’s design, awarding credit for previous learning so that students do not waste time taking courses in subjects they have already mastered. However, there is little evidence of widespread adoption of such practices or an uptick in federal aid awarded for such course work.

In 2018–19, the Trump administration, working through ED, attempted to remove regulatory requirements deemed burdensome and likely to hold back innovative practices. Ultimately, those regulations remained largely untouched, with ED instead opting to adjust the accreditation process more broadly.

The EQUIP Experimental Site

The most relevant recent attempt to support educational innovation is the Experimental Sites authority (commonly referred to as “ex sites”).[18] The authority to run this experiment stems from a provision in HEA that allows ED to waive statutory or regulatory requirements for a select group of institutions to test educational practices not otherwise allowed. The goal is to better inform policymakers about potential changes to the law.

In October 2015, ED announced a pilot program called Educational Quality through Innovative Partnerships (EQUIP), “to accelerate and evaluate innovation through partnerships between colleges and universities and non-traditional providers of education in order to equip more Americans with the skills, knowledge, and training they need for the jobs of today and tomorrow.”[19] In August 2016, the EQUIP pilot selected eight partnerships in which to test alternative forms of accreditation.[20]

EQUIP allowed traditional institutions of higher education to partner with new providers to obtain federal financial aid for programs that offered innovative course work (e.g., coding, advanced manufacturing) and/or innovative program styles (e.g., credit for previous learning or blended instruction—online and traditional classroom learning and self-paced videos). For example, Northeastern University partnered with General Electric to provide on-the-job, advanced skills-driven training that not only was aligned with the company’s needs, helping employees advance in their current jobs, but also imparted a broader set of skills applicable to the field, so that students could eventually advance beyond GE.

That program combined non-classroom-based course work with a specific job and employment need that would increase students’ earnings in the near term while providing for more student-centered value than typical job training. In theory, such programming could provide increased value to students with less time to completion, all without requiring students to discontinue their job at GE to enroll in a more traditional educational program.

Each EQUIP partnership was to be reviewed and monitored by its own Quality Assurance Entity (QAE), an independent third party that would hold programs accountable for educational quality by assessing evidence of student learning and post-program employment. If an EQUIP partnership failed to meet the quality standards established by its QAE—or if other problems were raised by the higher-education institutional partner, its accreditor, or ED—the institution was required to improve, suspend, or terminate the educational partnership. Each QAE selected to participate in the EQUIP experiment had a unique history, organizational structure, theory of action, and area of focus that informed its approach and ambitions in postsecondary quality assurance.

The EQUIP experiment had the potential to demonstrate new ways for determining the eligibility of innovative programs for federal funds. Unfortunately, only two QAE models have been approved to evaluate their innovative partners, several applicants have had to drop out of the experiment, and still others await approval or denial from ED.

EQUIP: Lessons Learned

Although the EQUIP experiment has largely failed to deliver on the promise of its initial goals, the way that the experiment broke down provides valuable lessons for the future. One lesson is apparent: the existing approval structures, especially the current system of accreditation, are not equipped to evaluate the value of nontraditional educational models. Another is that QAEs—or another objective entity responsible for evaluating outcomes—are more likely to be effective than traditional accreditors in evaluating student outcome metrics. Other lessons include:

The design and implementation of experiments must be improved. Although ED sought to give QAEs flexibility in evaluating program quality through assessments of student learning and employment, it failed to provide them with reasonably clear parameters at the outset. As a result, QAEs, traditional institutions, and new programs were subject to shifting directives and instructions. In the future, ED should establish a clear rubric with high-level requirements, timelines for completion, and an upfront demonstration of capability to establish buy-in from all parties. The timeline should create pressure to keep the work moving while allowing participants to demonstrate sufficient compliance, so that a single administrative stumbling block does not hamper the entire endeavor.

Before a partnership’s initial approval, all parties should be required to demonstrate readiness to engage and to begin the experiment, through a checklist of clear standards and transparency of findings. Additionally, ED should be ready to help program participants when they run into administrative difficulties. For example, there were instances where EQUIP timelines conflicted with the standard timelines that many regional accrediting agencies use, leaving QAEs to hunt for guidance on how to resolve the conflict.

However, even with a smoother implementation, EQUIP would have failed because it attempted to test both the feasibility of innovative providers and the sufficiency of the review provided by QAEs, making it impossible to determine which—if any—of the program participants were effective at which task. Additionally, because each participant was partnered with one QAE, instead of each QAE evaluating several participants, there was no basis to compare the quality-assurance models.

Entrenched interests may impose barriers to innovation. The EQUIP pilot required QAEs to seek approval not only from ED but also from a traditional educational institution (already eligible to receive federal financial aid), its accreditor (which is paid by the institutions it oversees), and, in some cases, the state regulatory authority. The approving entities, all comfortably entrenched, had little incentive to speed up the process and, in the case of some accreditors, faced little incentive to approve a potential competitor. In the future, ED will need to use its authority to push state regulators and traditional accreditors if it hopes to make progress on new models of education and new methods of quality assurance.

Consistent, collaborative, and timely technical assistance is necessary. Though the EQUIP pilot was designed to give approved QAEs flexibility on quality assurance, the ultimate arbiter of whether a program would be approved shifted between the QAEs and ED. This undermined the QAEs’ authority.

From the beginning, ED’s announcements and communications with the institutions of higher education and noninstitutional providers developing the new programs were fraught. The department put forth complex requirements for aid eligibility but offered few details or examples of what would satisfy them. Nor was the department clear about the timeline or the steps for approval. It also failed to offer participants concrete descriptions of expected progress or technical assistance.

A New Approach to Federal Support for Innovation

Despite the limitations of the EQUIP experiment, innovative programs have the potential to serve as an important alternative for students seeking lower-priced postsecondary education. To the extent that such programs lower costs and enable students to get quality education and training in a shorter period of time than a two- or four-year college degree, they deserve federal funding. But how can these programs be fairly and efficiently evaluated and approved without falling into the same trap that doomed EQUIP?

One promising approach comes from Third Way,[21] a think tank that has suggested an approval process that resembles charter school authorization in the K–12 system.[22] Third Way’s proposal would establish a pathway to federal funds with ED playing the role of an “authorizer.” The goal is to streamline eligibility compared with the current accreditation triad, while still ensuring that only programs providing sufficient value to students are able to access federal funding. Programs would need to meet four key quality thresholds:

1. The program must demonstrate that it can successfully serve students coming from low- and moderate-income backgrounds. This could initially be done through a non–Title IV funding mechanism such as employer-sourced funding, a competitive grant fund, scholarships, or models that condition future loan repayment on sufficient earnings.

2. Programs must provide data showing that their students can complete their courses of study, find employment, and earn a wage that is sufficient to pay off program-related debt and, ideally, higher than the wage of a comparable individual who does not have the skills taught in the program. These evaluation metrics and standards could include graduation and employment screens that require a given percentage of entering students to graduate and a sufficient percentage of graduates to find employment in their field of study within a specified time frame (Third Way recommends a threshold of 75% for each over four years). Additionally, programs should be required to demonstrate sufficient graduate earnings, either through a minimum earnings threshold or by showing that program-related debt is affordable given the salary earned upon completion (a simple illustrative check of such screens appears after this list). While the 75% completion rate is higher than the nationwide average for all higher education, a higher rate is appropriate in this case, for several reasons:

  • Typically, competency-based programs are designed for faster completion and are sometimes significantly shorter than traditional programs. Such programs (which typically grant a certificate) often have higher completion rates than their degree-granting peers because there is less time for life to get in the way of completion; existing certificate programs have completion rates 20 percentage points higher than two-year college credentials, for example.[23]
  • These programs are designed specifically to prepare students for particular jobs, so program quality should be measured by success in getting students jobs that can support repaying the cost of the program.
  • As we have argued throughout this report, the federal government should extend federal aid eligibility—and therefore the ability to scale—only to those innovative programs that have already demonstrated sufficient quality and value.

3. Programs need to demonstrate that they are filling gaps in the workforce. This could be demonstrated in one or more of the following ways, perhaps accounting for geographic differences:

  • Education and credential attainment: the number of working-age people holding certain credentials compared with the number of job openings for people with those credentials.
  • State analyses: labor supply-and-demand analyses published by state labor agencies showing a need in a given area or industry.
  • Agreement with a local or regional employer: an agreement guaranteeing the employment of students who finish the program.

4. A program’s eligibility for federal student aid would extend for four years with annual reviews. Following the fourth year, a program could be reauthorized if it has met ED’s standards of quality and shows that it can continue to do so.[24] With respect to certain types of programs or programs of certain durations, experience may show that a four-year authorization is either too long or too short and needs to be adjusted.
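
To make the screens in item 2 concrete, the sketch below shows how a reviewer might check a hypothetical program against them. The 75% completion and in-field-employment thresholds come from the Third Way proposal described above; everything else (the debt-affordability cap, the function name, and the example figures) is an illustrative assumption rather than part of any existing rule or proposal.

```python
# Illustrative sketch only: the 75% completion and employment thresholds follow
# the Third Way proposal; the 8% debt-to-earnings cap and all example figures
# are assumptions made for demonstration.

def meets_screens(entrants, graduates, employed_in_field,
                  median_earnings, annual_debt_payment,
                  completion_threshold=0.75,    # share of entrants who must graduate
                  employment_threshold=0.75,    # share of graduates who must work in field
                  max_debt_to_earnings=0.08):   # assumed cap on payments as a share of earnings
    """Return pass/fail results for each proposed eligibility screen."""
    completion_rate = graduates / entrants
    employment_rate = employed_in_field / graduates
    debt_to_earnings = annual_debt_payment / median_earnings
    return {
        "completion": completion_rate >= completion_threshold,
        "employment": employment_rate >= employment_threshold,
        "affordable_debt": debt_to_earnings <= max_debt_to_earnings,
    }

# Example: 200 entrants, 160 graduates (80%), 130 employed in field (81%),
# median earnings of $45,000 against $3,000 in annual program-debt payments.
print(meets_screens(entrants=200, graduates=160, employed_in_field=130,
                    median_earnings=45_000, annual_debt_payment=3_000))
# -> {'completion': True, 'employment': True, 'affordable_debt': True}
```

Whatever the exact thresholds, the point is that the proposed screens reduce to a handful of transparent ratios that ED could verify annually; the harder policy questions concern measurement windows, data sources, and the treatment of small cohorts.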

Proposals such as the one offered by Third Way allow for sufficient flexibility to design different educational models while providing clear metrics for quality and value. Currently, innovative programs seeking eligibility for federal financial aid through the existing accreditation system often feel as though they are trying to fit a square peg into a round hole. A new framework for evaluating quality, designed specifically for innovative programs attempting to scale, would give potential entrants a clear set of objective criteria to meet, with metrics achievable by programs that lack large enrollments or decades-long track records. It would also provide crucial safeguards to students and taxpayers by requiring educational programs to establish at least a limited track record of success before they gain eligibility.

Conclusion

The current system for approving new educational programs for federal student aid is nominally intended to prevent taxpayer funding from going to programs that leave students worse off. But it is difficult for these programs to be fairly evaluated by traditional accreditors and state regulators. As long as this is the case, it will be exceedingly difficult to take promising, if nontraditional, new educational delivery models to scale. New frameworks for evaluating nontraditional educational models are needed, and they need to be focused on measuring student outcomes. This focus will ensure that students and taxpayers benefit from new models while allowing new programs to demonstrate their efficacy and become sustainable.

Endnotes

See endnotes in PDF

About the Authors

Nathan Arnold is a senior policy advisor at EducationCounsel LLC. He previously served in the Obama and Trump administrations at the U.S. Department of Education as a senior policy advisor and chief of staff to the acting under secretary. There, Arnold led cross-agency teams to develop legislative proposals and regulations governing the federal student aid programs. He also led an interagency task force on proprietary education and oversaw the department’s efforts supporting borrowers attending closed schools, particularly Corinthian Colleges, Inc. and ITT Technical Institute. Arnold holds a B.A. in political science from the University of Wisconsin–Madison and a J.D. from the University of Virginia.

Jessica Morales is a policy assistant at EducationCounsel LLC. She previously worked as a policy advocate for Generation Progress, the youth engagement arm of the Center for American Progress. Morales served on the U.S. Department of Education’s negotiated rulemaking committee, which established the Revised Pay As You Earn (REPAYE) income-driven repayment plan in 2015. She has also worked as a policy consultant for the Texas State Teachers Association. Morales holds a B.A. in government from the University of Texas–Austin.

Bethany Little is a principal at EducationCounsel LLC, where she supports foundations, education associations, and other nonprofits in advancing improvements in education outcomes. She was education advisor to President Clinton and Vice President Gore on the Domestic Policy Council and at the U.S. Department of Education. Little was chief education counsel to the Senate Health, Education, Labor and Pensions (HELP) Committee under Edward Kennedy and Tom Harkin and has also served as vice president for policy and advocacy at the Alliance for Excellent Education and as director of government relations for the Children’s Defense Fund. She serves on the boards of the National Center for Teacher Residencies, Veterans Education Success, and Cesar Chavez Public Charter Schools for Public Policy. Little holds a B.S. in foreign service from Georgetown University.

This report was written as part of the Manhattan Institute’s Solutions from Beyond the Beltway series.
