Higher Ed Watch

A Blog from New America's Higher Education Initiative

Gainful Employment Liveblog Day 3

Published:  September 11, 2013

We are back for the final day of the first session of negotiations on gainful employment. This session will only be a half day. Here are links to liveblogs from Day 1, Day 2 Morning, and Day 2 Afternoon. A summary of the regulatory text under consideration is here.

Working Groups

After a half hour closed session, the committee has agreed to the following six working groups:

  1. Repayment rates--led by Jack Warner from the South Dakota Board of Regents
  2. Placement rates--led by Della Justice, from the Kentucky Attorney General's office
  3. Transition periods/opportunities to improve--led by Belle Wheelan from SACS and Marc Jerome from Monroe College
  4. Program level cohort default rates--led by Brian Jones from Strayer University
  5. Upfront requirements--led by Barmak Nassirian from AASCU
  6. Student consequences--led by Eileen Conner from the New York Legal Assistance Group

Groups will try to get materials in by September 30, but make no promises. The Department does ask that, to the extent thresholds are recommended, they be justified.

9:30 a.m. New programs

John Kolotos from the Department of Education talks about the program approval process used last time. Institutions would notify the Department and it would choose to review and approve the program or not. If it did not say anything, then it was automatically approved. Last time, the Department was able to review every program that applied. What they were looking for was making sure schools did due diligence to ensure it would be high quality.

ED says it is OK to not review every program, but would want to review any related to ineligible programs. It also notes that program participation agreement issues would trump all this.

Wheelan asks how this would work with substantive change requirements. She notes that offshoots from existing programs do not need accreditor approval, just notification. ED says current procedures hold that if there is a restriction in the program participation agreement that requires approval for new programs, the program would need approval; otherwise, if it was within the existing scope, it would not.

ED also says it would want to know if the new program did require accreditor approval or not.

Jones asks if the Department would consider using the existing program participation agreement requirements around provisional certification to accomplish some of these new program issues. For example, he says institutions that fall below certain metrics could end up in provisional certification status and would then have to have programs approved. This would be an institutional evaluation and would apply to all programs at the school. Jones notes there is a bit of a conflict between an institutional requirement (the program participation agreement) and a programmatic one (the proposal for gainful employment).

Kevin Jensen, from the College of Western Idaho, said there needs to be more clarity about what requires approval as being out of scope and what does not. He said he goes to the state first for approval, but the accreditor's role is mostly around adding new locations. He adds it's not always clear what schools need to add and what's appropriate.

Another representative of community colleges (I believe it is Glen Gabert from Hudson County Community College, but cannot quite see) talks about the extensive approval process that community colleges already have to go through. This includes selling it to the board of the school, going to the state, and a number of other steps. This means up to two years of work and potentially tens of thousands of dollars, resulting in programs that have "been vetted to death." He says the regulations need to be simple and affordable for adding new programs. He says three months for approval does not work, because he needs to publish schedules in March and send them to printer in January.

Margaret Reiter, representing consumer advocacy organizations, says there need to be certain specific parameters that must be met to weed out the worst of new programs. For example, demonstrate that the program has any necessary programmatic approval for licensure and/or private tests needed in the industry for things like health fields. Or that the SOC code shows the career requires training beyond the high school level. She said they tried to focus on bright lines and not get into committees, boards, the need, etc.

Thomas Dalton, from Excelsior College, emphasizes the desire to not have this conflict too much with the program participation agreement process.

Justice from the KY AG's office says there is no need to look at every program. She said review should be triggered automatically only once a school gets a certain number of programs in the zone or close to it, or something like that. The idea would be heightened review if there had been trouble with programs along the way. She also brings up the idea of whether cost should factor into the need for prior approval, so that lower-cost programs would not need to be considered. She would want this to work as an attestation of clear things that were done that could be looked at specifically in a program review.

Jensen asks whether there are expectations in regulations to ensure that all programs, especially at schools without layers of government, go through a similarly rigorous process of development or if they can be set up and approved in three months. Wheelan says they ask the same questions of institutions regardless of the sector--go through faculty, go through rationale, community support for certificates. Justin Berkowitz, from Daytona College (a non-publicly traded for-profit) says they go through equivalent rigors as well, talking to business, students, etc.

Raymond Testa, from Empire Education Group, which owns a series of beauty schools, notes that 90 percent of his work is tied to state licenses. So if a state changes its licensing requirements for things like adding a new medical aesthetician license, he needs the ability to be flexible and start new programs. He also points to the difference in how program approval works by state. He says some are easy, but others, like New York, say adding one course requires starting over from scratch like it's a new program. He says "our types of schools have the same scrutiny" as other schools and approval can take upwards of a year.

Whitney Barkley, from the Mississippi Center for Justice, asks how, with all these protections in place, we are getting such disconnects, like a $24,000 medical assisting program whose graduates will earn less than that in Mississippi, or an online culinary program.

Reiter asks if programs consider a rough debt-to-earnings ratio when setting up programs. Testa says they will look at Bureau of Labor Statistics data, but that is not helpful with new licenses.

Justice asks what the Department asks of accreditors when they approve new programs. Kolotos says "we do not ask the accreditors to do anything special for gainful employment programs." ED asks for a conference call next week to continue this discussion about new programs.

Rory O'Sullivan from Young Invincibles asks to see copies of program applications to get a sense of what has been submitted.

In response to Barkley's question, Warner says that states differ a lot, especially with oversight of non-public sectors. Some states require every program to go through approval, but in South Dakota he says only public institutions have to go through state-level scrutiny. He says accreditors also vary in how much they do. He says, "if the state doesn't do it and the accreditor doesn't do it those are two of the legs of the alleged tripod." He adds if that leg doesn't do it, then there's no approval.

10:10 a.m. Exceptional Programs

Kolotos says the Department is interested in what could be done to recognize good or exceptional programs. He says it could be regulatory relief, administrative relief, or something else. He also adds that it could be inside or outside of the rule.

Jerome suggests that exceptional programs, or programs that roll up to an exceptional institution, should get a regulatory pass. ED says it needs to know exactly what they would want waived.

Editorial comment: one challenge with "exceptional" programs is that a measure that can capture a lack of quality does not inherently indicate quality. For example, a high debt-to-earnings ratio shows that the amount of debt relative to earnings is higher than desired. But a very low-income program with no debt looks very good on a debt-to-earnings ratio. The existing data released by the Department show that a number of programs can pass the debt-to-earnings ratio but have graduates with average wages below 150 percent of poverty. So it may be that an exceptional program cannot be measured on the same metrics.
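The editorial point can be made concrete with a toy calculation. This is a minimal sketch: the 12 percent passing threshold and the sample program figures are illustrative assumptions, not the Department's actual parameters (the poverty figure is the 2013 federal guideline for a one-person household).

```python
# Sketch of how a debt-to-earnings test can pass a program whose
# graduates earn very little, as long as their debt is also low.
# The threshold and program figures below are illustrative only.

def debt_to_earnings(annual_loan_payment, annual_earnings):
    """Annual loan payment as a share of annual earnings."""
    return annual_loan_payment / annual_earnings

THRESHOLD = 0.12        # assumed passing threshold (illustrative)
POVERTY_LINE = 11_490   # 2013 federal poverty guideline, one person

# A low-debt, low-earnings program:
earnings = 15_000       # average graduate wages (invented)
payment = 600           # average annual loan payment (invented)

ratio = debt_to_earnings(payment, earnings)
print(f"debt-to-earnings ratio: {ratio:.1%}")                     # 4.0%
print("passes D/E test:", ratio <= THRESHOLD)                     # True
print("above 150% of poverty:", earnings >= 1.5 * POVERTY_LINE)   # False
```

The program sails through the ratio test while leaving graduates below 150 percent of the poverty line, which is exactly why a passing score alone cannot certify a program as exceptional.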

Nassirian says if there's overwhelming evidence of the quality of due diligence that maybe they could be able to avoid some disclosure or reporting requirements. He says that most of ED's work is nuisance work and that the worst actors will hire enough lawyers to game anything they do here, while good actors are already stuck with lots of reporting and disclosure.

Jerome says he and Jones will put together a proposal in this area that could involve recognition or relief. He says they are less concerned with getting an award than with getting regulatory relief for a program that may be in the zone but where other factors could not be captured by the regulation.

Jensen says there is merit in identifying exceptional institutions and giving some relief from certain regulations. He suggests relief on new program approval, because letting exceptional providers more easily set up new programs could help tamp down the ability of predatory programs to pop up. He says it is important to recognize institutions and states that are performing well. He says there is a "poison of low expectations," and without a message of high expectations, nothing will encourage improvement. He wants ED to recognize states with lots of high-performing institutions to get them talking about the triad and to give them something to aspire to.

Warner says high completion, job placement, loan repayment, or licensure passage rates, or low debt-to-earnings rates, would all be starting points for thinking about exceptional programs. Heath says national recognition is extremely important, indicating that once those kinds of things get to boards of trustees and become an expectation, then schools will coalesce around that kind of measure and raise themselves up to it.

Reiter says other consumer statutes provide that if a state's law is at least as protective of consumers as the federal law, as demonstrated by an application from the state's attorney general, then the state does not have to comply with the federal law. For example, if the state law is more protective than the federal one, then community colleges may not face as much burden. She closes with the observation that we are so far from preventing the worst actors from getting in that it does not make sense to use these measures to recognize the best actors.

Eileen Conner from the New York Legal Assistance Group makes the point that when people ask her about schools, it's easier for her to point out a bad school than to say what a good school is.

10:40 a.m. Sample Sizes

Jack Buckley, from the National Center for Education Statistics, is here to talk about why the Department thinks a sample size of 10 is worth considering. He'll also talk about the problems NCES has had in the past on job placement rate determinations and the statistical standards for surveys.

Buckley says the issue with n size is whether the data represent a sample. He thinks they do. He notes that the purpose of generating these data is not to make statements about the people being measured, but about how the program is performing. There's also a discussion about super populations, which I didn't have time to jot down accurately, but is probably the coolest name mentioned so far today. He notes the number 30 often comes up from the central limit theorem, which generally says that means and medians get smoothed out once you hit that figure (I'm really paraphrasing here). He said they used students from the Beginning Postsecondary Students Survey to analyze earnings, annualized payments, and debt, drawing a bunch of different samples to see how often the results would be wrong for a program with given characteristics. In other words, how often would a program that should not be failing be declared failing as a false positive.

He talks about creating a hypothetical marginal program and seeing how often it gets declared to be failing depending on sample size. With 100 completers, he says it is basically never labeled failing incorrectly (0.4 percent false positives for one year), and when you think about failing two out of three years, it drops to basically zero. At 30 completers, the false positive rate is 3 percent and two out of three is less than 1 percent. At 10 completers, he says the odds of a false positive are 11 percent, but the two-out-of-three rule drops the probability to 3.6 percent. He says given the number of programs that get picked up when you move from 30 to 10 completers, the gain is worth the risk. And he notes this is only for a marginal program that is ineligible after four years anyway. When you look at non-marginal programs, the likelihood of incorrect identification basically drops to zero.
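A rough sketch of the kind of simulation Buckley describes can show why shrinking the cohort inflates false positives. Everything here is a stand-in: the normal distribution, the 0.12 failing line, and the program parameters are invented for illustration, not NCES's actual model or data.

```python
# Monte Carlo sketch: draw repeated cohorts of completers from a
# hypothetical "marginal" program whose true average debt-to-earnings
# ratio sits just below an assumed failing line, and count how often
# the measured cohort average crosses the line by chance.
# All distributions and parameters are illustrative stand-ins.
import random

THRESHOLD = 0.12  # assumed failing line for the debt-to-earnings ratio

def simulate_false_positive_rate(n_completers, trials=20_000):
    """Share of trials in which a passing-on-average program measures as failing."""
    failures = 0
    for _ in range(trials):
        # Hypothetical completer-level ratios: mean just below the line.
        cohort = [random.gauss(0.11, 0.04) for _ in range(n_completers)]
        if sum(cohort) / n_completers > THRESHOLD:
            failures += 1
    return failures / trials

for n in (100, 30, 10):
    rate = simulate_false_positive_rate(n)
    # Smaller cohorts average over fewer students, so the measured
    # rate is noisier and crosses the line more often.
    print(f"n={n:>3}: single-year false positive rate ~ {rate:.1%}")
```

With these made-up parameters the qualitative pattern matches Buckley's account: the false positive rate is tiny at 100 completers and grows sharply at 10, which is why the two-out-of-three-years rule matters most for small cohorts.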

10:50 a.m. Placement rates

Buckley notes how NCES surveyed different accreditors and states to see how they defined job placement rates. He noted that there's no standard for how to define the cohort, what counts as successful placement, how long you have to be placed for, or when you get measured for placement. The only thing that was constant was exclusion for death of the student. The most concerning thing he saw was what constituted valid documentation for placement rates. He says this is solvable, but difficult.

Buckley lays out two solutions: (1) let people do whatever they are doing and collect it all, but with a standard template for how the measures differ, or (2) try to make these things standard with the same data source. The definition cannot just be common; access to the same underlying types of data must be the same. If they are not, then you have to do a lot of crosswalks to determine how the data sets vary.

The least burdensome approach to job placement would be some kind of global federal match, like the Social Security Administration earnings data. But SSA data can't tell us about the occupation type, so you cannot see enough information for a job placement rate. He says ED looked at statewide longitudinal data systems. He notes they vary--some states have all the information you need, others are behind. But even highest-performing states can't capture people across state lines, especially metropolitan areas that straddle different states.

11:00 a.m. Surveys

The final way to do a placement rate would be surveys. But good surveys are expensive and time consuming. Buckley says NCES standards are similar to those of a reputable polling agency, what academics would use, good state survey methodology, etc. The most relevant requirements for this task would be survey response rates and design. On the design, he says there would have to be a standard instrument created and used across colleges, which NCES could develop. Collecting or analyzing the data would then require a fairly large institutional research department. He notes that generally some kind of incentive to participate, like cash, helps response rates too. He suggests the target response rate for something like this would be around 85 percent, but if schools do not hit that target they would have to analyze for non-response bias, both on the survey overall and on individual items within it.

11:05 a.m. Discussing the NCES presentation

Jones asks about the change in the error rate going from 30 to 10. Buckley says for the marginal program that should not fail but is right on the edge and would be ineligible after four years in the zone, the probability of a single failure at 30 completers was 3 percent, and 0.2 percent for two failures. At 10 completers, the odds of a single failure were 11 percent, and two failures was 3.6 percent.

Jerome asks about a requirement that the survey be done in 60 days. ED says the survey must be done within 60 days of getting the notice of final rates, though it could be started sooner; it does not need to be approved by ED in that period. Jerome points out they would only do the survey if they failed, so that means they have to do the survey in 60 days. ED points out that the school will know where it stands when it gets its draft rates, which come several months before the final rates. ED also suggests that since the school knows who is in the period, it could start looking at students sooner.

Jerome says he is concerned about a sample size of 10 meaning that the difference of five students borrowing instead of four could mean it is measured and potentially failing versus not measured at all. He's concerned about the effect of one student on the stiff disclosure penalties and then quick death sentence.

Jerome also brings up the issue discussed yesterday--what happens when a student moves on to a higher credential and is excluded from the lower one. He also asks about what to do when a program's Title IV recipients may not reflect the whole program. Buckley says he would create a model to see if the people in the pool are systematically different from those not in the pool.

In response to a question from O'Sullivan, Buckley reiterates that the false positive rates are not the typical program, but one that is barely squeaking by, which is the hardest practical case that you would want to distinguish between. He notes that the chances of a false positive decrease as you move away from the marginal cases. He notes the best way to do this would be to see the real SSA data, but he can't see it because no one can see it.

Reiter asks if NCES has looked at the accreditor placement rates to see if they are valid. Buckley says he is not aware of ever having seen an institution sending a non-response bias analysis to an accreditor. She asks about the risk of a failing program being incorrectly labeled as passing. Buckley says there is some risk, but it's not symmetrical to the false positive risk.

11:30 a.m. Disclosure requirements

Kolotos says the biggest difference between the suggested disclosure requirements and the 2011 final rule is that there would be a required template that has to be one click away from the home page. He asks the committee to consider what should go in the limited space on the template--such as disaggregation of earnings for completers and non-completers, the type of occupation the program is targeted toward, etc. He says ED identified all the items that could be disclosed and would then pick a few, and could change the template in future years if it did not get it right, through a notice in the Federal Register.

The other change he notes is they would use completion rates using the common federal completion rate definition but for all Title IV recipients, not just first time, full-time. He also suggests using a borrower-based repayment rate instead of the dollar-based one used in the prior rule. He calls a borrower-based rate easier to understand for consumers.
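The distinction Kolotos draws between the two repayment rates can be illustrated with a toy cohort. The borrowers and balances below are invented, and "paying down principal" is a simplification of the actual regulatory definitions, but the example shows why the two rates can diverge sharply when a few borrowers hold most of the dollars.

```python
# Toy illustration of borrower-based vs. dollar-based repayment rates.
# Figures are invented; "paying down principal" is a simplification
# of the regulatory definitions.

# (original_balance, current_balance) for each borrower in a cohort.
borrowers = [
    (5_000, 4_800),    # small loan, paying down
    (5_000, 4_900),    # small loan, paying down
    (5_000, 4_700),    # small loan, paying down
    (60_000, 61_000),  # one large loan, balance growing
]

repaying = [(orig, cur) for orig, cur in borrowers if cur < orig]

# Borrower-based: what share of borrowers are paying down principal?
borrower_rate = len(repaying) / len(borrowers)

# Dollar-based: what share of originally borrowed dollars belong to
# borrowers who are paying down principal?
dollar_rate = (sum(orig for orig, _ in repaying)
               / sum(orig for orig, _ in borrowers))

print(f"borrower-based rate: {borrower_rate:.0%}")  # 75%
print(f"dollar-based rate:   {dollar_rate:.0%}")    # 20%
```

Three of four borrowers are repaying, so the borrower-based rate looks strong, while the one large non-repaying loan drags the dollar-based rate down, which is consistent with Kolotos's point that a borrower-based rate is the easier of the two for consumers to interpret.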

Rhonda Mohr from the California Community College system asks about the completion rate in normal time for students who do not attend full time. ED says it will consider it. Mohr mentions the system's Salary Surfer tool, which has info on wages by program type and level (though struggles with students who move out-of-state), and can tell the earnings before, two years, and five years after they attended. She asks if the template would allow a link to something like Salary Surfer or a clear statement of a placement rate. Kolotos says they will consider it.

Kolotos notes that one other change is that the categories of certificate programs have been further disaggregated to include those that are less than one year, at least one year but less than two, and two years or longer. He says this was done because schools had programs with the same classification code, but different lengths because the costs and outcomes vary. ED would do this type of disaggregation for disclosures as well.

Justice pushes strongly on a desire for a standard placement rate, noting that it's very difficult because no one calculates them the same way. Libby DeBlasio, from the Colorado Department of Law, emphasizes a concern about schools not putting information anywhere on a website that is easy to access or intuitive.

Barkley asks about the requirement that a program meets programmatic accreditation in the state. She notes that many times employers may have different requirements from what the state needs. For example, she notes that employers may require approval by one of two agencies for allied health, but the state does not require either.

Ted Daywalt from VetJobs asks if the Department would accept self-disclosed marketing spending. Kolotos says ED will consider it.

Jerome asks if it would be possible to report repayment rates on all programs, not just gainful employment programs. ED says it will take that back and consider it.

Noon Wrapping up

ED would like to have the subcommittee recommendations by Sept. 30, ideally earlier. The committee can arrange a conference call in the interim, but if the Department arranges a conference call, it would be open to the public.

And we're done. Next sessions will be in October.
