There is so much focus on student outcomes in higher education right now, but are we potentially looking at the wrong numbers?
Should we stop evaluating colleges by the students they graduate, and judge them instead by the improvements the students make while enrolled there?
And if so, would different “winners” jump to the top of the ranking lists: the colleges most successful at elevating the outcomes of students from traditionally underrepresented backgrounds, who are now enrolling at record rates?
Dr. Christina Ciocca Eller, Assistant Professor of Sociology and Social Studies at Harvard University, joined the Enrollment Growth University podcast to talk about a new way of looking at college accountability.
Are We Looking at the Wrong Numbers?
The current system of higher education accountability has several different branches. There’s a regulatory role played by federal and state governments. Colleges need to report certain data to these bodies in order to show how they are serving their students.
Then, there are accreditors who assess what’s going on internally to make sure that the college or university is doing what it says it’s doing. And of course, we have informal bodies such as U.S. News & World Report and the Times Higher Education Supplement that rank colleges and universities.
All these bodies look at average rates of student performance, usually measured on outcomes such as degree completion. For example, the BA completion rate at X College is 50% within six years of entry.
But what does that number tell us about student growth versus how students would perform anyway, given the academic and personal experiences they’ve had prior to college entry?
Nothing. It tells us nothing.
“And this is really the rub of the current system,” Christina explained. “These numbers aren’t good at showing us much about student growth, especially student growth as a direct byproduct of the impacts of colleges and universities.”
How Could Impact Measurement Affect Performance-Based Funding?
What if performance-based funding systems took into account the students who are walking onto each college campus? They could assess the extent to which those students have faced adversity prior to coming to college and then the extent to which colleges are helping those students overcome their adversities, leading to a particular graduation rate.
“Unfortunately, a lot of research suggests that even in contexts where we think about adversity,” Christina explained, “there is a gap between the expectations placed on colleges by performance-based funding schemes and the ability of colleges to respond effectively and directly to the demands of these schemes.”
Say a college learns it will be evaluated on graduating a certain percentage of traditionally underrepresented minority students. It gets a bit frantic at this news and scrambles internally to hit the target. In the very worst case, it might remove certain students from the calculation entirely by reclassifying them from full-time to part-time.
Even if we get the accountability piece right, though, how do we help institutions create meaningful change that is not a frantic response to the idea that the college might lose funding?
“What I’m saying in sum is that there are kind of two issues,” Christina said. “The first issue is: Are we getting the numbers right? Are we asking colleges to evaluate the right things? And the second issue is: Are we supporting colleges in such a way that they’re able to have sufficient time and resources to make progress and growth in light of what better accountability data would show us?”
The Potential Market Demand Effects of Student Impact Data
Successful colleges could actually have rather low graduation rates. Take a college with a 40% graduation rate within six years of entry: given who its students are when they enroll, those students might be predicted to graduate at only a 25 to 30% rate.
Given those learners’ backgrounds and the preparation they’ve had for college, that college is actually doing a great job: it is meeting students where they are when they walk into the classroom and helping them move toward graduation at a better-than-anticipated rate.
“These colleges that we write off as commuter colleges, local colleges, or just ‘that place down the road’ might, in fact, be doing really great things for students, and we just don’t know,” Christina said.
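The comparison above can be sketched in a few lines of code. This is a minimal illustration, not a real accountability model: the function, the variable names, and the example numbers are all assumptions for this post, and in practice the predicted rate would come from a statistical model of each student’s pre-college background.

```python
def value_added(observed_rate: float, predicted_rate: float) -> float:
    """Gap between a college's observed graduation rate and the rate
    predicted from its incoming students' backgrounds (as a proportion).
    A positive gap means the college outperforms its prediction."""
    return observed_rate - predicted_rate

# The hypothetical college above: 40% observed vs. a 25-30% prediction.
gap_low = value_added(0.40, 0.30)   # outperforms the high prediction by 10 points
gap_high = value_added(0.40, 0.25)  # outperforms the low prediction by 15 points
```

On this view, the 40% college with positive gaps is doing more for its students than a 60% college whose students were predicted to graduate at 70%.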
Next Steps for Better Utilizing Student Growth Data
Measuring student growth requires more statistical effort than the typical work of computing averages done now. So a first step for colleges is to understand the data resources they already produce internally and the ways those data might be harnessed to reveal growth rates.
“It might require some more digging,” Christina said, “but I think it would be time well spent to get a better look at exactly how colleges are impacting those students.”
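One simple way that digging could look is sketched below, under invented assumptions: predict each entering student’s completion probability from baseline rates for similar students (e.g., from national data), average those predictions for the cohort, and compare the result to the cohort’s observed completion rate. The group categories and every number here are hypothetical placeholders, not real figures.

```python
# Hypothetical baseline six-year BA completion rates by background group.
# Real work would estimate these from national or system-wide data.
BASELINE_RATES = {
    ("first_gen", "low_income"): 0.25,
    ("first_gen", "not_low_income"): 0.40,
    ("continuing_gen", "low_income"): 0.45,
    ("continuing_gen", "not_low_income"): 0.65,
}

def predicted_rate(students):
    """Average baseline completion probability for an entering cohort.
    `students` is a list of (generation_status, income_status) tuples."""
    return sum(BASELINE_RATES[s] for s in students) / len(students)

# A hypothetical 100-student cohort drawn mostly from groups with
# lower baseline completion rates.
cohort = ([("first_gen", "low_income")] * 60
          + [("continuing_gen", "low_income")] * 40)

observed = 0.40                      # the college's actual completion rate
expected = predicted_rate(cohort)    # 0.33 for this cohort
growth = observed - expected         # positive means better than predicted
```

Even this toy version makes the key point visible: the same 40% graduation rate reads very differently once the cohort’s predicted rate sits beside it.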
Another great step is just having a conversation internally about the extent to which college leaders and administrators are willing to get on board with looking at student growth as a primary metric, especially in a disaggregated way.
“In my own research,” Christina told us, “there are usually substantial differences in the growth rates for groups that are traditionally underrepresented in higher education, including Black, Hispanic, Indigenous, and low-income students.”
Even though a college feels that it is affecting all of its students equitably, it may not, in fact, be doing that when evaluated by the numbers.
“Inequity in education is, unfortunately, just a fact of American life,” Christina said, “and at the higher ed level we really need to do everything we can to continue our efforts to combat those inequalities.”