The move to incorporate data analytics in higher education was touted loudly at the 2016 Educause Annual Conference today. Yet as higher ed responds to the clarion call for data-driven decision making, I’m picking up on a common theme that is as dangerous as it is well intentioned. A number of sessions so far have followed a similar train of thought, one that goes something like this:
“We’ve noticed that students who register for classes earlier tend to drop out less…so we’ve moved up the registration deadline.”
“Students who log in 4x/week to the LMS perform better…so we’ve mandated that all students log in 4x/week.”
While the move to incorporate data analytics in higher education is laudable (and indeed should be done), research on human behavior shows why this approach is as myopic as it is well intentioned. We’re trying to prescribe certain behaviors of high-performing students to ALL students, yet human behavior is far more complicated than that. In the case of early registration, for example, that behavior may be a proxy for a whole cluster of qualities or practices that speak more to student success than the single data point suggests. Thus, designing an intervention that moves up the registration deadline likely won’t address what’s actually driving successful students to register early. All it really does is tighten admission standards, which many private institutions simply can’t afford.
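The confounding problem above can be made concrete with a toy simulation. This is purely illustrative: the hidden trait, its effect sizes, and the threshold for “registers early” are all invented assumptions, not findings from any institution’s data. The point is only that when one hidden trait drives both the behavior and the outcome, mandating the behavior changes nothing.

```python
import random

random.seed(42)

def simulate(force_early_registration=False, n=10_000):
    """Toy model: a hidden trait (call it "conscientiousness") drives BOTH
    early registration and retention. By construction, registering early
    has no causal effect on retention at all."""
    retained_early, retained_late = [], []
    for _ in range(n):
        trait = random.random()                          # hidden confounder
        early = force_early_registration or trait > 0.5  # trait drives behavior
        retained = random.random() < 0.4 + 0.5 * trait   # trait drives outcome
        (retained_early if early else retained_late).append(retained)
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(retained_early), rate(retained_late)

# Observed data: early registrants retain noticeably better --
# exactly the correlation the conference sessions cite.
early_rate, late_rate = simulate()

# "Intervention": mandate early registration for everyone. Overall
# retention stays where it was, because the trait, not the deadline,
# was driving the outcome all along.
forced_rate, _ = simulate(force_early_registration=True)
```

Running this, the observed gap between early and late registrants is large, while the mandated-early population retains at roughly the blended average of the two groups, no better than before the “intervention.”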
And, in fact, the very act of searching for certain behaviors indicative of student success can have an adverse effect on decision makers if they’re not careful to incorporate enough data. When institutions grab on to singular data points, weight them disproportionately in terms of how they affect outcomes, and ultimately build entire interventions around them, leaders are likely missing the larger picture. Consider the assertion of Robert Cialdini, in his newest book Pre-Suasion, that “what is focal is seen to have causal properties—to have the ability to make events occur.” It is true that we elevate the importance of whatever we’re focusing on at the time, and in data science in higher ed, those things are often singular data points. Therefore, behaviors such as LMS log-ins and early registration are more likely to be seen as causing student success when they are brought into focus and presented for their correlation with high performers.
Or, consider the phenomenon that Daniel Kahneman assigned the ghastly title of WYSIATI, or What You See Is All There Is. This phenomenon, described in his 2011 game changer Thinking, Fast and Slow, speaks to “the remarkable asymmetry between the ways our mind treats information that is currently available and information we do not have.” The fact of the matter is that LMS data is available and accessible. Registration data is available and accessible. And while trends and correlations can surface from these data alone, they cannot speak to the factors at play just beyond our perception, and often that extra data is vital when making actual assertions about student success.
Higher ed should absolutely move to incorporate more data science into its decisions. Yet only when we align data insights with goals that address the root issues facing the institution can we hope to bring about real data-driven change. Before you start analyzing data, determine what you intend to do with it. Are you trying to increase student success and retention? Increase enrollments? Improve accessibility? Personalize the student experience? Only once you have the core questions in place can you move to capture both the kind and breadth of information you need to thoroughly answer those questions.