I’m going to go in a slightly different direction this week: fewer personal reflections, and a more direct attempt to answer Bryan Alexander’s discussion questions from his post on this week’s readings.
Chapter 4 continues the relevance to higher education and echoes “Lower Ed”, another book club reading, in its discussion of how the for-profit higher education industry uses WMDs and big data to advertise to and ultimately exploit vulnerable populations. The work is sophisticated and ruthless. Chapter 5 contains things that remind you of “Minority Report” (weird side note: I saw that movie in Vermont, so it obliquely makes me think of Bryan a bit) but stresses that using data to predict crime means you find more crime where you predict it, which means the models predict more crime, and so on.
Bryan poses three questions in his article:
1) Are there advertising campaigns using big data that avoid these problems? I’d like to believe it’s possible, but quite frankly the perceived value of these sorts of operations lies in the ability to draw conclusions from an abundance of data, remixed in interesting ways, and that same ability makes the potential for abuse much too high. I’d like to see better regulation of these operations, more protections for a person’s data, etc., but I’m a tech person and I have no idea how to build systems that can ensure enforcement of such laws, so we’d largely be relying on self-reporting or a huge regulatory apparatus to enforce them. The first would be ineffective; the second is not plausible in our current political climate. So the idea that a company would voluntarily put itself at a competitive disadvantage by not using data the way its competitors do is unrealistic.
2) If non-profit higher education competition heats up, will those campuses turn to these sorts of big data campaigns? You ask that as if they haven’t already. Maybe non-profit institutions don’t advertise the way that for-profits do, but I’d suggest you take a walk through the vendor floor at EDUCAUSE this week and look for companies promising to improve the dynamic duo of “recruitment and retention” by collecting large amounts of data about students and using proprietary algorithms to evaluate and target the students most likely to either accept admission or be at risk of dropping out. I’m currently looking at the data we have available about our students to determine their level of engagement, and thus perhaps track their likelihood of leaving the institution. Perhaps there’s still a difference of degree, but I think the mechanisms are the same across both for-profit and non-profit higher education.
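The kind of engagement tracking I’m describing can be sketched as a simple scoring function. To be clear, the field names, weights, and threshold below are entirely hypothetical illustrations, not our institution’s (or any vendor’s) actual model:

```python
# Hypothetical engagement score; every field name and weight here is invented
# for illustration, not taken from a real retention product.
def engagement_score(student):
    """Higher score = more engaged; low scores flag possible attrition risk."""
    return (
        2.0 * student.get("lms_logins_per_week", 0)
        + 1.5 * student.get("assignments_submitted_pct", 0) / 100
        + 1.0 * student.get("campus_events_attended", 0)
    )

students = [
    {"id": "A", "lms_logins_per_week": 5, "assignments_submitted_pct": 90, "campus_events_attended": 2},
    {"id": "B", "lms_logins_per_week": 1, "assignments_submitted_pct": 40, "campus_events_attended": 0},
]

# Flag the least-engaged students for outreach (threshold is arbitrary).
THRESHOLD = 5.0
at_risk = [s["id"] for s in students if engagement_score(s) < THRESHOLD]
print(at_risk)  # student "B" falls below the threshold
```

Note how much of O’Neil’s critique already applies even to a toy like this: the weights encode someone’s opinion of what “engagement” means, and whoever sets the threshold decides who gets flagged.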
3) Mark Wilson asks a powerful question: if all data massively collected in America is biased, how should we proceed? Is the data, in fact, biased, or is it just our interpretation? To be fair, O’Neil makes it clear that our attempts at “neutral” data collection never quite manage to be neutral, and any data analysis brings biases with it, but some data is, in fact, just objective data. We can probably sense the biases if we look, but it will also be tempting to apply another model to the data to “measure” its “bias” and somehow counteract it, which itself will introduce a bias into the data. But I don’t think we should (or even can) just start over, especially since we have books like O’Neil’s to help guide us through the impact.
Boy, I sure hope she ends the book with a cogent call to action, and a remedy for some of the issues she finds. If not, I guess we’re all doomed. See you next week!