That life is complicated may seem a banal expression of the obvious, but it is nonetheless a profound theoretical statement.
– Avery Gordon, Ghostly Matters
As a rogue scholar working in the software industry, I frequently find myself caught in the crossfire between the ed-tech industry and scholarly critiques of that industry, between the utopianism of Silicon Valley and the skepticism of the academy.
I am a teacher-scholar by calling, and so my heart mostly aligns with the academic critique side of this debate. It’s the professional tradition in which I was trained as a graduate student in English. But it’s also been my politics since well before I read any postcolonial theory, a feeling formed in my gut at the dinner table because power just didn’t seem as benevolent as the stories it told about itself made it appear.
To be frank, I’ve always had a huge crush on criticism: there’s nothing sexier than an elegant argument that slowly uncovers a hidden truth, plumbing the depths of what appeared flat.
In the ed-tech space, the existence of a robust scholarly skepticism about technology has influenced my professional decisions, like leaving the for-profit Genius for the open-source Hypothesis. I've found a community of fellow teacher-scholars doing really important work to question the politics and ethics of education technologies. Still, because I work for a software company, albeit a non-profit one, I'm a vendor, and sometimes, rightly so, I'm the target of such interrogations. Yet I'm also increasingly uncomfortable in this community's embrace even when I'm not.
One of the foremost critics of technology today, Evgeny Morozov, has developed a powerful critique of the driving force behind much Silicon Valley innovation, which he calls "technological solutionism." The basic idea is that many entrepreneurs in this space view technology, their technology, as the one and only solution to a particular problem, or perhaps a whole range of problems.
I recently saw a presentation at the New Media Consortium by a boutique marketing firm that had sponsored the annual gathering. The CEO didn’t even talk about the specific problems that they were supposedly solving. He just promised solutions. Whatever the problem was, there could be an app for that. This struck me as an extreme and instructive example of technological solutionism: the solution was so good that the problem was irrelevant.
Of course, it's not the solving of problems that is in and of itself problematic. One thing many critics of technology forget is that they might (though certainly not always) agree with technologists about the underlying problems some companies are trying to solve, even if the proposed solutions are deeply flawed. What's particularly dangerous is when solutions take over as the central players in the drama of progressive reform, ignoring or completely displacing the problems themselves, and, more importantly, those affected most directly by those problems. What we see more often than not, I think, is a good idea that does offer a kind of solution to a big problem but that, in its over-eagerness, betrays itself somehow.
This is precisely the context in which the technological criticism I mention above becomes so important.
Yet, like technological solutionism, technological criticism can betray itself as well. In this post I want to explore what might be the reverse of that hegemonic trend of technological solutionism: technological critiquism. I define technological critiquism as criticism that believes the one and only response to technological innovation is critique. Technological critiquism is as single-mindedly focused on treating technology as a problem as technological solutionism is on treating technology as a singular solution. I believe the same thing that happens with much solutionism, as I witnessed at NMC, can happen with technological critiquism: critique upstages the problem as the central character in the drama of cultural change.
To be clear, I am by no means suggesting we stop or even slow our criticism of technology, educational or otherwise. That work is absolutely vital. I hope to contribute to that movement in some small way with my thoughts here, far more so than I hope to defend the ed-tech industry.
I'm also very aware that Page Mill Road, like other scary roads, is paved with good intentions, and that intentionality exonerates no one in a court of law, though it usually mitigates the punishment somewhat.
And, to be fair, critiquism is the nature of many insurgent movements. It’s necessary when a group is not in a position of power. It’s a kind of devil’s advocacy meant to shake up a conversation or a culture. However, at a certain point in political movements–and we should debate whether we are there yet in this one–more complicated arguments need to be formulated for concrete change to occur.
In my observation, critique can sometimes become more of an end in itself than a means to an end. Just like solutionism's marketing myopia, such critiques become enamored of and blinded by their own brand. Such criticism basically reads like the same dystopian science fiction novel written over and over again: what seemed like a good idea at the time is now destroying us; the robots have risen up. One way I see this happening regularly is in the way that any mention of "data" is met with immediate skepticism. Data isn't always bad! To suggest so is anti-intellectual, like arguing that all drugs are bad despite the uncontroversial benefits of something like penicillin.
Let's focus on "data" as a kind of boogeyword in the context of education technology. Solutionists argue that gathering and aggregating more data will solve the major problems of the education system: increasing retention rates, for example. Critiquists argue that any use of data is a violation of student privacy and a reduction of humans to numbers.
In my opinion, both are right! There's value in the data, value in the sense of helping real people achieve their life goals, but we need to be responsible with it. Those who see value in data need to make sure they are gathering it responsibly, with permission and transparency. And they need to make sure they are using that data thoughtfully, taking into consideration the limits of such research. This seems obvious when it's written out. But from my limited perspective on the discourse, it feels kind of radical to say. (Not that it hasn't been said before, and better.)
There’s something of an analogy for this problem of technological critiquism in today’s political landscape on the Left, or at least in the debates that surrounded the 2016 Democratic Primary. I have friends who hate Hillary Clinton and view her as a neo-imperialist. I have others who see her as one of the most important advocates for women’s rights on a global scale in the past thirty years. Again, in my opinion, both are probably right. But what do you do with that?
At the very least, you have to acknowledge that it's complicated. I'm concerned that at times technological critiquism loses the key aspect of what I love, and what I think is so urgent, about the basic act of criticism: its problematizing of narratives broadly accepted as true. It only goes so far to replace such narratives with new ones that argue the opposite with equal simplicity.