In Obedience to Authority: An Experimental View, Stanley Milgram coined the term “counteranthropomorphism” — the tendency we have to strip away the humanity of people we can’t see. These may be people on the other side of a wall, as in Milgram’s famous (or infamous) experiments, or people mediated by technology in a virtual classroom.

Our turn to digital solutionism has frustrated our attempts at imagining a humane future for higher education. The less we understand our tools, the more we are beholden to them. The more we imagine our tools as transparent or invisible, the less able we are to take ownership of them. It is essential that we consider our tools carefully and critically — that we empty all our LEGOs onto the table and sift through them before we start building. Some tools are decidedly less innocuous than others. And some tools can never be hacked to good use. Remote proctoring tools can’t ensure that students won’t cheat. Turnitin won’t make students better writers. The LMS can’t ensure that students will learn. All will, however, ensure that students feel more thoroughly policed. All will ensure that students (and teachers) are more compliant.
Ultimately, the future of education is humans, not tools, and our efforts at hacking, forking, and remixing education should all be aimed at making and guarding space for students and teachers. If there is a better sort of mechanism that we need for the work of digital pedagogy, it is a machine, an algorithm, a platform tuned not for delivering and assessing content, but for helping all of us listen better to students. But we can’t get to a place of listening to students if they don’t show up to the conversation — because we’ve already excluded their voices in advance by creating environments hostile to them and their work.