When the Alexa/Google Home tsunami hit, I was determined to have nothing to do with it. How could it make sense to trade all your family privacy for the ability to vocalize your morning weather query? But children inevitably undermine the best intentions, and two months later, my 12-year-old had received a Google Home Mini from his best friend for his birthday (“on sale, two for one!”).
He now wakes to Google’s cyber-cheerful alarm, grunts in lieu of a snooze button, and eventually wakes up to the dog’s impatient licking (still a flesh and blood dog, by the way). He asks Google to play his song list and a few minutes later it’s: “Hey Google, what’s the weather like today?” so he can decide what to wear. Then, the Mini gets ignored until nighttime when it’s “Hey Google, play the sound of a thunderstorm” to lull him to sleep.
Just a few months beyond his birthday, the novelty has worn off, and the assistant is doing nothing he couldn’t already do without it, but it’s nevertheless a part of his daily routine–it has become invisible. This disappearing act is both evidence of good design (Google wants it to be a seamless part of our lives) and a point of concern (e.g. how much of his life is being captured as data for later manipulation via marketing?).
One 20th-century philosopher wrote in depth about the technology disappearing act, and its deeper, less obvious effects on our lives and our cultures. His name was Martin Heidegger.
Martin Heidegger (1889 – 1976) obediently conformed to many philosopher stereotypes by being a) interested in the meaning of existence, b) German, and c) “unreadable”. His claim to fame as a reluctant father of the “philosophy of modern technology” is amusing in that he preferred a rustic existence and kept to a forest lodge deep in the German woods – a kind of Thoreau with Nazi leanings.
Nevertheless, somehow, through the creative train wreck of his prose (characterized by double compound verbal extravaganzas), a kind of brilliance has shone through decade after decade, winning him a place at the very top of the 20th-century philosopher hall of fame. (And this in spite of the fact that no one really wanted him there, given his failure to choose a decent political party during the war.)
What is more important for our purposes is that his philosophical work (which most agree can be treated as separable from his political failures) has been extremely influential.
His Being and Time began an inquiry into the ways tools shape our definitions of the world and our very selves. His later work, The Question Concerning Technology, would expose the way modern technologies have forcibly defined what we have come to value about humanity.
Heidegger argued that, far from distantly observing tools and using them through conscious analysis, we act almost without thinking about the tool at all. We become a human-technology combo as our attention is placed entirely on the action the tool enables. Nearly a century before Google Home, Heidegger famously invoked the example of a hammer to describe this seamless experience that interweaves tool, person and action….
“The less we just stare at the hammer-thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become, and the more unveiledly is it encountered as that which it is—as equipment. The hammering itself uncovers the specific ‘manipulability’ of the hammer. The kind of Being which equipment possesses—in which it manifests itself in its own right—we call ‘readiness-to-hand’.”
– Heidegger in Being and Time
Indeed, this near-invisibility of the tool or technology is often seen as a kind of ideal. After all, when we become aware of a tool it’s usually because it’s failing. Something so easy you don’t even notice it’s there sounds pretty dreamy. However, could this invisibility have costs?
Technology as the “ultimate danger”
We usually think of “ultimate danger” as nuclear apocalypse or alien invasion, but Heidegger spoke of it at a more fundamental level to do with the very way in which we view the world and ourselves as humans.
Heidegger wasn’t so much implicating traditional technologies like windmills and bridges, which align themselves with the flow of nature (i.e. the flow of the water, the direction of the wind), but industrialized or modern technologies, which force human will onto nature. As examples he gave the dam and the hydroelectric power plant, which forcefully redefine and control a river. These manifest a “human will to power” that has subsequently led to humans viewing everything in their world as potential resources for exploitation.
Today we (in the Western world) casually speak of “inputs and outputs”. We characterize every aspect of the earth, from the waterways to the forests teeming with life, as “resources” and “materials”. Heck, we even call humans “resources”.
Heidegger saw this as a result of the invisible infiltration of technological ways of thinking reshaping the very ways we value our lifeworld and humanity. What could be more dangerous than that which distorts and disconnects us from ourselves?
The solution? “Releasement”, according to Heidegger: to accept that there is technology but keep a kind of distance from it. In other words, we need to find the will not to will to power–the will not to control–and to develop an openness to different interpretations of the world.
If the invisibility of technology is a danger because it prevents us from seeing the ways in which our actions and perceptions of self are shaped by it, then should we be designing for visibility? For consciousness?
Indeed, some designers, like Alan Dix, think so. In a discussion of Heidegger, Dix highlights the value of slowing down technological interactions, designing for reflection, and making the automated parts of a human-technology interaction conspicuous rather than hidden:
“there is also a merit in breaking this engagement, to encourage reflection and indeed the circumspection that Heidegger discusses. This ability to step out and be aware of what we are doing is precisely the quality that Schon recognizes as being critical for the ‘Reflective Practitioner’.”
– Alan Dix in Struggling with Heidegger
But what of a hammer with intentions?
What happens when the hammer becomes intelligent? When it can listen, respond and suggest? When it makes decisions and has motives, values and biases? Suddenly, the hammer can no longer be ‘impartial’ to our behavior–which just might make invisible technology a whole new kind of “ultimate danger”.
Would you be happy to leave a person in the corner of your living room and forget they were there?
Aristocrats have tried to make invisible technologies of their servants for centuries (thanks to which we can now enjoy the televised repercussions via UK TV). Would you happily leave a company sales rep or consumer psychologist to roost invisibly in your bedroom?
Purveyors of home assistants, wearable sensors and the Internet of Things benefit from products that become unquestioned habits–that mostly disappear from our experience. Can this benefit ever be sufficiently reciprocal? Can it reliably make the world a better place? Until we know for sure, perhaps we should take Heidegger’s and Dix’s suggestions to heart and cultivate some distance, awareness and greater visibility of our tools.