Ever since the 1970s, the promise of increased productivity through technology has been under intense scrutiny. It's a promise that has pushed questions about the nature and role of technology in society into the hands of scholars, including anthropologists. For those working in industry – really, one of the few places where anthropologists can engage with technology the real, rather than technology the theory – the question always boils down to value. Whether it's big data, AI, biotech, nanotechnology, robots, smart dust or driverless cars, the one question we're always looking to answer is: What's the value of a new technology?
Economically, the gap between the promise of technological efficiency – particularly information technology – and measurable gains is known as the productivity paradox. Whether a true paradox or a series of assumptions about the impact of technology on productivity, the question of the value of technology sparked heated debate among economists over the first wave of computerization. In 1987, the MIT economist Robert Solow put the question thus: "You can see the computer age everywhere but in the productivity statistics."
But studies by economists such as Kevin Stiroh (2002) have shown that investments in new IT do, in most cases, generate positive effects. Certainly, the successful production of those effects depends on how quickly a company can locate valuable applications of its IT and how critically it can rethink and deliver on its value proposition to consumers. Such IT implementation processes take time, as organizational cultures need to absorb change and adapt to new ways of working, communicating and conducting everyday business. Yet, when and where they do, these studies reveal that most IT investments do boost productivity.
Ultimately, the productivity paradox is – as first-year anthropology students are prone to decry once they've got a few lectures under their belts – reductive. Even they know that it fails to consider the many other values that people ascribe to technology, such as power, freedom, choice or control, and how these might relate to personal or cultural value rather than to the economic values of the corporation striving for efficiency, increased profits and market share. Because when value tilts towards only efficiency, increased profits and market share, things can get scary. At least we think they can. Primed by decades of classic science fiction novels, we somehow feel it in our gut that when The Company gets big, maybe too big, it will unleash its capitalist fury on us all with job-stealing robots and mind-controlling A.I.
So where does this fear of the technological come from? Or this sense that, as scholars, we feel almost obliged to give technology a failing grade when it comes to its positive impact on our lives and productivity? The answer is 19th- and 20th-century romanticism and contemporary articulations of it, like "technology is an intellectually and imaginatively degenerative force in the lives of our children." Again, even those first-year anthropology students with a few lectures under their belts will tell you that it's intellectually flawed for scholars to continue to promote this storied, romantic idea of the human as non-technological, as only truly or purely human in a noble, non-techy-savage kind of way.
Humanity has never been that way. As Marcel Mauss pointed out in 1934, our first technology was our biology, our very corporeality. Next, of course, we can all debate just how valuable stone flaking was to our species. In fact, the very crafts, arts and practices that our bodies and minds have afforded us over the past million or so years are at the very core of humanity and of what it means to be human.
Like flaking stone, IT, digitalization, big data, AI and all those bells and whistles we talk about as being on the horizon are technologies that work as extensions of our corporeal existence. They do or will form discrete moments in our material culture (and beyond). As such, they are not by default against us – they are always working for us, or for somebody, as an extension or a material prosthesis of an intention.
Donna Haraway (1985/1991) said it: we are all cyborgs now. Whether through our hands, mouths, eyes, ears, brains or whatever part of our physicality, what we can do through new technologies amazes us because our networks, communications, tasks, platforms, activities and events are extensions of our selves. The digital technology that is so much a part of our daily lives today has become what Marshall McLuhan described as an extended nervous system that enables us to wear all humankind as our skin.
And as the social scientists of humankind, we anthropologists need critique to explore and unearth the problems that technology might create. Yes, there might exist unfair value propositions that put us in an uneven, asymmetric power position with powerful companies that want to tie us into their much tighter, technologically enabled networks. Yes, there might be technologies that take away our freedom by enabling large-scale surveillance at the individual level. And yes, there might be algorithms that do not work every time, everywhere, and for everybody. More than ever there is a role for critique, but we should not conflate political critique with flawed claims about humanity and technology. It is irresponsible to advance lazy arguments that can be disregarded as untrue and un-actionable.
Lazy critique – prevalent in the habits of armchair philosophers – does not help in solving these issues. This is why critique has seemingly run out of steam, as Bruno Latour (2004) wrote almost fifteen years ago. Lazy critique places value judgements on entire systems, generalising from one or two lived experiences, and ends up being imprecise. It may strike a chord with the reader's gut, but it leads to inaction because (a) it is often not based on facts; (b) it does not provide enough specificity to enable action, such as which aspects of new technologies might be valuable to society or companies, and which may be cause for concern; and (c) it is so abstract that it does not help society or companies prioritise which areas of their deployment of new technologies to rethink or reform.
No wonder, then, that critical thinking has turned against itself. We have entered an era in which lazy critique has become so deeply entrenched in our economy, politics and science that even well-educated people can critique and 'relativize' matters of fact such as climate change, evolution, even the collapse of the Twin Towers without ethical concern. This is the cultural price of critique without thinking about the values we want to create in the world.
What we need is first to be more responsible with critique, and then to describe an alternative, more valuable world through our anthropological imagination. Now is the time for anthropologists to contribute more to visions of the future, but we'll need to learn how to practice foresight. Now is the time for anthropologists to shape better value creation mechanisms and innovative value propositions, but we'll need to brush up on our strategy skills. To do these and other things, anthropologists need to become more precise in critique, and more creative and bold in suggesting valuable alternatives. They must forge ahead to participate in the critical re-design of positive futures with new technologies. And we need to reimagine the entire value proposition of the technological future.
Thus, we need to reimagine the value propositions of a future where technology plays its role(s) and ask: Where do we want to find ourselves?
Whether it's big data, AI, biotech, nanotechnology, robots, smart dust, driverless cars or something few, if any, of us even know about yet, it's time to engage as anthropologists, not as self-appointed troglodytes comfy in our armchairs who sound a warning to our so-called critical thinking pals every time a new gizmo comes out that makes us uneasy. Next time you feel that so-called critical thinking urge come alive, ask yourself three questions: How do my critical thinking faculties shape my approach to field research and how I seek human truths in the experience of technology? How do I shape what I have learned about humans and technology into insights that can inspire a culturally valuable point of view? How can I participate in shaping more culturally sensitive and, ultimately, valuable experiences for people using technology?
Answer those questions without hitting an existential technology panic button and you are well on your way.
Haraway, Donna (1991). “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs and Women: The Reinvention of Nature. Routledge.
Mauss, Marcel (1934). "Les techniques du corps." Journal de Psychologie XXXII (3–4), 15 March – 15 April 1936. Paper presented to the Société de Psychologie on 17 May 1934.
McLuhan, Marshall (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Stiroh, Kevin (2002). "Information Technology and the U.S. Productivity Revival: What Do the Industry Data Say?" American Economic Review 92 (5): 1559–1576.
Solow, Robert (1987). "We'd Better Watch Out." New York Times Book Review, 12 July: 36.
Latour, Bruno (2004). "Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern." Critical Inquiry 30 (2): 225–248.