Rethinking the Future (again)
It’s a strange thing when time catches up with a book and renders even the most brilliant one obsolete, or nearly so.
If I had read Kevin Kelly’s What Technology Wants when it first came out a decade ago, I would have agreed with every word. It is a brilliant work of scholarship that makes connections you might not think of, but which seem obvious once Kelly presents them. He has a mind like a spider’s web that catches everything that approaches it. He thinks about technology like nobody else. But in the last few months, after reading The Good Ancestor by Roman Krznaric and Humankind by Rutger Bregman, I’ve seen how trusted scientific research can be revisited, and even revised. When I read about the Kitty Genovese murder in Malcolm Gladwell’s The Tipping Point, I thought I was reading a tragic story about a woman whose cries for help went unheard by her neighbors and who was left to die alone. When revisited, it turns out this was not the case at all. Genovese died in the arms of a friend. In the same book, Gladwell cited the study that popularized the “broken windows” theory of policing: punish people for small crimes, the reasoning went, and it will set an example that deters more serious ones. This isn’t true either.
Bregman, in Humankind, digs deeper into the Genovese case than Gladwell did, and shows that the murdered woman was part of a community. When she was attacked, it did not go unremarked by her neighbors, some of whom did call the police. And he shows that broken windows in a neighborhood do not necessarily lead to more crime, though they can convince residents that they need more aggressive policing.
Authors cite the research available at the time, or become entangled in their confirmation bias, or can’t see any farther than their cohort. One of those, maybe all three, is the case with What Technology Wants. Kelly cites the research available to him at the time; it just doesn’t hold up. I present a few of those points here, not as a refutation of the entire book, but to suggest revisiting claims Kelly couldn’t have known about at the time or didn’t fully examine. There’s no doubt that Kelly was a fan of technology, assuming, like many when he wrote the book, that it would solve all our problems. This, too, is not true. The solutions we need will come from humans, not from Bitcoin, or the blockchain, or NFTs, or any extra-human source. We have to look deeper into ourselves, a point made eloquently by Krznaric in The Good Ancestor.
Another example: Kelly claimed that ancient peoples lived in a constant state of war. But in The Good Ancestor, Krznaric cites studies showing that matriarchal societies were peaceful and put a high value on equality.
And another: Kelly wrote that cameras everywhere would reduce the authority of the authorities. I suppose, when he was writing that, he thought that all police would wear body cameras and that cell phone cameras carried in public would record all bad behavior. Partly true, yes? Phones have recorded the bad behavior of police, which has resulted in some much-deserved convictions, and also the bad behavior of insurrectionists, which merely served to inflame other insurrectionists. But the cameras of the authorities have increased the authority of the authorities. Industry has helped. The example here is the Amazon-owned company Ring, which provides police with home recordings of anyone in the neighborhood who seems not to belong there.
The security cameras that are everywhere are good for the state. License plate readers, facial recognition in airports, cameras in the public square — private companies have no qualms about pooling this data and providing it to law enforcement. Kelly was wrong about this, too.
A big omission in the book: no mention of the cost of climate change. In Kelly’s telling, progress is all good: no judgment, no environmental costs, no steady planetary destruction. How could a smart guy like Kelly have missed that? Maybe on purpose. He puts his boosterism about tech above quibbles about planetary heating. He celebrates how bloggers and social posters contribute to the collective catalogue of intelligence but doesn’t mention how Facebook and Google profit from our contributions. We donate our memories, our ideas, our very lives; Google and Facebook sell them. Didn’t Kelly know they were doing that? When he wrote the book, the signs were already there. He didn’t mention any of it.
“The web is getting better every day,” he writes, which is a good argument if you’re trying to make the tech companies feel good. For many of us, however, the web has become something we need to escape from, an attention-dominating mind suck that tracks us everywhere. “Better every day?” Depends, I guess, on your definition of better.
He is right to celebrate the web as a “planetary computer” that can erase the boundaries between the human intelligence network and the machine intelligence network. It sounds awesome, at first. But it depends on who controls it, what it’s used for, and whether it harms or helps us.
He sees technology as evolving itself as aggressively as it evolves us, even (he doesn’t include this part) if that means changing the planet into one we can’t live on anymore. Technology wants to make more of itself, he says, to make us better. But this will come at a cost to the planet that we cannot pay. How is that progress?