The Dark Side of World-Changing Technologies
This is the third installment of a five-part WorldPost series on the world beyond 2050. The series is adapted from the Nierenberg Prize Lecture by Lord Martin Rees in La Jolla, Calif.
There are numerous novel technologies that will change society and empower individuals -- but they have a dark side that's all too frequently overlooked.
Our world increasingly depends on elaborate networks: electric power grids, air traffic control, international finance, globally dispersed manufacturing and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns -- real-world analogues of what happened in 2008 to the financial system. Our cities would be paralyzed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days, causing the gravest havoc in the shambolic megacities of the developing world. And social media can spread panic and rumor and economic contagion literally at the speed of light.
To guard against the downsides of such an interconnected world plainly requires international collaboration. For instance, whether or not a pandemic gains a global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.
Advances in microbiology -- diagnostics, vaccines and antibiotics -- offer prospects of containing pandemics. But the same research has controversial aspects. For instance, in 2012, a group in Wisconsin showed that it was surprisingly easy to make the influenza virus both more virulent and more transmissible. To some, this was a scary portent of things to come. In 2014 the U.S. government decided to cease funding these so-called "gain of function" experiments.
Meanwhile, the new CRISPR technique for gene editing is hugely promising, but there are ethical concerns raised by Chinese experiments on human embryos and by unintended consequences of "gene drive" programs.
Back in the early days of recombinant DNA research, a group of biologists met in Asilomar, on the California coast, and agreed on guidelines on what experiments should and shouldn't be done. This seemingly encouraging precedent has triggered several meetings to discuss recent developments in the same spirit, notably an inter-academy gathering in Washington in December. But today, 40 years after Asilomar, the research community is far more broadly international and more influenced by commercial pressures. I'd worry that whatever regulations are imposed, on prudential or ethical grounds, can't be enforced worldwide any more than drug laws can. Whatever can be done will be done by someone, somewhere.
And that's a nightmare. Biotech involves small-scale dual-use equipment. Indeed, biohacking is burgeoning even as a hobby and competitive game.
We know all too well that technical expertise doesn't guarantee balanced rationality. The global village will have its village idiots and they'll have global range. The rising empowerment of tech-savvy groups or individuals with bio and cybertechnology will pose an intractable challenge to governments and aggravate the tension between freedom, privacy and security.
Concerns about bio-error and bio-terror are relatively near-term -- within 10 or 15 years. What about 2050 and beyond? The smartphone, the Web and their ancillaries would have seemed like magic even 20 years ago. So, looking several decades ahead, we must keep our minds open to transformative advances that may now seem like science fiction.
On the bio front, the great physicist Freeman Dyson conjectures a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. I'd guess that this is comfortably beyond the sci-fi fringe, but were even part of this scenario to come about, our ecology -- and even our species -- surely would not long survive unscathed.
And what about another transformative technology: the field of robotics and artificial intelligence?
It's been 20 years since IBM's Deep Blue beat Kasparov, the world chess champion. More recently, another IBM computer won a TV game show -- not the mindless kind featuring bubble-headed celebs (winning that would be a doddle), but Jeopardy, which demands wide knowledge and crossword-style wordplay.
Computers use "brute force" methods. They learn to identify dogs, cats and human faces by "crunching" through millions of images -- not the way babies learn. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!).
There have been exciting advances in what's called generalized machine learning -- DeepMind (a small London company recently bought by Google) created a machine that can figure out the rules of old Atari games without being told, and then play them better than humans.
But advances are patchy. Robots are still clumsier than a child in moving pieces on a real chessboard. They can't tie your shoelaces or cut your toenails. But sensor technology, speech recognition, information searches and so forth are advancing apace.
Robots won't just take over manual work (indeed, plumbing and gardening will be among the hardest jobs to automate), but also routine legal work (conveyancing and such), medical diagnostics and even surgery.
Can robots cope with emergencies? For instance, if an obstruction suddenly appears on a crowded highway, can Google's driverless car discriminate whether it's a paper bag, a dog or a child? The likely answer is that its judgement will never be perfect but will be better than the average driver. Machine errors will occur, but not as often as human error. When accidents do occur, however, they will create a legal minefield. Who should be held responsible -- the "driver," the owner or the designer?
The big social and economic question is this: Will this second machine age be like earlier disruptive technologies -- the car, for instance -- and create as many jobs as it destroys? Or is it really different this time?
The money earned by robots could generate huge wealth for an elite. Many have argued for the need for massive redistribution to ensure that everyone has at least a living wage. Some have argued we need to create and upgrade public service jobs where the human element is crucial and is now undervalued -- carers for young and old, custodians, gardeners in public parks and so on.
But let's look further ahead.
If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people.
Such machines pervade popular culture -- in movies like "Her," "Transcendence" and "Ex Machina."
What if a machine developed a mind of its own? Would it stay docile? Go rogue? If it could infiltrate the Internet -- and the Internet of Things -- it could manipulate the rest of the world. It might have goals utterly orthogonal to human wishes -- or even treat humans as an encumbrance.
Some AI pundits take this seriously and think the field already needs guidelines -- just as biotech does. But others regard these concerns as premature -- and worry less about artificial intelligence than about natural stupidity.
Be that as it may, it's likely that society will be transformed by autonomous robots even though the jury's out on whether they'll be idiot savants or display superhuman capabilities.
Even before that, there is disagreement about the route towards human-level intelligence. Some think we should emulate nature and reverse-engineer the human brain. Others say that's as misguided as designing a flying machine by copying how birds flap their wings. And philosophers debate whether "consciousness" is special to the wet, organic brains of humans, apes and dogs -- so that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life.
Ray Kurzweil, who now works at Google, argues that once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones -- an intelligence explosion. He thinks that humans could transcend biology by merging with computers. In old-style spiritualist parlance, they would "go over to the other side."
Kurzweil is the most prominent proponent of the concept of the singularity. But he's worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of cryonic enthusiasts based in California who were calling themselves the "society for the abolition of involuntary death." They would freeze your body, so that when immortality's on offer you can be resurrected or your brain downloaded. If you can't afford the full whack there's a cut-price option of having just your head frozen.
I told them I'd rather end my days in an English churchyard than a Californian refrigerator. They derided me as a "deathist" -- really old fashioned.
But of course research on aging is being seriously prioritized. Will the benefits be incremental? Or is aging a "disease" that can be cured? Dramatic life extension would plainly be a real wild card in population projections, with huge social ramifications. But it may happen, along with human enhancement in other forms.