The Dark Side of Science

Will nanomachines one day be launched into our bloodstreams to monitor health and combat disease? Or will “self-replicating nanobots” proliferate out of control until they overrun the planet? A runaway plague of rogue nanobots wouldn’t violate any basic scientific law, but that doesn’t make it realistic. Such an extreme outcome is unlikely, but it isn’t impossible either — and that is exactly what worries critics of the technology.

It is foolhardy to venture predictions about what science will achieve this century; scientific predictions have gone notoriously awry in the past. In 1933 Lord Rutherford, the greatest nuclear expert of his time, famously dismissed as “moonshine” any practical relevance of nuclear energy. So in thinking about the future, we would do well to follow two guidelines. First, we should keep our minds open, or at least ajar, to concepts that now seem on the wild, speculative fringe of possibility. And second, we should remember that while new discoveries will offer marvelous prospects, they will also have a dark side. Some fear that nanotechnology could prove to be one of the 21st century’s darker technologies, as potentially disruptive and dangerous as nuclear weapons.

Nanotechnology is just one of a suite of advances — including biotechnology, genetics and robotics — about which some ethicists, politicians, consumer watchdogs and even a few scientists are concerned. They fear that these developments may have spin-offs so dangerous that, when the genie is out of the bottle, the outcome may be impossible to control. The long-term effects of genetically modified food, gene therapy and even the radiation from mobile phones are just not known, critics argue, so why are we rushing to develop these potentially harmful technologies?

The surest safeguard against such dangers would be to deny the world the basic science that underpins these advances. So, should scientists stop their research — even if it is in itself safe and ethical — simply because of unease about where it might lead? Should we go slow in some areas, or leave some doors of possibility permanently closed? Should we restrict science’s traditional freedom of inquiry and international openness?

In 1975, prominent molecular biologists did just that, proposing a moratorium on what were then novel types of gene-splicing experiments. The moratorium soon came to seem unduly cautious, but that doesn’t mean it was unwise at the time, when the risks were genuinely uncertain. It would be far harder to achieve anything similar today: the research community is much larger, and competition — sharpened by commercial pressures — is more intense.

To put effective brakes on a field of research would require international consensus. If one country alone imposed regulations, the most dynamic researchers and enterprising companies would simply move to another country, as is already happening in stem cell research. And even if every government agreed to halt research in a particular field, the chances of effective enforcement would be slim. There will surely be a cloned baby at some point, for instance, regardless of the regulations.

But perhaps the greatest obstacle is that most scientific discoveries can be applied for both good and ill, and the specific uses of any single technology cannot be foreseen. The inventors of lasers, for example, had no idea that their work would one day be used in eye surgery. Today, the same techniques that could produce voracious nanobots could also yield effective new treatments for some of the world’s most intractable diseases.

The truth is, we simply don’t know where new technologies will lead, and we can never be fully secure against scientific error or scientific terror. Today’s advances offer tremendous possibilities and tremendous risks — and we’re just going to have to learn to live with both.
