Originally posted November 13, 2017
Here is an excerpt:
There has been a lot of focus on AI ethics and how to keep the technology safe, and it's kind of a polarized discussion, like a lot of discussions nowadays. I've actually talked about both promise and peril for quite a long time. Technology is always going to be a double-edged sword. Fire kept us warm, cooked our food, and burned down our houses. These technologies are much more powerful. It's also a long discussion, but I think we should go through three phases, at least I did, in contemplating this. First, delight at the opportunity to overcome age-old afflictions: poverty, disease, and so on. Then alarm that these technologies can be destructive and cause even existential risks. And finally, I think where we need to come out is an appreciation that we have a moral imperative to continue progress in these technologies, because despite the progress we've made (and that's a whole other issue: people think things are getting worse, but they're actually getting better) there's still a lot of human suffering to be overcome. It's only continued progress, particularly in AI, that will enable us to keep overcoming poverty, disease, and environmental degradation while we attend to the peril.
And there's a good framework for doing that. Forty years ago, there were visionaries who saw both the promise and the peril of biotechnology, essentially reprogramming biology away from disease and aging. So they held the Asilomar Conference, at the conference grounds in Asilomar, and came up with ethical guidelines and strategies for keeping these technologies safe. Now, 40 years later, we are seeing the clinical impact of biotechnology. It's a trickle today; it will be a flood over the next decade. The number of people who have been harmed, either accidentally or through intentional abuse of biotechnology, so far has been zero. It's a good model for how to proceed.
The article is here.