Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Unintended Consequences.

Friday, February 25, 2022

Public Deliberation about Gene Editing in the Wild

M. K. Gusmano, E. Kaebnick, et al. (2021).
Hastings Center Report, 51(S2), S34–S41.
doi:10.1002/hast.1318

Abstract

Genetic editing technologies have long been used to modify domesticated nonhuman animals and plants. Recently, attention and funding have also been directed toward projects for modifying nonhuman organisms in the shared environment—that is, in the “wild.” Interest in gene editing nonhuman organisms for wild release is motivated by a variety of goals, and such releases hold the possibility of significant, potentially transformative benefit. The technologies also pose risks and are often surrounded by high uncertainty. Given the stakes, scientists and advisory bodies have called for public engagement in the science, ethics, and governance of gene editing research in nonhuman organisms. Most calls for public engagement lack details about how to design a broad public deliberation, including who should participate, how to structure the conversations, how to report on the content, and how to link the deliberations to policy. We summarize the key design elements that can improve broad public deliberations about gene editing in the wild.

Here is the gist of the paper:

We draw on interdisciplinary scholarship in bioethics, political science, and public administration to move forward on this knot of conceptual, normative, and practical problems. When is broad public deliberation about gene editing in the wild necessary? And when it is, how should it be done? These questions lead to a suite of further questions about, for example, the rationale and goals of deliberation, the features of these technologies that make public deliberation appropriate or inappropriate, the criteria by which “stakeholders” and “relevant publics” for these uses might be identified, how different approaches to public deliberation map onto the challenges posed by the technologies, how the topic to be deliberated upon should be framed, and how the outcomes of public deliberation can be meaningfully connected to policy-making.

Friday, August 30, 2019

The Technology of Kindness—How social media can rebuild our empathy—and why it must.

Jamil Zaki
Scientific American
Originally posted August 6, 2019

Here is an excerpt:

Technology also builds new communities around kindness. Consider the paradox of rare illnesses such as cystic fibrosis or myasthenia gravis. Each affects fewer than one in 1,000 people, but there are many such conditions, meaning there are many people who suffer in ways their friends and neighbors don’t understand. Millions have turned to online forums, such as Facebook groups or the site RareConnect. In 2011, Priya Nambisan, a health policy expert, surveyed about 800 members of online health forums. Users reported that these groups offer helpful tips and information but also described them as heartfelt communities, full of compassion and commiseration.

Other platforms allow anyone to count on the kindness of strangers. These sites train users to provide empathetic social support and then unleash their goodwill on one another. Some express their struggles; others step in to provide support. Users find these platforms deeply soothing. In a 2015 survey, 7 Cups users described the kindness they received on the site as being as helpful as professional psychotherapy. Users on these sites also benefit from helping others. In a 2017 study, psychologist Bruce Doré and his colleagues assigned people to use either Koko or another website and tested their subsequent well-being. Koko users’ levels of depression dropped after spending time on the site, especially when they used it to support others.

The info is here.

Monday, January 14, 2019

The Amazing Ways Artificial Intelligence Is Transforming Genomics and Gene Editing

Bernard Marr
Forbes.com
Originally posted November 16, 2018

Here is an excerpt:

Another thing experts are working to resolve in the process of gene editing is how to prevent off-target effects—when the tools mistakenly work on the wrong gene because it looks similar to the target gene.

Artificial intelligence and machine learning help make gene editing initiatives more accurate, cheaper and easier.
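
The article doesn’t walk through any code, but the off-target idea above can be made concrete. Below is a minimal, hypothetical Python sketch, not taken from the article and far simpler than the machine-learning models it describes: it ranks candidate genomic sites by how many bases differ from a CRISPR guide sequence, on the assumption that near-matches are the likeliest off-target edits. All sequences and names here are invented for illustration.

    # Illustrative sketch only: rank candidate genomic sites by how closely
    # they resemble a CRISPR guide sequence. Real off-target predictors use
    # trained models over many features, not raw mismatch counts.

    def mismatches(guide: str, site: str) -> int:
        """Count positions where a candidate site differs from the guide."""
        return sum(g != s for g, s in zip(guide, site))

    def rank_off_targets(guide: str, sites: list[str]) -> list[tuple[str, int]]:
        """Sort candidate sites from most to least similar to the guide."""
        return sorted(((s, mismatches(guide, s)) for s in sites), key=lambda p: p[1])

    if __name__ == "__main__":
        guide = "GACGTTAGCCTGAACGTTAG"  # hypothetical 20-nucleotide guide
        candidates = [
            "GACGTTAGCCTGAACGTTAG",  # perfect match: the intended target
            "GACGTTAGCCTGAACGTCAG",  # 1 mismatch: plausible off-target site
            "GTCGATAGCCAGAACGATAG",  # 4 mismatches: much less likely
        ]
        for site, n in rank_off_targets(guide, candidates):
            print(f"{site}  mismatches={n}")

Even this toy version shows why off-target prediction is hard: a one-base difference is easy for an enzyme to tolerate but also easy for a simple filter to miss, which is where models trained on observed editing outcomes come in.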

The future for AI and gene technology is expected to include pharmacogenomics, genetic screening tools for newborns, enhancements to agriculture, and more. While we can’t predict the future, one thing is for sure: AI and machine learning will accelerate our understanding of our own genetic makeup and that of other living organisms.

The info is here.

Tuesday, December 4, 2018

Letting tech firms frame the AI ethics debate is a mistake

Robert Hart
www.fastcompany.com
Originally posted November 2, 2018

Here is an excerpt:

Even many ethics-focused panel discussions—or manel discussions, as some call them—are pale, male, and stale. That is to say, they are made up predominantly of old, white, straight, and wealthy men. Yet these discussions are meant to be guiding lights for AI technologies that affect everyone.

A historical illustration is useful here. Consider polio, a disease that was declared global public health enemy number one after the successful eradication of smallpox decades ago. The “global” part is important. Although the movement to eradicate polio was launched by the World Health Assembly, the decision-making body of the United Nations’ World Health Organization, the eradication campaign was spearheaded primarily by groups in the U.S. and similarly wealthy countries. Promulgated with intense international pressure, the campaign distorted local health priorities in many parts of the developing world.

It’s not that the developing countries wanted their citizens to contract polio. Of course, they didn’t. It’s just that they would rather have spent the significant sums of money on more pressing local problems. In essence, one wealthy country imposed its own moral judgment on the rest of the world, with little forethought about the potential unintended consequences. The voices of a few in the West grew to dominate and overpower those elsewhere—a kind of ethical colonialism, if you will.

The info is here.

Monday, October 22, 2018

Why the Gene Editors of Tomorrow Need to Study Ethics Today

Katie Palmer
www.wired.com
Originally posted September 18, 2018

Two years after biochemist Jennifer Doudna helped introduce the world to the gene-editing tool known as Crispr, a 14-year-old from New Jersey turned it loose on a petri dish full of lung cancer cells, disrupting their ability to multiply. “In high school, I was all on the Crispr bandwagon,” says Jiwoo Lee, who won top awards at the 2016 Intel International Science and Engineering Fair for her work. “I was like, Crisprize everything!” Just pick a snippet of genetic material, add one of a few cut-and-paste proteins, and you’re ready to edit genomes. These days, though, Lee describes her approach as “more conservative.” Now a sophomore at Stanford, she spent part of her first year studying not just the science of Crispr but also the societal discussion around it. “Maybe I matured a little bit,” she says.

Doudna and Lee recently met at the Innovative Genomics Institute in Berkeley to discuss Crispr’s ethical implications. “She’s so different than I was at that age,” Doudna says. “I feel like I was completely clueless.” For Lee’s generation, it is critically important to start these conversations “at as early a stage as possible,” Doudna adds. She warns of a future in which humans take charge of evolution—both their own and that of other species. “The potential to use gene editing in germ cells or embryos is very real,” she says. Both women believe Crispr may eventually transform clinical medicine; Lee even hopes to build her career in that area—but she’s cautious. “I think there’s a really slippery slope between therapy and enhancement,” Lee says. “Every culture defines disease differently.” One country’s public health campaign could be another’s eugenics.

The info is here.

Monday, August 20, 2018

Ethics and the pursuit of artificial intelligence

Daniel Wagner
South China Morning Post
Originally posted August 6, 2018

So many businesses and governments are scurrying to get into the artificial intelligence (AI) race that many appear to be losing sight of things that should matter along the way – such as legality, good governance, and ethics.

In the AI arena the stakes are extremely high and it is quickly becoming a free-for-all from data acquisition to the stealing of corporate and state secrets. The “rules of the road” are either being addressed along the way or not at all, since the legal regime governing who can do what to whom, and how, is either wholly inadequate or simply does not exist. As is the case in the cyber world, the law is well behind the curve.

Ethical questions abound with AI systems, including how machines recognise and process values and ethical paradigms. AI is certainly not unique among emerging technologies in creating ethical quandaries, but ethical questions in AI research and development present unique challenges in that they ask us to consider whether, when, and how machines should make decisions about human lives – and whose values should guide those decisions.

In a world filled with unintended consequences, will our collectively shared values fall by the wayside in an effort to reach AI supremacy? Will the notion of human accountability eventually disappear in an AI-dominated world? Could the commercial AI landscape evolve into a winner-takes-all arena in which only one firm or machine is left standing?

The information is here.

Saturday, March 10, 2018

Universities Rush to Roll Out Computer Science Ethics Courses

Natasha Singer
The New York Times
Originally posted February 12, 2018

Here is an excerpt:

“Technology is not neutral,” said Professor Sahami, who formerly worked at Google as a senior research scientist. “The choices that get made in building technology then have social ramifications.”

The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.

“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”

Computer science programs are required to make sure students have an understanding of ethical issues related to computing in order to be accredited by ABET, a global accreditation group for university science and engineering programs. Some computer science departments have folded the topic into a broader class, and others have stand-alone courses.

But until recently, ethics did not seem relevant to many students.

The article is here.

Friday, February 2, 2018

Has Technology Lost Society's Trust?

Mustafa Suleyman
The RSA.org
Originally published January 8, 2018

Has technology lost society's trust? Mustafa Suleyman, co-founder and Head of Applied AI at DeepMind, considers what tech companies have got wrong, how to fix it, and how technology companies can change the world for the better. (7-minute video)


Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More and more social scientists are using AI with the aim of solving society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Sunday, August 20, 2017

The ethics of creating GMO humans

The Editorial Board
The Los Angeles Times
Originally posted August 3, 2017

Here is an excerpt:

But there is also a great deal we still don’t know about how minor issues might become major ones as people pass on edited DNA to their offspring, and as people who have had some genes altered reproduce with people who have had other genes altered. We’ve seen how selectively breeding to produce one trait can unexpectedly produce other, less desirable outcomes. Remember how growers were able to create tomatoes that were more uniformly red but, in the process, turned off the gene that gave tomatoes their flavor?

Another major issue is the ethics of adjusting humans genetically to fit a favored outcome. Today it’s heritable disease, but what traits might be deemed undesirable, and targeted for elimination, in the future? Short stature? Introverted personality? Klutziness?

To be sure, it’s not as though everyone is likely to line up for gene-edited offspring rather than just having babies, at least for the foreseeable future. The procedure can be performed only on in vitro embryos and requires precision timing.

The article is here.