Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Help.

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 
2021;47(3):410-425. 
doi:10.1177/0146167220928237

Abstract

People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.

Wednesday, October 28, 2020

Should we campaign against sex robots?

Danaher, J., Earp, B. D., & Sandberg, A. (forthcoming). 
In J. Danaher & N. McArthur (Eds.),
Robot Sex: Social and Ethical Implications.
Cambridge, MA: MIT Press.

Abstract

In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.

Conclusion

Robots are going to form an increasingly integral part of human social life. Sex robots are likely to be among them. Though the proponents of the CASR seem deeply concerned by this prospect, we have argued that there is nothing in the nature of sex robots themselves that warrants preemptive opposition to their development. The arguments of the campaign itself are vague and premised on a misleading analogy between sex robots and human sex work. Furthermore, drawing upon the example of the Campaign to Stop Killer Robots, we suggest that there are no bad-making properties of sex robots that give rise to similarly serious levels of concern. The bad-making properties of sex robots are speculative and indirect: preventing their development may not prevent the problems from arising. Preventing the development of killer robots is very different: if you stop the robots you stop the prima facie harm.

In conclusion, we should preemptively campaign against robots when we have reason to think that a moral or practical harm caused by their use can best be avoided or reduced as a result of those efforts. By contrast, to engage in such a campaign as a way of fighting against—or preempting—indirect harms, whose ultimate source is not the technology itself but rather individual choices or broader social institutions, is likely to be a comparative waste of effort.

Monday, September 21, 2020

The ethics of pausing a vaccine trial in the midst of a pandemic

Patrick Skerrett
statnews.com
Originally posted 11 Sept 20

Here is an excerpt:

Is the process for clinical trials of vaccines different from the process for drug or device trials?

Mostly no. The principles, design, and basic structure of a vaccine trial are more or less the same as for a trial for a new medication. The research ethics considerations are also similar.

The big difference between the two is that the participants in a preventive vaccine trial are, by and large, healthy people — or at least they are people who don’t have the illness for which the agent being tested might be effective. That significantly heightens the risk-benefit calculus for the participants.

Of course, some people in a Covid-19 vaccine trial could personally benefit if they live in communities with a lot of Covid-19. But even then, they might never get it. That’s very different than a trial in which individuals have a condition, say melanoma or malignant hypertension, and they are taking part in a trial of a therapy that could improve or even cure their condition.

Does that affect when a company might stop a trial?

In every clinical trial, the data and safety monitoring board takes routine and prescheduled looks at the accumulated data. They are checking mainly for two things: signals of harm and evidence of effectiveness.

These boards will recommend stopping a trial if they see a signal of concern or harm. They may do the same thing if they see solid evidence that people in the active arm of the trial are doing far better than those in the control arm.

In both cases, the action is taken on behalf of those participating in the trial. But it is also taken to advance the interests of people who would get this intervention if it was to be made publicly available.

The current situation with AstraZeneca involves a signal of concern. The company’s first obligation is to the participants in the trial. It cannot ethically proceed with the trial if there is reason for concern, even based on the experience of one participant.

Friday, September 11, 2020

Why Being Kind Helps You, Too—Especially Now

Elizabeth Bernstein
The Wall Street Journal
Originally posted 11 August 20

Here is an excerpt:

Kindness can even change your brain, says Stephanie Preston, a psychology professor at the University of Michigan who studies the neural basis for empathy and altruism. When we’re kind, a part of the reward system called the nucleus accumbens activates—our brain responds the same way it would if we ate a piece of chocolate cake. In addition, when we see the response of the recipient of our kindness—when the person thanks us or smiles back—our brain releases oxytocin, the feel-good bonding hormone. This oxytocin boost makes the pleasure of the experience more lasting.

It feels so good that the brain craves more. “It’s an upward spiral—your brain learns it’s rewarding, so it motivates you to do it again,” Dr. Preston says.

Are certain acts of kindness better than others? Yes. If you want to reap the personal benefits, “you need to be sincere,” says Sara Konrath, a psychologist and associate professor at the Indiana University Lilly Family School of Philanthropy, where she runs a research lab that studies empathy and altruism.

It also helps to expect good results. A study published in the Journal of Positive Psychology in 2019 found that people who believed kindness is good for them showed a greater increase in positive emotions, satisfaction with life and feelings of connection with others—as well as a greater decrease in negative emotions—than those who did not.

How can you be kind even when you may not feel like it? Make it a habit. Take stock of how you behave day to day. Are you trusting and generous? Or defensive and hostile? “Kindness is a lifestyle,” says Dr. Konrath.

Start by being kind to yourself—you’re going to burn out if you help everyone else and neglect your own needs. Remember that little acts add up: a smile, a phone call to a lonely friend, letting someone have the parking space. Understand the difference between being kind and being nice—kindness is genuinely helping or caring about someone; niceness is being polite. Don’t forget your loved ones. Kindness is not just for strangers.

The info is here.

Thursday, July 30, 2020

Structural Competency Meets Structural Racism: Race, Politics, and the Structure of Medical Knowledge

Jonathan M. Metzl and Dorothy E. Roberts
Virtual Mentor. 2014;16(9):674-690.
doi: 10.1001/virtualmentor.2014.16.9.spec1-1409.

Here is an excerpt:

The Clinical Implications of Addressing Race from a Structural Perspective

These brief case examples illustrate the complex ways that seemingly clinically relevant “cultural” characteristics and attitudes also reflect structural inequities, medical politics, legal codes, invisible discrimination, and socioeconomic disparities. Black men who appeared schizophrenic to medical practitioners did so in part because of the framing of new diagnostic codes. Lower-income persons who “refused” to eat well or exercise lived in neighborhoods without grocery stores or sidewalks. Black women who seemed to be uniquely harming their children by using crack cocaine while pregnant were victims of racial stereotyping, as well as of a selection bias in which decisions about which patients were reported to law enforcement depended on the racial and economic segregation of prenatal care. In this sense, approaches that attempt to address issues—such as the misdiagnosis of schizophrenia in black men, perceived diet “noncompliance” in minority populations, or the punishment of “crack mothers”—through a heuristic aimed solely at enhancing cross-cultural communication between doctors and patients, though surely well intentioned, will overlook the potentially pathologizing impact of structural factors set in motion long before patients or doctors enter exam rooms.

Structural factors impact majority populations as well as minority ones, and structures of privilege or opulence also influence expressions of illness and health. For instance, in the United States, research suggests that pediatricians disproportionately overdiagnose ADHD in white school-aged children. Until recently, medical researchers in many global locales assumed, wrongly, that eating disorders afflicted only affluent persons.

Yet of late, medicine and medical education have struggled most with addressing ways that structural forces impact and disadvantage communities of color. As sociologist Hannah Bradby rightly explains it, hypothesizing mechanisms that include the micro-processes of interactions between patients and professionals and the macro-processes of population-level inequalities is a missing step in our reasoning at present…. [A]s long as we see the solution to racism lying only in educating the individual, we fail to address the complexity of racism and risk alienating patients and physicians alike.

The info is here.

Wednesday, July 22, 2020

FCC Approves 988 as Suicide Hotline Number

Jennifer Weaver
KUTV.com
Originally posted 16 July 20

A three-digit number to connect to suicide prevention and mental health crisis counselors has been approved.

The Federal Communications Commission voted unanimously Thursday to make 988 the number people can call to be connected directly to the National Suicide Prevention Hotline.

Phone service providers have until July 2022 to implement the new number. The 10-digit number is currently 1-800-273-8255 (TALK).

Wednesday, July 15, 2020

COVID-19 is more than a public health challenge: it's a moral test

Thomas Reese
religionnews.com
Originally published 10 July 20

The time is already past to admit that the coronavirus pandemic in the United States is a moral crisis, not simply a public health and economic crisis.

While a certain amount of confusion back in February at the beginning of the crisis is understandable, today it is unforgivable. Bad leadership has cost thousands of lives and millions of jobs.

A large part of the failure has been in separating the economic crisis from the public health crisis when in fact they are intimately related. Until consumers and workers feel safe, the economy cannot revive. Nor should we take the stock market as the key measure of the country’s health, rather than the lives of ordinary people.

It can be difficult to see this as a moral crisis because what is needed is not heroic action, but simple acts that everyone must do. People simply need to wear masks, keep social distance and wash their hands. Employers need to provide working conditions where that is possible.

These are practices that public health experts have taught for decades. Too many in the United States have ignored them. Warnings about masks, for example, have been ignored.

For its part, government needs to enforce these measures, expand testing on a massive scale, do contact tracing and help people isolate themselves if they test positive. Instead, government, especially at the federal level, has failed. Businesses, especially bars, restaurants and entertainment venues, have remained open or been reopened too soon.

That it is possible to do the right thing and control the virus is obvious from the examples of South Korea, Thailand, New Zealand, China, Vietnam, most of Europe, New York, Massachusetts and Connecticut.

There is also the sin of presumption of those who trust in God to protect them from the virus while doing nothing themselves. Those who left it to the Lord forgot that “God helps those who help themselves.” There is also an arrogance in seeing ourselves as different from other mortals like us. Areas where people insisted they were somehow immune to this “blue” big-city virus have now been hit with comparable or worse infection rates.

The info is here.

Tuesday, July 7, 2020

Can COVID-19 re-invigorate ethics?

Louise Campbell
BMJ Blogs
Originally posted 26 May 20

The COVID-19 pandemic has catapulted ethics into the spotlight.  Questions previously deliberated about by small numbers of people interested in or affected by particular issues are now being posed with an unprecedented urgency right across the public domain.  One of the interesting facets of this development is the way in which the questions we are asking now draw attention, not just to the importance of ethics in public life, but to the very nature of ethics as practice, namely ethics as it is applied to specific societal and environmental concerns.

Some of these questions which have captured the public imagination were originally debated specifically within healthcare circles and at the level of health policy: what measures must be taken to prevent hospitals from becoming overwhelmed if there is a surge in the number of people requiring hospitalisation?  How will critical care resources such as ventilators be prioritised if need outstrips supply?  In a crisis situation, will older people or people with disabilities have the same opportunities to access scarce resources, even though they may have less chance of survival than people without age-related conditions or disabilities?  What level of risk should healthcare workers be expected to assume when treating patients in situations in which personal protective equipment may be inadequate or unavailable?   Have the rights of patients with chronic conditions been traded off against the need to prepare the health service to meet a demand which to date has not arisen?  Will the response to COVID-19 based on current evidence compromise the capacity of the health system to provide routine outpatient and non-emergency care to patients in the near future?

Other questions relate more broadly to the intersection between health and society: how do we calculate the harms of compelling entire populations to isolate themselves from loved ones and from their communities?  How do we balance these harms against the risks of giving people more autonomy to act responsibly?  What consideration is given to the fact that, in an unequal society, restrictions on liberty will affect certain social groups in disproportionate ways?  What does the catastrophic impact of COVID-19 on residents of nursing homes say about our priorities as a society and to what extent is their plight our collective responsibility?  What steps have been taken to protect marginalised communities who are at greater risk from an outbreak of infectious disease: for example, people who have no choice but to coexist in close proximity with one another in direct provision centres, in prison settings and on halting sites?

The info is here.

Sunday, June 21, 2020

Downloading COVID-19 contact tracing apps is a moral obligation

G. Owen Schaefer and Angela Ballantyne
BMJ Blogs
Originally posted 4 May 20

Should you download an app that could notify you if you had been in contact with someone who contracted COVID-19? Such apps are already available in countries such as Israel, Singapore, and Australia, with other countries like the UK and US soon to follow. Here, we explain why you might have an ethical obligation to use a tracing app during the COVID-19 pandemic, even in the face of privacy concerns.

(cut)

Vulnerability and unequal distribution of risk

Marginalized populations are both hardest hit by pandemics and often have the greatest reason to be sceptical of supposedly benign State surveillance. COVID-19 is a jarring reminder of global inequality, structural racism, gender inequity, entrenched ableism, and many other social divisions. During the SARS outbreak, Toronto struggled to adequately respond to the distinctive vulnerabilities of people who were homeless. In America, people of colour are at greatest risk in several dimensions – less able to act on public health advice such as social distancing, more likely to contract the virus, and more likely to die from severe COVID if they do get infected. When public health advice switched to recommending (or in some cases requiring) masks, some African Americans argued it was unsafe for them to cover their faces in public. People of colour in the US are at increased risk of state surveillance and police violence, in part because they are perceived to be threatening and violent. In New York City, black and Latino patients are dying from COVID-19 at twice the rate of non-Hispanic white people.

Marginalized populations have historically been harmed by State health surveillance. For example, indigenous populations have been the victims of State data collection to inform and implement segregation, dispossession of land, forced migration, as well as removal and ‘re‐education’ of their children. Stigma and discrimination have impeded the public health response to HIV/AIDS, as many countries still have HIV-specific laws that prosecute people living with HIV for a range of offences.  Surveillance is an important tool for implementing these laws. Marginalized populations therefore have good reasons to be sceptical of health related surveillance.

Thursday, May 21, 2020

The Difference Ethical Leadership Can Make in a Pandemic

Caterina Bulgarella
ethicalsystems.org
Originally posted May 2, 2020

Here is an excerpt:

Since the personal costs of social isolation also depend on the behavior of others, the growing clamors to reopen the economy create a twofold risk. On the one hand, a rushed reopening may lead to new contagion; on the other, it may blunt the progress that has already been made toward mitigation. Not only can more people get sick, but many others—especially, lower-risk groups like the young—may start reevaluating whether it makes sense to sacrifice themselves in the absence of a shared strategy toward controlling the spread.

Self-sacrifice becomes less of a hard choice when everybody does his/her part. In the presence of a genuinely shared effort, not only are the costs of isolation more fairly spread, but it’s easier to appreciate that one’s personal interest is aligned with everyone else’s. Furthermore, if people consistently cooperate and shelter-in-place, progress toward mitigation is more likely to unfold in a steady and linear fashion, potentially creating a positive-feedback loop for all to see.

Ultimately, whether people cooperate or not has more to do with how they weigh the costs and benefits of cooperation than the objective value of those costs and benefits. Uncertainty—such as the uncertainty of whether one’s personal sacrifices truly matter—may lead people to view cooperation as a more costly choice, but trust may increase its value. Similarly, if the choice to cooperate is framed in terms of what one can gain—such as in “stay home to avoid getting sick”—rather than in terms of how every contribution is critical for the common good, people may act more selfishly.

For example, some may start pitting the risk of getting sick against the risk of economic loss and choose to risk infection. In contrast, if people are forced to evaluate whether they bear responsibility for the life of others, they may feel compelled to cooperate. When it comes to these types of dilemmas, cooperation is less likely to manifest if the decisions to be made are framed in business terms rather than in ethical ones.

The info is here.

Friday, May 1, 2020

During the Pandemic, the FCC Must Provide Internet for All

Gigi Sohn
Wired.com
Originally published 28 April 20

If anyone believed access to the internet was not essential prior to the Covid-19 pandemic, nobody is saying that today. With ongoing stay-at-home orders in most states, high-speed broadband internet access has become a necessity to learn, work, engage in commerce and culture, keep abreast of news about the virus, and stay connected to neighbors, friends, and family. Yet nearly a third of American households do not have this critical service, either because it is not available to them, or, as is more often the case, they cannot afford it.

Lifeline is a government program that seeks to ensure that all Americans are connected, regardless of income. Started by the Reagan administration and placed into law by Congress in 1996, Lifeline was expanded by the George W. Bush administration and expanded further during the Obama administration. The program provides a $9.25 a month subsidy per household to low-income Americans for phone and/or broadband service. Because the subsidy is so minimal, most Lifeline customers use it for mobile voice and data services.

The Federal Communications Commission sets Lifeline’s policies, including rules about who is eligible to receive the subsidy, its amount, and which companies can provide the service. Americans whose income is below a certain level or who receive government assistance—such as Medicaid, the Supplemental Nutrition Assistance Program, or SNAP, and Supplemental Security Income, or SSI—are eligible.

During this crisis, President Donald Trump’s FCC could make an enormous dent in the digital divide if it expanded Lifeline, even if just on a temporary basis. The FCC could increase the subsidy so that it can be used to pay for robust fixed internet access. It could also make Lifeline available to a broader subset of Americans, specifically the tens of millions who have just filed for unemployment benefits. But that’s unlikely to be a priority for this FCC and its chairman, Ajit Pai, who has spent nearly his entire tenure trying to destroy the program.

The info is here.

Friday, April 24, 2020

COVID-19 Is Making Moral Injury to Physicians Much Worse

Wendy Dean
Medscape.com
Originally published 1 April 20

Here is an excerpt:

Moral injury is also coming to the forefront as physicians consider rationing scarce resources with too little guidance. Which surgeries truly justify use of increasingly scarce PPE? A cardiac valve replacement? A lumpectomy? Repairing a torn ligament?

Each denial has profound impact on both the patients whose surgeries are delayed and the clinicians who decide their fates. Yet worse decisions may await clinicians. If, for example, New York City needs an additional 30,000 ventilators but receives only 500, physicians will be responsible for deciding which 29,500 patients will not be ventilated, virtually assuring their demise.

How will physicians make those decisions? How will they cope? The situation of finite resources will force an immediate pivot to assessing patients according to not only their individual needs but also to society's need for that patient's contribution. It will be a wrenching restructuring.

Here are the essential principles for mitigating the impact of moral injury in the context of COVID-19. (They are the same as recommendations in the time before COVID-19.)

1. Value physicians

a. Physicians are putting everything on the line. They're walking into a wildfire of a pandemic, wearing pajamas, with a peashooter in their holster. That takes a monumental amount of courage and deserves profound respect.

The info is here.

Sunday, April 19, 2020

On the ethics of algorithmic decision-making in healthcare

Grote T, Berens P
Journal of Medical Ethics 
2020;46:205-211.

Abstract

In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection of the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

From the Conclusion

In this paper, we aimed at examining which opportunities and pitfalls machine learning potentially provides to enhance medical decision-making on epistemic and ethical grounds. As should have become clear, enhancing medical decision-making by deferring to machine learning algorithms requires trade-offs at different levels. Clinicians, or their respective healthcare institutions, are facing a dilemma: while there is plenty of evidence of machine learning algorithms outsmarting their human counterparts, their deployment comes at the cost of high degrees of uncertainty. On epistemic grounds, relevant uncertainty promotes risk-averse decision-making among clinicians, which then might lead to impoverished medical diagnosis. From an ethical perspective, deferring to machine learning algorithms blurs the attribution of accountability and imposes health risks on patients. Furthermore, the deployment of machine learning might also foster a shift of norms within healthcare. It needs to be pointed out, however, that none of the issues we discussed presents a knockout argument against deploying machine learning in medicine, and our article is not intended this way at all. On the contrary, we are convinced that machine learning provides plenty of opportunities to enhance decision-making in medicine.

The article is here.

Wednesday, January 29, 2020

In 2020, let’s stop AI ethics-washing and actually do something

Karen Hao
technologyreview.com
Originally published 27 Dec 19

Here is an excerpt:

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers—content moderators, data labelers, transcribers—who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities—including San Francisco and Oakland, California, and Somerville, Massachusetts—banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies’ use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field’s runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation.

The info is here.

Wednesday, November 20, 2019

Super-precise new CRISPR tool could tackle a plethora of genetic diseases

Heidi Ledford
nature.com
Originally posted October 21, 2019

For all the ease with which the wildly popular CRISPR–Cas9 gene-editing tool alters genomes, it’s still somewhat clunky and prone to errors and unintended effects. Now, a recently developed alternative offers greater control over genome edits — an advance that could be particularly important for developing gene therapies.

The alternative method, called prime editing, improves the chances that researchers will end up with only the edits they want, instead of a mix of changes that they can’t predict. The tool, described in a study published on 21 October in Nature, also reduces the ‘off-target’ effects that are a key challenge for some applications of the standard CRISPR–Cas9 system. That could make prime-editing-based gene therapies safer for use in people.

The tool also seems capable of making a wider variety of edits, which might one day allow it to be used to treat the many genetic diseases that have so far stymied gene-editors. David Liu, a chemical biologist at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, and lead study author, estimates that prime editing might help researchers tackle nearly 90% of the more than 75,000 disease-associated DNA variants listed in ClinVar, a public database developed by the US National Institutes of Health.

The specificity of the changes that this latest tool is capable of could also make it easier for researchers to develop models of disease in the laboratory, or to study the function of specific genes, says Liu.

The info is here.

Tuesday, October 22, 2019

Is Editing the Genome for Climate Change Adaptation Ethically Justifiable?

Lisa Soleymani Lehmann
AMA J Ethics. 2017;19(12):1186-1192.

Abstract

As climate change progresses, we humans might have to inhabit a world for which we are increasingly maladapted. If we were able to identify genes that directly influence our ability to thrive in a changing climate, would it be ethically justifiable to edit the human genome to enhance our ability to adapt to this new environment? Should we use gene editing not only to prevent significant disease but also to enhance our ability to function in the world? Here I suggest a “4-S framework” for analyzing the justifiability of gene editing that includes these considerations: (1) safety, (2) significance of harm to be averted, (3) succeeding generations, and (4) social consequences.

Conclusion

Gene editing has unprecedented potential to improve human health. CRISPR/Cas9 has a specificity and simplicity that opens up wide possibilities. If we are unable to prevent serious negative health consequences of climate change through environmental and public health measures, gene editing could have a role in helping human beings adapt to new environmental conditions. Any decision to proceed should apply the 4-S framework.

The info is here.

Friday, October 18, 2019

The Koch-backed right-to-try law has been a bust, but still threatens our health

Michael Hiltzik
The Los Angeles Times
Originally posted September 17, 2019

The federal right-to-try law, signed by President Trump in May 2018 as a sop to right-wing interests, including the Koch brothers network, always was a cruel sham perpetrated on sufferers of intractably fatal diseases.

As we’ve reported, the law was promoted as a compassionate path to experimental treatments for those patients — but in fact was a cynical ploy aimed at emasculating the Food and Drug Administration in a way that would undermine public health and harm all patients.

Now that a year has passed since the law’s enactment, the assessments of how it has functioned are beginning to flow in. As NYU bioethicist Arthur Caplan observed to Ed Silverman’s Pharmalot blog, “the right to try remains a bust.”

His judgment is seconded by the veteran pseudoscience debunker David Gorski, who writes: “Right-to-try has been a spectacular failure thus far at getting terminally ill patients access to experimental drugs.”

That should come as no surprise, Gorski adds, because “right-to-try was never about helping terminally ill patients. ... It was always about ideology more than anything else. It was always about weakening the FDA’s ability to regulate drug approval.”

The info is here.

Friday, September 20, 2019

The crossroads between ethics and technology

Tehilla Shwartz Altshuler
Techcrunch.com
Originally posted August 6, 2019

Here is an excerpt:

The first relates to ethics. If anything is clear today in the world of technology, it is the need to include ethical concerns when developing, distributing, implementing and using technology. This is all the more important because in many domains there is no regulation or legislation to provide a clear definition of what may and may not be done. There is nothing intrinsic to technology that requires that it pursue only good ends. The mission of our generation is to ensure that technology works for our benefit and that it can help realize social ideals. The goal of these new technologies should not be to replicate power structures or other evils of the past. 

Startup nation should focus on fighting crime and improving autonomous vehicles and healthcare advancements. It shouldn’t be running extremist groups on Facebook, setting up “bot farms” and fakes, selling attackware and spyware, infringing on privacy and producing deepfake videos.

The second issue is the lack of transparency. The combination of individuals and companies that have worked for, and sometimes still work with, the security establishment frequently takes place behind a thick screen of concealment. These entities often evade answering challenging questions that result from the Israeli Freedom of Information law and even have recourse to the military censor — a unique Israeli institution — to avoid such inquiries.


Sunday, September 15, 2019

To Study the Brain, a Doctor Puts Himself Under the Knife

Adam Piore
MIT Technology Review
Originally published November 9, 2015

Here are two excerpts:

Kennedy became convinced that the way to take his research to the next level was to find a volunteer who could still speak. For almost a year he searched for a volunteer with ALS who still retained some vocal abilities, hoping to take the patient offshore for surgery. “I couldn’t get one. So after much thinking and pondering I decided to do it on myself,” he says. “I tried to talk myself out of it for years.”

The surgery took place in June 2014 at a 13-bed Belize City hospital a thousand miles south of his Georgia-based neurology practice and also far from the reach of the FDA. Prior to boarding his flight, Kennedy did all he could to prepare. At his small company, Neural Signals, he fabricated the electrodes the neurosurgeon would implant into his motor cortex—even chose the spot where he wanted them buried. He put aside enough money to support himself for a few months if the surgery went wrong. He had made sure his living will was in order and that his older son knew where he was.

(cut)

To some researchers, Kennedy’s decisions could be seen as unwise, even unethical. Yet there are cases where self-experiments have paid off. In 1984, an Australian doctor named Barry Marshall drank a beaker filled with bacteria in order to prove they caused stomach ulcers. He later won the Nobel Prize. “There’s been a long tradition of medical scientists experimenting on themselves, sometimes with good results and sometimes without such good results,” says Jonathan Wolpaw, a brain-computer interface researcher at the Wadsworth Center in New York. “It’s in that tradition. That’s probably all I should say without more information.”

The info is here.


Wednesday, September 11, 2019

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Forbes.com
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or your environmental or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.

The info is here.