Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Risks.

Thursday, June 27, 2024

When Therapists Lose Their Licenses, Some Turn to the Unregulated Life Coaching Industry Instead

Jessica Miller
Salt Lake Tribune
Originally published 17 June 24

A frustrated woman recently called the Utah official in charge of professional licensing, upset that his office couldn’t take action against a life coach she had seen. Mark Steinagel recalls the woman telling him: “I really think that we should be regulating life coaching. Because this person did a lot of damage to me.”

Reports about life coaches — who sell the promise of helping people achieve their personal or professional goals — come into Utah’s Division of Professional Licensing about once a month. But much of the time, Steinagel or his staff have to explain that there’s nothing they can do.

If the woman had been complaining about any of the therapist professions overseen by DOPL, Steinagel’s office might have been able to investigate and potentially order discipline, including fines.

But life coaches aren’t therapists and are mostly unregulated across the United States. They aren’t required to be trained in ethical boundaries the way therapists are, and there’s no universally accepted certification for those who work in the industry.

Here are some thoughts on the ethics of this trend:

The trend of therapists who have lost their licenses transitioning to the unregulated life coaching industry raises significant ethical concerns and risks. This shift allows individuals who have been deemed unfit to practice therapy to continue working with vulnerable clients without oversight or accountability. The lack of regulation in life coaching means that these practitioners can potentially continue harmful behaviors, misrepresent their qualifications, and exploit clients without facing the same consequences they would in the regulated therapy field.

This situation poses substantial risks to clients (and to the integrity of coaching as a profession). Clients seeking help may not understand the difference between regulated therapy and unregulated life coaching, potentially exposing themselves to practitioners who have previously violated ethical standards. The presence of discredited therapists in the life coaching industry can erode public trust in mental health services and coaching alike, potentially deterring individuals from seeking necessary help. Moreover, clients have limited legal recourse if they are harmed by an unregulated life coach, leaving them vulnerable to financial and emotional distress.

To address these concerns, there is a pressing need for regulatory measures in the life coaching industry, particularly concerning practitioners with a history of ethical violations in related fields. Such regulations could help maintain the integrity of coaching, protect vulnerable clients, and ensure that those seeking help receive services from qualified and ethical practitioners. Without such measures, the potential for harm remains significant, undermining the valuable work done by ethical professionals in both therapy and life coaching.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H.S., Danaher, J.
AI Ethics (2023).


AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risks, it will be difficult to manage them properly, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.

Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
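The risk-matrix suggestion lends itself to a small illustration. The sketch below is hypothetical: the risk entries, the 1-to-5 scales, and the simple likelihood-times-impact scoring are invented for this example and are not taken from Sætra and Danaher's paper.

```python
# Hypothetical risk matrix: categorize each risk by estimated
# likelihood and impact, then rank by a simple combined score.
# All entries and numbers are illustrative only.

RISKS = [
    # (name, likelihood 1-5, impact 1-5)
    ("algorithmic bias", 5, 3),          # a short-term risk
    ("disinformation at scale", 4, 4),   # a short-term risk
    ("loss of human control", 2, 5),     # a long-term risk
]

def prioritize(risks):
    """Rank risks by likelihood * impact, highest score first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritize(RISKS):
    print(f"{name}: score {likelihood * impact}")
```

Even a toy scoring like this makes the authors' point visible: a single ranking can hold short- and long-term risks in one frame instead of treating them as competing agendas.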

Sunday, October 22, 2023

What Is Psychological Safety?

Amy Gallo
Harvard Business Review
Originally posted 15 Feb 23

Here are two excerpts:

Why is psychological safety important?

First, psychological safety leads to team members feeling more engaged and motivated, because they feel that their contributions matter and that they’re able to speak up without fear of retribution. Second, it can lead to better decision-making, as people feel more comfortable voicing their opinions and concerns, which often leads to a more diverse range of perspectives being heard and considered. Third, it can foster a culture of continuous learning and improvement, as team members feel comfortable sharing their mistakes and learning from them. (This is what my boss was doing in the opening story.)

All of these benefits — the impact on a team’s performance, innovation, creativity, resilience, and learning — have been proven in research over the years, most notably in Edmondson’s original research and in a study done at Google. That research, known as Project Aristotle, aimed to understand the factors that impacted team effectiveness across Google. Using over 30 statistical models and hundreds of variables, that project concluded that who was on a team mattered less than how the team worked together. And the most important factor was psychological safety.

Further research has shown the incredible downsides of not having psychological safety, including negative impacts on employee well-being, including stress, burnout, and turnover, as well as on the overall performance of the organization.


How do you create psychological safety?

Edmondson is quick to point out that “it’s more magic than science” and it’s important for managers to remember this is “a climate that we co-create, sometimes in mysterious ways.”

Anyone who has worked on a team marked by silence and the inability to speak up knows how hard it is to reverse that.

A lot of what goes into creating a psychologically safe environment are good management practices — things like establishing clear norms and expectations so there is a sense of predictability and fairness; encouraging open communication and actively listening to employees; making sure team members feel supported; and showing appreciation and humility when people do speak up.

There are a few additional tactics that Edmondson points to as well.

Here are some of my thoughts about psychological safety:
  • It is not the same as comfort. It is okay to feel uncomfortable sometimes, as long as you feel safe to take risks and speak up.
  • It is not about being friends with everyone on your team. It is about creating a respectful and inclusive environment where everyone feels like they can belong.
  • It takes time and effort to build psychological safety. It is not something that happens overnight.

Tuesday, October 26, 2021

The Fragility of Moral Traits to Technological Interventions

J. Fabiano
Neuroethics 14, 269–281 (2021). 


I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.

Conclusion: Fragility Leads to Increased Risks 

Any substantial technological modification of moral traits would be more likely to cause harm than benefit. Moral traits have a particularly high proclivity to unexpected disturbances, as exemplified by the co-operation case, amplified by its self-reinforcing and irreversible nature and finally as their complex aetiology would lead one to suspect. Even the most seemingly simple improvement, if only slightly mistaken, is likely to lead to significant negative outcomes. Unless we produce an almost perfectly calibrated deep moral enhancement, its implementation will carry large risks. Deep moral enhancement is likely to be hard to develop safely, but not necessarily impossible or undesirable. Given that deep moral enhancement could prevent extreme risks for humanity, in particular decreasing the risk of human extinction, it may well be that we should still attempt to develop it. I am not claiming that our current traits are well suited to dealing with global problems. On the contrary, there are certainly reasons to expect that there are better traits that could be brought about by enhancement technologies. However, I believe my arguments indicate there are also much worse, more socially disruptive, traits accessible through technological intervention.

Saturday, October 23, 2021

Decision fatigue: Why it’s so hard to make up your mind these days, and how to make it easier

Stacy Colino
The Washington Post
Originally posted 22 Sept 21

Here is an excerpt:

Decision fatigue is more than just a feeling; it stems in part from changes in brain function. Research using functional magnetic resonance imaging has shown that there’s a sweet spot for brain function when it comes to making choices: When people were asked to choose from sets of six, 12 or 24 items, activity was highest in the striatum and the anterior cingulate cortex — both of which coordinate various aspects of cognition, including decision-making and impulse control — when the people faced 12 choices, which was perceived as “the right amount.”

Decision fatigue may make it harder to exercise self-control when it comes to eating, drinking, exercising or shopping. “Depleted people become more passive, which becomes bad for their decision-making,” says Roy Baumeister, a professor of psychology at the University of Queensland in Australia and author of  “Willpower: Rediscovering the Greatest Human Strength.” “They can be more impulsive. They may feel emotions more strongly. And they’re more susceptible to bias and more likely to postpone decision-making.”

In laboratory studies, researchers asked people to choose from an array of consumer goods or college course options or to simply think about the same options without making choices. They found that the choice-makers later experienced reduced self-control, including less physical stamina, greater procrastination and lower performance on tasks involving math calculations; the choice-contemplators didn’t experience these depletions.

Having insufficient information about the choices at hand may influence people’s susceptibility to decision fatigue. Experiencing high levels of stress and general fatigue can, too, Bufka says. And if you believe that the choices you make say something about who you are as a person, that can ratchet up the pressure, increasing your chances of being vulnerable to decision fatigue.

The suggestions include:

1. Sleep well
2. Make some choices automatic
3. Enlist a choice advisor
4. Give expectations a reality check
5. Pace yourself
6. Pay attention to feelings

Monday, March 8, 2021

Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Stix, C., Maas, M.M.
AI Ethics (2021). 


Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

From the Conclusion

AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present an experienced fragmentation in the responsible AI policy community, amongst clusters of scholars focusing on ‘near-term’ AI risks, and those focusing on ‘longer-term’ risks. This paper has sought to map the practical space for inter-community collaboration, with a view towards the practical development of AI policy.

As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an ‘incompletely theorized agreement’, for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework.

Wednesday, September 16, 2020

There are no good choices

Ezra Klein
Originally published 14 Sept 20

Here is an excerpt:

In America, our ideological conflicts are often understood as the tension between individual freedoms and collective actions. The failure of our pandemic response policy exposes the falseness of that frame. In the absence of effective state action, we, as individuals, find ourselves in prisons of risk, our every movement stalked by disease. We are anything but free; our only liberty is to choose among a menu of awful options. And faced with terrible choices, we are turning on each other, polarizing against one another. YouTube conspiracies and social media shaming are becoming our salves, the way we wrest a modicum of individual control over a crisis that has overwhelmed us as a collective.

“The burden of decision-making and risk in this pandemic has been fully transitioned from the top down to the individual,” says Dr. Julia Marcus, a Harvard epidemiologist. “It started with [responsibility] being transitioned to the states, which then transitioned it to the local school districts — If we’re talking about schools for the moment — and then down to the individual. You can see it in the way that people talk about personal responsibility, and the way that we see so much shaming about individual-level behavior.”

But in shifting so much responsibility to individuals, our government has revealed the limits of individualism.

The risk calculation that rules, and ruins, lives

Think of coronavirus risk like an equation. Here’s a rough version of it: The danger of an act = (the transmission risk of the activity) x (the local prevalence of Covid-19) / (your area’s ability to control a new outbreak).

Individuals can control only a small portion of that equation. People can choose safer activities over riskier ones — though the language of choice too often obscures the reality that many have no economic choice save to work jobs that put them, and their families, in danger. But the local prevalence of Covid-19 and the capacity of authorities to track and squelch outbreaks are collective functions.
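Klein's rough equation can be made concrete with a toy calculation. Everything numeric below is invented for illustration; this is not an epidemiological model, only a restatement of the equation's structure, showing how little of the score the individual's own choice controls.

```python
# Toy rendering of the rough risk equation from the excerpt:
# danger = transmission risk * local prevalence / outbreak-control capacity
# All input values below are invented for illustration.

def act_danger(transmission_risk, local_prevalence, control_capacity):
    """Relative danger score; higher means riskier. Illustrative only."""
    return transmission_risk * local_prevalence / control_capacity

# The same activity (same transmission risk) in two hypothetical regions:
low_prev = act_danger(transmission_risk=0.3, local_prevalence=0.01,
                      control_capacity=0.8)
high_prev = act_danger(transmission_risk=0.3, local_prevalence=0.10,
                       control_capacity=0.2)

# The individual chooses only the first factor; the other two are
# collective functions, which is the point of the passage.
print(high_prev > low_prev)  # → True
```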


Tuesday, July 7, 2020

Can COVID-19 re-invigorate ethics?

Louise Campbell
BMJ Blogs
Originally posted 26 May 20

The COVID-19 pandemic has catapulted ethics into the spotlight.  Questions previously deliberated about by small numbers of people interested in or affected by particular issues are now being posed with an unprecedented urgency right across the public domain.  One of the interesting facets of this development is the way in which the questions we are asking now draw attention, not just to the importance of ethics in public life, but to the very nature of ethics as practice, namely ethics as it is applied to specific societal and environmental concerns.

Some of these questions which have captured the public imagination were originally debated specifically within healthcare circles and at the level of health policy: what measures must be taken to prevent hospitals from becoming overwhelmed if there is a surge in the number of people requiring hospitalisation?  How will critical care resources such as ventilators be prioritised if need outstrips supply?  In a crisis situation, will older people or people with disabilities have the same opportunities to access scarce resources, even though they may have less chance of survival than people without age-related conditions or disabilities?  What level of risk should healthcare workers be expected to assume when treating patients in situations in which personal protective equipment may be inadequate or unavailable?   Have the rights of patients with chronic conditions been traded off against the need to prepare the health service to meet a demand which to date has not arisen?  Will the response to COVID-19 based on current evidence compromise the capacity of the health system to provide routine outpatient and non-emergency care to patients in the near future?

Other questions relate more broadly to the intersection between health and society: how do we calculate the harms of compelling entire populations to isolate themselves from loved ones and from their communities?  How do we balance these harms against the risks of giving people more autonomy to act responsibly?  What consideration is given to the fact that, in an unequal society, restrictions on liberty will affect certain social groups in disproportionate ways?  What does the catastrophic impact of COVID-19 on residents of nursing homes say about our priorities as a society and to what extent is their plight our collective responsibility?  What steps have been taken to protect marginalised communities who are at greater risk from an outbreak of infectious disease: for example, people who have no choice but to coexist in close proximity with one another in direct provision centres, in prison settings and on halting sites?


Wednesday, May 20, 2020

Ethics of controlled human infection to study COVID-19

Shah, S.K., Miller, F.G., et al.
Science  07 May 2020
DOI: 10.1126/science.abc1076


Development of an effective vaccine is the clearest path to controlling the coronavirus disease 2019 (COVID-19) pandemic. To accelerate vaccine development, some researchers are pursuing, and thousands of people have expressed interest in participating in, controlled human infection studies (CHIs) with severe acute respiratory syndrome–coronavirus 2 (SARS-CoV-2) (1, 2). In CHIs, a small number of participants are deliberately exposed to a pathogen to study infection and gather preliminary efficacy data on experimental vaccines or treatments. We have been developing a comprehensive, state-of-the-art ethical framework for CHIs that emphasizes their social value as fundamental to justifying these studies. The ethics of CHIs in general are underexplored (3, 4), and ethical examinations of SARS-CoV-2 CHIs have largely focused on whether the risks are acceptable and participants could give valid informed consent (1). The high social value of such CHIs has generally been assumed. Based on our framework, we agree on the ethical conditions for conducting SARS-CoV-2 CHIs (see the table). We differ on whether the social value of such CHIs is sufficient to justify the risks at present, given uncertainty about both in a rapidly evolving situation; yet we see none of our disagreements as insurmountable. We provide ethical guidance for research sponsors, communities, participants, and the essential independent reviewers considering SARS-CoV-2 CHIs.


Tuesday, April 28, 2020

What needs to happen before your boss can make you return to work

Mark Kaufman
Originally posted 24 April 20

Here is an excerpt:

But, there is a way for tens of millions of Americans to return to workplaces while significantly limiting how many people infect one another. It will require extraordinary efforts on the part of both employers and governments. This will feel weird, at first: Imagine regularly having your temperature taken at work, routinely getting tested for an infection or immunity, mandatory handwashing breaks, and perhaps even wearing a mask.

Yet, these are exceptional times. So restarting the economy and returning to workplace normalcy will require unparalleled efforts.

"This is truly unprecedented," said Christopher Hayes, a labor historian at the Rutgers School of Management and Labor Relations.

"This is like the 1918 flu and the Great Depression at the same time," Hayes said.

Yet unlike previous recessions and depressions over the last 100 years, most recently the Great Recession of 2008-2009, American workers must now ask themselves an unsettling question: "People now have to worry, ‘Is it safe to go to this job?’" said Hayes.

Right now, many employers aren't nearly prepared to tell workers in the U.S. to return to work and office spaces. To avoid infection, "the only tools you’ve got in your toolbox are the simple but hard-to-sustain public health tools like testing, contact tracing, and social distancing," explained Michael Gusmano, a health policy expert at the Rutgers School of Public Health.

"We’re not anywhere near a situation where you could claim that you can, with any credibility, send people back en masse now," Gusmano said.


Sunday, April 19, 2020

On the ethics of algorithmic decision-making in healthcare

Grote T, Berens P
Journal of Medical Ethics 


In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

From the Conclusion

In this paper, we aimed at examining which opportunities and pitfalls machine learning potentially provides to enhance medical decision-making on epistemic and ethical grounds. As should have become clear, enhancing medical decision-making by deferring to machine learning algorithms requires trade-offs at different levels. Clinicians, or their respective healthcare institutions, are facing a dilemma: while there is plenty of evidence of machine learning algorithms outsmarting their human counterparts, their deployment comes at the cost of high degrees of uncertainty. On epistemic grounds, relevant uncertainty promotes risk-averse decision-making among clinicians, which then might lead to impoverished medical diagnosis. From an ethical perspective, deferring to machine learning algorithms blurs the attribution of accountability and imposes health risks on patients. Furthermore, the deployment of machine learning might also foster a shift of norms within healthcare. It needs to be pointed out, however, that none of the issues we discussed presents a knockout argument against deploying machine learning in medicine, and our article is not intended this way at all. On the contrary, we are convinced that machine learning provides plenty of opportunities to enhance decision-making in medicine.


Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.


Sunday, February 23, 2020

Burnout as an ethical issue in psychotherapy.

Simionato, G., Simpson, S., & Reid, C.
Psychotherapy, 56(4), 470–482.


Recent studies highlight a range of factors that place psychotherapists at risk of burnout. The aim of this study was to investigate the ethics issues linked to burnout among psychotherapists and to describe potentially effective ways of reducing vulnerability and preventing collateral damage. A purposive critical review of the literature was conducted to inform a narrative analysis. Differing burnout presentations elicit a wide range of ethics issues. High rates of burnout in the sector suggest systemic factors and the need for an ethics review of standard workplace practice. Burnout costs employers and taxpayers billions of dollars annually in heightened presenteeism and absenteeism. At a personal level, burnout has been linked to poorer physical and mental health outcomes for psychotherapists. Burnout has also been shown to interfere with clinical effectiveness and even contribute to misconduct. Hence, the ethical impact of burnout extends to our duty of care to clients and responsibilities to employers. A range of occupational and personal variables have been identified as vulnerability factors. A new 5-P model of prevention is proposed, which combines systemic and individually tailored responses as a means of offering the greatest potential for effective prevention, identification, and remediation. In addition to the significant economic impact and the impact on personal well-being, burnout in psychotherapists has the potential to directly and indirectly affect client care and standards of professional practice. Attending to the ethical risks associated with burnout is a priority for the profession, for service managers, and for each individual psychotherapist.

From the Conclusion:

Burnout is a common feature of unintentional misconduct among psychotherapists, often at the expense of client well-being, therapeutic progress, and successful client outcomes. Clinicians working in spite of burnout also incur personal and economic costs that compromise the principles of competence and beneficence outlined in ethical guidelines. This article has focused on a communitarian approach to identifying, understanding, and responding to the signs, symptoms, and risk factors in an attempt to harness ethical practice and foster successful careers in psychotherapy. The 5-P strength-based model illuminates the positive potential of workplaces that support wellbeing and prioritize ethical practice through providing an individualized responsiveness to the training, professional development, and support needs of staff. Further, in contrast to the majority of the literature that explores organizational factors leading to burnout and ethical missteps, the 5-P model also considers the personal characteristics that may contribute to burnout and the personal action that psychotherapists can take to avoid burnout and unintentional misconduct.

The info is here.

Sunday, January 26, 2020

Why Boards Should Worry about Executives’ Off-the-Job Behavior

Harvard Business Review

January-February 2020 Issue

Here is an excerpt:

In their most recent paper, the researchers looked at whether executives’ personal legal records—everything from traffic tickets to driving under the influence and assault—had any relation to their tendency to execute trades on the basis of confidential inside information. Using U.S. federal and state crime databases, criminal background checks, and private investigators, they identified firms that had simultaneously employed at least one executive with a record and at least one without a record during the period from 1986 to 2017. This yielded a sample of nearly 1,500 executives, including 503 CEOs. Examining executive trades of company stock, they found that such trades were more profitable for executives with a record than for others, suggesting that the former had made use of privileged information. The effect was greatest among executives with multiple offenses and those with serious violations (anything worse than a traffic ticket).

Could governance measures curb such activity? Many firms have “blackout” policies to deter improper trading. Because the existence of those policies is hard to determine (few companies publish data on them), the researchers used a common proxy: whether the bulk of trades by a firm’s officers occurred within 21 days after an earnings announcement (generally considered an allowable window). They compared the trades of executives with a record at companies with and without blackout policies, with sobering results: Although the policies mitigated abnormally profitable trades among traffic violators, they had no effect on the trades of serious offenders. The latter were likelier than others to trade during blackouts and to miss SEC reporting deadlines. They were also likelier to buy or sell before major announcements, such as of earnings or M&A, and in the three years before their companies went bankrupt—evidence similarly suggesting they had profited from inside information. “While strong governance can discipline minor offenders, it appears to be largely ineffective for executives with more-serious criminal infractions,” the researchers write.

The info is here.

Monday, January 20, 2020

What Is Prudent Governance of Human Genome Editing?

Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.


CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which is unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.

Here is an excerpt:

Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is at its beginning stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse. Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genome (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regard to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.

The info is here.

Friday, October 4, 2019

When Patients Request Unproven Treatments

Casey Humbyrd and Matthew Wynia
Originally posted March 25, 2019

Here is an excerpt:

Ethicists have made a variety of arguments about these injections. The primary arguments against them have focused on the perils of physicians becoming sellers of "snake oil," promising outlandish benefits and charging huge sums for treatments that might not work. The conflict of interest inherent in making money by providing an unproven therapy is a legitimate ethical concern. These treatments are very expensive and, as they are unproven, are rarely covered by insurance. As a result, some patients have turned to crowdfunding sites to pay for these questionable treatments.

But the profit motive may not be the most important ethical issue at stake. If it were removed, hypothetically, and physicians provided the injections at cost, would that make this practice more acceptable?

No. We believe that physicians who offer these injections are skipping the most important step in the ethical adoption of any new treatment modality: research that clarifies the benefits and risks. The costs of omitting that important step are much more than just monetary.

For the sake of argument, let's assume that stem cells are tremendously successful and that they heal arthritic joints, making them as good as new. By selling these injections to those who can pay before the treatment is backed by research, physicians are ensuring unavailability to patients who can't pay, because insurance won't cover unproven treatments.

The info is here.

Sunday, June 9, 2019

German ethics council expresses openness to eventual embryo editing

Sharon Begley
Originally posted May 13, 2019

Here is an excerpt:

The council’s openness to human germline editing was notable, however. Because of the Nazis’ eugenics programs and horrific human medical experiments, Germany has historically been even warier than other Western countries of medical technologies that might violate human dignity or could be exploited for eugenic purposes. The country’s 1990 Embryo Protection Act prohibits germline modifications for the purpose of reproduction.

“Germany has been very reluctant to get involved with anything that could lead to a re-introduction of eugenic practices in their society,” Annas said.

Despite that history, a large majority of the council called further development and possible use of germline editing “a legitimate ethical goal when aimed at avoiding or reducing genetically determined disease risks,” it said in a statement. If the procedure can be shown not to harm embryos or the children they become, it added, then altering a gene that otherwise causes a devastating illness such as cystic fibrosis or sickle cell disease is acceptable.

While some ethicists and others argue against embryo editing on the ground that it violates the embryos’ dignity, the German council wrote, “the question also arises as to whether the renunciation of germline intervention, which could spare the people concerned severe suffering, would not violate their human dignity, too.” Similarly, failing to intervene in order to spare a future child pain and suffering “would at least have to be justified,” the council said, echoing arguments that some families with a history of inherited diseases have.

The info is here.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Monday, March 18, 2019

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest we have come to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chat, offer support to people at risk of suicide, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts on social media.

But I argue these malicious applications are already possible without this AI. Other public models can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because A) it sets a bad precedent for open research, B) keeps companies from improving their services, C) unnecessarily hypes these results, and D) may trigger unnecessary fears about AI in the general public.

The info is here.

Monday, February 4, 2019

What “informed consent” really means

Stacy Weiner
Originally published January 19, 2019

Here is an excerpt:

Conflicts around consent

The informed consent process is not without its thornier aspects. At times, malpractice suits shift the landscape. For example, in a 2017 Pennsylvania case with possible implications in other states, the court ruled that the physician performing a procedure — not a delegate — must personally ensure that the patient understands the risks involved.

And sometimes, informed consent grabs headlines, as happened recently with allegations that medical students are performing pelvic exams on anesthetized women without consent.

That claim, Orlowski notes, relied on studies from more than 10 years ago, before such changes as more detailed consent forms. Typically, she says, students practice pelvic exams with special mannequins and standardized patients who are specifically trained for this purpose. When students and residents do perform pelvic exams on surgical patients, Orlowski adds, specific consent must be obtained first. “Performing pelvic examinations under anesthesia without patients’ consent is unethical and unacceptable,” she says.

In fact, the American College of Obstetricians and Gynecologists states that “pelvic examinations on an anesthetized woman … performed solely for teaching purposes should be performed only with her specific informed consent obtained before her surgery.”

Marie Walters, a student at Wright State University Boonshoft School of Medicine, says she was perplexed by the allegations, so she checked with fellow students at her school and elsewhere. Her explanation: medical students may not know that patients agreed to such exams. “Although students witness some consent processes, we’re likely not around when patients give consent for the surgeries we observe,” says Walters, who is a member of the AAMC Board of Directors. “We may be there just for the day of the surgery,” she notes.

The info is here.