Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, April 24, 2019

134 Activities to Add to Your Self-Care Plan

GoodTherapy.org Staff
www.goodtherapy.org
Originally posted June 13, 2015

At its most basic, self-care is any intentional action taken to meet an individual’s physical, mental, spiritual, or emotional needs. In short, it’s all the little ways we take care of ourselves to avoid a breakdown in those respective areas of health.

You may find that, at certain points, the world and the people in it place greater demands on your time, energy, and emotions than you might feel able to handle. This is precisely why self-care is so important. It is the routine maintenance you need to do to function at your best, not only for others, but also for yourself.

GoodTherapy.org’s own business and administrative, web development, outreach and advertising, editorial and education, and support teams have compiled a massive list of their own personal self-care activities to help those struggling to come up with a maintenance plan of their own. Next time you find yourself saying, “I really need to do something for myself,” browse our list and pick something that speaks to you. Be silly, be caring to others, and make your self-care a priority! In most cases, taking care of yourself doesn’t even have to cost anything. And because self-care is as unique as the individual performing it, we invite you to add your own personal self-care activities in the comments section below. Give back to your fellow readers and share some of the little ways you take care of yourself.

The list is here.

Note: Self-care enhances the possibility of competent practice. Good self-care skills are important for promoting ethical practice.

The Growing Marketplace For AI Ethics

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out in, the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute.

“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”

The info is here.

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justifying unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level, and I wondered about systemic factors that might influence whether people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether people inside a company would be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control, and improving them can make your company more honest and help avert the reputational and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture’s Competitive Agility Index (a 7,000-company, 20-industry analysis) for the first time tangibly quantified how a decline in stakeholder trust impacts a company’s financial performance. The analysis reveals that more than half (54%) of companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches, and C-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company’s index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10%.

The info is here.

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feel-good, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play, but because they are occupation- or discipline-specific, they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self concepts have different neural correlates, influencing early and intermediate processing stages, respectively. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).

Psychiatry’s Incurable Hubris

Gary Greenberg
The Atlantic
April 2019 issue

Here is an excerpt:

The need to dispel widespread public doubt haunts another debacle that Harrington chronicles: the rise of the “chemical imbalance” theory of mental illness, especially depression. The idea was first advanced in the early 1950s, after scientists demonstrated the principles of chemical neurotransmission; it was supported by the discovery that consciousness-altering drugs such as LSD targeted serotonin and other neurotransmitters. The idea exploded into public view in the 1990s with the advent of direct-to-consumer advertising of prescription drugs, antidepressants in particular. Harrington documents ad campaigns for Prozac and Zoloft that assured wary customers the new medications were not simply treating patients’ symptoms by altering their consciousness, as recreational drugs might. Instead, the medications were billed as repairing an underlying biological problem.

The strategy worked brilliantly in the marketplace. But there was a catch. “Ironically, just as the public was embracing the ‘serotonin imbalance’ theory of depression,” Harrington writes, “researchers were forming a new consensus” about the idea behind that theory: It was “deeply flawed and probably outright wrong.” Stymied, drug companies have for now abandoned attempts to find new treatments for mental illness, continuing to peddle the old ones with the same claims. And the news has yet to reach, or at any rate affect, consumers. At last count, more than 12 percent of Americans ages 12 and older were taking antidepressants. The chemical-imbalance theory, like the revamped DSM, may fail as science, but as rhetoric it has turned out to be a wild success.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns is the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smartwatch app that uses a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Friday, April 19, 2019

Leader's group-norm violations elicit intentions to leave the group – If the group-norm is not affirmed

Lara Ditrich, Adrian Lüders, Eva Jonas, & Kai Sassenberg
Journal of Experimental Social Psychology
Available online 2 April 2019

Abstract

Group members, even central ones like group leaders, do not always adhere to their group's norms and show norm-violating behavior instead. Observers of this kind of behavior have been shown to react negatively in such situations, and in extreme cases, may even leave their group. The current work set out to test how this reaction might be prevented. We assumed that group-norm affirmations can buffer leaving intentions in response to group-norm violations and tested three potential mechanisms underlying the buffering effect of group-norm affirmations. To this end, we conducted three experiments in which we manipulated group-norm violations and group-norm affirmations. In Study 1, we found group-norm affirmations to buffer leaving intentions after group-norm violations. However, we did not find support for the assumption that group-norm affirmations change how a behavior is evaluated or preserve group members' identification with their group. Thus, neither of these variables can explain the buffering effect of group-norm affirmations. Studies 2 & 3 revealed that group-norm affirmations instead reduce perceived effectiveness of the norm-violator, which in turn predicted lower leaving intentions. The present findings will be discussed based on previous research investigating the consequences of norm violations.

The research is here.