Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, December 21, 2019

Trump Should Be Removed from Office

Mark Galli
Christianitytoday.com
Originally posted 19 Dec 19

Here is an excerpt:

But the facts in this instance are unambiguous: The president of the United States attempted to use his political power to coerce a foreign leader to harass and discredit one of the president’s political opponents. That is not only a violation of the Constitution; more importantly, it is profoundly immoral.

The reason many are not shocked about this is that this president has dumbed down the idea of morality in his administration. He has hired and fired a number of people who are now convicted criminals. He himself has admitted to immoral actions in business and his relationship with women, about which he remains proud. His Twitter feed alone—with its habitual string of mischaracterizations, lies, and slanders—is a near perfect example of a human being who is morally lost and confused.

Trump’s evangelical supporters have pointed to his Supreme Court nominees, his defense of religious liberty, and his stewardship of the economy, among other things, as achievements that justify their support of the president. We believe the impeachment hearings have made it absolutely clear, in a way the Mueller investigation did not, that President Trump has abused his authority for personal gain and betrayed his constitutional oath. The impeachment hearings have illuminated the president’s moral deficiencies for all to see. This damages the institution of the presidency, damages the reputation of our country, and damages both the spirit and the future of our people. None of the president’s positives can balance the moral and political danger we face under a leader of such grossly immoral character.

The info is here.

Friday, December 20, 2019

Can Ethics be Taught? Evidence from Securities Exams and Investment Adviser Misconduct

Kowaleski, Z., Sutherland, A. and Vetter, F.
Available at SSRN
Posted 10 Oct 19

Abstract

We study the consequences of a 2010 change in the investment adviser qualification exam that reallocated coverage from the rules and ethics section to the technical material section. Comparing advisers with the same employer in the same location and year, we find those passing the exam with more rules and ethics coverage are one-fourth less likely to commit misconduct. The exam change appears to affect advisers’ perception of acceptable conduct, and not just their awareness of specific rules or selection into the qualification. Those passing the rules and ethics-focused exam are more likely to depart employers experiencing scandals. Such departures also predict future scandals. Our paper offers the first archival evidence on how rules and ethics training affects conduct and labor market activity in the financial sector.
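The abstract’s design can be pictured as a within-group comparison: advisers at the same employer, in the same location and year, who differ only in how much rules and ethics coverage their exam contained. Below is a minimal sketch of that kind of fixed-effects comparison, assuming a hypothetical adviser-level dataset; the file and column names are illustrative placeholders, not the authors’ code or data.

```python
# Sketch of a within employer-location-year comparison, in the spirit of the
# study's design. All names below (advisers.csv, ethics_exam, misconduct,
# employer, location, year) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

advisers = pd.read_csv("advisers.csv")  # hypothetical adviser-level panel

# Fixed effects for employer x location x year absorb firm, office, and cohort
# differences, so ethics_exam (1 = passed the exam version with more rules and
# ethics coverage) is compared only across otherwise similar advisers.
advisers["group"] = (
    advisers["employer"].astype(str)
    + "_" + advisers["location"].astype(str)
    + "_" + advisers["year"].astype(str)
)

model = smf.ols("misconduct ~ ethics_exam + C(group)", data=advisers).fit(
    cov_type="cluster", cov_kwds={"groups": advisers["employer"]}
)

# A negative coefficient would correspond to the paper's finding that more
# rules and ethics coverage predicts less misconduct.
print(model.params["ethics_exam"])
```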

From the Conclusion

Overall, our results can be understood through the lens of Becker’s model of crime (1968, 1992). In this model, “many people are constrained by moral and ethical considerations, and did not commit crimes even when they were profitable and there was no danger of detection… The amount of crime is determined not only by the rationality and preferences of would-be criminals, but also by the economic and social environment created by… opportunities for employment, schooling, and training programs.” (Becker 1992, pp. 41-42). In our context, ethics training can affect an individual’s behavior by increasing the value of their reputation, as well as the psychological costs of committing misconduct. But such effects will be moderated by the employer’s culture, which affects the stigma of offenses, as well as the individual’s beliefs about appropriate conduct.

The research is here.

Study offers first large-sample evidence of the effect of ethics training on financial sector behavior

Shannon Roddel
phys.org
Originally posted 21 Nov 19


Here is an excerpt:

"Behavioral ethics research shows that business people often do not recognize when they are making ethical decisions," he says. "They approach these decisions by weighing costs and benefits, and by using emotion or intuition."

These results are consistent with the exam playing a "priming" role, where early exposure to rules and ethics material prepares the individual to behave appropriately later. Those passing the exam without prior misconduct appear to respond most to the amount of rules and ethics material covered on their exam. Those already engaging in misconduct, or having spent several years working in the securities industry, respond least or not at all.

The study also examines what happens when people with more ethics training find themselves surrounded by bad behavior, revealing these individuals are more likely to leave their jobs.

"We study this effect both across organizations and within Wells Fargo, during their account fraud scandal," Kowaleski explains. "That those with more ethics training are more likely to leave misbehaving organizations suggests the self-reinforcing nature of corporate culture."

The info is here.

Thursday, December 19, 2019

Holding Insurers Accountable for Parity in Coverage of Mental Health Treatment.

Paul S. Appelbaum and Joseph Parks
Psychiatric Services 
Originally posted 14 Nov 19

Despite a series of federal laws aimed at ensuring parity in insurance coverage of treatment for mental health and general health conditions, patients with mental disorders continue to face discrimination by insurers. This inequity is often due to overly restrictive utilization review criteria that fail to conform to accepted professional standards.

A recent class action challenge to the practices of the largest U.S. health insurer may represent an important step forward in judicial enforcement of parity laws.

Rejecting the insurer’s guidelines for coverage determinations as inconsistent with usual practices, the court enunciated eight principles that defined accepted standards of care.

In 2013, Natasha Wit, then 17 years old, was admitted to Monte Nido Vista, a residential treatment facility in California for women with eating disorders. At the time, she was said to be suffering from a severe eating disorder, with medical complications that included amenorrhea, adrenal and thyroid problems, vitamin deficiency, and gastrointestinal symptoms. She was also reported to be experiencing symptoms of depression and anxiety, obsessive-compulsive behaviors, and marked social isolation. Four days after admission, her insurer, United Behavioral Health (UBH), denied coverage for her stay on the basis that her “treatment does not meet the medical necessity criteria for residential mental health treatment per UBH Level of Care Guidelines for Residential Mental Health.” The reviewer suggested that she could safely be treated at a less restrictive level of care (1).

Ms. Wit’s difficulty in obtaining coverage from her health insurer for care that she and her treaters believed was medically necessary differed in only one respect from the similar experiences of thousands of patients around the country: her family was able to pay for the 2 months of residential treatment that UBH refused to cover.



Where AI and ethics meet

Stephen Fleischresser
Cosmos Magazine
Originally posted 18 Nov 19

Here is an excerpt:

His first argument concerns common aims and fiduciary duties, the duties in which trusted professionals, such as doctors, place others’ interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients, and Mittelstadt argues that it is a “defining quality of a profession for its practitioners to be part of a ‘moral community’ with common aims, values and training”.

For the field of AI research, however, the same cannot be said. “AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts,” Mittelstadt writes. “The fundamental aims of developers, users and affected parties do not necessarily align.”

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

“AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests,” he writes. In AI research, “public interests are not granted primacy over commercial interests”.

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, “AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks”.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism, which provides fuller and more satisfactory ethical guidance.

The info is here.

Wednesday, December 18, 2019

Stop Blaming Mental Illness

Alan I. Leshner
Science, 16 Aug 2019
Vol. 365, Issue 6454, p. 623

The United States is experiencing a public health epidemic of mass shootings and other forms of gun violence. A convenient response seems to be blaming mental illness; after all, “who in their right mind would do this?” This is utterly wrong. Mental illnesses, certainly severe mental illnesses, are not the major cause of mass shootings. It also is dangerously stigmatizing to people who suffer from these devastating disorders and can subject them to inappropriate restrictions. According to the National Council for Behavioral Health, the best estimates are that individuals with mental illnesses are responsible for less than 4% of all violent crimes in the United States, and less than a third of people who commit mass shootings are diagnosably mentally ill. Moreover, a large majority of individuals with mental illnesses are not at high risk for committing violent acts. Continuing to blame mental illness distracts from finding the real causes of mass shootings and addressing them directly.

Mental illness is, regrettably, a rather loosely defined and loosely used term, and this contributes to the problem. According to the American Psychiatric Association, “Mental illnesses are health conditions involving changes in emotion, thinking or behavior…associated with distress and/or problems functioning in social, work or family activities.” That broad definition can arguably be applied to many life stresses and situations. However, what most people likely mean when they attribute mass shootings to mental illness are what mental health professionals call “serious or severe mental illnesses,” such as schizophrenia, bipolar disorder, or major depression. Other frequently cited causes of mass shootings—hate, employee disgruntlement, being disaffected with society or disappointed with one's life—are not defined clinically as serious mental illnesses themselves. And because they have not been studied systematically, we do not know if these purported other causes really apply, let alone what to do about them if true.

The editorial is here.

Can Business Schools Have Ethical Cultures, Too?

Brian Gallagher
www.ethicalsystems.org
Originally posted 18 Nov 19

Here is an excerpt:

The informal aspects of an ethical culture are pretty intuitive. These include role models and heroes, norms, rituals, stories, and language. “The systems can be aligned to support ethical behavior (or unethical behavior),” Eury and Treviño write, “and the systems can be misaligned in a way that sends mixed messages, for instance, the organization’s code of conduct promotes one set of behaviors, but the organization’s norms encourage another set of behaviors.” Although Smeal hasn’t completely rid itself of unethical norms, it has fostered new ethical ones, like encouraging teachers to discuss the school’s honor code on the first day of class. Rituals can also serve as friendly reminders about the community’s values—during finals week, for example, the honor and integrity program organizes complimentary coffee breaks, and corporate sponsors support ethics case competitions. Eury and Treviño also write how one powerful story has taken hold at Smeal, about a time when the college’s MBA program, after it implemented the honor code, rejected nearly 50 applicants for plagiarism, and on the leadership integrity essay, no less. (Smeal was one of the first business schools to use plagiarism-detection software in its admissions program.)

Given the inherently high turnover rate at a school—and a diverse student population—it’s a constant challenge to get the community’s newcomers to aspire to meet Smeal’s honor and integrity standards. Since there’s no stopping students from graduating, Eury and Treviño stress the importance of having someone like Smeal’s honor and integrity director—someone who, at least part-time, focuses on fostering an ethical culture. “After the first leadership integrity director stepped down from her role, the college did not fill her position for a few years in part because of a coming change in deans,” Eury and Treviño write. The new Dean eventually hired an honor and integrity director who served in her role for 3-and-a-half years, but, after she accepted a new role in the college, the business school took close to 8 months to fill the role again. “In between each of these leadership changes, the community continued to change and grow, and without someone constantly ‘tending to the ethical culture garden,’ as we like to say, the ‘weeds’ will begin to grow,” Eury and Treviño write. Having an honor and integrity director makes an “important symbolic statement about the college’s commitment to tending the culture but it also makes a more substantive contribution to doing so.”

The info is here.

Tuesday, December 17, 2019

We Might Soon Build AI Who Deserve Rights

Eric Schwitzgebel
Splintered Mind Blog
From a Talk at Notre Dame
Originally posted 17 Nov 19

Abstract

Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.

(cut)

But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

The blog post is here.

Create an Ethics Committee to Keep Your AI Initiative in Check

Steven Tiell
Harvard Business Review
Originally posted 15 Nov 19

Here is an excerpt:

Establishing this level of ethical governance is critical to helping executives mitigate downside risks, because addressing AI bias can be extremely complex. Data scientists and software engineers have biases just like everyone else, and when they allow these biases to creep into algorithms or the data sets used to train them — however unintentionally — it can leave those subjected to the AI feeling like they have been treated unfairly. But eliminating bias to make fair decisions is not a straightforward equation.

While many colloquial definitions of “bias” involve “fairness,” there is an important distinction between the two. Bias is a feature of statistical models, while fairness is a judgment against the values of a community. Shared understandings of fairness are different across cultures. But the most critical thing to understand is their relationship. The gut feeling may be that fairness requires a lack of bias, but in fact, data scientists must often introduce bias in order to achieve fairness.

Consider a model built to streamline hiring or promotions. If the algorithm learns from historic data, where women have been under-represented in the workforce, myriad biases against women will emerge in the model. To correct for this, data scientists might choose to introduce bias — balancing gender representation in historic data, creating synthetic data to fill in gaps, or correcting for balanced treatment (fairness) in the application of data-informed decisions. In many cases, there’s no possible way to be both unbiased and fair.
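One way to picture the correction described above — deliberately introducing statistical bias to reach a fairer outcome — is to reweight historic records so that an under-represented group carries equal influence during training. The sketch below assumes a hypothetical hiring dataset; the file and column names (gender, promoted) are illustrative, not from the article.

```python
# Sketch of rebalancing historic hiring data by reweighting, so each gender
# contributes equal total weight when a model is trained. The dataset and
# column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("hiring_history.csv")  # hypothetical historic records

# Weight each record inversely to its group's share of the data: an
# under-represented group gets larger per-record weights.
group_share = history["gender"].value_counts(normalize=True)
weights = history["gender"].map(lambda g: 1.0 / group_share[g])

features = history.drop(columns=["promoted", "gender"])  # assumed numeric
model = LogisticRegression(max_iter=1000)
model.fit(features, history["promoted"], sample_weight=weights)
```

Other corrections mentioned in the excerpt, such as generating synthetic records to fill gaps, involve the same trade-off between statistical bias and a community’s judgment of fairness.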

An Ethics Committee can help not only to maintain an organization’s values-based intentions, but also to increase transparency into how it uses AI. Even when it’s addressed, AI bias can still be maddening and frustrating for end users, and most companies deploying AIs today are subjecting people to it without giving them much agency in the process. Consider the experience of using a mapping app. When travelers are simply told which route to take, it is an experience stripped of agency; but when users are offered a set of alternate routes, they feel more confident in the selected route because they enjoyed more agency, or self-determination, in choosing it. Maximizing agency when AI is being used is another safeguard strong governance can help to ensure.

The info is here.