Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, December 31, 2019

Our Brains Are No Match for Our Technology

Tristan Harris
The New York Times
Originally posted 5 Dec 19

Here is an excerpt:

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

The info is here.

Monday, December 30, 2019

Privacy: Where Security and Ethics Miss the Mark

Jason Paul Kazarian
Originally posted 29 Nov 19

Here is an excerpt:

Without question, we as a society have changed course. The unfettered internet has had its day. Going forward, more and more private companies will be subject to increasingly demanding privacy legislation.

Is this a bad thing? Something nefarious? Probably not. Just as we have always expected privacy in our physical lives, we now expect privacy in our digital lives as well. And businesses are adjusting toward our expectations.

One visible adjustment is more disclosure about exactly what private data a business collects and why. Privacy policies are easier to understand, as well as more comprehensive. Most websites warn visitors about the storage of private data in “cookies.” Many sites additionally grant visitors the ability to turn off such cookies except those technically necessary for the site’s operation.

Another visible adjustment is the widespread use of multi-factor authentication. Many sites, especially those involving credit, finance or shopping, validate login with a token sent by email, text or voice. These sites then verify the authorized user is logging in, which helps avoid leaking private data.
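The one-time token flow described above can be sketched in a few lines of Python (a simplified illustration only, not any particular site’s implementation; the function names and the in-memory store are hypothetical, and a real deployment would use constant-time comparison, server-side persistence, and rate limiting):

```python
import secrets
import time

# Pending tokens: user -> (token, expiry timestamp).
# Hypothetical in-memory store for illustration only.
_pending = {}

def issue_token(user, ttl_seconds=300):
    """Generate a short one-time code and record when it expires.

    In practice the code would be delivered by email, text, or voice.
    """
    token = f"{secrets.randbelow(10**6):06d}"  # six-digit code, e.g. "042317"
    _pending[user] = (token, time.time() + ttl_seconds)
    return token

def verify_token(user, submitted):
    """Accept the code only once, and only before it expires."""
    entry = _pending.pop(user, None)  # pop enforces single use
    if entry is None:
        return False
    token, expires = entry
    return submitted == token and time.time() < expires
```

Issuing a token and then verifying it twice shows the single-use property: the first `verify_token` call succeeds, the second fails because the code has been consumed.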

Perhaps the biggest adjustment is not visible: encryption of private data. More businesses now operate on otherwise meaningless cipher substitutes (the output of an encryption function) in place of sensitive data such as customer account numbers, birth dates, email or street addresses, member names and so on. This protects customers when private data is exposed in an all-too-common breach.
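The cipher-substitute idea can be illustrated with Python’s standard library. This sketch uses keyed hashing (HMAC) to produce stable pseudonyms; note that production systems more often use reversible encryption or a vaulted tokenization service, and the key value here is a hypothetical placeholder:

```python
import hmac
import hashlib

# Secret key held by the business, stored apart from the data.
# Hypothetical value for illustration only.
PSEUDONYM_KEY = b"example-secret-key"

def pseudonymize(value: str) -> str:
    """Replace a sensitive value (account number, email, ...) with a
    meaningless but stable substitute.

    Systems can still join and deduplicate records on the substitute
    without ever handling the real data; without the key, the
    substitute reveals nothing about the input.
    """
    digest = hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256)
    return digest.hexdigest()
```

The same input always maps to the same 64-character substitute, so `pseudonymize("4111-1111-1111-1111")` can safely stand in for the account number throughout a data pipeline.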

The info is here.

23 and Baby

Tanya Lewis
Originally posted 4 Dec 19

Here are two excerpts:

Proponents say that genetic testing of newborns can help diagnose a life-threatening childhood-onset disease in urgent cases and could dramatically increase the number of genetic conditions all babies are screened for at birth, enabling earlier diagnosis and treatment. It could also inform parents of conditions they could pass on to future children or of their own risk of adult-onset diseases. Genetic testing could detect hundreds or even thousands of diseases, an order of magnitude more than current heel-stick blood tests, which all babies born in the U.S. undergo, or confirm results from such a test.

But others caution that genetic tests may do more harm than good. They could miss some diseases that heel-stick testing can detect and produce false positives for others, causing anxiety and leading to unnecessary follow-up testing. Sequencing children’s DNA also raises issues of consent and the prospect of genetic discrimination.

Regardless of these concerns, newborn genetic testing is already here, and it is likely to become only more common. But is the technology sophisticated enough to be truly useful for most babies? And are families—and society—ready for that information?


Then there’s the issue of privacy. If the child’s genetic information is stored on file, who has access to it? If the information becomes public, it could lead to discrimination by employers or insurance companies. The Genetic Information Nondiscrimination Act (GINA), passed in 2008, prohibits such discrimination. But GINA does not apply to employers with fewer than 15 employees and does not cover insurance for long-term care, life or disability. It also does not apply to people employed and insured by the military’s Tricare system, such as Rylan Gorby. When his son’s genome was sequenced, researchers also obtained permission to sequence Rylan’s genome, to determine if he was a carrier for the rare hemoglobin condition. Because it manifests itself only in childhood, Gorby decided taking the test was worth the risk of possible discrimination.

The info is here.

Sunday, December 29, 2019

It Loves Me, It Loves Me Not: Is It Morally Problematic to Design Sex Robots that Appear to Love Their Owners?

Sven Nyholm and Lily Eva Frank
Techné: Research in Philosophy and Technology
DOI: 10.5840/techne2019122110


Drawing on insights from robotics, psychology, and human-computer interaction, developers of sex robots are currently aiming to create emotional bonds of attachment and even love between human users and their products. This is done by creating robots that can exhibit a range of facial expressions, that are made with human-like artificial skin, and that possess a rich vocabulary with many conversational possibilities. In light of the human tendency to anthropomorphize artifacts, we can expect that designers will have some success and that this will lead to the attribution of mental states to the robot that the robot does not actually have, as well as the inducement of significant emotional responses in the user. This raises the question of whether it might be ethically problematic to try to develop robots that appear to love their users. We discuss three possible ethical concerns about this aim: first, that designers may be taking advantage of users’ emotional vulnerability; second, that users may be deceived; and, third, that relationships with robots may block off the possibility of more meaningful relationships with other humans. We argue that developers should attend to the ethical constraints suggested by these concerns in their development of increasingly humanoid sex robots. We discuss two different ways in which they might do so.

Saturday, December 28, 2019

Chinese residents worry about rise of facial recognition

Sam Shead
Originally posted 5 Dec 19

Here is an excerpt:

China has more facial recognition cameras than any other country and they are often hard to avoid.

Earlier this week, local reports said that Zhengzhou, the capital of central Henan province, had become the first Chinese city to roll the tech out across all its subway train stations.

Commuters can use the technology to automatically authorise payments instead of scanning a QR code on their phones. For now, it is a voluntary option, said the China Daily.

Earlier this month, university professor Guo Bing announced he was suing Hangzhou Safari Park for enforcing facial recognition.

Prof Guo, a season ticket holder at the park, had used his fingerprint to enter for years, but was no longer able to do so.

The case was covered in the government-owned media, indicating that the Chinese Communist Party is willing for the private use of the technology to be discussed and debated by the public.

The info is here.

Friday, December 27, 2019

Affordable treatment for mental illness and substance abuse gets harder to find

Jenny Gold
The Washington Post
Originally published 1 Dec 19

Here is an excerpt:

A report published by Milliman, a risk management and health-care consulting company, found that patients were dramatically more likely to resort to out-of-network providers for mental health and substance abuse treatment than for other conditions. The disparities have grown since Milliman published a similarly grim study two years ago.

The latest study examined the claims data of 37 million individuals with commercial preferred provider organization (PPO) health insurance plans in all 50 states from 2013 to 2017.

Among the findings:

● People seeking inpatient care for behavioral health issues were 5.2 times more likely to be relegated to an out-of-network provider than for medical or surgical care in 2017, up from 2.8 times in 2013.

● For substance abuse treatment, the numbers were even worse: treatment at an inpatient facility was 10 times more likely to be provided out-of-network, up from 4.7 times in 2013.

● In 2017, a child was 10 times more likely to go out-of-network for a behavioral health office visit than for a primary care office visit.

● Spending for all types of substance abuse treatment was just 0.9 percent of total health-care spending in 2017. Mental health treatment accounted for 2.4 percent of total spending.

In 2017, 70,237 Americans died of drug overdoses, and 47,173 from suicide, according to the Centers for Disease Control and Prevention. In 2018, nearly 20 percent of adults — more than 47 million people — experienced a mental illness, according to the National Alliance on Mental Illness.

“I thought maybe we would have seen some progress here. It’s very depressing to see that it’s actually gotten worse,” said Henry Harbin, former chief executive of Magellan Health, a managed behavioral health-care company, and adviser to the Bowman Family Foundation, which commissioned the report. “Employers and insurance plans need to quadruple their efforts.”

The info is here.

Thursday, December 26, 2019

Is virtue signalling a perversion of morality?

Neil Levy
Originally posted 29 Nov 19

Here is an excerpt:

If such virtue signalling is a central – and justifying – function of public moral discourse, then the claim that it perverts this discourse is false. What about the hypocrisy claim?

The accusation that virtue signalling is hypocritical might be cashed out in two different ways. We might mean that virtue signallers are really concerned with displaying themselves in the best light – and not with climate change, animal welfare or what have you. That is, we might question their motives. In their recent paper, the management scholars Jillian Jordan and David Rand asked if people would virtue signal when no one was watching. They found that their participants’ responses were sensitive to opportunities for signalling: after a moral violation was committed, the reported degree of moral outrage was reduced when the participants had better opportunities to signal virtue. But the entire experiment was anonymous, so no one could link moral outrage to specific individuals. This suggests that, while virtue signalling is part (but only part) of the explanation for why we feel certain emotions, we nevertheless genuinely feel them, and we don’t express them just because we’re virtue signalling.

The second way of cashing out the hypocrisy accusation is the thought that virtue signallers might actually lack the virtue that they try to display. Dishonest signalling is also widespread in evolution. For instance, some animals mimic the honest signal that others give of being poisonous or venomous — hoverflies that imitate wasps, for example. It’s likely that some human virtue signallers are engaged in dishonest mimicry too. But dishonest signalling is worth engaging in only when there are sufficiently many honest signallers for it to make sense to take such signals into account.

The info is here.

Wednesday, December 25, 2019

Convict Trump: The Constitution is more important than abortion

Paul Miller
The Christian Post
Originally posted 22 Dec 19

Christians should advocate for President Donald J. Trump’s conviction and removal from office by the Senate. While Trump has an excellent record of appointing conservative judges and advancing a prolife agenda, his criminal conduct endangers the Constitution. The Constitution is more important than the prolife cause because without the Constitution, prolife advocacy would be meaningless.

The fact that we live in a democratic republic is what enables us to turn our prolife convictions from private opinion into public advocacy. In other systems of government, the government does not care what its citizens think or believe. Only when the government is forced to take counsel from its citizens through elections, representation, and majoritarian rule do our opinions count.

Our democratic Constitution — adopted to “secure the blessings of liberty” for all Americans — is what guarantees that our voice matters. Without it, we can talk about the evils of abortion until we are blue in the face and it will never affect abortion policy one iota. The Constitution — with its guarantees of free speech, free assembly, the right to petition the government, regular elections, and the peaceful transfer of power — is the only thing that forces the government to listen to us.

Trump’s behavior is a threat to our Constitutional order. The facts behind his impeachment show that he abused a position of public trust for private gain, the definition of corruption and abuse of power. More worryingly, he refused to comply with Congress’s power to investigate his conduct, a fundamental breach of the checks and balances that are the bedrock of our Constitutional order.

The info is here.

Deliver Us From A.I.? This Priest-Led Network Aims to Shepherd Silicon Valley Tech Ethics

Rebecca Heilweil
Originally posted 24 Nov 19

Here is an excerpt:

When asked about engaging leaders in atheist- and liberal-leaning Silicon Valley, Salobir says that, even if they’re not religious, many do seek meaning in their work. “They dedicate all their time, all their money, all their energy to build a startup—it has to be meaningful," he says. "If it’s not, what is the point of waking up every morning and working so much?"

It's the kind of work that has Salobir finding inspiration in John the Baptist. “He’s the one who connects," he says. "He’s the one who puts people in touch.” 

There are other Vatican-affiliated groups interested in the impact of emerging technologies, Green says. He points to pontifical academies that have—or will—host conferences on topics including robotics and artificial intelligence. This past September, the Pontifical Council for Culture and the Dicastery for Promoting Integral Human Development came together to host a conference on the common good in the digital age that featured Silicon Valley leaders like Reid Hoffman and representatives from Facebook and Mozilla.

But Green says Optic is somewhat unique in its focus on establishing a reciprocal relationship with the technology industry. “It’s not just that the Church is going to get good information here, but [that] the technologists are going to feel like they’re also being benefitted," he says.

They’re getting the opportunity to think about technology in a way that they haven’t been thinking about it before, Green adds. “It’s a mutually beneficial relationship.”

The info is here.

Tuesday, December 24, 2019

DNA genealogical databases are a gold mine for police, but with few rules and little transparency

Paige St. John
The LA Times
Originally posted 24 Nov 19

Here is an excerpt:

But law enforcement has plunged into this new world with little to no rules or oversight, intense secrecy and by forming unusual alliances with private companies that collect the DNA, often from people interested not in helping close cold cases but in learning their ethnic origins and ancestry.

A Times investigation found:
  • There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
  • When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
  • California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.
There are growing concerns that the race to use genealogical databases will have serious consequences, from its inherent erosion of privacy to the implications of broadened police power.

In California, an innocent twin was thrown in jail. In Georgia, a mother was deceived into incriminating her son. In Texas, police met search guidelines by classifying a case as sexual assault but after an arrest only filed charges of burglary. And in the county that started the DNA race with the arrest of the Golden State Killer suspect, prosecutors have persuaded a judge to treat unsuspecting genetic contributors as “confidential informants” and seal searches so consumers are not scared away from adding their own DNA to the forensic stockpile.

Monday, December 23, 2019

Will The Future of Work Be Ethical?

Greg Epstein
Interview at TechCrunch.com
Originally posted 28 Nov 19

Here is an excerpt:

AI and climate: in a sense, you’ve already dealt with this new field people are calling the ethics of technology. When you hear that term, what comes to mind?

As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers.

About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good.

What are your biggest concerns about AI? What do you think needs to be addressed in order for us to feel more comfortable as a society with increased use of AI?

That’s not an easy answer; it’s something our society is going to be grappling with for years. From what I’ve learned at this conference, from what I’ve read and tried to understand, it’s a multidimensional solution. You’re going to need computer programmers to learn the technical skills to make their algorithms less biased. You’re going to need companies to hire those people and say, “This is our goal; we want to create an algorithm that’s fair and can do good.” You’re going to need the general society to ask for that standard. That’s my generation’s job, too. WikiLeaks, a couple of years ago, sparked the conversation about personal privacy and I think there’s going to be more sparks.

The info is here.

Ideological differences in the expanse of the moral circle

Adam Waytz, Ravi Iyer, Liane Young, Jonathan Haidt & Jesse Graham
Nature Communications, volume 10, Article number: 4389 (2019)


Do clashes between ideologies reflect policy differences or something more fundamental? The present research suggests they reflect core psychological differences such that liberals express compassion toward less structured and more encompassing entities (i.e., universalism), whereas conservatives express compassion toward more well-defined and less encompassing entities (i.e., parochialism). Here we report seven studies illustrating universalist versus parochial differences in compassion. Studies 1a-1c show that liberals, relative to conservatives, express greater moral concern toward friends relative to family, and the world relative to the nation. Studies 2a-2b demonstrate these universalist versus parochial preferences extend toward simple shapes depicted as proxies for loose versus tight social circles. Using stimuli devoid of political relevance demonstrates that the universalist-parochialist distinction does not simply reflect differing policy preferences. Studies 3a-3b indicate these universalist versus parochial tendencies extend to humans versus nonhumans more generally, demonstrating the breadth of these psychological differences.


Seven studies demonstrated that liberals relative to conservatives exhibit universalism relative to parochialism. This difference manifested in conservatives exhibiting greater concern and preference for family relative to friends, the nation relative to the world, tight relative to loose perceptual structures devoid of social content, and humans relative to nonhumans.

Others have identified this universalist–parochial distinction, with Haidt, for example, noting “Liberals…are more universalistic…Conservatives, in contrast, are more parochial—concerned about their groups, rather than all of humanity.” The present findings comprehensively support this distinction empirically, explicitly demonstrating the relationship between ideology and universalism versus parochialism, assessing judgments of multiple social circles, and providing converging evidence across diverse measures.

The research is here.

Sunday, December 22, 2019

What jobs are affected by AI? Better-paid, better-educated workers face the most exposure

M. Muro, J. Whiton, & R. Maxim
Originally posted 20 Nov 19

Here is an excerpt:

AI could affect work in virtually every occupational group. However, whereas research on automation’s robotics and software continues to show that less-educated, lower-wage workers may be most exposed to displacement, the present analysis suggests that better-educated, better-paid workers (along with manufacturing and production workers) will be the most affected by the new AI technologies, with some exceptions.

Our analysis shows that workers with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree. Holders of bachelor’s degrees will be the most exposed by education level, more than five times as exposed to AI as workers with just a high school degree.

Our analysis shows that AI will be a significant factor in the future work lives of relatively well-paid managers, supervisors, and analysts. Also exposed are factory workers, who are increasingly well-educated in many occupations as well as heavily involved with AI on the shop floor. AI may be much less of a factor in the work of most lower-paid service workers.

Men, who are overrepresented in both analytic-technical and professional roles (as well as production), work in occupations with much higher AI exposure scores. Meanwhile, women’s heavy involvement in “interpersonal” education, health care support, and personal care services appears to shelter them. This both tracks with and accentuates the finding from our earlier automation analysis.

The info is here.

Saturday, December 21, 2019

Trump Should Be Removed from Office

Mark Galli
Christianity Today
Originally posted 19 Dec 19

Here is an excerpt:

But the facts in this instance are unambiguous: The president of the United States attempted to use his political power to coerce a foreign leader to harass and discredit one of the president’s political opponents. That is not only a violation of the Constitution; more importantly, it is profoundly immoral.

The reason many are not shocked about this is that this president has dumbed down the idea of morality in his administration. He has hired and fired a number of people who are now convicted criminals. He himself has admitted to immoral actions in business and his relationship with women, about which he remains proud. His Twitter feed alone—with its habitual string of mischaracterizations, lies, and slanders—is a near perfect example of a human being who is morally lost and confused.

Trump’s evangelical supporters have pointed to his Supreme Court nominees, his defense of religious liberty, and his stewardship of the economy, among other things, as achievements that justify their support of the president. We believe the impeachment hearings have made it absolutely clear, in a way the Mueller investigation did not, that President Trump has abused his authority for personal gain and betrayed his constitutional oath. The impeachment hearings have illuminated the president’s moral deficiencies for all to see. This damages the institution of the presidency, damages the reputation of our country, and damages both the spirit and the future of our people. None of the president’s positives can balance the moral and political danger we face under a leader of such grossly immoral character.

The info is here.

Friday, December 20, 2019

Can Ethics be Taught? Evidence from Securities Exams and Investment Adviser Misconduct

Kowaleski, Z., Sutherland, A. and Vetter, F.
Available at SSRN
Posted 10 Oct 19


We study the consequences of a 2010 change in the investment adviser qualification exam that reallocated coverage from the rules and ethics section to the technical material section. Comparing advisers with the same employer in the same location and year, we find those passing the exam with more rules and ethics coverage are one-fourth less likely to commit misconduct. The exam change appears to affect advisers’ perception of acceptable conduct, and not just their awareness of specific rules or selection into the qualification. Those passing the rules and ethics-focused exam are more likely to depart employers experiencing scandals. Such departures also predict future scandals. Our paper offers the first archival evidence on how rules and ethics training affects conduct and labor market activity in the financial sector.

From the Conclusion

Overall, our results can be understood through the lens of Becker’s model of crime (1968, 1992). In this model, “many people are constrained by moral and ethical considerations, and did not commit crimes even when they were profitable and there was no danger of detection… The amount of crime is determined not only by the rationality and preferences of would-be criminals, but also by the economic and social environment created by… opportunities for employment, schooling, and training programs.” (Becker 1992, pp. 41-42). In our context, ethics training can affect an individual’s behavior by increasing the value of their reputation, as well as the psychological costs of committing misconduct. But such effects will be moderated by the employer’s culture, which affects the stigma of offenses, as well as the individual’s beliefs about appropriate conduct.

The research is here.

Study offers first large-sample evidence of the effect of ethics training on financial sector behavior

Shannon Roddel
Originally posted 21 Nov 19

Here is an excerpt:

"Behavioral ethics research shows that business people often do not recognize when they are making ethical decisions," he says. "They approach these decisions by weighing costs and benefits, and by using emotion or intuition."

These results are consistent with the exam playing a "priming" role, where early exposure to rules and ethics material prepares the individual to behave appropriately later. Those passing the exam without prior misconduct appear to respond most to the amount of rules and ethics material covered on their exam. Those already engaging in misconduct, or having spent several years working in the securities industry, respond least or not at all.

The study also examines what happens when people with more ethics training find themselves surrounded by bad behavior, revealing these individuals are more likely to leave their jobs.

"We study this effect both across organizations and within Wells Fargo, during their account fraud scandal," Kowaleski explains. "That those with more ethics training are more likely to leave misbehaving organizations suggests the self-reinforcing nature of corporate culture."

The info is here.

Thursday, December 19, 2019

Holding Insurers Accountable for Parity in Coverage of Mental Health Treatment.

Paul S. Appelbaum and Joseph Parks
Psychiatric Services 
Originally posted 14 Nov 19

Despite a series of federal laws aimed at ensuring parity in insurance coverage of treatment for mental health and general health conditions, patients with mental disorders continue to face discrimination by insurers. This inequity is often due to overly restrictive utilization review criteria that fail to conform to accepted professional standards.

A recent class action challenge to the practices of the largest U.S. health insurer may represent an important step forward in judicial enforcement of parity laws.

Rejecting the insurer’s guidelines for coverage determinations as inconsistent with usual practices, the court enunciated eight principles that defined accepted standards of care.

In 2013, Natasha Wit, then 17 years old, was admitted to Monte Nido Vista, a residential treatment facility in California for women with eating disorders. At the time, she was said to be suffering from a severe eating disorder, with medical complications that included amenorrhea, adrenal and thyroid problems, vitamin deficiency, and gastrointestinal symptoms. She was also reported to be experiencing symptoms of depression and anxiety, obsessive-compulsive behaviors, and marked social isolation. Four days after admission, her insurer, United Behavioral Health (UBH), denied coverage for her stay on the basis that her “treatment does not meet the medical necessity criteria for residential mental health treatment per UBH Level of Care Guidelines for Residential Mental Health.” The reviewer suggested that she could safely be treated at a less restrictive level of care (1).

Ms. Wit’s difficulty in obtaining coverage from her health insurer for care that she and her treaters believed was medically necessary differed in only one respect from the similar experiences of thousands of patients around the country: her family was able to pay for the 2 months of residential treatment that UBH refused to cover.

Where AI and ethics meet

Stephen Fleischresser
Cosmos Magazine
Originally posted 18 Nov 19

Here is an excerpt:

His first argument concerns common aims and fiduciary duties, the duties in which trusted professionals, such as doctors, place others’ interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients and Mittelstadt argues that it is a “defining quality of a profession for its practitioners to be part of a ‘moral community’ with common aims, values and training”.

For the field of AI research, however, the same cannot be said. “AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts,” Mittelstadt writes. “The fundamental aims of developers, users and affected parties do not necessarily align.”

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

“AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests,” he writes. In AI research, “public interests are not granted primacy over commercial interests”.

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, “AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks”.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism, which provides fuller and more satisfactory ethical guidance.

The info is here.

Wednesday, December 18, 2019

Stop Blaming Mental Illness

Alan I. Leshner
Science  16 Aug 2019:
Vol. 365, Issue 6454, pp. 623

The United States is experiencing a public health epidemic of mass shootings and other forms of gun violence. A convenient response seems to be blaming mental illness; after all, “who in their right mind would do this?” This is utterly wrong. Mental illnesses, certainly severe mental illnesses, are not the major cause of mass shootings. It also is dangerously stigmatizing to people who suffer from these devastating disorders and can subject them to inappropriate restrictions. According to the National Council for Behavioral Health, the best estimates are that individuals with mental illnesses are responsible for less than 4% of all violent crimes in the United States, and less than a third of people who commit mass shootings are diagnosably mentally ill. Moreover, a large majority of individuals with mental illnesses are not at high risk for committing violent acts. Continuing to blame mental illness distracts from finding the real causes of mass shootings and addressing them directly.

Mental illness is, regrettably, a rather loosely defined and loosely used term, and this contributes to the problem. According to the American Psychiatric Association, “Mental illnesses are health conditions involving changes in emotion, thinking or behavior…associated with distress and/or problems functioning in social, work or family activities.” That broad definition can arguably be applied to many life stresses and situations. However, what most people likely mean when they attribute mass shootings to mental illness are what mental health professionals call “serious or severe mental illnesses,” such as schizophrenia, bipolar disorder, or major depression. Other frequently cited causes of mass shootings—hate, employee disgruntlement, being disaffected with society or disappointed with one's life—are not defined clinically as serious mental illnesses themselves. And because they have not been studied systematically, we do not know if these purported other causes really apply, let alone what to do about them if true.

The editorial is here.

Can Business Schools Have Ethical Cultures, Too?

Brian Gallagher
Originally posted 18 Nov 19

Here is an excerpt:

The informal aspects of an ethical culture are pretty intuitive. These include role models and heroes, norms, rituals, stories, and language. “The systems can be aligned to support ethical behavior (or unethical behavior),” Eury and Treviño write, “and the systems can be misaligned in a way that sends mixed messages, for instance, the organization’s code of conduct promotes one set of behaviors, but the organization’s norms encourage another set of behaviors.” Although Smeal hasn’t completely rid itself of unethical norms, it has fostered new ethical ones, like encouraging teachers to discuss the school’s honor code on the first day of class. Rituals can also serve as friendly reminders about the community’s values—during finals week, for example, the honor and integrity program organizes complimentary coffee breaks, and corporate sponsors support ethics case competitions. Eury and Treviño also describe how one powerful story has taken hold at Smeal, about a time when the college’s MBA program, after it implemented the honor code, rejected nearly 50 applicants for plagiarism, and on the leadership integrity essay, no less. (Smeal was one of the first business schools to use plagiarism-detection software in its admissions program.)

Given the inherently high turnover rate at a school—and a diverse student population—it’s a constant challenge to get the community’s newcomers to aspire to meet Smeal’s honor and integrity standards. Since there’s no stopping students from graduating, Eury and Treviño stress the importance of having someone like Smeal’s honor and integrity director—someone who, at least part-time, focuses on fostering an ethical culture. “After the first leadership integrity director stepped down from her role, the college did not fill her position for a few years in part because of a coming change in deans,” Eury and Treviño write. The new dean eventually hired an honor and integrity director who served in her role for three and a half years, but after she accepted a new role in the college, the business school took close to eight months to fill the role again. “In between each of these leadership changes, the community continued to change and grow, and without someone constantly ‘tending to the ethical culture garden,’ as we like to say, the ‘weeds’ will begin to grow,” Eury and Treviño write. Having an honor and integrity director makes an “important symbolic statement about the college’s commitment to tending the culture but it also makes a more substantive contribution to doing so.”

The info is here.

Tuesday, December 17, 2019

We Might Soon Build AI Who Deserve Rights

Eric Schwitzgebel
Splintered Mind Blog
From a Talk at Notre Dame
Originally posted 17 Nov 19


Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.


But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

The blog post is here.