Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, July 31, 2018

Fostering Discussion When Teaching Abortion and Other Morally and Spiritually Charged Topics

Louise P. King and Alan Penzias
AMA Journal of Ethics. July 2018, Volume 20, Number 7: 637-642.

Abstract

Best practices for teaching morally and spiritually charged topics, such as abortion, to those early in their medical training are elusive at best, especially in our current political climate. Here we advocate that our duty as educators requires that we explore these topics in a supportive environment. In particular, we must model respectful discourse for our learners in these difficult areas.

How to Approach Difficult Conversations

When working with learners early in their medical training, educators can find that best practices for discussion of morally and spiritually charged topics are elusive. In this article, we address how to meaningfully discuss and explore students’ conscientious objection to participation in a particular procedure. In particular, we consider the following questions: When, if ever, is it justifiable to define a good outcome of such teaching as changing students’ minds about their health practice beliefs, and when, if ever, is it appropriate to illuminate the negative impacts their health practice beliefs can have on patients?

The information is here.

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philos. Technol.
Accepted May 22, 2018

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The information is here.

Monday, July 30, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Giovanni Luca Ciampaglia & Filippo Menczer
Scientific American
Originally published June 21, 2018

Here is an excerpt:

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.
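
The feedback loop the excerpt describes is easy to sketch. Below is a toy recommender in Python that ranks sources purely by a user's past clicks; every name and number is invented for illustration, and real platform systems are vastly more complex.

import random
from collections import Counter

def recommend(click_history, sources, k=3):
    # Rank sources by the user's past clicks, with tiny noise to break ties.
    counts = Counter(click_history)
    ranked = sorted(sources, key=lambda s: counts[s] + 0.1 * random.random(), reverse=True)
    return ranked[:k]

sources = ["outlet_a", "outlet_b", "outlet_c", "outlet_d"]
history = []
for _ in range(50):
    feed = recommend(history, sources)
    history.append(feed[0])  # the user clicks the top recommendation

print(Counter(history))  # one outlet quickly crowds out the rest

After a few dozen iterations a single outlet dominates the simulated feed, which is the filter-bubble dynamic in miniature.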

The information is here.

Mental health practitioners’ reported barriers to prescription of exercise for mental health consumers

Kirsten Way, Lee Kannis-Dymand, Michele Lastella, Geoff P. Lovell
Mental Health and Physical Activity
Volume 14, March 2018, Pages 52-60

Abstract

Exercise is an effective evidence-based intervention for a range of mental health conditions; however, sparse research has investigated the exercise prescription behaviours of mental health practitioners as a collective, and the barriers faced in prescribing exercise for mental health. A self-report survey was completed online by 325 mental health practitioners to identify how often they prescribe exercise for various conditions and explore their perceived barriers to exercise prescription for mental health through thematic analysis. Over 70% of the sample reported prescribing exercise regularly for depression, stress, and anxiety; however, infrequent rates of prescription were reported for schizophrenia, bipolar and related disorders, and substance-related disorders. Using thematic analysis, 374 statements on mental health practitioners' perceived barriers to exercise prescription were grouped into 22 initial themes and then six higher-order themes. Reported barriers to exercise prescription mostly revolved around clients' practical barriers and perspectives (41.7%) and the practitioners' knowledge and perspectives (33.2%). Of these two main themes regarding perceived barriers to exercise prescription in mental health, a lack of training (14.7%) and the client's disinclination (12.6%) were initial themes which recurred considerably more often than others. General practitioners, mental health nurses, and mental health managers also frequently cited barriers related to a lack of organisational support and resources. Barriers to the prescription of exercise such as lack of training and the client's disinclination need to be addressed in order to overcome challenges which restrict the prescription of exercise as a therapeutic intervention.

The research is here.

Sunday, July 29, 2018

White House Ethics Lawyer Finally Reaches His Breaking Point

And give up all this?
Bess Levin
Vanity Fair
Originally posted July 26, 2018

Here is an excerpt:

Politico reports that Passantino, one of the top lawyers in the White House, has plans to quit the administration by the end of the summer, leaving “a huge hole in the White House’s legal operation.” Despite the blow his loss will represent, it’s unlikely anyone will be able to convince him to stay and take one for the team, given he’s been working in what Passantino allies see as an “impossible” job. To recap: Passantino’s primary charge—the president—has refused to follow precedent and release his tax returns, and has held onto his business assets while in office. His son Eric, who runs said business along with Don Jr., says he gives his dad quarterly financial updates. He’s got a hotel down the road from the White House where foreign governments regularly stay as a way to kiss the ring. Two of his top advisers—his daughter and son-in-law—earned at least $82 million in outside income last year while serving in government. His Cabinet secretaries regularly compete with each other for the title of Most Blatantly Corrupt Trump Official. And Passantino is supposed to be “the clean-up guy” for all of it, a close adviser to the White House joked to Politico, which they can do because they’re not the one with a gig that would make even the most hardened Washington veteran cry.

The info is here.

Saturday, July 28, 2018

Costs, needs, and integration efforts shape helping behavior toward refugees

Robert Böhm, Maik M. P. Theelen, Hannes Rusch, and Paul A. M. Van Lange
PNAS, 201805601; published ahead of print June 25, 2018

Abstract

Recent political instabilities and conflicts around the world have drastically increased the number of people seeking refuge. The challenges associated with the large number of arriving refugees have revealed a deep divide among the citizens of host countries: one group welcomes refugees, whereas another rejects them. Our research aim is to identify factors that help us understand host citizens’ (un)willingness to help refugees. We devise an economic game that captures the basic structural properties of the refugee situation. We use it to investigate both economic and psychological determinants of citizens’ prosocial behavior toward refugees. In three controlled laboratory studies, we find that helping refugees becomes less likely when it is individually costly to the citizens. At the same time, helping becomes more likely with the refugees’ neediness: helping increases when it prevents a loss rather than generates a gain for the refugees. Moreover, particularly citizens with higher degrees of prosocial orientation are willing to provide help at a personal cost. When refugees have to exert a minimum level of effort to be eligible for support by the citizens, these mandatory “integration efforts” further increase prosocial citizens’ willingness to help. Our results underscore that economic factors play a key role in shaping individual refugee helping behavior but also show that psychological factors modulate how individuals respond to them. Moreover, our economic game is a useful complement to correlational survey measures and can be used for pretesting policy measures aimed at promoting prosocial behavior toward refugees.

The research is here.

Friday, July 27, 2018

Morality in the Machines

Erick Trickey
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.
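
As a contrast with secret commercial formulas, here is a sketch in Python of the kind of fully public tool that suggestion points toward: every factor, weight, and threshold is visible, so a "5 out of 7" can be recomputed and contested. The factors and weights are hypothetical, not drawn from any real instrument.

# Hypothetical, fully transparent risk score. Everything is public,
# so "the computer says 5 out of 7" can be unpacked line by line.
PUBLIC_WEIGHTS = {
    "prior_failures_to_appear": 2.0,  # invented weight
    "pending_charges": 1.0,           # invented weight
}

def risk_score(record):
    raw = sum(PUBLIC_WEIGHTS[k] * record[k] for k in PUBLIC_WEIGHTS)
    return min(7, 1 + int(raw))  # clamp onto a published 1-7 scale

defendant = {"prior_failures_to_appear": 1, "pending_charges": 2}
print(risk_score(defendant))  # 5, and exactly why it is 5 is auditable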

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.

Informed Consent and the Role of the Treating Physician

Holly Fernandez Lynch, Steven Joffe, and Eric A. Feldman
Originally posted June 21, 2018
N Engl J Med 2018; 378:2433-2438
DOI: 10.1056/NEJMhle1800071

Here are a few excerpts:

In 2017, the Pennsylvania Supreme Court ruled that informed consent must be obtained directly by the treating physician. The authors discuss the potential implications of this ruling and argue that a team-based approach to consent is better for patients and physicians.

(cut)

Implications in Pennsylvania and Beyond

Shinal has already had a profound effect in Pennsylvania, where it represents a substantial departure from typical consent practice.  More than half the physicians who responded to a recent survey conducted by the Pennsylvania Medical Society (PAMED) reported a change in the informed-consent process in their work setting; of that group, the vast majority expressed discontent with the effect of the new approach on patient flow and the way patients are served.  Medical centers throughout the state have changed their consent policies, precluding nonphysicians from obtaining patient consent to the procedures specified in the MCARE Act and sometimes restricting the involvement of physician trainees.  Some Pennsylvania institutions have also applied the Shinal holding to research, in light of the reference in the MCARE Act to experimental products and uses, despite the clear policy of the Food and Drug Administration (FDA) allowing investigators to involve other staff in the consent process.

(cut)

Selected State Informed-Consent Laws.

Although the Shinal decision is not binding outside of Pennsylvania, cases bearing on critical ethical dimensions of consent have a history of influence beyond their own jurisdictions.

The information is here.

Thursday, July 26, 2018

Virtuous technology

Mustafa Suleyman
medium.com
Originally published June 26, 2018

Here is an excerpt:

There are at least three important asymmetries between the world of tech and the world itself. First, the asymmetry between people who develop technologies and the communities who use them. Salaries in Silicon Valley are twice the median wage for the rest of the US and the employee base is unrepresentative when it comes to gender, race, class and more. As we have seen in other fields, this risks a disconnect between the inner workings of organisations and the societies they seek to serve.

This is an urgent problem. Women and minority groups remain badly underrepresented, and leaders need to be proactive in breaking the mould. The recent spotlight on these issues has meant that more people are aware of the need for workplace cultures to change, but these underlying inequalities also make their way into our companies in more insidious ways. Technology is not value neutral — it reflects the biases of its creators — and must be built and shaped by diverse communities if we are to minimise the risk of unintended harms.

Second, there is an asymmetry of information regarding how technology actually works, and the impact that digital systems have on everyday life. Ethical outcomes in tech depend on far more than algorithms and data: they depend on the quality of societal debate and genuine accountability.

The information is here.

Number of Canadians choosing medically assisted death jumps 30%

Kathleen Harris
www.cbc.ca
Originally posted June 21, 2018

There were 1,523 medically assisted deaths in Canada in the last six-month reporting period — a nearly 30 per cent increase over the previous six months.

Cancer was the most common underlying medical condition in reported assisted death cases, cited in about 65 per cent of all medically assisted deaths, according to the report from Health Canada.

Using data from Statistics Canada, the report shows medically assisted deaths accounted for 1.07 per cent of all deaths in the country over those six months. That is consistent with reports from other countries that have assisted death regimes, where the figure ranges from 0.3 to 4 per cent.

The information is here.

Wednesday, July 25, 2018

Descartes was wrong: ‘a person is a person through other persons’

Abeba Birhane
aeon.com
Originally published April 7, 2017

Here is an excerpt:

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals.

The information is here.

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on growing formalization of those departures, which add necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.

The article can be downloaded here.

Tuesday, July 24, 2018

Amazon, Google and Microsoft Employee AI Ethics Are Best Hope For Humanity

Paul Armstrong
Forbes.com
Originally posted June 26, 2018

Here is an excerpt:

Google recently dropped 'Don't be Evil' from its Code of Conduct documents, but what were once guiding words now appear to be afterthoughts, and they aren't alone. From drone use to deals with the immigration services, large tech companies are looking to monetise their creations, and who can blame them - projects can cost double-digit millions as companies look to maintain an edge in a continually evolving marketplace. Employees are not without a conscience, it seems, and as talent becomes the one thing that companies need in this war, that power needs to be wielded, or we risk runaway-train scenarios. If you want an idea of where things could go, read this.

China is using AI software and facial recognition to determine who can travel, using what and where. You might think this is a ways away from being used on US or UK soil, but you'd be wrong. London has cameras on pretty much all streets, and the US has Amazon's Rekognition (Orlando just abandoned its use, but other tests remain active). Employees need to be the conscience of large entities, not only the ACLU or the civil-liberties inclined. From racist AI to faked video using machine learning to create better fakes, how you form technology matters as much as the why. Google has already mastered the technology to convince a human it is not talking to a robot thanks to um's and ah's - Google's next job is to convince us that that is a good thing.

The information is here.

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.

So it matters who is writing the code because the code defines the algorithm, which makes the judgement on the basis of the data.
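
The mechanism is simple enough to show in a few lines. In this toy Python sketch, the "model" merely counts job-pronoun pairings in a tiny invented corpus, yet it reproduces the skew of its training data, which is essentially what a statistical translation system does at vastly larger scale.

from collections import Counter

# Tiny invented corpus standing in for real training data; the skew in
# these pairings is the bias the system will faithfully learn.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("babysitter", "she"), ("babysitter", "she"), ("babysitter", "he"),
]

def most_likely_pronoun(job):
    counts = Counter(p for j, p in corpus if j == job)
    return counts.most_common(1)[0][0]  # pick whatever the data says most often

print(most_likely_pronoun("doctor"))      # "he": learned, not designed
print(most_likely_pronoun("babysitter"))  # "she": the data's skew, fed back to us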

The information is here.

Monday, July 23, 2018

St. Cloud psychologist gets 3-plus years for sex with client

Nora G. Hertel
Saint Cloud Times 
Originally published June 14, 2018

Psychologist Eric Felsch will spend more than three years in prison for having sex with a patient in 2011.

Stearns County Judge Andrew Pearson sentenced Felsch Thursday to 41 months in prison for third-degree criminal sexual conduct, a felony. He pleaded guilty to the charge in April.

Felsch, 46, has a St. Cloud address.

It is against Minnesota law for a psychotherapist to have sex with a patient during or outside of a therapy session. A defendant facing that charge cannot defend himself by saying the victim consented to the sexual activity.

Sex with clients is also against ethical codes taught to psychologists.

The information is here.

A psychologist in Pennsylvania can face criminal charges for engaging in sexual relationships with a current patient.

Assessing the contextual stability of moral foundations: Evidence from a survey experiment

David Ciuk
Research and Politics
First Published June 20, 2018

Abstract

Moral foundations theory (MFT) claims that individuals use their intuitions on five “virtues” as guidelines for moral judgment, and recent research makes the case that these intuitions cause people to adopt important political attitudes, including partisanship and ideology. New work in political science, however, demonstrates not only that the causal effect of moral foundations on these political predispositions is weaker than once thought, but it also opens the door to the possibility that causality runs in the opposite direction—from political predispositions to moral foundations. In this manuscript, I build on this new work and test the extent to which partisan and ideological considerations cause individuals’ moral foundations to shift in predictable ways. The results show that while these group-based cues do exert some influence on moral foundations, the effects of outgroup cues are particularly strong. I conclude that small shifts in political context do cause MFT measures to move, and, to close, I discuss the need for continued theoretical development in MFT as well as an increased attention to measurement.

The research is here.

Sunday, July 22, 2018

Are free will believers nicer people? (Four studies suggest not)

Damien Crone and Neil Levy
Preprint
Created January 10, 2018

Abstract

Free will is widely considered a foundational component of Western moral and legal codes, and yet current conceptions of free will are widely thought to fit uncomfortably with much research in psychology and neuroscience. Recent research investigating the consequences of laypeople’s free will beliefs (FWBs) for everyday moral behavior suggests that stronger FWBs are associated with various desirable moral characteristics (e.g., greater helpfulness, less dishonesty). These findings have sparked concern regarding the potential for moral degeneration throughout society as science promotes a view of human behavior that is widely perceived to undermine the notion of free will. We report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the abovementioned associations. Unexpectedly, we found no association between FWBs and moral behavior. Our findings suggest that the FWB – moral behavior association (and accompanying concerns regarding decreases in FWBs causing moral degeneration) may be overstated.

The research is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.
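
Part of the difficulty is that reasonable fairness criteria can disagree about the same predictions. A minimal Python sketch with invented data: the tool below has equal false-positive rates across two groups (one common criterion) while flagging the groups at very different rates (violating another, demographic parity).

# (outcome, flagged) pairs for two groups; data invented for illustration.
group_a = [(1, 1), (1, 1), (0, 1), (0, 0)]
group_b = [(1, 0), (1, 1), (0, 1), (0, 0)]

def flag_rate(pairs):
    return sum(p for _, p in pairs) / len(pairs)

def false_positive_rate(pairs):
    flagged_negatives = [p for y, p in pairs if y == 0]
    return sum(flagged_negatives) / len(flagged_negatives)

print(flag_rate(group_a), flag_rate(group_b))                      # 0.75 vs 0.5
print(false_positive_rate(group_a), false_positive_rate(group_b))  # 0.5 vs 0.5

Known impossibility results show that, outside special cases, no tool can satisfy all such criteria at once, which is one reason the researchers' questions are genuinely hard.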

The information is here.

Friday, July 20, 2018

How to Look Away

Megan Garber
The Atlantic
Originally published June 20, 2018

Here is an excerpt:

It is a dynamic—the democratic alchemy that converts seeing things into changing them—that the president and his surrogates have been objecting to, as they have defended their policy. They have been, this week (with notable absences), busily appearing on cable-news shows and giving disembodied quotes to news outlets, insisting that things aren’t as bad as they seem: that the images and the audio and the evidence are wrong not merely ontologically, but also emotionally. Don’t be duped, they are telling Americans. Your horror is incorrect. The tragedy is false. Your outrage about it, therefore, is false. Because, actually, the truth is so much more complicated than your easy emotions will allow you to believe. Actually, as Fox News host Laura Ingraham insists, the holding pens that seem to house horrors are “essentially summer camps.” And actually, as Fox & Friends’ Steve Doocy instructs, the pens are not cages so much as “walls” that have merely been “built … out of chain-link fences.” And actually, Kirstjen Nielsen wants you to remember, “We provide food, medical, education, all needs that the child requests.” And actually, too—do not be fooled by your own empathy, Tom Cotton warns—think of the child-smuggling. And of MS-13. And of sexual assault. And of soccer fields. There are so many reasons to look away, so many other situations more deserving of your outrage and your horror.

It is a neat rhetorical trick: the logic of not in my backyard, invoked not merely despite the fact that it is happening in our backyard, but because of it. With seed and sod that we ourselves have planted.

Yes, yes, there are tiny hands, reaching out for people who are not there … but those are not the point, these arguments insist and assure. To focus on those images—instead of seeing the system, a term that Nielsen and even Trump, a man not typically inclined to think in networked terms, have been invoking this week—is to miss the larger point.

The article is here.

The Psychology of Offering an Apology: Understanding the Barriers to Apologizing and How to Overcome Them

Karina Schumann
Current Directions in Psychological Science 
Vol. 27, Issue 2, pp. 74-78
First Published March 8, 2018

Abstract

After committing an offense, a transgressor faces an important decision regarding whether and how to apologize to the person who was harmed. The actions he or she chooses to take after committing an offense can have dramatic implications for the victim, the transgressor, and their relationship. Although high-quality apologies are extremely effective at promoting reconciliation, transgressors often choose to offer a perfunctory apology, withhold an apology, or respond defensively to the victim. Why might this be? In this article, I propose three major barriers to offering high-quality apologies: (a) low concern for the victim or relationship, (b) perceived threat to the transgressor’s self-image, and (c) perceived apology ineffectiveness. I review recent research examining how these barriers affect transgressors’ apology behavior and describe insights this emerging work provides for developing methods to move transgressors toward more reparative behavior. Finally, I discuss important directions for future research.

The article is here.

Thursday, July 19, 2018

Ethics Policies Don't Build Ethical Cultures

Dori Meinert
www.shrm.org
Originally posted June 19, 2018

Here is an excerpt:

Most people think they would never voluntarily commit an unethical or illegal act. But when Gallagher asked how many people in the audience had ever received a speeding ticket, numerous hands were raised. Similarly, employees rationalize their misuse of company supplies all the time, such as shopping online on their company-issued computer during work hours.

"It's easy to make unethical choices when they are socially acceptable," he said.

But those seemingly small choices can start people down a slippery slope.

Be on the Lookout for Triggers

No one plans to destroy their career by breaking the law or violating their company's ethics policy. There are usually personal stressors that push them over the edge, triggering a "fight or flight" response. At that point, they're not thinking rationally, Gallagher said.

Financial problems, relationship problems or health issues are the most common emotional stressors, he said.

"If you're going to be an ethical leader, are you paying attention to your employees' emotional triggers?"

The information is here.

The developmental origins of moral concern: An examination of moral boundary decision making throughout childhood

Neldner K, Crimston D, Wilks M, Redshaw J, Nielsen M (2018)
PLoS ONE 13(5): e0197819. https://doi.org/10.1371/journal.pone.0197819

Abstract
Prominent theorists have made the argument that modern humans express moral concern for a greater number of entities than at any other time in our past. Moreover, adults show stable patterns in the degrees of concern they afford certain entities over others, yet it remains unknown when and how these patterns of moral decision-making manifest in development. Children aged 4 to 10 years (N = 151) placed 24 pictures of human, animal, and environmental entities on a stratified circle representing three levels of moral concern. Although younger and older children expressed similar overall levels of moral concern, older children demonstrated a more graded understanding of concern by including more entities within the outer reaches of their moral circles (i.e., they were less likely to view moral inclusion as a simple in vs. out binary decision). With age, children extended greater concern to humans than other forms of life, and more concern to vulnerable groups, such as the sick and disabled. Notably, children’s level of concern for human entities predicted their prosocial behavior. The current research provides novel insights into the development of our moral reasoning and its structure within childhood.

The paper is here.

Wednesday, July 18, 2018

Can Employees Force A Company To Be More Ethical?

Enrique Dans
Forbes.com
Originally posted June 19, 2018

Here is the conclusion:

Whatever the outcome, it now seems increasingly clear that if you do not agree with your company’s practices, if they breach basic ethics, you should listen to your conscience and make your voice heard. Which is all fine and good in a rapidly expanding technology sector such as that of the United States, where you are likely to find another job quickly, but what about in other sectors, or in countries with higher unemployment rates or where government and industry are more closely aligned?

Can we and should we put a price on our principles? Is having a conscience the unique preserve of the wealthy and highly skilled? Obviously not, and it is good news that some employees at US companies are setting a precedent. If companies are not going to behave ethically of their own volition, at least we can count on their employees to embarrass them into doing so. Perhaps other countries and companies will follow suit…

The article is here.

Why are Americans so sad?

Monica H. Swahn
quartz.com
Originally published June 16, 2018

Suicide rates in the US have increased nearly 30% in less than 20 years, the Centers for Disease Control and Prevention reported June 7. These mind-numbing statistics were released the same week two very famous, successful and beloved people committed suicide—Kate Spade, a tremendous entrepreneur, trendsetter and fashion icon, and Anthony Bourdain, a distinguished chef and world traveler who took us on gastronomic journeys to all corners of the world through his TV shows.

Their tragic deaths, and others like them, have brought new awareness to the rapidly growing public health problem of suicide in the US. These deaths have renewed the country’s conversation about the scope of the problem. The sad truth is that suicide is the 10th leading cause of death among all Americans, and among youth and young adults, suicide is the third leading cause of death.

I believe it’s time for us to pause and to ask the question why? Why are the suicide rates increasing so fast? And, are the increasing suicide rates linked to the seeming increase in demand for drugs such as marijuana, opioids and psychiatric medicine? As a public health researcher and epidemiologist who has studied these issues for a long time, I think there may be deeper issues to explore.

Suicide: more than a mental health issue

Suicide prevention is usually focused on the individual and within the context of mental health illness, which is a very limited approach. Typically, suicide is described as an outcome of depression, anxiety, and other mental health concerns including substance use. And, these should not be trivialized; these conditions can be debilitating and life-threatening and should receive treatment. (If you or someone you know need help, call the National Suicide Prevention Lifeline at 1-800-273-8255).

The info is here.

Tuesday, July 17, 2018

Social observation increases deontological judgments in moral dilemmas

Minwoo Lee, Sunhae Sul, Hackjin Kim
Evolution and Human Behavior
Available online 18 June 2018

Abstract

A concern for positive reputation is one of the core motivations underlying various social behaviors in humans. The present study investigated how experimentally induced reputation concern modulates judgments in moral dilemmas. In a mixed-design experiment, participants were randomly assigned to the observed vs. the control group and responded to a series of trolley-type moral dilemmas either in the presence or absence of observers, respectively. While no significant baseline differences in personality traits or moral decision style were found across the two groups of participants, our analyses revealed that social observation promoted deontological judgments especially for moral dilemmas involving direct physical harm (i.e., the personal moral dilemmas), yet with an overall decrease in decision confidence and significant prolongation of reaction time. Moreover, participants in the observed group, but not in the control group, showed increased sensitivity to warmth vs. competence trait words in the lexical decision task performed after the moral dilemma task. Our findings suggest that reputation concern, once triggered by the presence of potentially judgmental others, could activate a culturally dominant norm of warmth in various social contexts. This could, in turn, induce a series of goal-directed processes for self-presentation of warmth, leading to increased deontological judgments in moral dilemmas. The results of the present study provide insights into the reputational consequences of moral decisions that merit further exploration.

The article is here.

The Rise of the Robots and the Crisis of Moral Patiency

John Danaher
Pre-publication version of a paper in AI and Society

Abstract

This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.

The paper is here.

Monday, July 16, 2018

Moral fatigue: The effects of cognitive fatigue on moral reasoning

Shane Timmons and Ruth MJ Byrne
Quarterly Journal of Experimental Psychology
pp. 1–12

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgements compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgement that focuses on the harmful action, killing one person, but not when they make a judgement that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgements about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The article is here.

Mind-body practices and the self: yoga and meditation do not quiet the ego, but instead boost self-enhancement

Gebauer, J. E., Nehrlich, A. D., Stahlberg, D., et al.
Psychological Science, 1-22. (In Press)

Abstract

Mind-body practices enjoy immense public and scientific interest. Yoga and meditation are highly popular. Purportedly, they foster well-being by “quieting the ego” or, more specifically, curtailing self-enhancement. However, this ego-quieting effect contradicts an apparent psychological universal, the self-centrality principle. According to this principle, practicing any skill renders it self-central, and self-centrality breeds self-enhancement. We examined those opposing predictions in the first tests of mind-body practices’ self-enhancement effects. Experiment 1 followed 93 yoga students over 15 weeks, assessing self-centrality and self-enhancement after yoga practice (yoga condition, n = 246) and without practice (control condition, n = 231). Experiment 2 followed 162 meditators over 4 weeks (meditation condition: n = 246; control condition: n = 245). Self-enhancement was higher in the yoga (Experiment 1) and meditation (Experiment 2) conditions, and those effects were mediated by greater self-centrality. Additionally, greater self-enhancement mediated mind-body practices’ well-being benefits. Evidently, neither yoga nor meditation quiets the ego; instead, they boost self-enhancement.

The paper can be downloaded here.

Sunday, July 15, 2018

Should the police be allowed to use genetic information in public databases to track down criminals?

Bob Yirka
Phys.org
Originally posted June 8, 2018

Here is an excerpt:

The authors point out that there is no law forbidding what the police did—the genetic profiles came from people who willingly and of their own accord gave up their DNA data. But should there be? If you send a swab to Ancestry.com, for example, should the genetic profile they create be off-limits to anyone but you and them? It is doubtful that many who take such actions fully consider the ways in which their profile might be used. Most such companies routinely sell their data to pharmaceutical companies or others looking to use the data to make a profit, for example. Should they also be compelled to give up such data due to a court order? The authors suggest that if the public wants their DNA information to remain private, they need to contact their representatives and demand that legislation that lays out specific rules for data housed in public databases.

The article is here.

Saturday, July 14, 2018

10 Ways to Avoid False Memories

Christopher Chabris and Daniel Simons
Slate.com
Originally posted February 10, 2018

Here is an excerpt:

No one has, to our knowledge, tried to implant a false memory of being shot down in a helicopter. But researchers have repeatedly created other kinds of entirely false memory in the laboratory. Most famously, Elizabeth Loftus and Jacqueline Pickrell successfully convinced people that, as children, they had once been lost in a shopping mall. In another study, researchers Kimberly Wade, Maryanne Garry, Don Read, and Stephen Lindsay showed people a Photoshopped image of themselves as children, standing in the basket of a hot air balloon. Half of the participants later had either complete or partial false memories, sometimes “remembering” additional details from this event—an event that they never experienced. In a newly published study, Julia Shaw and Stephen Porter used structured interviews to convince 70 percent of their college student participants that they had committed a crime as an adolescent (theft, assault, or assault with a weapon) and that the crime had resulted in police contact. And outside the laboratory, people have fabricated rich and detailed memories of things that we can be almost 100 percent certain did not happen, such as having been abducted and impregnated by aliens.

Even memories for highly emotional events—like the Challenger explosion or the 9/11 attacks—can mutate substantially. As time passes, we can lose the link between things we’ve experienced and the details surrounding them; we remember the gist of a story, but we might not recall whether we experienced the events or just heard about them from someone else. We all experience this failure of “source memory” in small ways: Maybe you tell a friend a great joke that you heard recently, only to learn that he’s the one who told it to you. Or you recall having slammed your hand in a car door as a child, only to get into an argument over whether it happened instead to your sister. People sometimes even tell false stories directly to the people who actually experienced the original events, something that is hard to explain as intentional lying. (Just last month, Brian Williams let his exaggerated war story be told at a public event honoring one of the soldiers who had been there.)

The information is here.

Friday, July 13, 2018

Rorschach (regarding AI)

Michael Solana
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?

Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by choice? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost pure self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves — law, faith, society. The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — can not, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.

Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically they have found great success in doing this with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.

The article is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values, specifically fairness preferences, during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate the acquisition of these moral values is governed by a reinforcement mechanism, revealing it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.
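
The "reinforcement mechanism" mentioned in the abstract is the incremental update familiar from reinforcement-learning models. Here is a hedged sketch of one such update in Python, a simple delta rule rather than necessarily the authors' exact model.

def update_punishment_value(value, observed, alpha=0.3):
    # Delta rule: nudge one's own valuation of punishment toward the
    # preference just observed (1 = receiver prefers punishment, 0 = compensation).
    return value + alpha * (observed - value)

v = 0.1  # baseline: the participant barely endorses punishment
for feedback in [1, 1, 1, 1, 1]:  # one punitive receiver, trial after trial
    v = update_punishment_value(v, feedback)
    print(round(v, 2))  # 0.37, 0.56, 0.69, 0.78, 0.85: drifting toward punitive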

The research is here.

Wednesday, July 11, 2018

The Lifespan of a Lie

Ben Blum
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.

(cut)

But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.

The article is here.

Could Moral Enhancement Interventions be Medically Indicated?

Sarah Carter
Health Care Analysis
December 2017, Volume 25, Issue 4, pp 338–353

Abstract

This paper explores the position that moral enhancement interventions could be medically indicated (and so considered therapeutic) in cases where they provide a remedy for a lack of empathy, when such a deficit is considered pathological. In order to argue this claim, the question as to whether a deficit of empathy could be considered to be pathological is examined, taking into account the difficulty of defining illness and disorder generally, and especially in the case of mental health. Following this, Psychopathy and a fictionalised mental disorder (Moral Deficiency Disorder) are explored with a view to consider moral enhancement techniques as possible treatments for both conditions. At this juncture, having asserted and defended the position that moral enhancement interventions could, under certain circumstances, be considered medically indicated, this paper then goes on to briefly explore some of the consequences of this assertion. First, it is acknowledged that this broadening of diagnostic criteria in light of new interventions could fall foul of claims of medicalisation. It is then briefly noted that considering moral enhancement technologies to be akin to therapies in certain circumstances could lead to ethical and legal consequences and questions, such as those regarding regulation, access, and even consent.

The paper is here.

Tuesday, July 10, 2018

The Artificial Intelligence Ethics Committee

Zara Stone
Forbes.com
Originally published June 11, 2018

Here is an excerpt:

Back to the ethics problem: Some sort of bias is sadly inevitable in programming. “We humans all have a bias,” said computer scientist Ehsan Hoque, who leads the Human-Computer Interaction Lab at the University of Rochester. “There’s a study where judges make more favorable decisions after a lunch break. Machines have an inherent bias (as they are built by humans) so we need to empower users in ways to make decisions.”

For instance, Walworth's way of empowering his choices is by being conscious about what AI algorithms show him. “I recommend you do things that are counterintuitive,” he said. “For instance, read a spectrum of news, everything from Fox to CNN and The New York Times to combat the algorithm that decides what you see.” Use the Cambridge Analytica election scandal as an example here. Algorithms dictated what you’d see, how you’d see it and if more of the same got shown to you, and were manipulated by Cambridge Analytica to sway voters.

The move to a consciousness of ethical AI is both a top-down and bottom-up approach. “There’s a rising field of impact investing,” explained Walworth. “Investors and shareholders are demanding something higher than the bottom line, some accountability with the way they spend and invest money.”

The article is here.

Google to disclose ethical framework on use of AI

Richard Waters
The Financial Times
Originally published June 3, 2018

Here is an excerpt:

However, Google already uses AI in other ways that have drawn criticism, leading experts in the field and consumer activists to call on it to set far more stringent ethical guidelines that go well beyond not working with the military.

Stuart Russell, a professor of AI at the University of California, Berkeley, pointed to the company’s image search feature as an example of a widely used service that perpetuates preconceptions about the world based on the data in Google’s search index. For instance, a search for “CEOs” returns almost all white faces, he said.

“Google has a particular responsibility in this area because the output of its algorithms is so pervasive in the online world,” he said. “They have to think about the output of their algorithms as a kind of ‘speech act’ that has an effect on the world, and to work out how to make that effect beneficial.”

The information is here.

Monday, July 9, 2018

Technology and culture: Differences between the APA and ACA ethical codes

Firmin, M.W., DeWitt, K., Shell, A.L. et al.
Curr Psychol (2018). https://doi.org/10.1007/s12144-018-9874-y

Abstract

We conducted a section-by-section and line-by-line comparison of the ethical codes published by the American Psychological Association (APA) and the American Counseling Association (ACA). Overall, 144 differences exist between the two codes and, here we focus on two constructs where 36 significant differences exist: technology and culture. Of this number, three differences were direct conflicts between the APA and ACA ethical codes’ expectations for technology and cultural behavior. The other 33 differences were omissions in the APA code, meaning that specific elements in the ACA code were explicitly absent from the APA code altogether. Of the 36 total differences pertaining to technology and culture in the two codes, 27 differences relate to technology and APA does not address 25 of these 27 technology differences. Of the 36 total differences pertaining to technology and culture, nine differences relate to culture and APA does not address eight of these issues.

The information is here.

Learning from moral failure

Matthew Cashman & Fiery Cushman
In press: Becoming Someone New: Essays on Transformative Experience, Choice, and Change

Introduction

Pedagogical environments are often designed to minimize the chance of people acting wrongly; surely this is a sensible approach. But could it ever be useful to design pedagogical environments to permit, or even encourage, moral failure? If so, what are the circumstances where moral failure can be beneficial?  What types of moral failure are helpful for learning, and by what mechanisms? We consider the possibility that moral failure can be an especially effective tool in fostering learning. We also consider the obvious costs and potential risks of allowing or fostering moral failure. We conclude by suggesting research directions that would help to establish whether, when and how moral pedagogy might be facilitated by letting students learn from moral failure.

(cut)

Conclusion

Errors are an important source of learning, and educators often exploit this fact.  Failing helps to tune our sense of balance; Newtonian mechanics sticks better when we witness the failure of our folk physics. We consider the possibility that moral failure may also prompt especially strong or distinctive forms of learning.  First, and with greatest certainty, humans are designed to learn from moral failure through the feeling of guilt.  Second, and more speculatively, humans may be designed to experience moral failures by “testing limits” in a way that ultimately fosters an adaptive moral character.  Third—and highly speculatively—there may be ways to harness learning by moral failure in pedagogical contexts. Minimally, this might occur by imagination, observational learning, or the exploitation of spontaneous wrongful acts as “teachable moments”.

The book chapter is here.

Sunday, July 8, 2018

A Son’s Race to Give His Dying Father Artificial Immortality

James Vlahos
wired.com
Originally posted July 18, 2017

Here is an excerpt:

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.
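For the curious, the shape of the idea can be sketched in a few lines of Python. Google's actual system was a neural sequence-to-sequence model trained on the movie-dialog corpus; the toy below substitutes simple word-overlap retrieval over made-up (prompt, reply) pairs, purely to illustrate what it means to answer from a corpus rather than from hand-written rules.

# A deliberately tiny, hypothetical sketch of answering from a corpus.
# The Google project described above used a neural sequence-to-sequence
# model trained on 26 million lines of movie dialog; this toy replaces
# that with word-overlap retrieval over made-up (prompt, reply) pairs.

from collections import Counter
import math

CORPUS = [
    ("what is the purpose of living", "to live forever"),
    ("hello how are you", "fine thank you"),
    ("what do you do for a living", "i write software"),
]

def words(text):
    # Lowercase, strip basic punctuation, and count word occurrences.
    return Counter(text.lower().replace("?", " ").replace(".", " ").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(prompt):
    # Return the reply attached to the most similar stored prompt.
    pv = words(prompt)
    best_prompt, best_reply = max(CORPUS, key=lambda pair: cosine(pv, words(pair[0])))
    return best_reply

print(reply("What is the purpose of living?"))  # -> "to live forever"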

The article is here.

Yes, I saw the Black Mirror episode with a similar theme.

Saturday, July 7, 2018

Making better decisions in groups

Dan Bang, Chris D. Frith
Royal Society Open Science
Published 16 August 2017
DOI: 10.1098/rsos.170193

Abstract

We review the literature to identify common problems of decision-making in individuals and groups. We are guided by a Bayesian framework to explain the interplay between past experience and new evidence, and the problem of exploring the space of hypotheses about all the possible states that the world could be in and all the possible actions that one could take. There are strong biases, hidden from awareness, that enter into these psychological processes. While biases increase the efficiency of information processing, they often do not lead to the most appropriate action. We highlight the advantages of group decision-making in overcoming biases and searching the hypothesis space for good models of the world and good solutions to problems. Diversity of group members can facilitate these achievements, but diverse groups also face their own problems. We discuss means of managing these pitfalls and make some recommendations on how to make better group decisions.
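The review is conceptual, but the Bayesian updating it invokes is easy to illustrate numerically. In this toy Python example (mine, not the authors' model), each group member reports an independent probability for a hypothesis and the group pools those opinions in log-odds space:

# Minimal numerical illustration (a toy example, not the authors' model)
# of Bayesian pooling of independent opinions. Each member reports a
# probability for hypothesis H; if their information is independent,
# their evidence adds in log-odds space, so three mildly confident
# members produce a confident group judgment.

import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pool(prior, member_estimates):
    # Each member's estimate is read as a posterior reached from the
    # shared prior; the evidence each contributes is the shift in log-odds.
    evidence = sum(log_odds(p) - log_odds(prior) for p in member_estimates)
    return sigmoid(log_odds(prior) + evidence)

print(round(pool(0.5, [0.7, 0.7, 0.7]), 3))  # -> 0.927, vs 0.7 for any one member

This is one formal reading of the advantage the authors attribute to groups: independent members need only be individually better than chance for the pooled judgment to be much better, though the same arithmetic shows why shared biases (correlated evidence) erase the benefit.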

The article is here.

Friday, July 6, 2018

Can we collaborate with robots or will they take our place at work?

TU/e Research Project
ethicsandtechnology.eu

Here is an excerpt:

Finding ways to collaborate with robots

In this project, the aim is to understand how robotisation in logistics can be advanced whilst maintaining workers’ sense of meaning in work and general well-being, thereby preventing or undoing resistance towards robotisation. Sven Nyholm says: “People typically find work meaningful if they work within a well-functioning team or if they view their work as serving some larger purpose beyond themselves. Could human-robot collaborations be experienced as team-work? Would it be any kind of mistake to view a robot as a colleague? The thought of having a robot as a collaborator can seem a little weird. And yes, the increasingly robotised work environment is scary, but it is exciting at the same time. Further robotisation at work could give workers important new responsibilities and skills, which can in turn strengthen the feeling of doing meaningful work.”

The information is here.

People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more

Tom Stafford
Blog Post: Research Digest
Originally posted May 31, 2018

Here is an excerpt:

Finally and more promisingly, the researchers found some evidence that belief superiority can be dented by feedback. If participants were told that people with beliefs like theirs tended to score poorly on topic knowledge, or if they were directly told that their score on the topic knowledge quiz was low, this not only reduced their belief superiority, it also caused them to seek out the kind of challenging information they had previously neglected in the headlines task (though the evidence for this behavioural effect was mixed).

The studies all involved participants accessed via Amazon’s Mechanical Turk, allowing the researchers to work with large samples of Americans for each experiment. Their findings mirror the well-known Dunning-Kruger effect – Kruger and Dunning showed that for domains such as judgments of grammar, humour or logic, the most skilled tend to underestimate their ability, while the least skilled overestimate it. Hall and Raimi’s research extends this to the realm of political opinions (where objective assessment of correctness is not available), showing that the belief your opinion is better than other people’s tends to be associated with overestimation of your relevant knowledge.

The article is here.

Thursday, July 5, 2018

Crispr Fans Fight for Egalitarian Access to Gene Editing

Megan Molteni
Wired.com
Originally posted June 6, 2018

Here is an excerpt:

Like any technology, the applications of gene editing tech will be shaped by the values of the societies that wield it. Which is why a conversation about equitable access to Crispr quickly becomes a conversation about redistributing some of the wealth and education that has been increasingly concentrated in smaller and smaller swaths of the population over the past three decades. Today the richest 1 percent of US families control a record-high 38.6 percent of the country’s wealth. The fear is that Crispr won’t disrupt current inequalities, it’ll just perpetuate them.

(cut)

CrisprCon excels at providing a platform to raise these kinds of big picture problems and moral quagmires. But in its second year, it was still light on solutions. The most concrete examples came from a panel of people pursuing ecotechnologies—genetic methods for changing, controlling, or even exterminating species in the wild (disclosure: I moderated the panel).

The information is here.

On the role of descriptive norms and subjectivism in moral judgment

Andrew E. Monroe, Kyle D. Dillon, Steve Guglielmo, Roy F. Baumeister
Journal of Experimental Social Psychology
Volume 77, July 2018, Pages 1-10.

Abstract

How do people evaluate moral actions, by referencing objective rules or by appealing to subjective, descriptive norms of behavior? Five studies examined whether and how people incorporate subjective, descriptive norms of behavior into their moral evaluations and mental state inferences of an agent's actions. We used experimental norm manipulations (Studies 1–2, 4), cultural differences in tipping norms (Study 3), and behavioral economic games (Study 5). Across studies, people increased the magnitude of their moral judgments when an agent exceeded a descriptive norm and decreased the magnitude when an agent fell below a norm (Studies 1–4). Moreover, this differentiation was partially explained via perceptions of agents' desires (Studies 1–2); it emerged only when the agent was aware of the norm (Study 4); and it generalized to explain decisions of trust for real monetary stakes (Study 5). Together, these findings indicate that moral actions are evaluated in relation to what most other people do rather than solely in relation to morally objective rules.

Highlights

• Five studies tested the impact of descriptive norms on judgments of blame and praise.

• What is usual, not just what is objectively permissible, drives moral judgments.

• Effects replicate even when holding behavior constant and varying descriptive norms.

• Agents had to be aware of a norm for it to impact perceivers' moral judgments.

• Effects generalize to explain decisions of trust for real monetary stakes.

The research is here.

Wednesday, July 4, 2018

Curiosity and What Equality Really Means

Atul Gawande
The New Yorker
Originally published June 2, 2018

Here is an excerpt:

We’ve divided the world into us versus them—an ever-shrinking population of good people against bad ones. But it’s not a dichotomy. People can be doers of good in many circumstances. And they can be doers of bad in others. It’s true of all of us. We are not sufficiently described by the best thing we have ever done, nor are we sufficiently described by the worst thing we have ever done. We are all of it.

Regarding people as having lives of equal worth means recognizing each as having a common core of humanity. Without being open to their humanity, it is impossible to provide good care to people—to insure, for instance, that you’ve given them enough anesthetic before doing a procedure. To see their humanity, you must put yourself in their shoes. That requires a willingness to ask people what it’s like in those shoes. It requires curiosity about others and the world beyond your boarding zone.

We are in a dangerous moment because every kind of curiosity is under attack—scientific curiosity, journalistic curiosity, artistic curiosity, cultural curiosity. This is what happens when the abiding emotions have become anger and fear. Underneath that anger and fear are often legitimate feelings of being ignored and unheard—a sense, for many, that others don’t care what it’s like in their shoes. So why offer curiosity to anyone else?

Once we lose the desire to understand—to be surprised, to listen and bear witness—we lose our humanity. Among the most important capacities that you take with you today is your curiosity. You must guard it, for curiosity is the beginning of empathy. When others say that someone is evil or crazy, or even a hero or an angel, they are usually trying to shut off curiosity. Don’t let them. We are all capable of heroic and of evil things. No one and nothing that you encounter in your life and career will be simply heroic or evil. Virtue is a capacity. It can always be lost or gained. That potential is why all of our lives are of equal worth.

The article is here.

Tuesday, July 3, 2018

What does a portrait of Erica the android tell us about being human?

Nigel Warburton
The Guardian
Originally posted September 9, 2017

Here are two excerpts:

Another traditional answer to the question of what makes us so different, popular for millennia, has been that humans have a non-physical soul, one that inhabits the body but is distinct from it, an ethereal ghostly wisp that floats free at death to enjoy an after-life which may include reunion with other souls, or perhaps a new body to inhabit. To many of us, this is wishful thinking on an industrial scale. It is no surprise that survey results published last week indicate that a clear majority of Britons (53%) describe themselves as non-religious, with a higher percentage of younger people taking this enlightened attitude. In contrast, 70% of Americans still describe themselves as Christians, and a significant number of those have decidedly unscientific views about human origins. Many, along with St Augustine, believe that Adam and Eve were literally the first humans, and that everything was created in seven days.

(cut)

Today a combination of evolutionary biology and neuroscience gives us more plausible accounts of what we are than Descartes did. These accounts are not comforting. They reverse the priority and emphasise that we are animals and provide no evidence for our non-physical existence. Far from it. Nor are they in any sense complete, though there has been great progress. Since Charles Darwin disabused us of the notion that human beings are radically different in kind from other apes by outlining in broad terms the probable mechanics of evolution, evolutionary psychologists have been refining their hypotheses about how we became this kind of animal and not another, why we were able to surpass other species in our use of tools, communication through language and images, and ability to pass on our cultural discoveries from generation to generation.

The article is here.

Monday, July 2, 2018

Eugenics never went away

Robert A Wilson
aeon.com
Originally posted June 5, 2018

Here is an excerpt:

Eugenics survivors are those who have lived through eugenic interventions, which typically begin with being categorised as less than fully human – as ‘feeble-minded’, as belonging to a racialised ethnic group assumed to be inferior, or as having a medical condition, such as epilepsy, presumed to be heritable. That categorisation enters them into a eugenics pipeline.

Each such pipeline has a distinctive shape. The Alberta pipeline involved institutionalisation at training schools for the ‘feeble-minded’ or mentally deficient, followed by a recommendation of sterilisation by a medical superintendent, which was then approved by the Eugenics Board, and executed without consent. Alberta’s introduction of guidance clinics also allowed eugenic sterilisation to reach into the non-institutionalised population, particularly schools.

What roles have the stories of eugenics survivors played in understanding eugenics? For the most part and until recently, these first-person narratives have been absent from the historical study of eugenics. On its traditional view, according to which eugenics ended around 1945, this is entirely understandable. The number of survivors dwindles over time, and those who survived often chose, as did many in Alberta, to bracket off rather than re-live their past. Yet the limited presence of survivor narratives in the study of eugenics also stems from a corresponding limit in the safe and receptive audience for those narratives.

What Does an Infamous Biohacker’s Death Mean for the Future of DIY Science?

Kristen Brown
The Atlantic
Originally posted May 5, 2018

Here are two excerpts:

At just 28, Traywick was among the most infamous figures in the world of biohacking—the grandiose CEO of a tiny company called Ascendance Biomedical whose goal was to develop and test new gene therapies without the expense and rigor of clinical trials or the oversight of the FDA. Traywick wanted to cure cancer, herpes, HIV, and even aging, and he wanted to do it without having to deal with the rules and safety precautions of regulators and industry standards.

“There are breakthroughs in the world that we can actually bring to market in a way that wouldn’t require us to butt up against the FDA’s walls, but instead walk around them,” Traywick told me the first time I met him in person, during a biotech conference in San Francisco last January.

To “walk around” regulators, Ascendance and other biohackers typically rely on testing products on themselves. Self-experimentation, although strongly discouraged by agencies like the FDA, makes it difficult for regulators to intervene. The rules that govern drug development simply aren’t written to oversee what an individual might do to themselves.

(cut)

The biggest shame, said Zayner, is that we’ll never get the chance to see how Traywick might have matured once he’d been in the biohacking sphere a little longer.

Whatever their opinion of Traywick, everyone who knew him agreed that he was motivated by an extreme desire to make drugs more widely available for those who need them.

The information is here.

Sunday, July 1, 2018

What Trump Administration Corruption Lays Bare: Ineffectual Ethics Rules

Eliza Newlin Carney
The American Prospect
Originally published June 28, 2018

Here is an excerpt:

What’s most stunning about Pruitt’s never-ending ethics saga is not the millions in taxpayer dollars he wasted on first-class, military and private travel to exotic locales, on ‘round the clock security details and on over-the-top office furnishings. The real shocker is that federal ethics officials, having amassed an extraordinary paper trail showing that Pruitt violated multiple rules that bar self-dealing, employee retaliation, unauthorized pay raises and more, have been essentially helpless to do anything about it.

And therein lies the root problem exposed by this administration’s utter disregard for ethics norms: Executive Branch ethics laws are alarmingly weak and out of date. For decades, ethics watchdogs have warned Congress that a patchwork of agencies and officers scattered throughout the government lack the resources and authority to really police federal ethics violations. But since past administrations have typically paid a bit more attention to the Office of Government Ethics (OGE), which oversees executive branch ethics programs, the holes in federal oversight have gone largely unnoticed.

But now that we have a president who, along with much of his cabinet, appears entirely impervious to the OGE’s guidelines and warnings, as well as to a torrent of unfavorable news coverage, the system’s shortfalls have become impossible to ignore. In theory, the Justice Department, the Office of White House Counsel, or Congress could fill in the gaps to help check this administration’s abuses. But none of Trump’s Hill allies or administration appointees has shown the slightest inclination to hold him to account.

The information is here.