Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, October 16, 2017

Can we teach robots ethics?

Dave Edmonds
BBC.com
Originally published October 15, 2017

Here is an excerpt:

However, machine learning throws up problems of its own. One is that the machine may learn the wrong lessons. To give a related example, machines that learn language by mimicking humans have been shown to import various biases. Male and female names have different associations. The machine may come to believe that a John or Fred is more suitable to be a scientist than a Joanna or Fiona. We would need to be alert to these biases, and to try to combat them.
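This kind of bias is easy to see in word embeddings, where each word is represented as a vector and related words end up close together. Below is a minimal sketch of the idea; the three-dimensional vectors and the resulting similarity gap are invented for illustration (real studies use pretrained embeddings such as word2vec or GloVe, with hundreds of dimensions):

import numpy as np

# Toy vectors invented for illustration; real embeddings are learned
# from large corpora of human-written text and inherit its statistics.
vectors = {
    "john":      np.array([0.9, 0.1, 0.3]),
    "joanna":    np.array([0.1, 0.9, 0.3]),
    "scientist": np.array([0.8, 0.2, 0.4]),
}

def cosine(a, b):
    # Cosine similarity: closer to 1.0 means more strongly associated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the training text pairs male names with science more often, the
# "scientist" vector lands closer to "john" than to "joanna".
print(cosine(vectors["john"], vectors["scientist"]))    # ~0.98
print(cosine(vectors["joanna"], vectors["scientist"]))  # ~0.43

Auditing similarity gaps like this across many name pairs is essentially how the bias studies the excerpt alludes to quantify the problem.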

A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. This is an unsettling possibility, especially if robots are making crucial choices about our lives. A partial solution might be to insist that if things do go wrong, we have a way to audit the code - a way of scrutinising what's happened. Since it would be both silly and unsatisfactory to hold the robot responsible for an action (what's the point of punishing a robot?), a further judgement would have to be made about who was morally and legally culpable for a robot's bad actions.

One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations. The autonomous weapon won't make bad choices because it is angry. The autonomous car won't get drunk, or tired, it won't shout at the kids on the back seat. Around the world, more than a million people are killed in car accidents each year - most by human error. Reducing those numbers is a big prize.

The article is here.

No Child Left Alone: Moral Judgments about Parents Affect Estimates of Risk to Children

Thomas, A. J., Stanford, P. K., & Sarnecka, B. W. (2016).
Collabra, 2(1), 10.

Abstract

In recent decades, Americans have adopted a parenting norm in which every child is expected to be under constant direct adult supervision. Parents who violate this norm by allowing their children to be alone, even for short periods of time, often face harsh criticism and even legal action. This is true despite the fact that children are much more likely to be hurt, for example, in car accidents. Why then do bystanders call 911 when they see children playing in parks, but not when they see children riding in cars? Here, we present results from six studies indicating that moral judgments play a role: The less morally acceptable a parent’s reason for leaving a child alone, the more danger people think the child is in. This suggests that people’s estimates of danger to unsupervised children are affected by an intuition that parents who leave their children alone have done something morally wrong.

Here is part of the discussion:

The most important conclusion we draw from this set of experiments is the following: People don’t only think that leaving children alone is dangerous and therefore immoral. They also think it is immoral and therefore dangerous. That is, people overestimate the actual danger to children who are left alone by their parents, in order to better support or justify their moral condemnation of parents who do so.

This brings us back to our opening question: How can we explain the recent hysteria about unsupervised children, often wildly out of proportion to the actual risks posed by the situation? Our findings suggest that once a moralized norm of ‘No child left alone’ was generated, people began to feel morally outraged by parents who violated that norm. The need (or opportunity) to better support or justify this outrage then elevated people’s estimates of the actual dangers faced by children. These elevated risk estimates, in turn, may have led to even stronger moral condemnation of parents and so on, in a self-reinforcing feedback loop.

The article is here.

Sunday, October 15, 2017

Official sends memo to agency leaders about ethical conduct

Avery Anapol
The Hill
Originally published October 10, 2017

The head of the Office of Government Ethics is calling on the leaders of government agencies to promote an “ethical culture.”

David Apol, acting director of the ethics office, sent a memo to agency heads titled, “The Role of Agency Leaders in Promoting an Ethical Culture.” The letter was sent to more than 100 agency heads, CNN reported.

“It is essential to the success of our republic that citizens can trust that your decisions and the decisions made by your agency are motivated by the public good and not by personal interests,” the memo reads.

Several government officials are under investigation for their use of chartered planes for government business.

One Cabinet official, former Health secretary Tom Price, resigned over his use of private jets. Treasury Secretary Steven Mnuchin is also under scrutiny for his travels.

“I am deeply concerned that the actions of some in Government leadership have harmed perceptions about the importance of ethics and what conduct is, and is not, permissible,” Apol wrote.

The memo includes seven suggested actions that Apol says leaders should take to strengthen the ethical culture in their agencies. The suggestions include putting ethics officials in senior leadership meetings, and “modeling a ‘Should I do it?’ mentality versus a ‘Can I do it?’ mentality.”

The article is here.

Saturday, October 14, 2017

Who Sees What as Fair? Mapping Individual Differences in Valuation of Reciprocity, Charity, and Impartiality

Laura Niemi and Liane Young
Social Justice Research

When scarce resources are allocated, different criteria may be considered: impersonal allocation (impartiality), the needs of specific individuals (charity), or the relational ties between individuals (reciprocity). In the present research, we investigated how people’s perspectives on fairness relate to individual differences in interpersonal orientations. Participants evaluated the fairness of allocations based on (a) impartiality, (b) charity, and (c) reciprocity. To assess interpersonal orientations, we administered measures of dispositional empathy (i.e., empathic concern and perspective-taking) and Machiavellianism. Across two studies, Machiavellianism correlated with higher ratings of reciprocity as fair, whereas empathic concern and perspective-taking correlated with higher ratings of charity as fair. We discuss these findings in relation to recent neuroscientific research on empathy, fairness, and moral evaluations of resource allocations.

The article is here.

Friday, October 13, 2017

Moral Distress: A Call to Action

The Editor
AMA Journal of Ethics. June 2017, Volume 19, Number 6: 533-536.

During medical school, I was exposed for the first time to ethical considerations that stemmed from my new role in the direct provision of patient care. Ethical obligations were now both personal and professional, and I had to navigate conflicts between my own values and those of patients, their families, and other members of the health care team. However, I felt paralyzed by factors such as my relative lack of medical experience, low position in the hospital hierarchy, and concerns about evaluation. I experienced a profound and new feeling of futility and exhaustion, one that my peers also often described.

I have since realized that this experience was likely “moral distress,” a phenomenon originally described by Andrew Jameton in 1984. For this issue, the following definition, adapted from Jameton, will be used: moral distress occurs when a clinician makes a moral judgment about a case in which he or she is involved and an external constraint makes it difficult or impossible to act on that judgment, resulting in “painful feelings and/or psychological disequilibrium”. Moral distress has subsequently been shown to be associated with burnout, which includes poor coping mechanisms such as moral disengagement, blunting, denial, and interpersonal conflict.

Moral distress as originally conceived by Jameton pertained to nurses and has been extensively studied in the nursing literature. However, until a few years ago, the literature had been silent on the moral distress of medical students and physicians.

The article is here.

Automation on our own terms

Benedict Dellot and Fabian Wallace-Stephens
Medium.com
Originally published September 17, 2017

Here is an excerpt:

There are three main risks of embracing AI and robotics unreservedly:
  1. A rise in economic inequality — To the extent that technology deskills jobs, it will put downward pressure on earnings. If jobs are removed altogether as a result of automation, the result will be greater returns for those who make and deploy the technology, as well as the elite workers left behind in firms. The median OECD country has already seen a decrease in its labour share of income of about 5 percentage points since the early 1990s, with capital’s share swallowing the difference. Another risk here is market concentration. If large firms continue to adopt AI and robotics at a faster rate than small firms, they will gain enormous efficiency advantages and as a result could take an excessive share of markets. Automation could lead to oligopolistic markets, where a handful of firms dominate at the expense of others.
  2. A deepening of geographic disparities — Since the computer revolution of the 1980s, cities that specialise in cognitive work have gained a comparative advantage in job creation. In 2014, 5.5 percent of all UK workers operated in new job types that emerged after 1990, but the figure for workers in London was almost double that at 9.8 percent. The ability of cities to attract skilled workers, as well as the diverse nature of their economies, makes them better placed than rural areas to grasp the opportunities of AI and robotics. The most vulnerable locations will be those that are heavily reliant on a single automatable industry, such as parts of the North East that have a large stock of call centre jobs.
  3. An entrenchment of demographic biases — If left untamed, automation could disadvantage some demographic groups. Recall our case study analysis of the retail sector, which suggested that AI and robotics might lead to fewer workers being required in bricks and mortar shops, but more workers being deployed in warehouse operative roles. Given women are more likely to make up the former and men the latter, automation in this case could exacerbate gender pay and job differences. It is also possible that the use of AI in recruitment (e.g. algorithms that screen CVs) could amplify workplace biases and block people from employment based on their age, ethnicity or gender.

Thursday, October 12, 2017

The Data Scientist Putting Ethics In AI

By Poornima Apte
The Daily Dose
Originally published September 25, 2017

Here is an excerpt:

Chowdhury’s other personal goal — to make AI accessible to everyone — is noble, but if the technology’s ramifications are not yet fully known, might it not also be dangerous? Doomsday scenarios — AI as the rapacious monster devouring all our jobs — put forward in the media may not be in our immediate futures, but Alexandra Whittington does worry that implicit human biases could make their way into the AI of the future — a problem that might be exacerbated if not accounted for early on, before any democratization of the tools occurs. Whittington is a futurist and foresight director at Fast Future. She points to a recent example of AI in law where the “robot-lawyer” was named Ross, and the legal assistant had a woman’s name, Cara. “You look at Siri and Cortana, they’re women, right?” Whittington says. “But they’re assistants, not the attorney or the accountant.” It’s the whole garbage-in, garbage-out theory, she says, cautioning against an overly idealistic approach toward the technology.

The article is here.

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
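For readers who want the formal statement: the 1999 information bottleneck objective seeks a compressed representation T of the input X that stays informative about the target Y, where I(·;·) denotes mutual information and β sets the trade-off between compression and prediction:

% Information bottleneck objective (Tishby, Pereira & Bialek, 1999):
% compress X into T while preserving what T says about Y.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

Minimizing I(X;T) squeezes out detail about the input, while the −β I(T;Y) term rewards keeping whatever predicts the label; Tishby's claim is that deep networks implicitly follow this trade-off during training.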

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond our questions of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked. Regardless of current sentiment, driverless cars are on their way. The real questions we should be asking as we edge closer to completely autonomous roadways lie in ethically complex areas. Among these areas of concern, one very difficult question stands out. Should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.

The article is here.

The guide psychologists gave carmakers to convince us it’s safe to buy self-driving cars

Olivia Goldhill
Quartz.com
Originally published September 17, 2017

Driverless cars sound great in theory. They have the potential to save lives, because humans are erratic, distracted, and often bad drivers. Once the technology is perfected, machines will be far better at driving safely.

But in practice, the notion of putting your life into the hands of an autonomous machine—let alone facing one as a pedestrian—is highly unnerving. Three out of four Americans are afraid to get into a self-driving car, an American Automobile Association survey found earlier this year.

Carmakers working to counter those fears and get driverless cars on the road have found an ally in psychologists. In a paper published this week in Nature Human Behaviour, three professors from MIT Media Lab, Toulouse School of Economics, and the University of California at Irvine discuss widespread concerns and suggest psychological techniques to help allay them:

Who wants to ride in a car that would kill them to save pedestrians?

First, they address the knotty problem of how self-driving cars will be programmed to respond if they’re in a situation where they must either put their own passenger or a pedestrian at risk. This is a real world version of an ethical dilemma called “The Trolley Problem.”

The article is here.

Tuesday, October 10, 2017

How AI & robotics are transforming social care, retail and the logistics industry

Benedict Dellot and Fabian Wallace-Stephens
RSA.org
Originally published September 18, 2017

Here is an excerpt:

The CHIRON project

CHIRON is a two-year project funded by Innovate UK. It strives to design care robotics for the future with a focus on dignity, independence and choice. CHIRON is a set of intelligent modular robotic systems, located in multiple positions around the home. Among its intended uses are helping people with personal hygiene tasks in the morning, getting them ready for the day, and supporting them in preparing meals in the kitchen. CHIRON’s various components can be mixed and matched to enable the customer to undertake a wide range of domestic and self-care tasks independently, or to enable a care worker to assist an increased number of customers.

The vision for CHIRON is to move from an ‘end of life’ institutional model, widely regarded as unsustainable and not fit for purpose, to a more dynamic and flexible market that offers people greater choice in the care sector when they require it.

The CHIRON project is being managed by a consortium led by Designability. The key technology partners are Bristol Robotics Laboratory and Shadow Robot Company, who have considerable expertise in conducting pioneering research and development in robotics. Award-winning social enterprise care provider Three Sisters Care will bring user-centred design to the core of the project. Smart Homes & Buildings Association will work to introduce the range of devices that will create CHIRON and make it a valuable presence in people’s homes.

The article is here.

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, October 9, 2017

Artificial Human Embryos Are Coming, and No One Knows How to Handle Them

Antonio Regalado
MIT Tech Review
September 19, 2017

Here is an excerpt:

Scientists at Michigan now have plans to manufacture embryoids by the hundreds. These could be used to screen drugs to see which cause birth defects, find others to increase the chance of pregnancy, or to create starting material for lab-generated organs. But ethical and political quarrels may not be far behind. “This is a hot new frontier in both science and bioethics. And it seems likely to remain contested for the coming years,” says Jonathan Kimmelman, a member of the bioethics unit at McGill University, in Montreal, and a leader of an international organization of stem-cell scientists.

What’s really growing in the dish? There’s no easy answer to that. In fact, no one is even sure what to call these new entities. In March, a team from Harvard University offered the catch-all “synthetic human entities with embryo-like features,” or SHEEFS, in a paper cautioning that “many new varieties” are on the horizon, including realistic mini-brains.

Shao, who is continuing his training at MIT, dug into the ethics question and came to his own conclusions. “Very early on in our research we started to pay attention to why are we doing this? Is it really necessary? We decided yes, we are trying to grow a structure similar to part of the human early embryo that is hard otherwise to study,” says Shao. “But we are not going to generate a complete human embryo. I can’t just consider my feelings. I have to think about society.”

The article is here.

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Saturday, October 7, 2017

Committee on Publication Ethics: Ethical Guidelines for Peer Reviewers

COPE Council.
Ethical guidelines for peer reviewers. 
September 2017. www.publicationethics.org

Peer reviewers play a role in ensuring the integrity of the scholarly record. The peer review process depends to a large extent on the trust and willing participation of the scholarly community and requires that everyone involved behaves responsibly and ethically. Peer reviewers play a central and critical part in the peer review process, but may come to the role without any guidance and be unaware of their ethical obligations. Journals have an obligation to provide transparent policies for peer review, and reviewers have an obligation to conduct reviews in an ethical and accountable manner. Clear communication between the journal and the reviewers is essential to facilitate consistent, fair and timely review. COPE has heard cases from its members related to peer review issues and bases these guidelines, in part, on the collective experience and wisdom of the COPE Forum participants. It is hoped they will provide helpful guidance to researchers, be a reference for editors and publishers in guiding their reviewers, and act as an educational resource for institutions in training their students and researchers.

Peer review, for the purposes of these guidelines, refers to reviews provided on manuscript submissions to journals, but can also include reviews for other platforms and apply to public commenting that can occur pre- or post-publication. Reviews of other materials such as preprints, grants, books, conference proceeding submissions, registered reports (preregistered protocols), or data will have a similar underlying ethical framework, but the process will vary depending on the source material and the type of review requested. The model of peer review will also influence elements of the process.

The guidelines are here.

Trump Administration Rolls Back Birth Control Mandate

Robert Pear, Rebecca R. Ruiz, and Laurie Goodstein
The New York Times
Originally published October 6, 2017

The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people.

The twin actions, by the Department of Health and Human Services and the Justice Department, were meant to carry out a promise issued by President Trump five months ago, when he declared in the Rose Garden that “we will not allow people of faith to be targeted, bullied or silenced anymore.”

Attorney General Jeff Sessions quoted those words in issuing guidance to federal agencies and prosecutors, instructing them to take the position in court that workers, employers and organizations may claim broad exemptions from nondiscrimination laws on the basis of religious objections.

At the same time, the Department of Health and Human Services issued two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions.

More than 55 million women have access to birth control without co-payments because of the contraceptive coverage mandate, according to a study commissioned by the Obama administration. Under the new regulations, hundreds of thousands of women could lose those benefits.

The article is here.

Italics added.  And, just when the abortion rate was at pre-1973 levels.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Lawsuit Over a Suicide Points to a Risk of Antidepressants

Roni Caryn Rabin
The New York Times
Originally published September 11, 2017

Here is an excerpt:

The case is a rare instance in which a lawsuit over a suicide involving antidepressants actually went to trial; many such cases are either dismissed or settled out of court, said Brent Wisner, of the law firm Baum Hedlund Aristei Goldman, which represented Ms. Dolin.

The verdict is also unusual because Glaxo, which has asked the court to overturn the verdict or to grant a new trial, no longer sells Paxil in the United States and did not manufacture the generic form of the medication Mr. Dolin was taking. The company argues that it should not be held liable for a pill it did not make.

Concerns about safety have long dogged antidepressants, though many doctors and patients consider the medications lifesavers.

Ever since they were linked to an increase in suicidal behaviors in young people more than a decade ago, all antidepressants, including Paxil, have carried a “black box” warning label, reviewed and approved by the Food and Drug Administration, saying that they increase the risk of suicidal thinking and behavior in children, teens and young adults under age 25.

The warning labels also stipulate that the suicide risk has not been seen in short-term studies in anyone over age 24, but urge close monitoring of all patients initiating drug treatment.

The article is here.

Thursday, October 5, 2017

Leadership Takes Self-Control. Here’s What We Know About It

Kai Chi (Sam) Yam, Huiwen Lian, D. Lance Ferris, Douglas Brown
Harvard Business Review
Originally published June 5, 2017

Here is an excerpt:

Our review identified a few consequences that are consistently linked to having lower self-control at work:
  1. Increased unethical/deviant behavior: Studies have found that when self-control resources are low, nurses are more likely to be rude to patients, tax accountants are more likely to engage in fraud, and employees in general engage in various forms of unethical behavior, such as lying to their supervisors, stealing office supplies, and so on.
  2. Decreased prosocial behavior: Depleted self-control makes employees less likely to speak up if they see problems at work, less likely to help fellow employees, and less likely to engage in corporate volunteerism.
  3. Reduced job performance: Lower self-control can lead employees to spend less time on difficult tasks, exert less effort at work, be more distracted (e.g., surfing the internet during working hours), and generally perform worse than they would had their self-control been normal.
  4. Negative leadership styles: Perhaps what’s most concerning is that leaders with lower self-control often exhibit counter-productive leadership styles. They are more likely to verbally abuse their followers (rather than using positive means to motivate them), more likely to build weak relationships with their followers, and they are less charismatic. Scholars have estimated that the cost of such negative and abusive behavior to corporations in the United States is $23.8 billion annually.
Our review makes clear that helping employees maintain self-control is an important task if organizations want to be more effective and ethical. Fortunately, we identified three key factors that can help leaders foster self-control among employees and mitigate the negative effects of losing self-control.

The article is here.

Biased Algorithms Are Everywhere, and No One Seems to Care

Will Knight
MIT Technology Review
Originally published July 12, 2017

Here is an excerpt:

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.

The article is here.

Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies
Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of procedural reliability: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.

A copy of the article is here.

Google Sets Limits on Addiction Treatment Ads, Citing Safety

Michael Corkery
The New York Times
Originally published September 14, 2017

As drug addiction soars in the United States, a booming business of rehab centers has sprung up to treat the problem. And when drug addicts and their families search for help, they often turn to Google.

But prosecutors and health advocates have warned that many online searches are leading addicts to click on ads for rehab centers that are unfit to help them or, in some cases, endangering their lives.

This week, Google acknowledged the problem — and started restricting ads that come up when someone searches for addiction treatment on its site. “We found a number of misleading experiences among rehabilitation treatment centers that led to our decision,” Google spokeswoman Elisa Greene said in a statement on Thursday.

Google has taken similar steps to restrict advertisements only a few times before. Last year it limited ads for payday lenders, and in the past it created a verification system for locksmiths to prevent fraud.

In this case, the restrictions will limit a popular marketing tool in the $35 billion addiction treatment business, affecting thousands of small-time operators.

The article is here.

Tuesday, October 3, 2017

VA About To Scrap Ethics Law That Helps Safeguards Veterans From Predatory For-Profit Colleges

Adam Linehan
Task and Purpose
Originally posted October 2, 2017

An ethics law that prohibits Department of Veterans Affairs employees from receiving money or owning a stake in for-profit colleges that rake in millions in G.I. Bill tuition has “illogical and unintended consequences,” according to VA, which is pushing to suspend the 50-year-old statute.

But veteran advocacy groups say suspending the law would make it easier for the for-profit education industry to exploit its biggest cash cow: veterans. 

In a proposal published in the Federal Register on Sept. 14, VA claims that the statute — which, according to The New York Times, was enacted following a string of scandals involving the for-profit education industry — is redundant due to the other conflict-of-interest laws that apply to all federal employees and provide sufficient safeguards.

Critics of the proposal, however, say that the statute provides additional regulations that protect against abuse and provide more transparency. 

“The statute is one of many important bipartisan reforms Congress implemented to protect G.I. Bill benefits from waste, fraud, and abuse,” William Hubbard, Student Veterans of America’s vice president of government affairs, said in an email to Task & Purpose. “A thoughtful and robust public conversation should be had to ensure that the interests of student veterans is the top of the priority list.”

The article is here.

Editor's Note: The swamp continues to grow under the current administration.

Facts Don’t Change People’s Minds. Here’s What Does

Ozan Varol
Heleo
Originally posted September 6, 2017

Here is an excerpt:

The mind doesn’t follow the facts. Facts, as John Adams put it, are stubborn things, but our minds are even more stubborn. Doubt isn’t always resolved in the face of facts for even the most enlightened among us, however credible and convincing those facts might be.

As a result of the well-documented confirmation bias, we tend to undervalue evidence that contradicts our beliefs and overvalue evidence that confirms them. We filter out inconvenient truths and arguments on the opposing side. As a result, our opinions solidify, and it becomes increasingly harder to disrupt established patterns of thinking.

We believe in alternative facts if they support our pre-existing beliefs. Aggressively mediocre corporate executives remain in office because we interpret the evidence to confirm the accuracy of our initial hiring decision. Doctors continue to preach the ills of dietary fat despite emerging research to the contrary.

If you have any doubts about the power of the confirmation bias, think back to the last time you Googled a question. Did you meticulously read each link to get a broad objective picture? Or did you simply skim through the links looking for the page that confirms what you already believed was true? And let’s face it, you’ll always find that page, especially if you’re willing to click through to Page 12 on the Google search results.

The article is here.

Monday, October 2, 2017

Cooperation in the Finitely Repeated Prisoner’s Dilemma

Matthew Embrey, Guillaume R. Fréchette, and Sevgi Yuksel
The Quarterly Journal of Economics
Published: 26 August 2017

Abstract

More than half a century after the first experiment on the finitely repeated prisoner’s dilemma, evidence on whether cooperation decreases with experience, as suggested by backward induction, remains inconclusive. This paper provides a meta-analysis of prior experimental research and reports the results of a new experiment to elucidate how cooperation varies with the environment in this canonical game. We describe forces that affect initial play (formation of cooperation) and unraveling (breakdown of cooperation). First, contrary to the backward induction prediction, the parameters of the repeated game have a significant effect on initial cooperation. We identify how these parameters impact the value of cooperation, as captured by the size of the basin of attraction of Always Defect, to account for an important part of this effect. Second, despite these initial differences, the evolution of behavior is consistent with the unraveling logic of backward induction for all parameter combinations. Importantly, despite the seemingly contradictory results across studies, this paper establishes a systematic pattern of behavior: subjects converge to use threshold strategies that conditionally cooperate until a threshold round; and conditional on establishing cooperation, the first defection round moves earlier with experience. Simulation results generated from a learning model estimated at the subject level provide insights into the long-term dynamics and the forces that slow down the unraveling of cooperation.
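The threshold strategies described in the abstract are easy to illustrate. Below is a minimal sketch (not the authors' estimated learning model; the payoff values are the standard textbook ones and the thresholds are invented) showing how an earlier defection threshold cuts cooperation short:

R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment payoffs

def play(rounds, threshold_a, threshold_b):
    # Each player conditionally cooperates until a personal threshold
    # round; once anyone defects, cooperation never resumes.
    score_a = score_b = 0
    cooperating = True
    for r in range(1, rounds + 1):
        a = cooperating and r < threshold_a
        b = cooperating and r < threshold_b
        if a and b:
            score_a += R; score_b += R
        elif a:
            score_a += S; score_b += T
        elif b:
            score_a += T; score_b += S
        else:
            score_a += P; score_b += P
        cooperating = a and b
    return score_a, score_b

print(play(10, 8, 8))  # (24, 24): cooperation holds until round 8
print(play(10, 5, 8))  # (22, 17): the earlier threshold unravels it sooner

The paper’s finding that the first defection round moves earlier with experience corresponds, in this toy picture, to the thresholds drifting downward as subjects play repeated supergames.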

The paper is here.

The Role of a “Common Is Moral” Heuristic in the Stability and Change of Moral Norms

Lindström, B., Jangard, S., Selbing, I., & Olsson, A. (2017).
Journal of Experimental Psychology: General.

Abstract

Moral norms are fundamental for virtually all social interactions, including cooperation. Moral norms develop and change, but the mechanisms underlying when, and how, such changes occur are not well-described by theories of moral psychology. We tested, and confirmed, the hypothesis that the commonness of an observed behavior consistently influences its moral status, which we refer to as the common is moral (CIM) heuristic. In 9 experiments, we used an experimental model of dynamic social interaction that manipulated the commonness of altruistic and selfish behaviors to examine the change of people’s moral judgments. We found that both altruistic and selfish behaviors were judged as more moral, and less deserving of punishment, when common than when rare, which could be explained by a classical formal model (social impact theory) of behavioral conformity. Furthermore, judgments of common versus rare behaviors were faster, indicating that they were computationally more efficient. Finally, we used agent-based computer simulations to investigate the endogenous population dynamics predicted to emerge if individuals use the CIM heuristic, and found that the CIM heuristic is sufficient for producing 2 hallmarks of real moral norms: stability and sudden changes. Our results demonstrate that commonness shapes our moral psychology through mechanisms similar to behavioral conformity, with wide implications for understanding the stability and change of moral norms.
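The agent-based result can be captured in a few lines. Here is a minimal sketch (not the authors' simulation; the population size, update rule, and noise level are all invented) in which each agent's chance of acting altruistically rises with how common altruism currently is, the core CIM-style conformity dynamic:

import random

random.seed(1)
N, STEPS = 100, 2000
altruist = [random.random() < 0.5 for _ in range(N)]  # True = altruistic act

for step in range(1, STEPS + 1):
    share = sum(altruist) / N
    i = random.randrange(N)
    # S-shaped conformity: the more common an act, the more moral it
    # seems, so the more likely this agent is to adopt it; the small
    # noise floor keeps rare behaviors alive and permits regime shifts.
    conform = share**2 / (share**2 + (1 - share)**2)
    altruist[i] = random.random() < (0.02 + 0.96 * conform)
    if step % 500 == 0:
        print(f"step {step}: altruistic share = {share:.2f}")

Because the conformity curve amplifies whichever behavior is in the majority, the population locks into long stable stretches near all-altruistic or all-selfish, and noise occasionally tips it into the opposite regime, a toy version of the stability and sudden changes the abstract mentions.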

The article is here.

Sunday, October 1, 2017

Future Frankensteins: The Ethics of Genetic Intervention

Philip Kitcher
Los Angeles Review of Books
Originally posted September 4, 2017

Here is an excerpt:

The more serious argument perceives risks involved in germline interventions. Human knowledge is partial, and so perhaps we will fail to recognize some dire consequence of eliminating a particular sequence from the genomes of all members of our species. Of course, it is very hard to envisage what might go wrong — in the course of human evolution, many DNA sequences have arisen and disappeared. Moreover, in this instance, assuming a version of CRISPR-Cas9 sufficiently reliable to use on human beings, we could presumably undo whatever damage we had done. But, a skeptic may inquire, why take any risk at all? Surely somatic interventions will suffice. No need to tamper with the germline, since we can always modify the bodies of the unfortunate people afflicted with troublesome sequences.

Doudna and Sternberg point out, in a different context, one reason why this argument fails: some genes associated with disease act too early in development (in utero, for example). There is a second reason for failure. In a world in which people are regularly rescued through somatic interventions, the percentage of later generations carrying problematic sequences is likely to increase, with the consequence that ever more resources would have to be devoted to editing the genomes of individuals.  Human well-being might be more effectively promoted through a program of germline intervention, freeing those resources to help those who suffer in other ways. Once again, allowing editing of eggs and sperm seems to be the path of compassion. (The problems could be mitigated if genetic testing and in vitro fertilization were widely available and widely used, leaving somatic interventions as a last resort for those who slipped through the cracks. But extensive medical resources would still be required, and encouraging — or demanding — pre-natal testing and use of IVF would introduce a problematic and invasive form of eugenics.)

The article is here.

Saturday, September 30, 2017

What is New In Psychotherapy & Counseling in the Last 10 Years

Sam Knapp and I will be presenting this unique blend of small group learning, research, and lecture.

It has been estimated that the half-life of knowledge for a professional psychologist is 9 years. Thus, professional psychologists need to work assiduously to keep up to date with changes in the field. This continuing education program strives to help by having participants reflect on the most significant changes in the field in the last 10 years. To facilitate this reflection, the presenter offers his update on the psychotherapy and counseling literature of the last 10 years as an opportunity for participants to reflect on and consider their perceptions of the important developments in the field. The program focuses on changes in psychotherapy and counseling and does not consider changes in other fields, except as they influence psychotherapy or counseling. There will be considerable participant interaction.

Ethics office: Anonymous gifts to legal defense funds are not allowed

Megan Wilson
The Hill
Originally posted September 28, 2017

The Office of Government Ethics (OGE), the federal government’s ethics watchdog, clarified its policy on legal defense funds on Thursday, stating that anonymous contributions should not be accepted.

The announcement comes after a report that suggested the OGE was departing from internal policy regarding the donations, paving the way for federal officials to accept anonymous donations from otherwise prohibited groups — such as lobbyists — to offset their legal bills.

In 1993, the OGE issued an informal advisory opinion that allowed for such donations because the federal employee “does not know who the paymasters are.”

Immediately after, the office acknowledged the problems associated with allowing prohibited individuals to give to legal defense funds anonymously and instead advised lawyers not to accept those contributions.

Then-OGE Director Stephen Potts told a congressional panel in 1994 that the agency “recognized that donor anonymity may be difficult to enforce in practice because there is nothing to prevent a donor disclosing to the employee that he or she contributed to the employee’s legal defense fund,” the advisory published Thursday notes.

The article is here.

Friday, September 29, 2017

How Silicon Valley is erasing your individuality

Franklin Foer
Washington Post
Originally posted September 8, 2017

Here is an excerpt:

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, and that isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, it’s clear that their worldview is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies think we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. (“Facebook stands for bringing us closer together and building a global community,” Zuckerberg wrote in one of his many manifestos.) By stitching the world together, they can cure its ills.

Rhetorically, the tech companies gesture toward individuality — to the empowerment of the “user” — but their worldview rolls over it. Even the ubiquitous invocation of users is telling: a passive, bureaucratic description of us. The big tech companies (the Europeans have lumped them together as GAFA: Google, Apple, Facebook, Amazon) are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility toward intellectual property. In the realm of economics, they justify monopoly by suggesting that competition merely distracts from the important problems like erasing language barriers and building artificial brains. Companies should “transcend the daily brute struggle for survival,” as Facebook investor Peter Thiel has put it.

The article is here.

The Dark Side of Morality: Group Polarization and Moral-Belief Formation

Marcus Arvan
University of Tampa

Most of us are accustomed to thinking of morality in a positive light. Morality, we say, is a matter of “doing good” and treating ourselves and each other “rightly.” However, moral beliefs and discourse also plausibly play a role in group polarization, the tendency of social groups to divide into progressively more extreme factions, each of which regards other groups to be “wrong.” Group polarization often occurs along moral lines and is known to have many disturbing effects: increasing racial prejudice among the already moderately prejudiced; leading group decisions to be more selfish, competitive, less trusting, and less altruistic than individual decisions; eroding public trust; leading juries to impose more severe punishments at trial; generating more extreme political decisions; and contributing to war, genocide, and other violent behavior.

This paper argues that three empirically supported theories of group polarization predict that polarization is likely caused in substantial part by a conception of morality that I call the Discovery Model: a model which holds that moral truths exist to be discovered through moral intuition, moral reasoning, or some other process.

The paper is here.

Thursday, September 28, 2017

How Much Do A Company's Ethics Matter In The Modern Professional Climate?

Larry Alton
Forbes
Originally posted September 12, 2017

More than ever, a company’s success depends on the talent it’s able to attract, but attracting the best talent is about more than just offering the best salary—or even the best benefits. Companies may have a lucrative offer for a prospective candidate, and a culture where they’ll feel at home, but how do their corporate ethics stack up against those of the competition?

This may not seem like the most important question to ask when you’re trying to hire someone for a position—especially one that might not be directly affected by the actions of your corporation as a whole—but the modern workplace is changing, as are American professionals’ values, and if you want to keep up, you need to know just how significant those ethical values are.

What Qualifies as “Ethics”?

What do I mean by “ethics”? This is a broad category, and subjective in nature, but generally, I’m referring to these areas:
  • Fraud and manipulation. This should be obvious, but ethical companies don’t engage in shady or manipulative financial practices, such as fraud, bribery, or insider trading. The problem here is that individual actions are often associated with the company as a whole, so any individual within your company who behaves in an unethical way could compromise the reputation of your company. Setting strict no-tolerance policies and taking proper disciplinary action can mitigate these effects.

What’s Wrong With Voyeurism?

David Boonin
What's Wrong?
Originally posted August 31, 2017

The publication last year of The Voyeur’s Motel, Gay Talese’s controversial account of a Denver area motel owner who purportedly spent several decades secretly observing the intimate lives of his customers, raised a number of difficult ethical questions.  Here I want to focus on just one: does the peeping Tom who is never discovered harm his victims?

The peeping Tom profiled in Talese’s book certainly doesn’t think so.  In an excerpt that appeared in the New Yorker in advance of the book’s publication, Talese reports that Gerald Foos, the proprietor in question, repeatedly insisted that his behavior was “harmless” on the grounds that his “guests were unaware of it.”  Talese himself does not contradict the subject of his account on this point, and Foos’s assertion seems to be grounded in a widely accepted piece of conventional wisdom, one that often takes the form of the adage that “what you don’t know can’t hurt you”.  But there’s a problem with this view of harm, and thus a problem with the view that voyeurism, when done successfully, is a harmless vice.

The blog post is here.

Wednesday, September 27, 2017

New York’s Highest Court Rules Against Physician-Assisted Suicide

Jacob Gershman
The Wall Street Journal
Originally posted September 7, 2017

New York’s highest court on Thursday ruled that physician-assisted suicide isn’t a fundamental right, rejecting a legal effort by terminally ill patients to decriminalize doctor-assisted suicide through the courts.

The state Court of Appeals, though, said it wouldn’t stand in the way if New York’s legislature were to decide that assisted suicide could be “effectively regulated” and pass legislation allowing terminally ill and suffering patients to kill themselves.

Physician-assisted suicide is illegal in most of the country. But advocates who support loosening the laws have been making gains. Doctor-assisted dying has been legalized in several states, most recently in California and Colorado, the former by legislation and the latter by a ballot measure approved by voters in November. Oregon, Vermont and Washington have enacted similar “end-of-life” measures. Washington, D.C., also passed an “assisted-dying” law last year.

Montana’s highest court in 2009 ruled that physicians who provide “aid in dying” are shielded from liability.

No state court has recognized “aid in dying” as a fundamental right.

The article is here.

How to Recognize Burnout Before You’re Burned Out

Kenneth R. Rosen
The New York Times
Originally published September 5, 2017

Here is an excerpt:

In today’s era of workplace burnout, achieving a simpatico work-life relationship seems practically out of reach. Being tired, ambivalent, stressed, cynical and overextended has become a normal part of a working professional life. The General Social Survey of 2016, a nationwide survey that since 1972 has tracked the attitudes and behaviors of American society, found that 50 percent of respondents are consistently exhausted because of work, compared with 18 percent two decades ago.

Where once the term burnout was applied exclusively to health care workers, police officers, firefighters, paramedics or social workers who deal with trauma and human services — think Graham Greene’s novel “A Burnt-Out Case,” about a doctor in the Belgian Congo, a book that gave rise to the term colloquially — the term has since expanded to workers who are now part of a more connected, hyperactive and overcompensating work force.

But occupational burnout goes beyond needing a simple vacation or a family retreat, and many experts, psychologists and institutions, including the Centers for Disease Control and Prevention, highlight long-term and unresolvable burnout as not a symptom but rather a major health concern. (Though it does not appear in the Diagnostic and Statistical Manual of Mental Disorders, which outlines psychiatric disorders, it does appear in the International Statistical Classification of Diseases and Related Health Problems, a classification used by the World Health Organization.)

“We’re shooting ourselves in the foot,” Ms. Seppala told me. “Biologically we are not meant to be in that high-stress mode all the time. We got lost in this idea that the only way to be productive is to be on the go-go-go mode.”

The article is here.

Tuesday, September 26, 2017

The Influence of War on Moral Judgments about Harm

Hanne M Watkins and Simon M Laham
Preprint

Abstract

How does war influence moral judgments about harm? While the general rule is “thou shalt not kill,” war appears to provide an unfortunately common exception to the moral prohibition on intentional harm. In three studies (N = 263, N = 557, N = 793), we quantify the difference in moral judgments across peace and war contexts, and explore two possible explanations for the difference. Taken together, the findings of the present studies have implications for moral psychology researchers who use war-based scenarios to study broader cognitive or affective processes. If the war context changes judgments of moral scenarios by triggering group-based reasoning or altering the perceived structure of the moral event, using such scenarios to make “decontextualized” claims about moral judgment may not be warranted.

Here is part of the discussion.

A number of researchers have begun to investigate how social contexts may influence moral judgment, whether those social contexts are grounded in groups (Carnes et al., 2015; Ellemers & van den Bos, 2009) or relationships (Fiske & Rai, 2014; Simpson, Laham, & Fiske, 2015). The war context is another specific context that influences moral judgments: in the present study we found that the intergroup nature of war influenced people’s moral judgments about harm in war – even if they belonged to neither of the two groups actually at war – and that the usually robust difference between switch and footbridge scenarios was attenuated in the war context. One implication of these findings is that some caution may be warranted when using war-based scenarios for studying morality in general. As mentioned in the introduction, scenarios set in war are often used in the study of broad domains or general processes of judgment (e.g. Graham et al., 2009; Phillips & Young, 2011; Piazza et al., 2013). Given the interaction of war context with intergroup considerations and with the construed structure of the moral event in the present studies, researchers are well advised to avoid making generalizations to morality writ large on the basis of war-related scenarios (see also Bauman, McGraw, Bartels, & Warren, 2014; Bloom, 2011).

The preprint is here.

Drug company faked cancer patients to sell drug

Aaron M. Kessler
CNN.com
Originally published September 6, 2017

When Insys Therapeutics got approval to sell an ultra-powerful opioid for cancer patients with acute pain in 2012, it soon discovered a problem: finding enough cancer patients to use the drug.

To boost sales, the company allegedly took patients who didn't have cancer and made it look like they did.

The drug maker used a combination of tactics, such as falsifying medical records, misleading insurance companies and providing kickbacks to doctors in league with the company, according to a federal indictment and ongoing congressional investigation by Sen. Claire McCaskill, a Democrat from Missouri.

The new report by McCaskill's office released Wednesday includes allegations about just how far the company went to push prescriptions of its sprayable form of fentanyl, Subsys.

Because of the high cost associated with Subsys, most insurers wouldn't pay for it unless it was approved in advance. That process, likely familiar to anyone who's taken an expensive medication, is called "prior-authorization."

The article is here.

Monday, September 25, 2017

Science debate: Should we embrace an enhanced future?

Alexander Lees
BBC.com
Originally posted September 9, 2017

Here is an excerpt:

Are we all enhanced?

Most humans are now enhanced to be resistant to many infectious diseases. Vaccination is human enhancement. Apart from "anti-vaxxers" - as those who lobby against childhood inoculations are often dubbed - most of us are content to participate. And society as a whole benefits from being free of those diseases.

So what if we took that a pharmaceutical step further? What if, as well as vaccines against polio, mumps, measles, rubella and TB, everyone also "upgraded" by taking drugs to modify their behaviour? Calming beta-blocker drugs could reduce aggression - perhaps even helping to defuse racial tension. Or what if we were all prescribed the hormone oxytocin, a substance known to enhance social and family bonds - just to help us all get along a little better?

Would society function better with these chemical tweaks? And might those who opt out become pariahs for not helping to build a better world - for not wanting to be "vaccinated" against anti-social behaviours?

And what if such chemical upgrades could not be made available to everyone, because of cost or scarcity? Should they be available to no one? An enhanced sense of smell might be useful for a career in wine tasting but not perhaps in rubbish disposal.

A case in point is military research - an arm of which is already an ongoing transhumanism experiment.

Many soldiers on the battlefield routinely take pharmaceuticals as cognitive enhancers to reduce the need to sleep and increase the ability to operate under stress. High tech exoskeletons, increasing strength and endurance, are no longer the realms of science fiction and could soon be in routine military use.

The article is here.

New class of drugs targets aging to help keep you healthy

Jacqueline Howard
CNN.com
Originally published September 5, 2017

Here is an excerpt:

"In the coming decades, I believe that health care will be transformed by this class of medicine and a whole set of diseases that your parents and grandparents have will be things you only see in movies or read in books, things like age-associated arthritis," said David, whose company was not involved in the new paper.

Yet he cautioned that, while many more studies may be on the horizon for senolytic drugs, some might not be successful.

"One thing that people tend to do is, they tend to overestimate things in the short run but then underestimate things in the long run, and I think that, like many fields, this suffers from that as well," David said.

"It will take a while," he said. "I think it's important to recognize that a drug discovery is among the most important of all human activities ... but it takes time, and there must be a recognition of that, and it takes patience."

The article is here.

Sunday, September 24, 2017

Ethics experts say Trump administration far from normal

Rachael Seeley Flores
The Center for Public Integrity
Originally published September 23, 2017

President Donald Trump’s young administration has already sharply diverged from the ethical norms that typically govern the executive branch, exposing vulnerabilities in the system, a small group of ethics experts and former government officials agreed Saturday.

The consensus emerged at a panel titled “Trump, Ethics and the Law” at the Texas Tribune Festival in Austin, Texas. The panel was moderated by Dave Levinthal, a senior reporter at the Center for Public Integrity.

“There have been untidy administrations in the past, but usually it takes a while to see these things develop,” said Ken Starr, a lawyer and judge who served as solicitor general under President George H.W. Bush and is best known for heading the investigation that led to the impeachment of President Bill Clinton.

Ethics laws are based on the idea that norms will be followed, said Walter Shaub, former director of the U.S. Office of Government Ethics (OGE).

“When they’re not followed, we suddenly discover how completely vulnerable our system is,” Shaub said.

The article is here.

The Bush Torture Scandal Isn’t Over

Daniel Engber
Slate.com
Originally published September 5, 2017

In June, a little-known academic journal called Teaching of Psychology published an article about the American Psychological Association’s role in the U.S. government’s war on terror and the interrogation of military detainees. Mitchell Handelsman’s seven-page paper, called “A Teachable Ethics Scandal,” suggested that the seemingly cozy relationship between APA officials and the Department of Defense might be used to illustrate numerous psychological concepts for students, including obedience, groupthink, terror management theory, group influence, and motivation.

By mid-July, Teaching of Psychology had taken steps to retract the paper. The thinking that went into that decision reveals a disturbing under-covered coda to a scandal that, for a time, was front-page news. In July 2015, then–APA President Nadine Kaslow apologized for the organization’s involvement in Bush-era enhanced interrogations. “This bleak chapter in our history,” she said, speaking for a group with more than 100,000 members and a nine-figure budget, “occurred over a period of years and will not be resolved in a matter of months.” Two years later, the APA’s attempt to turn the page has devolved into a vicious internecine battle in which former association presidents have taken aim at one another. At issue is the question of who (if anyone) should be blamed for giving the Bush administration what’s been called a “green light” to torture detainees—and when the APA will ever truly get past this scandal.

The article is here.

Saturday, September 23, 2017

Why Buddhism is True with Robert Wright

Scott Barry Kaufman
The Psychology Podcast
August 13, 2017

This week we’re excited to have Robert Wright on The Psychology Podcast. Robert is the New York Times best-selling author of Nonzero, The Moral Animal, The Evolution of God, and most recently Why Buddhism is True. He has also written for The New Yorker, The Atlantic, The New York Times, Time, Slate, and The New Republic, and has taught at The University of Pennsylvania and Princeton University, where he also created the online course Buddhism and Modern Psychology. Robert draws on his wide-ranging knowledge of science, religion, psychology, history and politics to figure out what makes humanity tick.

Note from John: If you are a psychologist and cannot read Why Buddhism is True, then this podcast is your next best option.  The book is really good and I highly recommend it.

Tom Price Flies Blind on Ethics

Editors
Bloomberg View
Originally published September 21, 2017

Under the lax ethical standards President Donald Trump brought to the White House, rampant conflicts of interest are treated with casual indifference. This disregard has sent a message to his entire administration that blurring lines -- between public and private, right and wrong -- will be not just tolerated but defended. At least one cabinet member appears to have taken the message to heart.

Health and Human Services Secretary Tom Price took five chartered flights last week, including one to a conference at a resort in Maine. Two of the flights -- round-trip from Washington to Philadelphia -- probably cost about $25,000, or roughly $24,750 more than the cost of an Amtrak ticket, for a trip that would have taken roughly the same amount of time. Total costs for the five flights are estimated to be at least $60,000.
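
Note: The charter-versus-Amtrak comparison implies some quick arithmetic worth making explicit (our back-of-the-envelope reading, not Bloomberg's): the two Philadelphia flights together cost about $25,000, a figure said to exceed the Amtrak fare by roughly $24,750, which puts the train ticket at about $250.

\[
\$25{,}000 \;-\; \$24{,}750 \;=\; \$250 \quad \text{(implied round-trip Amtrak fare)}
\]

If the five flights total at least $60,000, the remaining three flights would account for roughly $35,000.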

The department has yet to reveal how many times Price has flown by charter since being sworn into office. There would be no problem were he picking up the tab himself, as Education Secretary Betsy DeVos reportedly does. But cabinet secretaries -- other than for the Defense and State departments, who often ride in military planes -- typically fly commercial. Taxpayers should not have to foot the bill for charters except in emergency situations.

The article is here.

Friday, September 22, 2017

I Lie? We Lie! Why? Experimental Evidence on a Dishonesty Shift in Groups

Martin G. Kocher, Simeon Schudy, and Lisa Spantig
CESifo Working Paper Series No. 6008.

Abstract

Unethical behavior such as dishonesty, cheating and corruption occurs frequently in organizations or groups. Recent experimental evidence suggests that there is a stronger inclination to behave immorally in groups than individually. We ask if this is the case, and if so, why. Using a parsimonious laboratory setup, we study how individual behavior changes when deciding as a group member. We observe a strong dishonesty shift. This shift is mainly driven by communication within groups and turns out to be independent of whether group members face payoff commonality or not (i.e., whether other group members benefit from one’s lie). Group members come up with and exchange more arguments for being dishonest than for complying with the norm of honesty. Thereby, group membership shifts the perception of the validity of the honesty norm and of its distribution in the population.

The article is here.

3D bioprint me: a socioethical view of bioprinting human organs and tissues

Vermeulen N, Haddow G, Seymour T, et al
Journal of Medical Ethics 2017;43:618-624.

Abstract

In this article, we review the extant social science and ethical literature on three-dimensional (3D) bioprinting. 3D bioprinting has the potential to be a ‘game-changer’, printing human organs on demand and eliminating the need for living or deceased human donation or animal transplantation. Although the technology is not yet at the level required to bioprint an entire organ, 3D bioprinting may have a variety of other mid-term and short-term benefits that also have positive ethical consequences, for example, creating alternatives to animal testing, filling a therapeutic need for minors and avoiding species boundary crossing. Despite a lack of current socioethical engagement with the consequences of the technology, we outline what we see as some preliminary practical, ethical and regulatory issues that need tackling. These relate to managing public expectations and the continuing reliance on technoscientific solutions to diseases that affect high-income countries. Avoiding prescribing a course of action for the way forward in terms of research agendas, we do briefly outline one possible ethical framework, ‘Responsible Research Innovation’, as an oversight model should 3D bioprinting’s promises ever be realised. 3D bioprinting has a lot to offer in the course of time should it move beyond a conceptual therapy, but it is an area that requires ethical oversight, regulation and debate in the here and now. The purpose of this article is to begin that discussion.

The article is here.

Thursday, September 21, 2017

Jimmy Kimmel Monologues on Health Care Legislation

Jimmy Kimmel keeps it simple on Graham-Cassidy healthcare legislation.

When is a lie acceptable? Work and private life lying acceptance depends on its beneficiary

Katarzyna Cantarero, Piotr Szarota, E. Stamkou, M. Navas & A. del Carmen Dominguez Espinosa
The Journal of Social Psychology 
Pages 1-16. Published online: 14 Aug 2017

ABSTRACT

In this article we show that when analyzing attitudes towards lying in a cross-cultural setting, both the beneficiary of the lie (self vs. other) and the context (private life vs. professional domain) should be considered. In a study conducted in Estonia, Ireland, Mexico, The Netherlands, Poland, Spain, and Sweden (N = 1345), in which participants evaluated stories presenting various types of lies, we found it useful to rely on these dimensions. Results showed that in the joint sample the most acceptable were other-oriented lies concerning private life, then other-oriented lies in the professional domain, followed by egoistic lies in the professional domain; and the least acceptance was shown for egoistic lies regarding one’s private life. We found a negative correlation between acceptance of a behavior and the evaluation of its deceitfulness.

Here is an excerpt:

Research shows differences in reactions to moral transgressions depending on the culture of the respondent as culture influences our moral judgments (e.g., Gold, Colman, & Pulford, 2014; Graham, Meindl, Beall, Johnson, & Zhang, 2016). For example, when analyzing transgressions of community (e.g., hearing children talking with their teacher the same way as they do towards their peers) Indian participants showed more moral outrage than British participants (Laham, Chopra, Lalljee, & Parkinson, 2010). Importantly, one of the main reasons why we can observe cross-cultural differences in reactions to moral transgressions is that culture influences our perception of whether an act itself constitutes a moral transgression at all (Haidt, 2001; Haidt & Joseph, 2004; Shweder, Mahapatra, & Miller, 1987; Shweder, Much, Mahapatra, & Park, 1997). Haidt, Koller and Dias (1993) showed that Brazilian participants would perceive some acts of victimless yet offensive actions more negatively than did Americans. The authors argue that for American students some of the acts that were being evaluated (e.g., using an old flag of one’s country to clean the bathroom) fall outside the moral domain and are only a matter of social convention, whereas Brazilians would perceive them as morally wrong.

The paper is here.

Wednesday, September 20, 2017

What is moral injury, and how does it affect journalists covering bad stuff?

Thomas Ricks
Foreign Policy
Originally published September 5, 2017

Here is an excerpt:

They noted that moral injury is the damage done to a “person’s conscience or moral compass by perpetrating, witnessing, or failing to prevent acts that transgress personal moral and ethical values or codes of conduct.”

While not all journalists were affected the same way, the most common reactions were feelings of guilt at not having done enough personally to help refugees and shame at the behavior of others, such as local authorities, they wrote.

Journalists with children had more moral injury-related distress while those working alone said they were more likely to have acted in ways that violated their own moral code. Those who said they had not received enough support from their organization were more likely to admit seeing things they perceived as morally wrong. Less control over resources to report on the crisis also correlated significantly with moral injury. And moral injury scores correlated significantly with guilt. Greater guilt, in turn, was noted by journalists covering the story close to home and by those who had assisted refugees, the report added.

Feinstein and Storm wrote that moral injury can cause “considerable emotional upset.” They noted that journalists reported symptoms of intrusion. While they didn’t go into detail, intrusion can mean flashbacks, nightmares and unwanted memories. These can disrupt normal functioning. In my view, guilt and shame can also be debilitating.

The article is here.

Companies should treat cybersecurity as a matter of ethics

Thomas Lee
The San Francisco Chronicle
Originally posted September 2, 2017

Here is an excerpt:

An ethical code will force companies to rethink how they approach research and development. Instead of making stuff first and then worrying about data security later, companies will start from the premise that they need to protect consumer privacy before they start designing new products and services, Harkins said.

There is precedent for this. Many professional organizations like the American Medical Association and American Bar Association require members to follow a code of ethics. For example, doctors must pledge above all else not to harm a patient.

A code of ethics for cybersecurity will no doubt slow the pace of innovation, said Maurice Schweitzer, a professor of operations, information and decisions at the University of Pennsylvania’s Wharton School.

Ultimately, though, following such a code could boost companies’ reputations, Schweitzer said. Given the increasing number and severity of hacks, consumers will pay a premium for companies dedicated to security and privacy from the get-go, he said.

In any case, what’s wrong with taking a pause so we can catch our breath? The ethical quandaries technology poses to mankind are only going to get more complex as we increasingly outsource our lives to thinking machines.

That’s why a code of ethics is so important. Technology may come and go, but right and wrong never changes.

The article is here.

Tuesday, September 19, 2017

Massive genetic study shows how humans are evolving

Bruno Martin
Nature
Originally published 06 September 2017

Here is an excerpt:

Why these late-acting mutations might lower a person’s genetic fitness — their ability to reproduce and spread their genes — remains an open question.

The authors suggest that for men, it could be that those who live longer can have more children, but this is unlikely to be the whole story. So scientists are considering two other explanations for why longevity is important. First, parents surviving into old age in good health can care for their children and grandchildren, increasing the later generations’ chances of surviving and reproducing. This is sometimes known as the ‘grandmother hypothesis’, and may explain why humans tend to live long after menopause.

Second, it’s possible that genetic variants that are explicitly bad in old age are also harmful — but more subtly — earlier in life. “You would need extremely large samples to see these small effects,” says Iain Mathieson, a population geneticist at the University of Pennsylvania in Philadelphia, so that’s why it’s not yet possible to tell whether this is the case.

The researchers also found that certain groups of genetic mutations, which individually would not have a measurable effect but together accounted for health threats, appeared less often in people who were expected to have long lifespans than in those who weren't. These included predispositions to asthma, high body mass index and high cholesterol. Most surprising, however, was the finding that sets of mutations that delay puberty and childbearing are more prevalent in long-lived people.

The article is here.

Note: This article is posted, in part, because evolution is not emphasized in the field of psychology. There are psychologists who believe that humans did not evolve in the way other plants and animals evolved.  I have argued in lectures and workshops that we humans are not in our final form.

The strategic moral self: Self-presentation shapes moral dilemma judgments

Sarah C. Rom and Paul Conway
Journal of Experimental Social Psychology
Volume 74, January 2018, Pages 24–37

Abstract

Research has focused on the cognitive and affective processes underpinning dilemma judgments where causing harm maximizes outcomes. Yet, recent work indicates that lay perceivers infer the processes behind others' judgments, raising two new questions: whether decision-makers accurately anticipate the inferences perceivers draw from their judgments (i.e., meta-insight), and, whether decision-makers strategically modify judgments to present themselves favorably. Across seven studies, a) people correctly anticipated how their dilemma judgments would influence perceivers' ratings of their warmth and competence, though self-ratings differed (Studies 1–3), b) people strategically shifted public (but not private) dilemma judgments to present themselves as warm or competent depending on which traits the situation favored (Studies 4–6), and, c) self-presentation strategies augmented perceptions of the weaker trait implied by their judgment (Study 7). These results suggest that moral dilemma judgments arise out of more than just basic cognitive and affective processes; complex social considerations causally contribute to dilemma decision-making.

The article is here.

Monday, September 18, 2017

Hindsight Bias in Depression

Julia Groß, Hartmut Blank, Ute J. Bayen
Clinical Psychological Science 
First published August 7, 2017

Abstract

People tend to be biased by outcome knowledge when looking back on events. This phenomenon is known as hindsight bias. Clinical intuition and theoretical accounts of affect-regulatory functions of hindsight bias suggest a link between hindsight bias and depression, but empirical evidence is scarce. In two experiments, participants with varying levels of depressive symptoms imagined themselves in everyday scenarios that ended positively or negatively and completed hindsight and affect measures. Participants with higher levels of depression judged negative outcomes, but not positive outcomes, as more foreseeable and more inevitable in hindsight. For negative outcomes, they also misremembered prior expectations as more negative than they initially were. This memory hindsight bias was accompanied by disappointment, suggesting a relation to affect-regulatory malfunction. We propose that “depressive hindsight bias” indicates a negative schema of the past and that it sustains negative biases in depression.

The research is here.

Artificial wombs could soon be a reality. What will this mean for women?

Helen Sedgwick
The Guardian
Originally posted Monday 4 September 2017

Here is an excerpt:

There is a danger that whoever pays for the technology behind ectogenesis would have the power to decide how, when and for whose benefit it is used. It could be the state or private insurance companies trying to avoid the unpredictable costs of traditional childbirth. Or, it could become yet another advantage available only to the privileged, with traditional pregnancies becoming associated with poverty, or with a particular class or race. Would babies gestated externally have advantages over those born via the human body? Or, if artificial gestation turns out to be cheaper than ordinary pregnancy, could it become an economic necessity forced on some?

But an external womb could also lead to a new equality in parenthood and consequently change the structure of our working and private lives. Given time, it could dismantle the gender hierarchies within our society. Given more time, it could eliminate the differences between the sexes in our biology. Once parental roles are equal, there will be no excuse for male-dominated boardrooms or political parties, or much of the other blatant inequality we see today.

Women’s rights are never more emotive than when it comes to a woman’s right to choose. While pregnancy occurs inside a woman’s body, women have some control over it, at least. But what happens when a foetus can survive entirely outside the body? How will our legislation stand up when viability begins at conception? There are fundamental questions about what rights we give to embryos outside the body (think of the potential for harvesting “spare parts” from unwanted foetuses). There is also the possibility of pro-life activists welcoming this process as an alternative to abortion – with, in the worst case, women being forced to have their foetuses extracted and gestated outside their bodies.

The article is here.

Sunday, September 17, 2017

Genitals photographed, shared by UPMC hospital employees: a common violation in health care industry

David Wenner
The Patriot News/PennLive.com
Updated September 16, 2017

You might assume anyone in healthcare would know better. Smart phones aren't new, and health care providers have long wrestled with their ramifications for patient privacy and medical ethics. Yet once again, smart phones have contributed to a very public black eye for a health care provider.

UPMC Bedford in Everett, Pa., has been cited by the Pennsylvania Department of Health after employees snapped and shared photos and video of an unconscious patient who needed surgery to remove an object from the patient's genitals. Numerous employees, including two doctors, were disciplined for being present.

It's not the first time unauthorized photos were taken of a hospital patient and shared or posted on social media.

  • Last year, a nurse in New York lost her license after taking a smart phone photo of an unconscious patient's penis and sending it to some of her co-workers. She also pleaded guilty to misdemeanor criminal charges.
  • The Los Angeles Times in 2013 wrote about an anesthesiologist in California who put a sticker of a mustache on the face of an unconscious female patient, with a nurse's aide then taking a picture. That article also reported allegations of a medical device salesman taking photos of a naked woman without her knowledge.
  • In 2010, employees at a hospital in Florida were disciplined after taking and posting online photos of a shark attack victim who didn't survive. No one was fired, with the hospital concluding the incident was the "result of poor judgement rather than malicious intent," according to an article in Radiology Today. 
  • Many such incidents have involved nursing homes. An article published by the American Association of Nurse Assessment Coordination in 2016 stated, "In the shadow of the social media revolution, a disturbing trend has begun to emerge of [nursing home] employees posting and sharing degrading images of their residents on social media." An investigation published by ProPublica in 2015 detailed 47 cases since 2012 of workers at nursing homes and assisted living facilities sharing photos or videos of residents on Facebook. 

The behavioural ecology of irrational behaviours

Philippe Huneman and Johannes Martens
History and Philosophy of the Life Sciences
September 2017, 39:23

Abstract

Natural selection is often envisaged as the ultimate cause of the apparent rationality exhibited by organisms in their specific habitat. Given the equivalence between selection and rationality as maximizing processes, one would indeed expect organisms to implement rational decision-makers. Yet, many violations of the clauses of rationality have been witnessed in various species such as starlings, hummingbirds, amoebas and honeybees. This paper attempts to interpret such discrepancies between economic rationality (defined by the main axioms of rational choice theory) and biological rationality (defined by natural selection). After having distinguished two kinds of rationality we introduce irrationality as a negation of economic rationality by biologically rational decision-makers. Focusing mainly on those instances of irrationalities that can be understood as exhibiting inconsistency in making choices, i.e. as non-conformity of a given behaviour to axioms such as transitivity or independence of irrelevant alternatives, we propose two possible families of Darwinian explanations that may account for these apparent irrationalities. First, we consider cases where natural selection may have been an indirect cause of irrationality. Second, we consider putative cases where violations of rationality axioms may have been directly favored by natural selection. Though the latter cases (prima facie) seem to clearly contradict our intuitive representation of natural selection as a process that maximizes fitness, we argue that they are actually unproblematic; for often, they can be redescribed as cases where no rationality axiom is violated, or as situations where no adaptive solution exists in the first place.
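
Note: For readers unfamiliar with rational choice theory, the transitivity axiom the authors mention can be written as follows (our illustration, not the paper's):

\[
(A \succ B) \,\wedge\, (B \succ C) \;\Rightarrow\; (A \succ C)
\]

A chooser who is "irrational" in this sense instead exhibits a cycle -- preferring A to B, B to C, and yet C to A -- a pattern that in principle leaves the agent open to exploitation as a "money pump," paying a small premium at each trade around the cycle.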

The article is here.

Saturday, September 16, 2017

How to Distinguish Between Antifa, White Supremacists, and Black Lives Matter

Conor Friedersdorf
The Atlantic
Originally published August 31, 2017

Here are two excerpts:

One can condemn the means of extralegal violence, and observe that the alt-right, Antifa, and the far-left have all engaged in it on different occasions, without asserting that all extralegal violence is equivalent––murdering someone with a car or shooting a representative is more objectionable than punching with the intent to mildly injure. What’s more, different groups can choose equally objectionable means without becoming equivalent, because assessing any group requires analyzing their ends, not just their means.

For neo-Nazis and Klansmen in Charlottesville, one means, a torch-lit parade meant to intimidate by evoking bygone days of racial terrorism, was deeply objectionable; more importantly, their end, spreading white-supremacist ideology in service of a future where racists can lord power over Jews and people of color, is abhorrent.

Antifa is more complicated.

Some of its members employ the objectionable means of initiating extralegal street violence; but its stated end of resisting fascism is laudable, while its actual end is contested. Is it really just about resisting fascists or does it have a greater, less defensible agenda? Many debates about Antifa that play out on social media would prove less divisive if the parties understood themselves to be agreeing that opposing fascism is laudable while disagreeing about Antifa’s means, or whether its end is really that limited.

(cut)

A dearth of distinctions has a lot of complicated consequences, but in aggregate, it helps to empower the worst elements in a society, because those elements are unable to attract broad support except by muddying distinctions between themselves and others whose means or ends are defensible to a broader swath of the public. So come to whatever conclusions accord with your reason and conscience. But when expressing them, consider drawing as many distinctions as possible.

The article is here.