Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, April 30, 2018

Social norm complexity and past reputations in the evolution of cooperation

Fernando P. Santos, Francisco C. Santos & Jorge M. Pacheco
Nature volume 555, pages 242–245 (08 March 2018)

Abstract

Indirect reciprocity is the most elaborate and cognitively demanding of all known cooperation mechanisms, and is the most specifically human because it involves reputation and status. By helping someone, individuals may increase their reputation, which may change the predisposition of others to help them in future. The revision of an individual’s reputation depends on the social norms that establish what characterizes a good or bad action and thus provide a basis for morality. Norms based on indirect reciprocity are often sufficiently complex that an individual’s ability to follow subjective rules becomes important, even in models that disregard the past reputations of individuals, and reduce reputations to either ‘good’ or ‘bad’ and actions to binary decisions. Here we include past reputations in such a model and identify the key pattern in the associated norms that promotes cooperation. Of the norms that comply with this pattern, the one that leads to maximal cooperation (greater than 90 per cent) with minimum complexity does not discriminate on the basis of past reputation; the relative performance of this norm is particularly evident when we consider a ‘complexity cost’ in the decision process. This combination of high cooperation and low complexity suggests that simple moral principles can elicit cooperation even in complex environments.
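
In models of this kind, a social norm is just a small update rule over binary labels. The sketch below (Python) is a hedged illustration of that structure rather than the specific norms analysed in the paper; the history-aware clause that consults a past reputation is an invented example.

# Reputations and actions are binary, as in the class of models described above.
GOOD, BAD = 1, 0
COOPERATE, DEFECT = 1, 0

def stern_judging(action, recipient_rep):
    # Second-order norm with no memory: helping a GOOD recipient, or refusing to
    # help a BAD one, earns the donor a GOOD reputation; anything else earns BAD.
    return GOOD if action == recipient_rep else BAD

def norm_with_past(action, recipient_rep, recipient_past_rep):
    # Invented higher-order norm that also consults the recipient's previous
    # reputation: a recipient who only just became BAD is still treated as GOOD.
    effective_rep = GOOD if recipient_past_rep == GOOD else recipient_rep
    return stern_judging(action, effective_rep)

# Defecting against a freshly downgraded recipient is judged GOOD by stern judging
# (which sees only the current BAD label) but BAD by the history-aware norm.
print(stern_judging(DEFECT, BAD), norm_with_past(DEFECT, BAD, GOOD))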

The article is here.

Can Nudges Be Transparent and Yet Effective?

Hendrik Bruns, Elena Kantorowicz-Reznichenko, Katharina Klement, Marijane Luistro Jonsson, and Bilel Rahali
Journal of Economic Psychology, Forthcoming. (February 28, 2018).

Abstract

Nudges receive growing attention as an effective concept to alter people's decisions without significantly changing economic incentives or limiting options. However, being often very subtle and covert, nudges are also criticized as unethical.  By not being transparent about the intention to influence individual choice they might be perceived as limiting freedom of autonomous actions and decisions. So far, empirical research on this issue is scarce. In this study, we investigate whether nudges can be made transparent without limiting their effectiveness. For this purpose we conduct a laboratory experiment where we nudge contributions to carbon emission reduction by introducing a default value. We test how different types of transparency (i.e. knowledge of the potential influence of the default, its purpose, or both) influence the effect of the default. Our findings demonstrate that the default increases contributions, and information on the potential influence, its purpose, or both combined do not significantly influence the default effect. Furthermore, we do not find evidence that psychological reactance interacts with the influence of transparency. Findings support the policy-relevant claim that nudges (in the form of defaults) can be transparent and yet effective.

The paper can be downloaded here.

Sunday, April 29, 2018

Who Am I? The Role of Moral Beliefs in Children’s and Adults’ Understanding of Identity

Larisa Heiphetz, Nina Strohminger, Susan A. Gelman, and Liane L. Young
Forthcoming: Journal of Experimental Social Psychology

Abstract

Adults report that moral characteristics—particularly widely shared moral beliefs—are central to identity. This perception appears driven by the view that changes to widely shared moral beliefs would alter friendships and that this change in social relationships would, in turn, alter an individual’s personal identity. Because reasoning about identity changes substantially during adolescence, the current work tested pre- and post-adolescents to reveal the role that such changes could play in moral cognition. Experiment 1 showed that 8- to 10-year-olds, like adults, judged that people would change more after changes to their widely shared moral beliefs (e.g., whether hitting is wrong) than after changes to controversial moral beliefs (e.g., whether telling prosocial lies is wrong). Following up on this basic effect, a second experiment examined whether participants regard all changes to widely shared moral beliefs as equally impactful. Adults, but not children, reported that individuals would change more if their good moral beliefs (e.g., it is not okay to hit) transformed into bad moral beliefs (e.g., it is okay to hit) than if the opposite change occurred. This difference in adults was mediated by perceptions of how much changes to each type of belief would alter friendships. We discuss implications for moral judgment and social cognitive development.

The research is here.

Saturday, April 28, 2018

Is fairness intuitive? An experiment accounting for subjective utility differences under time pressure

Merkel, A.L. & Lohse, J. Exp Econ (2018).
https://doi.org/10.1007/s10683-018-9566-3

Abstract

Evidence from response time studies and time pressure experiments has led several authors to conclude that “fairness is intuitive”. In light of conflicting findings, we provide theoretical arguments showing under which conditions an increase in “fairness” due to time pressure indeed provides unambiguous evidence in favor of the “fairness is intuitive” hypothesis. Drawing on recent applications of the Drift Diffusion Model (Krajbich et al. in Nat Commun 6:7455, 2015a), we demonstrate how the subjective difficulty of making a choice affects decisions under time pressure and time delay, thereby making an unambiguous interpretation of time pressure effects contingent on the choice situation. To explore our theoretical considerations and to retest the “fairness is intuitive” hypothesis, we analyze choices in two-person binary dictator and prisoner’s dilemma games under time pressure or time delay. In addition, we manipulate the subjective difficulty of choosing the fair relative to the selfish option. Our main finding is that time pressure does not consistently promote fairness in situations where this would be predicted after accounting for choice difficulty. Hence, our results cast doubt on the hypothesis that “fairness is intuitive”.
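
The Drift Diffusion Model logic the authors draw on can be simulated in a few lines. The toy below (Python) uses invented parameter values, modelling time pressure as a response deadline and choice difficulty as a weaker drift toward the fair option, to show why the size of any time-pressure effect depends on how subjectively easy the fair choice is.

import math, random

def ddm_choice(drift, threshold=1.0, dt=0.001, noise=1.0, deadline=None):
    # One simulated drift-diffusion decision: evidence accumulates with drift plus
    # Gaussian noise until a boundary is crossed or the deadline forces a response.
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0, 1)
        t += dt
        if deadline is not None and t >= deadline:
            break
    return (1 if x > 0 else 0), t   # 1 = "fair" option, 0 = "selfish" option

random.seed(1)
# Time pressure is modelled as an early deadline; the drift rate stands in for how
# subjectively easy the fair option is once utility differences are accounted for.
easy = [ddm_choice(drift=1.5, deadline=0.5)[0] for _ in range(2000)]
hard = [ddm_choice(drift=0.3, deadline=0.5)[0] for _ in range(2000)]
print(sum(easy) / 2000, sum(hard) / 2000)   # fair-choice rates differ sharply with difficulty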

The research is here.

Friday, April 27, 2018

Why We Don’t Let Coworkers Help Us, Even When We Need It

Mark Bolino and Phillip S. Thompson
Harvard Business Review
Originally published March 15, 2018

Here is the conclusion:

Taken together, our studies suggest that employees who are unwilling to accept help when they need it may undermine their own performance and the effectiveness of their team or unit. In light of those potential costs, managers should directly address the negative beliefs that people are harboring. For instance, research shows that employees tend to look to their leaders to determine who is trustworthy and who isn’t. So, to build people’s trust in their coworkers’ motives and competence, managers can demonstrate their faith in those employees by giving them challenging assignments, ownership of certain decisions, direct access to sensitive information or valuable stakeholders, and so on. Further, since giving help and receiving it go hand in hand, managers should create an environment where assisting one another is encouraged and recognized. They can do this by calling attention to successful collaborations and explaining how they’ve contributed to the organization’s larger goals and mission. And they should show their own willingness to help and be helped, since employees are more likely to see the merits of citizenship behaviors when they observe their leaders engaging in such behaviors themselves.

Finally, it’s important not to send mixed messages. If employees who go it alone get ahead more quickly than those who give and receive support, people will pick up on that discrepancy — and they’ll go back to looking out for number one, to their detriment and the organization’s.

The article is here.

The Mind-Expanding Ideas of Andy Clark

Larissa MacFarquhar
The New Yorker
Originally published April 2, 2018

Here is an excerpt:

Cognitive science addresses philosophical questions—What is a mind? What is the mind’s relationship to the body? How do we perceive and make sense of the outside world?—but through empirical research rather than through reasoning alone. Clark was drawn to it because he’s not the sort of philosopher who just stays in his office and contemplates; he likes to visit labs and think about experiments. He doesn’t conduct experiments himself; he sees his role as gathering ideas from different places and coming up with a larger theoretical framework in which they all fit together. In physics, there are both experimental and theoretical physicists, but there are fewer theoretical neuroscientists or psychologists—you have to do experiments, for the most part, or you can’t get a job. So in cognitive science this is a role that philosophers can play.

Most people, he realizes, tend to identify their selves with their conscious minds. That’s reasonable enough; after all, that is the self they know about. But there is so much more to cognition than that: the vast, silent cavern of underground mental machinery, with its tubes and synapses and electric impulses, so many unconscious systems and connections and tricks and deeply grooved pathways that form the pulsing substrate of the self. It is those primal mechanisms, the wiring and plumbing of cognition, that he has spent most of his career investigating. When you think about all that fundamental stuff—some ancient and shared with other mammals and distant ancestors, some idiosyncratic and new—consciousness can seem like a merely surface phenomenon, a user interface that obscures the real works below.

The article and audio file are here.

Thursday, April 26, 2018

Practical Tips for Ethical Data Sharing

Michelle N. Meyer
Advances in Methods and Practices in Psychological Science
Volume 1, Issue 1, pages 131-144

Abstract

This Tutorial provides practical dos and don’ts for sharing research data in ways that are effective, ethical, and compliant with the federal Common Rule. I first consider best practices for prospectively incorporating data-sharing plans into research, discussing what to say—and what not to say—in consent forms and institutional review board applications, tools for data de-identification and how to think about the risks of re-identification, and what to consider when selecting a data repository. Turning to data that have already been collected, I discuss the ethical and regulatory issues raised by sharing data when the consent form either was silent about data sharing or explicitly promised participants that the data would not be shared. Finally, I discuss ethical issues in sharing “public” data.
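
One of the nuts-and-bolts topics in the tutorial, basic de-identification, can be illustrated with a small hedged sketch (Python; the column names and salt are placeholders, not recommendations from the article): direct identifiers are dropped or replaced with salted hashes, while quasi-identifiers such as birth year and ZIP code are coarsened, since they remain a re-identification risk.

import hashlib

SALT = "project-specific-secret"   # placeholder; keep the real salt out of the shared dataset

def pseudonymize(participant_id):
    # Replace a direct identifier with a stable, salted hash.
    return hashlib.sha256((SALT + participant_id).encode()).hexdigest()[:12]

record = {"participant_id": "P-0042", "name": "Jane Doe",
          "birth_year": 1979, "zip": "15213", "score": 27}

shared = {
    "pid": pseudonymize(record["participant_id"]),      # name and raw ID are dropped entirely
    "birth_decade": (record["birth_year"] // 10) * 10,  # quasi-identifiers coarsened
    "zip3": record["zip"][:3],
    "score": record["score"],
}
print(shared)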

The article is here.

Rogue chatbots deleted in China after questioning Communist Party

Neil Connor
The Telegraph
Originally published August 3, 2017

Two chatbots have been pulled from a Chinese messaging app after they questioned the rule of the Communist Party and made unpatriotic comments.

The bots were available on a messaging app run by Chinese Internet giant Tencent, which has more than 800 million users, before apparently going rogue.

One of the robots, BabyQ, was asked “Do you love the Communist Party”, according to a screenshot posted on Sina Weibo, China’s version of Twitter.

Another web user said to the chatbot: “Long Live the Communist Party”, to which BabyQ replied: “Do you think such corrupt and incapable politics can last a long time?”

(cut)

The Chinese Internet is heavily censored by Beijing, which sees any criticism of its rule as a threat.

Social media posts which are deemed critical are often quickly deleted by authorities, while searches for sensitive topics are often blocked.

The information is here.

Wednesday, April 25, 2018

The Peter Principle: Promotions and Declining Productivity

Edward P. Lazear
Hoover Institution and Graduate School of Business
Revision 10/12/00

Abstract

Many have observed that individuals perform worse after having received a promotion. The most famous statement of the idea is the Peter Principle, which states that people are promoted to their level of incompetence. There are a number of possible explanations. Two are explored. The most traditional is that the prospect of promotion provides incentives which vanish after the promotion has been granted; thus, tenured faculty slack off. Another is that output as a statistical matter is expected to fall. Being promoted is evidence that a standard has been met. Regression to the mean implies that future productivity will decline on average. Firms optimally account for the regression bias in making promotion decisions, but the effect is never eliminated. Both explanations are analyzed. The statistical point always holds; the slacking off story holds only under certain compensation structures.
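
The statistical half of the argument is easy to see in a quick simulation. The sketch below (Python) uses invented normal distributions and an invented promotion cutoff, not the paper's model, to show that promoted workers underperform their pre-promotion output on average even when effort never changes.

import random

# Pure regression-to-the-mean account: observed output mixes stable ability with
# noise, and promotion is granted when observed output clears a standard. The
# distributions and cutoff are illustrative assumptions, not the paper's model.
random.seed(0)
CUTOFF, N = 1.5, 200_000
pre_outputs, post_outputs = [], []
for _ in range(N):
    ability = random.gauss(0, 1)              # stable component of productivity
    pre = ability + random.gauss(0, 1)        # observed output before promotion
    if pre >= CUTOFF:                         # promoted on observed output alone
        post = ability + random.gauss(0, 1)   # output afterwards, with no slacking
        pre_outputs.append(pre)
        post_outputs.append(post)

print(sum(pre_outputs) / len(pre_outputs))    # average output that earned the promotion
print(sum(post_outputs) / len(post_outputs))  # noticeably lower on average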

The paper is here.

Dear Therapist: I Google-Stalked My Therapist

Lori Gottlieb
The Atlantic
Originally published March 21, 2018

Here is an excerpt:

Most of us wonder who our therapists are outside of the therapy room, usually because we like them so much. Sometimes, of course, people Google their therapists if something feels off—to see if their credentials check out, or if other patients have posted similar concerns. More often, though, our curiosity is a reflection of how important our therapist has become to us, and in some cases, it’s a way to feel connected to the therapist between sessions. The problem is, of course, that we want therapy to be a space where we feel free to talk about absolutely anything. And no matter what we discover—a bombshell like yours, or something more mundane—the fallout of a Google binge becomes a secret that takes that freedom away.

Carl Jung called secrets “psychic poison” for good reason. When I finally confessed my Google-stalking to my therapist, all the air returned to the room. My verbal shackles were removed, and we talked about what was behind my desire to type his name into my search engine. But more important, the way I handled the situation before fessing up taught me something interesting about how I handle discomfort—something far more interesting than anything I learned about my therapist online.

And I think the same might prove true for you.

What people do in therapy is pretty much what they do in their outside lives. In other words, if a patient tends to feel dissatisfied with people in her life, it’s likely that she’ll eventually feel dissatisfied with me. If she tries to please people, she’ll probably try to please me too. And if she avoids people when she feels hurt by them, I’ll be on the lookout for signs that I’ve said something that may have hurt her, too (she cancels her next session, or clams up, or comes late).

The information is here.

Tuesday, April 24, 2018

The Next Best Version of Me: How to Live Forever

David Ewing Duncan
Wired.com
Originally published March 27, 2018

Here is an excerpt:

There are also the ethics of using a powerful new technology to muck around with life’s basic coding. Theoretically, scientists could one day manufacture genomes, human or otherwise, almost as easily as writing code on a computer, transforming digital DNA on someone’s laptop into living cells of, say, Homo sapiens. Mindful of the controversy, Church and his HGP-Write colleagues insist that minting people is not their goal, though the sheer audacity of making genome-scale changes to human DNA is enough to cause controversy. “People get upset if you put a gene from another species into something you eat,” says Stanford bioethicist and legal scholar Henry Greely. “Now we’re talking about a thorough rewriting of life? Hairs will stand on end. Hackles will be raised.”

Raised hackles or not, Church and his team are forging ahead. “We want to start with a human Y,” he says, referring to the male sex chromosome, which he explains has the fewest genes of a person’s 23 chromosomes and is thus easier to build. And he doesn’t want to synthesize just any Y chromosome. He and his team want to use the Y chromosome sequence from an actual person’s genome: mine.

“Can you do that?” I stammer.

“Of course we can—with your permission,” he says, reminding me that it would be easy to tap into my genome, since it was stored digitally in his lab’s computers as part of an effort he launched in 2005 called the Personal Genome Project.

The article is here.

When therapists face discrimination

Zara Abrams
The Monitor on Psychology - April 2018

Here is an excerpt:

Be aware of your own internalized biases. 

Reflecting on their own social, cultural and political perspectives means practitioners are less likely to be caught off guard by something a client says. “It’s important for psychologists to be aware of what a client’s biases and prejudices are bringing up for them internally, so as not to project that onto the client—it’s important to really understand what’s happening,” says Kathleen Brown, PhD, a licensed clinical psychologist and APA fellow.

For Kelly, the Atlanta-based clinical psychologist, this means she’s careful not to assume that resistant clients are treating her disrespectfully because she’s African American. Sometimes her clients, who are referred for pre-surgical evaluation and treatment, are difficult or even hostile because their psychological intervention was mandated.

Foster an open dialogue about diversity and identity issues.

“The benefit of having that conversation, even though it can be scary or uncomfortable to bring it up in the room, is that it prevents it from festering or interfering with your ability to provide high-quality care to the client,” says Illinois-based clinical psychologist Robyn Gobin, PhD, who has experienced ageism from patients. She responds to ageist remarks by exploring what specific concerns the client has regarding her age (like Turner, she looks young). If she’s met with criticism, she tries to remain receptive, understanding that the client is vulnerable and any hostility the client expresses reflects concern for his or her own well-being. By being open and frank from the start, she shows her clients the appropriate way to confront their biases in therapy.

Of course, practitioners approach these conversations differently. If a client makes a prejudiced remark about another group, Buckman says labeling the comment as “offensive” shifts the attention from the client onto her. “It doesn’t get to the core of what’s going on with them. In the long run, exploring a way to shift how the client interacts with the ‘other’ is probably more valuable than standing up for a group in the moment.”

The information is here.

Monday, April 23, 2018

Shared Decision-making for PTSD

Juliette Harik, PhD
PTSD Research Quarterly (2018) Volume 29 (1)

Here is an excerpt:

Although several different shared decision-making models exist (for a review see Lin & Fagerlin, 2014), one useful approach conceptualizes shared decision-making as consisting of three phases (Elwyn et al., 2012): choice talk, option talk, and decision talk. Choice talk involves communicating to patients that there is a decision to make and that they can be involved in this decision to the extent that they are comfortable. Option talk consists of sharing accurate and comprehensive information about treatment options. Ideally, this involves the use of a decision aid, which is an educational tool such as a website, brochure, or video designed to help patients understand and compare various options (for a review, see Stacey et al., 2017). The third and final step, decision talk, consists of an exploration of the patient’s preferences and what matters most to him or her. The process of shared decision-making is intended to help the patient develop informed preferences, and ultimately arrive at the decision that is best for him or her. Importantly, patients with the same clinical condition may arrive at very different treatment decisions on the basis of unique values and preferences.

Shared decision-making has been evaluated most often among patients facing care decisions for chronic medical conditions, especially cancer. In medical patients, shared decision-making has been linked with greater confidence in the treatment decision, improved satisfaction with decision-making and with treatment, greater self-efficacy, and increased trust in the provider (Joosten et al., 2008; Shay & Lafata, 2015). In mental health, shared decision-making has been most often evaluated in the context of depression, yielding mixed results on both satisfaction and treatment outcomes (Duncan, Best, & Hagen, 2010). Fewer studies have evaluated the effectiveness of shared decision-making for other mental health conditions such as PTSD.

The information is here.

Bad science puts innocent people in jail — and keeps them there

Radley Balko and Tucker Carrington
The Washington Post
Originally posted March 21, 2018

Here is an excerpt:

At the trial level, juries hear far too much dubious science, whether it’s an unproven field like bite mark matching or blood splatter analysis, exaggerated claims in a field like hair fiber analysis, or analysts testifying outside their area of expertise.  It’s difficult to say how many convictions have involved faulty or suspect forensics, but the FBI estimated in 2015 that its hair fiber analysts had testified in about 3,000 cases — and that’s merely one subspecialty of forensics, and only at the federal level.    Extrapolating from the database of DNA exonerations, the Innocence Project estimates that bad forensics contributes to about 45 percent of wrongful convictions.

But flawed evidence presented at trial is only part of the problem.  Even once a field of forensics or a particular expert has been discredited, the courts have made it extremely difficult for those convicted by bad science to get a new trial.

The Supreme Court makes judges responsible for determining what is good science.  They already decide what evidence is allowed at trial, so asking them to do the same for expert testimony may seem intuitive.  But judges are trained to do legal analyses, not scientific ones.  They generally deal with challenges to expert testimony by looking at what other judges have said.  If a previous court has allowed a field of forensic evidence, subsequent courts will, too.

The article is here.

Note: These issues also apply to psychologists in the courtroom.

Sunday, April 22, 2018

What is the ethics of ageing?

Christopher Simon Wareham
Journal of Medical Ethics 2018;44:128-132.

Abstract

Applied ethics is home to numerous productive subfields such as procreative ethics, intergenerational ethics and environmental ethics. By contrast, there is far less ethical work on ageing, and there is no boundary work that attempts to set the scope for ‘ageing ethics’ or the ‘ethics of ageing’. Yet ageing is a fundamental aspect of life; arguably even more fundamental and ubiquitous than procreation. To remedy this situation, I examine conceptions of what the ethics of ageing might mean and argue that these conceptions fail to capture the requirements of the desired subfield. The key reasons for this are, first, that they view ageing as something that happens only when one is old, thereby ignoring the fact that ageing is a process to which we are all subject, and second that the ageing person is treated as an object in ethical discourse rather than as its subject. In response to these shortcomings I put forward a better conception, one which places the ageing person at the centre of ethical analysis, has relevance not just for the elderly and provides a rich yet workable scope. While clarifying and justifying the conceptual boundaries of the subfield, the proposed scope pleasingly broadens the ethics of ageing beyond common negative associations with ageing.

The article is here.

Saturday, April 21, 2018

A Systematic Review and Meta‐Synthesis of Qualitative Research Into Mandatory Personal Psychotherapy During Training

David Murphy, Nisha Irfan, Harriet Barnett, Emma Castledine, & Lily Enescu
Counselling and Psychotherapy Research
First published February 23, 2018

Abstract

Background
This study addresses the thorny issue of mandatory personal psychotherapy within counselling and psychotherapy training. It is expensive, emotionally demanding and time‐consuming. Nevertheless, proponents argue that it is essential in protecting the public and keeping clients safe; to ensure psychotherapists develop high levels of self‐awareness and gain knowledge of interpersonal dynamics; and that it enhances therapist effectiveness. Existing evidence about these potential benefits is equivocal and is largely reliant on small‐scale qualitative studies.

Method
We carried out a systematic review of literature searched within five major databases. The search identified 16 published qualitative research studies on the topic of mandatory personal psychotherapy that matched the inclusion criteria. All studies were rated for quality. The findings from individual studies were thematically analysed through a process of meta‐synthesis.

Results
Meta‐synthesis showed studies on mandatory psychotherapy had reported both positive and hindering factors in almost equal number. Six main themes were identified: three positive and three negative. Positive findings were related to personal and professional development, experiential learning and therapeutic benefits. Negative findings were related to the ethical imperatives of doing no harm, justice and integrity.

Conclusion
When mandatory personal psychotherapy is used within a training programme, courses must consider carefully and put ethical issues at the forefront of decision‐making. Additionally, the requirement of mandatory psychotherapy should be positioned and identified as an experiential pedagogical device rather than fulfilling a curative function. Recommendations for further research are made.

The research is here.

Friday, April 20, 2018

Making a Thinking Machine

Lea Winerman
The Monitor on Psychology - April 2018

Here is an excerpt:

A 'Top Down' Approach

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

This more “top-down” approach to AI relies less on identifying patterns in data, and instead on figuring out mathematical ways to describe the rules that govern human cognition. Researchers can then write those rules into the learning algorithms that power the AI system. One promising avenue for this method is called Bayesian modeling, which uses probability to model how people reason and learn about the world. Brenden Lake, PhD, a psychologist and AI researcher at New York University, and his colleagues, for example, have developed a Bayesian AI system that can accomplish a form of one-shot learning. Humans, even children, are very good at this—a child only has to see a pineapple once or twice to understand what the fruit is, pick it out of a basket and maybe draw an example.

Likewise, adults can learn a new character in an unfamiliar language almost immediately.
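
The Bayesian machinery behind such one-shot learning can be sketched as posterior updating over a small hypothesis space after a single example. The toy below (Python) uses an invented number-concept game for illustration; Lake and colleagues' actual model composes handwritten characters from stroke primitives and is far richer.

# Toy Bayesian concept learning in the style of the "number game": infer which
# concept generated a single example. The hypothesis space and priors are invented.
hypotheses = {
    "even numbers":      ({2, 4, 6, 8, 10, 12, 14, 16}, 0.5),
    "powers of two":     ({2, 4, 8, 16},                0.3),
    "multiples of four": ({4, 8, 12, 16},               0.2),
}

def posterior(example):
    scores = {}
    for name, (extension, prior) in hypotheses.items():
        # Size principle: one example is more diagnostic of a smaller hypothesis.
        likelihood = 1.0 / len(extension) if example in extension else 0.0
        scores[name] = prior * likelihood
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

print(posterior(16))   # a single example already shifts belief toward the narrower concepts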

The article is here.

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn pulled in more than $50 million in 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it researching how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement only involved fraud accusations on four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only of the grants themselves, but results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as Mr. Schunn throughout, even though he earned a PhD in psychology from Carnegie Mellon University.

Thursday, April 19, 2018

Common Sense for A.I. Is a Great Idea

Carissa Veliz
www.slate.com
Originally posted March 19, 2018

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than us, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to miss human companionship, feel nostalgia of all things analog and dumb, and foreswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: They lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

The information is here.

Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality

Sandra Upson
Wired.com
Originally posted February 16, 2018

Here is an excerpt:

But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?

A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.
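
The structure of the idea, a generative model trained on real reviews that emits plausible short blurbs, can be sketched with something much simpler than the researchers' character-level neural network; the Python toy below uses a word-level Markov chain and a few invented training snippets purely to show the shape of the train-then-generate pipeline.

import random
from collections import defaultdict

# Word-level Markov chain over a few invented review snippets. Yao and colleagues
# trained a character-level neural network on real Yelp reviews; this only sketches
# the overall structure of the approach.
corpus = [
    "our favorite spot for sure",
    "the vegetarian pasta was delicious and the service was great",
    "great service and the pasta was our favorite",
]

transitions = defaultdict(list)
for review in corpus:
    words = ["<s>"] + review.split() + ["</s>"]
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)

def generate(max_len=15):
    word, out = "<s>", []
    while len(out) < max_len:
        word = random.choice(transitions[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

random.seed(3)
print(generate())   # a short, plausible-sounding blurb stitched from the corpus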

The information is here.

Wednesday, April 18, 2018

Is There A Difference Between Ethics And Morality In Business?

Bruce Weinstein
Forbes.com
Originally published February 23, 2018

Here is an excerpt:

In practical terms, if you use both “ethics” and “morality” in conversation, the people you’re speaking with will probably take issue with how you’re using these terms, even if they believe they’re distinct in some way.

The conversation will then veer from whatever substantive ethical point you were trying to make (“Our company has an ethical and moral responsibility to hire and promote only honest, accountable people”) to an argument about the meaning of the words “ethical” and “moral.” I had plenty of those arguments as a graduate student in philosophy, but is that the kind of discussion you really want to have at a team meeting or business conference?

You can do one of three things, then:

1. Use “ethics” and “morality” interchangeably only when you’re speaking with people who believe they’re synonymous.

2. Choose one term and stick with it.

3. Minimize the use of both words and instead refer to what each word is broadly about: doing the right thing, leading an honorable life and acting with high character.

As a professional ethicist, I’ve come to see #3 as the best option. That way, I don’t have to guess whether the person I’m speaking with believes ethics and morality are identical concepts, which is futile when you’re speaking to an audience of 5,000 people.

The information is here.

Note: I do not agree with everything in this article, but it is worth contemplating.

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Tuesday, April 17, 2018

Planning Complexity Registers as a Cost in Metacontrol

Kool, W., Gershman, S. J., & Cushman, F. A. (in press). Planning complexity registers as a cost in metacontrol. Journal of Cognitive Neuroscience.

Abstract

Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, while others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains wouldn’t be “worth it”. We test and verify this prediction in a sequential RL task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.
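
The tradeoff the abstract formalizes can be caricatured as a simple arbitration rule: engage the expensive model-based (planning) system only when its expected accuracy advantage outweighs a complexity cost. The sketch below (Python) uses invented values and an invented cost term, not the authors' fitted model.

def choose_controller(mb_value, mf_value, planning_cost):
    # Engage model-based (planning) control only when its advantage beats its cost.
    advantage = mb_value - mf_value   # expected reward gain from deliberating
    return "model-based" if advantage > planning_cost else "model-free"

# With equal stakes, a higher planning cost (a more demanding task) tips the
# arbitration toward habit even though planning is slightly more accurate.
print(choose_controller(mb_value=1.00, mf_value=0.95, planning_cost=0.02))  # model-based
print(choose_controller(mb_value=1.00, mf_value=0.95, planning_cost=0.10))  # model-free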

The research is here.

Building A More Ethical Workplace Culture

PYMNTS
PYMNTS.com
Originally posted March 20, 2018

Here is an excerpt:

The Worst News

Among the positive findings in the report was the fact that reporting is on the rise by a whole 19 percent, with 69 percent of employees stating they had reported misconduct in the last two years.

But that number, Harned said, comes with a bitter side note. Retaliation has also spiked during the same time period, with 44 percent reporting it – up from 22 percent two years ago.

The rate of retaliation going up faster than the rate of reporting, Harned noted, is disturbing.

“That is a very real problem for employees, and I think over the last year, we’ve seen what a huge problem it has become for employers.”

The door-to-door on retaliation for reporting is short – about three weeks on average. That is just about the time it takes for firms – even those serious about doing a good job with improving compliance – to get any investigation up and organized.

“By then, the damage is already done,” said Harned. “We are better at seeing misconduct, but we aren’t doing enough to prevent it from happening – especially because retaliation is such a big problem.”

There are not easy solutions, Harned noted, but the good news – even in the face of the worst news – is that improvement is possible, and is even being logged in some segments. Employees, she stated, mostly come in the door with a moral compass to call their own, and want to work in environments that are healthy, not vicious.

“The answer is culture is everything: Companies need to constantly communicate to employees that conduct is the expectation for all levels of the organization, and that breaking those rules will always have consequences.”

The post is here.

Monday, April 16, 2018

The Seth Rich lawsuit matters more than the Stormy Daniels case

Jill Abramson
The Guardian
Originally published March 20, 2018

Here is an excerpt:

I’ve previously written about Fox News’ shameless coverage of the 2016 unsolved murder of a young former Democratic National Committee staffer named Seth Rich. Last week, ABC News reported that his family has filed a lawsuit against Fox, charging that several of its journalists fabricated a vile story attempting to link the hacked emails from Democratic National Committee computers to Rich, who worked there.

After the fabricated story ran on the Fox website, it was retracted, but not before various on-air stars, especially Trump mouthpiece Sean Hannity, flogged the bogus conspiracy theory suggesting Rich had something to do with the hacked messages.

This shameful episode demonstrated, once again, that Rupert Murdoch’s favorite network, and Trump’s, has no ethical compass and had no hesitation about what grief this manufactured story caused to the 26-year-old murder victim’s family. It’s good to see them striking back, since that is the only tactic that the Murdochs and Trumps of the world will respect or, perhaps, will force them to temper the calumny they spread on a daily basis.

Of course, the Rich lawsuit does not have the sex appeal of the Stormy case. The rightwing echo chamber will brazenly ignore its self-inflicted wounds. And, for the rest of the cable pundit brigades, the DNC emails and Rich are old news.

The article is here.

Psychotherapy Is 'The' Biological Treatment

Robert Berezin
Medscape.com
Originally posted March 16, 2018

Neuroscience surprisingly teaches us that not only is psychotherapy purely biological, but it is the only real biological treatment. It addresses the brain in the way it actually develops, matures, and operates. It follows the principles of evolutionary adaptation. It is consonant with genetics. And it specifically heals the problematic adaptations of the brain in precisely the ways that they evolved in the first place. Psychotherapy deactivates maladaptive brain mappings and fosters new and constructive pathways. Let me explain.

The operations of the brain are purely biological. The brain maps our experiences and memories through the linking of trillions of neuronal connections. These interconnected webs create larger circuits that map all throughout the architecture of the cortex. This generates high-level symbolic neuronal maps that take form as images in our consciousness. The play of consciousness is the highest level of symbolic form. It is a living theater of "image-ination," a representational world that consists of a cast of characters who relate together by feeling as well as scenarios, plots, set designs, and landscape.

As we adapt to our environment, the brain maps our emotional experience through cortical memory. This starts very early in life. If a baby is startled by a loud noise, his arms and legs will flail. His heart pumps adrenaline, and he cries. This "startle" maps a fight-or-flight response in his cortex, which is mapped through serotonin and cortisol. The baby is restored by his mother's holding. Her responsive repair once again re-establishes and maintains his well-being, which is mapped through oxytocin. These ongoing formative experiences of life are mapped into memory in precisely these two basic ways.

The article is here.

Sunday, April 15, 2018

What If There Is No Ethical Way to Act in Syria Now?

Sigal Samuel
The Atlantic
Originally posted April 13, 2018

For seven years now, America has been struggling to understand its moral responsibility in Syria. For every urgent argument to intervene against Syrian President Bashar al-Assad to stop the mass killing of civilians, there were ready responses about the risks of causing more destruction than could be averted, or even escalating to a major war with other powers in Syria. In the end, American intervention there has been tailored mostly to a narrow perception of American interests in stopping the threat of terror. But the fundamental questions are still unresolved: What exactly was the moral course of action in Syria? And more urgently, what—if any—is the moral course of action now?

The war has left roughly half a million people dead—the UN has stopped counting—but the question of moral responsibility has taken on new urgency in the wake of a suspected chemical attack over the weekend. As President Trump threatened to launch retaliatory missile strikes, I spoke about America’s ethical responsibility with some of the world’s leading moral philosophers. These are people whose job it is to ascertain the right thing to do in any given situation. All of them suggested that, years ago, America might have been able to intervene in a moral way to stop the killing in the Syrian civil war. But asked what America should do now, they all gave the same startling response: They don’t know.

The article is here.

What’s Next for Humanity: Automation, New Morality and a ‘Global Useless Class’

Kimiko de Freytas-Tamura
The New York Times
Originally published March 19, 2018

What will our future look like — not in a century but in a mere two decades?

Terrifying, if you’re to believe Yuval Noah Harari, the Israeli historian and author of “Sapiens” and “Homo Deus,” a pair of audacious books that offer a sweeping history of humankind and a forecast of what lies ahead: an age of algorithms and technology that could see us transformed into “super-humans” with godlike qualities.

In an event organized by The New York Times and How To Academy, Mr. Harari gave his predictions to the Times columnist Thomas L. Friedman. Humans, he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening.” Here are highlights of the interview.

Artificial intelligence and automation will create a ‘global useless class.’

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Mr. Harari said, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater, because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

The article is here.

The video is worth watching.

Please read Sapiens and Homo Deus by Yuval Harari.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Friday, April 13, 2018

The Farmbots Are Coming

Matt Jancer
www.wired.com
Originally published March 9, 2018

The first fully autonomous ground vehicles hitting the market aren’t cars or delivery trucks—they’re robo-farmhands. The Dot Power Platform is a prime example of an explosion in advanced agricultural technology, which Goldman Sachs predicts will raise crop yields 70 percent by 2050. But Dot isn’t just a tractor that can drive without a human for backup. It’s the Transformer of ag-bots, capable of performing 100-plus jobs, from hay baler and seeder to rock picker and manure spreader, via an arsenal of tool modules. And though the hulking machine can carry 40,000 pounds, it navigates fields with balletic precision.

The information is here.

Computer Says "No": Part 1 - Algorithm Bias

Jasmine Leonard
www.thersa.org
Originally published March 14, 2018

From the court room to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation.  One automatable task that’s received a lot of attention of late is decision-making.  So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail.  But there’s a lack of understanding about how these systems work, and as a result, a lot of unwarranted concerns.  In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear just what we’re talking about, as the term “automated decision” is incredibly misleading.  It suggests that a computer is making a decision, when in reality this is rarely the case.  What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer.  In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan.  A human will then use this prediction to make a decision about whether or not to grant a defendant bail or give an individual a credit card.  When described like this, it seems somewhat absurd to say that these systems are making decisions.  I therefore suggest that we call them what they actually are: prediction engines.
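
Leonard's distinction, the computer predicts while a human decides, is easy to make concrete. In the sketch below (Python; the coefficients and threshold are invented for illustration) the model only outputs a default probability, and turning that probability into an approve/decline outcome is a separate, human-chosen policy.

import math

def predicted_default_probability(debt_to_income, missed_payments):
    # Toy logistic "prediction engine"; coefficients are invented, not a real scorecard.
    score = -2.0 + 3.0 * debt_to_income + 0.8 * missed_payments
    return 1.0 / (1.0 + math.exp(-score))

def human_decision(probability, risk_appetite=0.3):
    # The decision itself is a separate policy applied to the model's prediction.
    return "decline" if probability > risk_appetite else "approve"

p = predicted_default_probability(debt_to_income=0.6, missed_payments=1)
print(round(p, 2), human_decision(p))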

Thursday, April 12, 2018

CA’s Tax On Millionaires Yields Big Benefits For People With Mental Illness

Anna Gorman
Kaiser Health News
Originally published March 14, 2018

A statewide tax on the wealthy has significantly boosted mental health programs in California’s largest county, helping to reduce homelessness, incarceration and hospitalization, according to a report released Tuesday.

Revenue from the tax, the result of a statewide initiative passed in 2004, also expanded access to therapy and case management to almost 130,000 people up to age 25 in Los Angeles County, according to the report by the Rand Corp. Many were poor and from minority communities, the researchers said.

“Our results are encouraging about the impact these programs are having,” said Scott Ashwood, one of the authors and an associate policy researcher at Rand. “Overall we are seeing that these services are reaching a vulnerable population that needs them.”

The positive findings came just a few weeks after a critical state audit accused California counties of hoarding the mental health money — and the state of failing to ensure that the money was being spent. The February audit said that the California Department of Health Care Services allowed local mental health departments to accumulate $231 million in unspent funds by the end of the 2015-16 fiscal year — which should have been returned to the state because it was not spent in the allowed time frame.

Proposition 63, now known as the Mental Health Services Act, imposed a 1 percent tax on people who earn more than $1 million annually to pay for expanded mental health care in California. The measure raises about $2 billion each year for such services as preventing mental illness from progressing, reducing stigma and improving treatment. Altogether, counties have received $16.53 billion.

The information is here.

The Tech Industry’s War on Kids

Richard Freed
Medium.com
Originally published March 12, 2018

Here is an excerpt:

Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”

Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”

While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.

The article is here.

Wednesday, April 11, 2018

What to do with those divested billions? The only way is ethics

Juliette Jowit
The Guardian
Originally posted March 15, 2018

Here is an excerpt:

“I would not feel comfortable gaining from somebody else’s misery,” explains company owner and private investor Rebecca Hughes.

Institutions too are heading in the same direction: nearly 80% of investors across 30 countries told last year’s Schroders’ Global Investor Study that sustainability had become more important to them over the last five years.

“While profitability remains the central investment consideration, interest in sustainability is increasing,” said Jessica Ground, Schroders’ global head of stewardship. “But investors also see sustainability and profits as intertwined.”

UBS’s Doing well by doing good report claims more than half the UK public would pay more for goods or services with a conscience. Many more people will want better ethical standards, even if they don’t want or can’t afford to pay for them.

“It’s in my upbringing: you treat others in the way you’d like to be treated,” says Hughes.

More active financial investors are also taking the issues seriously. Several have indices to track the value of shares in companies which are not doing ‘bad’, or actively doing ‘good’. One is Morgan Stanley, whose two environmental, social and governance (ESG) indices – also covering weapons and women’s progress – were worth $62bn by last summer.

The information is here.

How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock and William C. Gerken
Harvard Business Review
Originally posted March 5, 2018

One bad apple, the saying goes, can ruin the bunch. So, too, with employees.

Our research on the contagiousness of employee fraud tells us that even your most honest employees become more likely to commit misconduct if they work alongside a dishonest individual. And while it would be nice to think that the honest employees would prompt the dishonest employees to better choices, that’s rarely the case.

Among co-workers, it appears easier to learn bad behavior than good.

For managers, it is important to realize that the costs of a problematic employee go beyond the direct effects of that employee’s actions — bad behaviors of one employee spill over into the behaviors of other employees through peer effects. By under-appreciating these spillover effects, a few malignant employees can infect an otherwise healthy corporate culture.

History — and current events — are littered with outbreaks of misconduct among co-workers: mortgage underwriters leading up to the financial crisis, stock brokers at boiler rooms such as Stratton Oakmont, and cross-selling by salespeople at Wells Fargo.

The information is here.

Tuesday, April 10, 2018

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Lily Frank and Sven Nyholm
Artificial Intelligence and Law
September 2017, Volume 25, Issue 3, pp 305–323

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

The article is here.

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Monday, April 9, 2018

Use Your Brain: Artificial Intelligence Isn't Close to Replacing It

Leonid Bershidsky
Bloomberg.com
Originally posted March 19, 2018

Nectome promises to preserve the brains of terminally ill people in order to turn them into computer simulations -- at some point in the future when such a thing is possible. It's a startup that's easy to mock. Just beyond the mockery, however, lies an important reminder to remain skeptical of modern artificial intelligence technology.

The idea behind Nectome is known to mind uploading enthusiasts (yes, there's an entire culture around the idea, with a number of wealthy foundations backing the research) as "destructive uploading": A brain must be killed to map it. That macabre proposition has resulted in lots of publicity for Nectome, which predictably got lumped together with earlier efforts to deep-freeze millionaires' bodies so they could be revived when technology allows it. Nectome's biggest problem, however, isn't primarily ethical.

The company has developed a way to embalm the brain that keeps all its synapses visible under an electron microscope. That makes it possible to create a map of all of the brain's neuron connections, a "connectome." Nectome's founders believe that map is the most important element of the reconstructed human brain and that preserving it should keep all of a person's memories intact. But even these mind uploading optimists only expect the first 10,000-neuron network to be reconstructed sometime between 2021 and 2024.
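
In data terms, a connectome is essentially a very large graph recording which neurons synapse onto which. The toy-scale sketch below is purely illustrative (hypothetical neuron names and weights, nothing to do with Nectome's actual data format); a real human connectome would contain orders of magnitude more entries.

```python
# Purely illustrative: a connectome treated as a weighted directed graph,
# mapping each neuron to the neurons it synapses onto. Names and synaptic
# strengths are hypothetical.
connectome = {
    "neuron_a": {"neuron_b": 0.8, "neuron_c": 0.1},
    "neuron_b": {"neuron_c": 0.5},
    "neuron_c": {"neuron_a": 0.3},
}

def downstream(neuron):
    """Return the neurons this neuron connects to."""
    return list(connectome.get(neuron, {}))

print(downstream("neuron_a"))  # ['neuron_b', 'neuron_c']
```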

The information is here.

Do Evaluations Rise With Experience?

Kieran O’Connor and Amar Cheema
Psychological Science 
First Published March 1, 2018

Abstract

Sequential evaluation is the hallmark of fair review: The same raters assess the merits of applicants, athletes, art, and more using standard criteria. We investigated one important potential contaminant in such ubiquitous decisions: Evaluations become more positive when conducted later in a sequence. In four studies, (a) judges’ ratings of professional dance competitors rose across 20 seasons of a popular television series, (b) university professors gave higher grades when the same course was offered multiple times, and (c) in an experimental test of our hypotheses, evaluations of randomly ordered short stories became more positive over a 2-week sequence. As judges completed repeated evaluations, they experienced more fluent decision making, producing more positive judgments (Study 4 mediation). This seemingly simple bias has widespread and impactful consequences for evaluations of all kinds. We also report four supplementary studies to bolster our findings and address alternative explanations.

The article is here.

Sunday, April 8, 2018

Can Bots Help Us Deal with Grief?

Evan Selinger
Medium.com
Originally posted March 13, 2018

Here are two excerpts:

Muhammad is under no illusion that he’s speaking with the dead. To the contrary, Muhammad is quick to point out the simulation he created works well when generating scripts of predictable answers, but it has difficulty relating to current events, like a presidential election. In Muhammad’s eyes, this is a feature, not a bug.

Muhammad said that “out of good conscience” he didn’t program the simulation to be surprising, because that capability would deviate too far from the goal of “personality emulation.”

This constraint fascinates me. On the one hand, we’re all creatures of habit. Without habits, people would have to deliberate before acting every single time. This isn’t practically feasible, so habits can be beneficial when they function as shortcuts that spare us from paralysis resulting from overanalysis.

(cut)

The empty chair technique that I’m referring to was popularized by Friedrich Perls (more widely known as Fritz Perls), a founder of Gestalt therapy. The basic setup looks like this: Two chairs are placed near each other; a psychotherapy patient sits in one chair and talks to the other, unoccupied chair. When talking to the empty chair, the patient engages in role-playing and acts as if a person is seated right in front of her — someone to whom she has something to say. After making a statement, launching an accusation, or asking a question, the patient then responds to herself by taking on the absent interlocutor’s perspective.

In the case of unresolved parental issues, the dialog could have the scripted format of the patient saying something to her “mother,” and then having her “mother” respond to what she said, going back and forth in a dialog until something that seems meaningful happens. The prop of an actual chair isn’t always necessary, and the context of the conversations can vary. In a bereavement context, for example, a widow might ask the chair-as-deceased-spouse for advice about what to do in a troubling situation.

The article is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.

The article is here.

Friday, April 6, 2018

Complaint: Allina ignored intern’s sexual harassment allegations

Barbara L. Jones
Minnesota Lawyer
Originally published March 7, 2018

Here is an excerpt:

Abel’s complaint stems from the practicum at Abbott, partially under Gottlieb’s supervision.  She began her practicum in September 2015. According to the complaint, she immediately encountered sexualized conversation with Gottlieb and he attempted to control any conversations she and other students had with anybody other than him.

On her first day at the clinic, Gottlieb took students outside and instructed Abel to lie down in the street, ostensibly to measure a parking space. She refused and Gottlieb told her that “obeying” him would be an area for growth. When speaking with other people, he frequently referred to Abel, of Asian-Indian descent, as “the graduate student of color” or “the brown one.”  He also refused to provide her with access to the IT chart system, forcing her to ask him for “favors,” the complaint alleges. Gottlieb repeatedly threatened to fire Abel and other students from the practicum, the complaint said.

Gottlieb spent time in individual supervision sessions with Abel and also group sessions that involved role play. He told students to mimic having sex with him in his role as therapist and tell him he was good in bed, the complaint states. At these times he sometimes had a visible erection, the complaint also says. Abel raised these and other concerns but was brushed off by Abbott personnel, her complaint alleges.  Abel asked Dr. Michael Schmitz, the clinical director of hospital-based psychology services, for help but was told that she had to be “emotionally tough” and put up with Gottlieb, the complaint continues. She sought some assistance from Finch, whose job was to assist Gottlieb in the clinical psychology training program and supervise interns.  Gottlieb was displeased and threatening about her discussions with Schmitz and Finch, the complaint says.

The article is here.

Schools are a place for students to grow morally and emotionally — let's encourage them

William Eidtson
The Hill
Originally posted March 10, 2018

Here is an excerpt:

However, if schools were truly a place for students to grow “emotionally and morally,” wouldn’t engaging in a demonstration of solidarity to protest the all too recurrent slaughter of concertgoers, church assemblies, and schoolchildren be one of the most emotionally engaging and morally relevant activities they could undertake?

And if life is all about choices and consequences, wouldn’t the choice to allow students to engage in one of the most cherished traditions of our democracy — namely, political dissent — potentially result in a profound and historically significant educational experience?

The fact is that our educational institutions are often not places that foster emotional and moral growth within students. Why? Part of the reason is that while our schools are pretty good at teaching students how to do things, they fail at teaching why things matter.

School officials tend to assume that if you simply teach students how things work, the “why it’s important” will naturally follow. But this is precisely the opposite of how we learn and grow in the world. People need reasons, stories, and context to direct their skills.

We need the why to give us a context to understand and use the how. We need the why to give us good reasons to learn the how. The why makes the how relevant. The why makes the how endurable. The why makes the how possible.

The article is here.

Thursday, April 5, 2018

Would You Opt for Immortality?

Michael Shermer
Quillette
Originally posted March 2, 2018

Here is an excerpt:

The idea of living forever, in fact, is not such a radical idea when you consider the fact that the vast majority of people already believe that they will do so in the next life. Since the late 1990s Gallup has consistently found that between 72 and 83 percent of Americans believe in heaven. Globally, rates of belief in heaven in other countries typically lag behind those found in America, but they are nonetheless robust. A 2011 Ipsos/Reuters poll, for example, found that of 18,829 people surveyed across 23 countries, 51 percent said they were convinced that an afterlife exists, ranging from a high of 62 percent of Indonesians and 52 percent of South Africans and Turks, to a low of 28 percent of Brazilians and only 3 percent of the very secular Swedes.

So powerful and pervasive are such convictions that even a third of agnostics and atheists proclaim belief in an afterlife. Say what? A 2014 survey conducted by the Austin Institute for the Study of Family and Culture on 15,738 Americans between the ages of 18 and 60 found that 13.2 percent identify as atheist or agnostic, and 32 percent of those answered in the affirmative to the question: “Do you think there is life, or some sort of conscious existence, after death?”

Depending on what these people believe about what, exactly, is resurrected in the next life—just your soul, or both your body and your soul—the belief among religious people that “you” will continue indefinitely in some form in the hereafter is not so different in principle from what the scientific immortalists are trying to accomplish in the here and now.

The article is here.

Moral Injury and Religiosity in US Veterans With Posttraumatic Stress Disorder Symptoms

Harold Koenig and others
The Journal of Nervous and Mental Disease: February 28, 2018

Abstract

Moral injury (MI) involves feelings of shame, grief, meaninglessness, and remorse from having violated core moral beliefs related to traumatic experiences. This multisite cross-sectional study examined the association between religious involvement (RI) and MI symptoms, mediators of the relationship, and the modifying effects of posttraumatic stress disorder (PTSD) severity in 373 US veterans with PTSD symptoms who served in a combat theater. Assessed were demographic, military, religious, physical, social, behavioral, and psychological characteristics using standard measures of RI, MI symptoms, PTSD, depression, and anxiety. MI was widespread, with over 90% reporting high levels of at least one MI symptom and the majority reporting at least five symptoms or more. In the overall sample, religiosity was inversely related to MI in bivariate analyses (r = −0.25, p < 0.0001) and multivariate analyses (B = −0.40, p = 0.001); however, this relationship was present only among veterans with severe PTSD (B = −0.65, p = 0.0003). These findings have relevance for the care of veterans with PTSD.
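
The key statistical claim here is a moderation effect: religiosity predicts fewer moral injury symptoms, but only among veterans with severe PTSD. The sketch below shows how such an interaction model is typically specified; it uses synthetic data and hypothetical variable names, not the authors' dataset or code.

```python
# Illustrative only: a moderation (interaction) model of the kind the abstract
# describes, fit on synthetic data. Variable names and effect sizes are
# hypothetical, not taken from Koenig et al.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 373
religiosity = rng.normal(size=n)
ptsd_severe = rng.integers(0, 2, size=n)  # 1 = severe PTSD symptoms
# Build in the pattern reported: religiosity lowers moral-injury symptoms
# mainly in the severe-PTSD group.
mi = 0.1 * religiosity - 0.6 * religiosity * ptsd_severe + rng.normal(size=n)

df = pd.DataFrame({"mi": mi, "religiosity": religiosity, "ptsd_severe": ptsd_severe})
model = smf.ols("mi ~ religiosity * ptsd_severe", data=df).fit()
print(model.summary())  # the religiosity:ptsd_severe term captures the moderation
```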

The paper is here.

Wednesday, April 4, 2018

Musk and Zuckerberg are fighting over whether we rule technology—or it rules us

Michael Coren
Quartz.com
Originally posted April 1, 2018

Here is an excerpt:

Musk wants to rein in AI, which he calls “a fundamental risk to the existence of human civilization.” Zuckerberg has dismissed such views calling their proponents “naysayers.” During a Facebook live stream last July, he added, “In some ways I actually think it is pretty irresponsible.” Musk was quick to retort on Twitter. “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”

Both men’s views on the risks and rewards of technology are embodied in their respective companies. Zuckerberg has famously embraced the motto “Move fast and break things.” That served Facebook well as it exploded from a college campus experiment in 2004 to an aggregator of the internet for more than 2 billion users.

Facebook has treated the world as an infinite experiment, a game of low-stakes, high-volume tests that reliably generate profits, if not always progress. Zuckerberg’s main concern has been to deliver the fruits of digital technology to as many people as possible, as soon as possible. “I have pretty strong opinions on this,” Zuckerberg has said. “I am optimistic. I think you can build things and the world gets better.”

The information is here.

Simple moral code supports cooperation

Charles Efferson & Ernst Fehr
Nature
Originally posted March 7, 2018

The evolution of cooperation hinges on the benefits of cooperation being shared among those who cooperate. In a paper in Nature, Santos et al. investigate the evolution of cooperation using computer-based modelling analyses, and they identify a rule for moral judgements that provides an especially powerful system to drive cooperation.

Cooperation can be defined as a behaviour that is costly to the individual providing help, but which provides a greater overall societal benefit. For example, if Angela has a sandwich that is of greater value to Emmanuel than to her, Angela can increase total societal welfare by giving her sandwich to Emmanuel. This requires sacrifice on her part if she likes sandwiches. Reciprocity offers a way for benefactors to avoid helping uncooperative individuals in such situations. If Angela knows Emmanuel is cooperative because she and Emmanuel have interacted before, her reciprocity is direct. If she has heard from others that Emmanuel is a cooperative person, her reciprocity is indirect — a mechanism of particular relevance to human societies.

A strategy is a rule that a donor uses to decide whether or not to cooperate, and the evolution of reciprocal strategies that support cooperation depends crucially on the amount of information that individuals process. Santos and colleagues develop a model to assess the evolution of cooperation through indirect reciprocity. The individuals in their model can consider a relatively large amount of information compared with that used in previous studies.
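
The flavor of a norm-based model like this can be conveyed with a toy simulation. The sketch below is not the authors' model (theirs also incorporates past reputations); it is a minimal illustration of first-order indirect reciprocity under a stern-judging-style norm, with assumed population size, error rate, and number of rounds.

```python
# Toy indirect-reciprocity simulation (illustrative only, not the authors' model).
# Agents carry a binary reputation. Donors cooperate only with 'good' recipients;
# an observer then updates the donor's reputation under a stern-judging-like norm:
# you are 'good' if you helped a good recipient or refused a bad one.
import random

N, ROUNDS, ERROR = 100, 50_000, 0.01
reputation = [True] * N          # True = good
cooperations = 0

random.seed(1)
for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    intended = reputation[recipient]                       # help only the good
    acted = intended if random.random() > ERROR else not intended
    cooperations += acted
    # Norm: cooperating with good, or defecting against bad, earns a good name.
    reputation[donor] = (acted == reputation[recipient])

print(f"cooperation rate: {cooperations / ROUNDS:.2%}")
```

Running this shows cooperation staying high, because defecting against a well-regarded recipient immediately costs the donor their good name.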

The review is here.

Tuesday, April 3, 2018

Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu
Practical Ethics
Originally posted March 31, 2018

Here is an excerpt:

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. Cambridge researcher and creator of the third-party app in question, Aleksandr Kogan, violated the agreement only when the data was passed on to Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; it profits from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users regarding what precisely it is they are consenting to.

The blog post is here.

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.
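
The basic mechanics of such an attack can be shown in a few lines: nudge the input in whichever direction most increases the model's loss. Below is a minimal, purely illustrative sketch in the spirit of the fast-gradient-sign method, applied to a toy logistic "classifier" with made-up weights; it is not code from any of the papers discussed here.

```python
# Minimal fast-gradient-sign-style perturbation on a toy logistic classifier.
# The weights and input are made up; the point is only that a small, targeted
# change to the input can flip the model's decision.
import numpy as np

w = np.array([2.0, -3.0, 1.0])   # hypothetical trained weights
b = 0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # P(class = 1)

x = np.array([0.9, 0.4, 0.2])    # correctly classified as class 1
y = 1.0
p = predict(x)

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
grad_x = (p - y) * w
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)        # small step that increases the loss

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
# The clean input scores above 0.5 (class 1); the perturbed one drops below it.
```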

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”

The article is here.

Monday, April 2, 2018

Ethics and sport have long been strangers to one another

Kenan Malik
The Guardian
Originally posted March 8, 2018

Here is an excerpt:

Today’s great ethical debate is not about payment but drugs. Last week, the digital, culture, media and sport select committee accused Bradley Wiggins of “crossing the ethical line” for allegedly misusing drugs allowed for medical purposes to enhance performance.

The ethical lines over drug use are, however, as arbitrary and irrational as earlier ones about payment. Drugs are said to be “unnatural” and to provide athletes with an “unfair advantage”. But virtually everything an athlete does, from high-altitude training to high-protein dieting, is unnatural and seeks to gain an advantage.

EPO is a naturally produced hormone that stimulates red blood cell production, so helping endurance athletes. Injections of EPO are banned in sport. Yet Chris Froome is permitted to sleep in a hypoxic chamber, which reduces oxygen in the air, forcing his body to produce more red blood cells. It has the same effect as EPO, is equally unnatural and provides an advantage. Why is one banned but not the other?

The article is here.

The Grim Conclusions of the Largest-Ever Study of Fake News

Robinson Meyer
The Atlantic
Originally posted March 8, 2018

Here is an excerpt:

“It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”

The study has already prompted alarm from social scientists. “We must redesign our information ecosystem for the 21st century,” write a group of 16 political scientists and legal scholars in an essay also published Thursday in Science. They call for a new drive of interdisciplinary research “to reduce the spread of fake news and to address the underlying pathologies it has revealed.”

“How can we create a news ecosystem … that values and promotes truth?” they ask.

The new study suggests that it will not be easy. Though Vosoughi and his colleagues only focus on Twitter—the study was conducted using exclusive data which the company made available to MIT—their work has implications for Facebook, YouTube, and every major social network. Any platform that regularly amplifies engaging or provocative content runs the risk of amplifying fake news along with it.

The article is here.

Sunday, April 1, 2018

Sudden-Death Aversion: Avoiding Superior Options Because They Feel Riskier

Jesse Walker, Jane L. Risen, Thomas Gilovich, and Richard Thaler
Journal of Personality and Social Psychology, in press

Abstract

We present evidence of Sudden-Death Aversion (SDA) – the tendency to avoid “fast” strategies that provide a greater chance of success, but include the possibility of immediate defeat, in favor of “slow” strategies that reduce the possibility of losing quickly, but have lower odds of ultimate success. Using a combination of archival analyses and controlled experiments, we explore the psychology behind SDA. First, we provide evidence for SDA and its cost to decision makers by tabulating how often NFL teams send games into overtime by kicking an extra point rather than going for the 2-point conversion (Study 1) and how often NBA teams attempt potentially game-tying 2-point shots rather than potentially game-winning 3-pointers (Study 2). To confirm that SDA is not limited to sports, we demonstrate SDA in a military scenario (Study 3). We then explore two mechanisms that contribute to SDA: myopic loss aversion and concerns about “tempting fate.” Studies 4 and 5 show that SDA is due, in part, to myopic loss aversion, such that decision makers narrow the decision frame, paying attention to the prospect of immediate loss with the “fast” strategy, but not the downstream consequences of the “slow” strategy. Study 6 finds people are more pessimistic about a risky strategy that needn’t be pursued (opting for sudden death) than the same strategy that must be pursued. We end by discussing how these twin mechanisms lead to differential expectations of blame from the self and others, and how SDA influences decisions in several different walks of life.
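
The NFL example reduces to simple expected-value arithmetic. A back-of-the-envelope comparison, using illustrative probabilities rather than figures from the paper, looks like this:

```python
# Back-of-the-envelope comparison for the NFL case, with illustrative numbers
# (assumed probabilities, not figures from Walker et al.).
p_extra_point = 0.94   # kick succeeds and sends the game to overtime
p_win_overtime = 0.50  # coin-flip assumption for overtime
p_two_point = 0.48     # two-point conversion succeeds and wins outright

win_if_kick = p_extra_point * p_win_overtime   # "slow" strategy
win_if_go_for_two = p_two_point                # "fast" strategy

print(f"kick and hope for OT: {win_if_kick:.2f}")        # about 0.47
print(f"go for two:           {win_if_go_for_two:.2f}")  # about 0.48
# Under these assumptions the "fast" strategy wins slightly more often, yet it
# also allows an immediate loss, which is exactly what sudden-death aversion
# predicts teams will avoid.
```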

The research is here.