Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, April 21, 2018

A Systematic Review and Meta‐Synthesis of Qualitative Research Into Mandatory Personal Psychotherapy During Training

David Murphy, Nisha Irfan, Harriet Barnett, Emma Castledine, & Lily Enescu
Counselling and Psychotherapy Research
First published February 23, 2018

Abstract

Background
This study addresses the thorny issue of mandatory personal psychotherapy within counselling and psychotherapy training. It is expensive, emotionally demanding and time-consuming. Nevertheless, proponents argue that it is essential for protecting the public and keeping clients safe, for ensuring that psychotherapists develop high levels of self-awareness and gain knowledge of interpersonal dynamics, and for enhancing therapist effectiveness. Existing evidence about these potential benefits is equivocal and is largely reliant on small-scale qualitative studies.

Method
We carried out a systematic review of literature searched within five major databases. The search identified 16 published qualitative research studies on the topic of mandatory personal psychotherapy that matched the inclusion criteria. All studies were rated for quality. The findings from individual studies were thematically analysed through a process of meta‐synthesis.

Results
Meta-synthesis showed that studies on mandatory psychotherapy reported both positive and hindering factors in almost equal numbers. Six main themes were identified: three positive and three negative. Positive findings were related to personal and professional development, experiential learning and therapeutic benefits. Negative findings related to the ethical imperatives of doing no harm, justice and integrity.

Conclusion
When mandatory personal psychotherapy is used within a training programme, courses must carefully consider ethical issues and put them at the forefront of decision-making. Additionally, the requirement of mandatory psychotherapy should be positioned and identified as an experiential pedagogical device rather than as fulfilling a curative function. Recommendations for further research are made.

The research is here.

Friday, April 20, 2018

Making a Thinking Machine

Lea Winerman
The Monitor on Psychology - April 2018

Here is an excerpt:

A 'Top Down' Approach

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

This more “top-down” approach to AI relies less on identifying patterns in data and more on figuring out mathematical ways to describe the rules that govern human cognition. Researchers can then write those rules into the learning algorithms that power the AI system. One promising avenue for this method is called Bayesian modeling, which uses probability to model how people reason and learn about the world. Brenden Lake, PhD, a psychologist and AI researcher at New York University, and his colleagues, for example, have developed a Bayesian AI system that can accomplish a form of one-shot learning. Humans, even children, are very good at this—a child only has to see a pineapple once or twice to understand what the fruit is, pick it out of a basket and maybe draw an example.

Likewise, adults can learn a new character in an unfamiliar language almost immediately.
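Note: Lake and colleagues' actual system builds probabilistic programs over pen strokes; the Python sketch below is only a toy illustration of the broader idea of Bayesian one-shot classification — a single labelled example per category plus a prior assumption about within-category variability. The categories, feature values and spread parameter are made up for illustration.

```python
# Toy Bayesian one-shot classification: NOT Lake et al.'s model, just an
# illustration of using probability to generalize from one example per category.
import numpy as np

def one_shot_posterior(x_new, exemplars, sigma=1.0):
    """Posterior over categories given a single exemplar per category.

    exemplars: dict mapping category name -> one feature vector
    sigma:     assumed within-category spread (a prior belief, not learned)
    """
    names = list(exemplars)
    # Gaussian log-likelihood of the new item under each exemplar.
    log_lik = np.array([
        -np.sum((x_new - exemplars[n]) ** 2) / (2 * sigma ** 2) for n in names
    ])
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum()                      # uniform prior over categories
    return dict(zip(names, post))

# One example each of a "pineapple" and an "apple" in a made-up 2-D feature space.
exemplars = {"pineapple": np.array([5.0, 2.0]), "apple": np.array([1.0, 1.0])}
print(one_shot_posterior(np.array([4.5, 2.2]), exemplars))
```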

The article is here.

Feds: Pitt professor agrees to pay government more than $130K to resolve claims of research grant misdeeds

Sean D. Hamill and Jonathan D. Silver
Pittsburgh Post-Gazette
Originally posted March 21, 2018

Here is an excerpt:

A prolific researcher, Mr. Schunn, pulled in more than $50 million in 24 NSF grants over the past 20 years, as well as another $25 million in 24 other grants from the military and private foundations, most of it researching how people learn, according to his personal web page.

Now, according to the government, Mr. Schunn must “provide certifications and assurances of truthfulness to NSF for up to five years, and agree not to serve as a reviewer, adviser or consultant to NSF for a period of three years.”

But all that may be the least of the fallout from Mr. Schunn’s settlement, according to a fellow researcher who worked on a grant with him in the past.

Though the settlement only involved fraud accusations on four NSF grants from 2006 to 2016, it will bring additional scrutiny to all of his work, not only of the grants themselves, but results, said Joseph Merlino, president of the 21st Century Partnership for STEM Education, a nonprofit based in Conshohocken.

“That’s what I’m thinking: Can I trust the data he gave us?” Mr. Merlino said of a project that he worked on with Mr. Schunn, and for which they just published a research article.

The information is here.

Note: The article refers to Dr. Schunn as Mr. Schunn throughout, even though he holds a PhD in psychology from Carnegie Mellon University.

Thursday, April 19, 2018

Common Sense for A.I. Is a Great Idea

Carissa Veliz
www.slate.com
Originally posted March 19, 2018

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among some of the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. not understanding what a shopping list is, and the kinds of items that are appropriate to such lists, is evidence of a much broader problem: They lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

The information is here.

Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality

Sandra Upson
Wired.com
Originally posted February 16, 2018

Here is an excerpt:

But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?

A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.

The information is here.

Wednesday, April 18, 2018

Is There A Difference Between Ethics And Morality In Business?

Bruce Weinstein
Forbes.com
Originally published February 23, 2018

Here is an excerpt:

In practical terms, if you use both “ethics” and “morality” in conversation, the people you’re speaking with will probably take issue with how you’re using these terms, even if they believe they’re distinct in some way.

The conversation will then veer from whatever substantive ethical point you were trying to make (“Our company has an ethical and moral responsibility to hire and promote only honest, accountable people”) to an argument about the meaning of the words “ethical” and “moral.” I had plenty of those arguments as a graduate student in philosophy, but is that the kind of discussion you really want to have at a team meeting or business conference?

You can do one of three things, then:

1. Use “ethics” and “morality” interchangeably only when you’re speaking with people who believe they’re synonymous.

2. Choose one term and stick with it.

3. Minimize the use of both words and instead refer to what each word is broadly about: doing the right thing, leading an honorable life and acting with high character.

As a professional ethicist, I’ve come to see #3 as the best option. That way, I don’t have to guess whether the person I’m speaking with believes ethics and morality are identical concepts, which is futile when you’re speaking to an audience of 5,000 people.

The information is here.

Note: I do not agree with everything in this article, but it is worth contemplating.

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Tuesday, April 17, 2018

Planning Complexity Registers as a Cost in Metacontrol

Kool, W., Gershman, S. J., & Cushman, F. A. (in press). Planning complexity registers as a cost in metacontrol. Journal of Cognitive Neuroscience.

Abstract

Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, while others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains wouldn’t be “worth it”. We test and verify this prediction in a sequential RL task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.
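Note: a minimal Python sketch of the cost-benefit arbitration idea in the abstract — rely on the planning (model-based) system only when its expected accuracy advantage outweighs a cost that grows with planning complexity. The linear cost term and all numbers are illustrative assumptions, not the paper's fitted model.

```python
# Illustrative metacontrol rule: is the extra accuracy of planning "worth it"
# once the computational demands of planning are counted as a cost?

def choose_controller(reward_mb, reward_mf, planning_complexity, cost_per_unit=0.5):
    """Pick a decision-making system for the current task.

    reward_mb: expected reward if the model-based (planning) system is used
    reward_mf: expected reward if the model-free (habitual) system is used
    planning_complexity: how demanding planning is in this task (arbitrary units)
    cost_per_unit: subjective cost per unit of planning complexity (assumed)
    """
    net_benefit = (reward_mb - reward_mf) - cost_per_unit * planning_complexity
    return "model-based" if net_benefit > 0 else "model-free"

# Same accuracy advantage in both tasks, but planning is harder in the second,
# so the habitual system wins there.
print(choose_controller(reward_mb=1.0, reward_mf=0.8, planning_complexity=0.2))
print(choose_controller(reward_mb=1.0, reward_mf=0.8, planning_complexity=1.0))
```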

The research is here.

Building A More Ethical Workplace Culture

PYMNTS
PYMNTS.com
Originally posted March 20, 2018

Here is an excerpt:

The Worst News

Among the positive findings in the report was the fact that reporting is on the rise by a whole 19 percent, with 69 percent of employees stating they had reported misconduct in the last two years.

But that number, Harned said, comes with a bitter side note. Retaliation has also spiked during the same time period, with 44 percent reporting it – up from 22 percent two years ago.

The rate of retaliation going up faster than the rate of reporting, Harned noted, is disturbing.

“That is a very real problem for employees, and I think over the last year, we’ve seen what a huge problem it has become for employers.”

The door-to-door on retaliation for reporting is short – about three weeks on average. That is just about the time it takes for firms – even those serious about doing a good job with improving compliance – to get any investigation up and organized.

“By then, the damage is already done,” said Harned. “We are better at seeing misconduct, but we aren’t doing enough to prevent it from happening – especially because retaliation is such a big problem.”

There are no easy solutions, Harned noted, but the good news – even in the face of the worst news – is that improvement is possible, and is even being logged in some segments. Employees, she stated, mostly come in the door with a moral compass to call their own, and want to work in environments that are healthy, not vicious.

“The answer is culture is everything: Companies need to constantly communicate to employees that conduct is the expectation for all levels of the organization, and that breaking those rules will always have consequences.”

The post is here.

Monday, April 16, 2018

The Seth Rich lawsuit matters more than the Stormy Daniels case

Jill Abramson
The Guardian
Originally published March 20, 2018

Here is an excerpt:

I’ve previously written about Fox News’ shameless coverage of the 2016 unsolved murder of a young former Democratic National Committee staffer named Seth Rich. Last week, ABC News reported that his family has filed a lawsuit against Fox, charging that several of its journalists fabricated a vile story attempting to link the hacked emails from Democratic National Committee computers to Rich, who worked there.

After the fabricated story ran on the Fox website, it was retracted, but not before various on-air stars, especially Trump mouthpiece Sean Hannity, flogged the bogus conspiracy theory suggesting Rich had something to do with the hacked messages.

This shameful episode demonstrated, once again, that Rupert Murdoch’s favorite network, and Trump’s, has no ethical compass and had no hesitation about what grief this manufactured story caused to the 26-year-old murder victim’s family. It’s good to see them striking back, since that is the only tactic that the Murdochs and Trumps of the world will respect or, perhaps, will force them to temper the calumny they spread on a daily basis.

Of course, the Rich lawsuit does not have the sex appeal of the Stormy case. The rightwing echo chamber will brazenly ignore its self-inflicted wounds. And, for the rest of the cable pundit brigades, the DNC emails and Rich are old news.

The article is here.

Psychotherapy Is 'The' Biological Treatment

Robert Berezin
Medscape.com
Originally posted March 16, 2018

Neuroscience surprisingly teaches us that not only is psychotherapy purely biological, but it is the only real biological treatment. It addresses the brain in the way it actually develops, matures, and operates. It follows the principles of evolutionary adaptation. It is consonant with genetics. And it specifically heals the problematic adaptations of the brain in precisely the ways that they evolved in the first place. Psychotherapy deactivates maladaptive brain mappings and fosters new and constructive pathways. Let me explain.

The operations of the brain are purely biological. The brain maps our experiences and memories through the linking of trillions of neuronal connections. These interconnected webs create larger circuits that map all throughout the architecture of the cortex. This generates high-level symbolic neuronal maps that take form as images in our consciousness. The play of consciousness is the highest level of symbolic form. It is a living theater of "image-ination," a representational world that consists of a cast of characters who relate together by feeling as well as scenarios, plots, set designs, and landscape.

As we adapt to our environment, the brain maps our emotional experience through cortical memory. This starts very early in life. If a baby is startled by a loud noise, his arms and legs will flail. His heart pumps adrenaline, and he cries. This "startle" maps a fight-or-flight response in his cortex, which is mapped through serotonin and cortisol. The baby is restored by his mother's holding. Her responsive repair once again re-establishes and maintains his well-being, which is mapped through oxytocin. These ongoing formative experiences of life are mapped into memory in precisely these two basic ways.

The article is here.

Sunday, April 15, 2018

What If There Is No Ethical Way to Act in Syria Now?

Sigal Samuel
The Atlantic
Originally posted April 13, 2018

For seven years now, America has been struggling to understand its moral responsibility in Syria. For every urgent argument to intervene against Syrian President Bashar al-Assad to stop the mass killing of civilians, there were ready responses about the risks of causing more destruction than could be averted, or even escalating to a major war with other powers in Syria. In the end, American intervention there has been tailored mostly to a narrow perception of American interests in stopping the threat of terror. But the fundamental questions are still unresolved: What exactly was the moral course of action in Syria? And more urgently, what—if any—is the moral course of action now?

The war has left roughly half a million people dead—the UN has stopped counting—but the question of moral responsibility has taken on new urgency in the wake of a suspected chemical attack over the weekend. As President Trump threatened to launch retaliatory missile strikes, I spoke about America’s ethical responsibility with some of the world’s leading moral philosophers. These are people whose job it is to ascertain the right thing to do in any given situation. All of them suggested that, years ago, America might have been able to intervene in a moral way to stop the killing in the Syrian civil war. But asked what America should do now, they all gave the same startling response: They don’t know.

The article is here.

What’s Next for Humanity: Automation, New Morality and a ‘Global Useless Class’

Kimiko de Freytas-Tamura
The New York Times
Originally published March 19, 2018

What will our future look like — not in a century but in a mere two decades?

Terrifying, if you’re to believe Yuval Noah Harari, the Israeli historian and author of “Sapiens” and “Homo Deus,” a pair of audacious books that offer a sweeping history of humankind and a forecast of what lies ahead: an age of algorithms and technology that could see us transformed into “super-humans” with godlike qualities.

In an event organized by The New York Times and How To Academy, Mr. Harari gave his predictions to the Times columnist Thomas L. Friedman. Humans, he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening.” Here are highlights of the interview.

Artificial intelligence and automation will create a ‘global useless class.’

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Mr. Harari said, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater, because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

The article is here.

The video is worth watching.

Please read Sapiens and Homo Deus by Yuval Harari.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Friday, April 13, 2018

The Farmbots Are Coming

Matt Jancer
www.wired.com
Originally published March 9, 2018

The first fully autonomous ground vehicles hitting the market aren’t cars or delivery trucks—they’re ­robo­-farmhands. The Dot Power Platform is a prime example of an explosion in advanced agricultural technology, which Goldman Sachs predicts will raise crop yields 70 percent by 2050. But Dot isn’t just a tractor that can drive without a human for backup. It’s the Transformer of ag-bots, capable of performing 100-plus jobs, from hay baler and seeder to rock picker and manure spreader, via an ­arsenal of tool modules. And though the hulking machine can carry 40,000 pounds, it navigates fields with balletic precision.

The information is here.

Computer Says "No": Part 1- Algorithm Bias

Jasmine Leonard
www.thersa.org
Originally published March 14, 2018

From the court room to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation.  One automatable task that’s received a lot of attention of late is decision-making.  So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail.  But there’s a lack of understanding about how these systems work, and as a result, a lot of unwarranted concerns.  In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear just what we’re talking about, as the term “automated decision” is incredibly misleading.  It suggests that a computer is making a decision, when in reality this is rarely the case.  What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer.  In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan.  A human will then use this prediction to make a decision about whether or not to grant a defendant bail or give an individual a credit card.  When described like this, it seems somewhat absurd to say that these systems are making decisions.  I therefore suggest that we call them what they actually are: prediction engines.
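Note: the distinction Leonard draws can be made concrete: the "prediction engine" outputs only a probability, and the decision is a separate, human-set policy applied to that probability. The model, features, data and threshold in the Python sketch below are hypothetical.

```python
# A prediction engine produces a probability; the decision rule is a human choice.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [income, existing_debt] -> defaulted (1) or not (0)
X = np.array([[30, 20], [80, 5], [45, 30], [90, 2], [25, 25], [70, 10]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

engine = LogisticRegression().fit(X, y)                    # the prediction engine
p_default = engine.predict_proba([[50.0, 15.0]])[0, 1]     # its output: a probability

# The threshold, and what to do on either side of it, is policy, not prediction.
DECISION_THRESHOLD = 0.3
decision = "decline" if p_default > DECISION_THRESHOLD else "grant credit"
print(f"predicted default risk: {p_default:.2f} -> {decision}")
```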

Thursday, April 12, 2018

CA’s Tax On Millionaires Yields Big Benefits For People With Mental Illness

Anna Gorman
Kaiser Health News
Originally published March 14, 2018

A statewide tax on the wealthy has significantly boosted mental health programs in California’s largest county, helping to reduce homelessness, incarceration and hospitalization, according to a report released Tuesday.

Revenue from the tax, the result of a statewide initiative passed in 2004, also expanded access to therapy and case management to almost 130,000 people up to age 25 in Los Angeles County, according to the report by the Rand Corp. Many were poor and from minority communities, the researchers said.

“Our results are encouraging about the impact these programs are having,” said Scott Ashwood, one of the authors and an associate policy researcher at Rand. “Overall we are seeing that these services are reaching a vulnerable population that needs them.”

The positive findings came just a few weeks after a critical state audit accused California counties of hoarding the mental health money — and the state of failing to ensure that the money was being spent. The February audit said that the California Department of Health Care Services allowed local mental health departments to accumulate $231 million in unspent funds by the end of the 2015-16 fiscal year — which should have been returned to the state because it was not spent in the allowed time frame.

Proposition 63, now known as the Mental Health Services Act, imposed a 1 percent tax on people who earn more than $1 million annually to pay for expanded mental health care in California. The measure raises about $2 billion each year for services such as preventing mental illness from progressing, reducing stigma and improving treatment. Altogether, counties have received $16.53 billion.

The information is here.

The Tech Industry’s War on Kids

Richard Freed
Medium.com
Originally published March 12, 2018

Here is an excerpt:

Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”

Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”

While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.

The article is here.

Wednesday, April 11, 2018

What to do with those divested billions? The only way is ethics

Juliette Jowit
The Guardian
Originally posted March 15, 2018

Here is an excerpt:

“I would not feel comfortable gaining from somebody else’s misery,” explains company owner and private investor Rebecca Hughes.

Institutions too are heading in the same direction: nearly 80% of investors across 30 countries told last year’s Schroders’ Global Investor Study that sustainability had become more important to them over the last five years.

“While profitability remains the central investment consideration, interest in sustainability is increasing,” said Jessica Ground, Schroders’ global head of stewardship. “But investors also see sustainability and profits as intertwined.”

UBS’s Doing well by doing good report claims more than half the UK public would pay more for goods or services with a conscience. Many more people will want better ethical standards, even if they don’t want or can’t afford to pay for them.

“It’s in my upbringing: you treat others in the way you’d like to be treated,” says Hughes.

More active financial investors are also taking the issues seriously. Several have indices to track the value of shares in companies which are not doing ‘bad’, or actively doing ‘good’. One is Morgan Stanley, whose two environmental, social and governance (ESG) indices – also covering weapons and women’s progress – were worth $62bn by last summer.

The information is here.

How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock and William C. Gerken
Harvard Business Review
Originally posted March 5, 2018

One bad apple, the saying goes, can ruin the bunch. So, too, with employees.

Our research on the contagiousness of employee fraud tells us that even your most honest employees become more likely to commit misconduct if they work alongside a dishonest individual. And while it would be nice to think that the honest employees would prompt the dishonest employees to better choices, that’s rarely the case.

Among co-workers, it appears easier to learn bad behavior than good.

For managers, it is important to realize that the costs of a problematic employee go beyond the direct effects of that employee’s actions — bad behaviors of one employee spill over into the behaviors of other employees through peer effects. By under-appreciating these spillover effects, a few malignant employees can infect an otherwise healthy corporate culture.

History — and current events — are littered with outbreaks of misconduct among co-workers: mortgage underwriters leading up to the financial crisis, stock brokers at boiler rooms such as Stratton Oakmont, and cross-selling by salespeople at Wells Fargo.

The information is here.

Tuesday, April 10, 2018

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Lily Frank and Sven Nyholm
Artificial Intelligence and Law
September 2017, Volume 25, Issue 3, pp 305–323

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

The article is here.

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Monday, April 9, 2018

Use Your Brain: Artificial Intelligence Isn't Close to Replacing It

Leonid Bershidsky
Bloomberg.com
Originally posted March 19, 2018

Nectome promises to preserve the brains of terminally ill people in order to turn them into computer simulations -- at some point in the future when such a thing is possible. It's a startup that's easy to mock. Just beyond the mockery, however, lies an important reminder to remain skeptical of modern artificial intelligence technology.

The idea behind Nectome is known to mind uploading enthusiasts (yes, there's an entire culture around the idea, with a number of wealthy foundations backing the research) as "destructive uploading": A brain must be killed to map it. That macabre proposition has resulted in lots of publicity for Nectome, which predictably got lumped together with earlier efforts to deep-freeze millionaires' bodies so they could be revived when technology allows it. Nectome's biggest problem, however, isn't primarily ethical.

The company has developed a way to embalm the brain that keeps all its synapses visible under an electron microscope. That makes it possible to create a map of all of the brain's neuron connections, a "connectome." Nectome's founders believe that map is the most important element of the reconstructed human brain and that preserving it should keep all of a person's memories intact. But even these mind uploading optimists only expect the first 10,000-neuron network to be reconstructed sometime between 2021 and 2024.

The information is here.

Do Evaluations Rise With Experience?

Kieran O’Connor and Amar Cheema
Psychological Science 
First Published March 1, 2018

Abstract

Sequential evaluation is the hallmark of fair review: The same raters assess the merits of applicants, athletes, art, and more using standard criteria. We investigated one important potential contaminant in such ubiquitous decisions: Evaluations become more positive when conducted later in a sequence. In four studies, (a) judges’ ratings of professional dance competitors rose across 20 seasons of a popular television series, (b) university professors gave higher grades when the same course was offered multiple times, and (c) in an experimental test of our hypotheses, evaluations of randomly ordered short stories became more positive over a 2-week sequence. As judges completed repeated evaluations, they experienced more fluent decision making, producing more positive judgments (Study 4 mediation). This seemingly simple bias has widespread and impactful consequences for evaluations of all kinds. We also report four supplementary studies to bolster our findings and address alternative explanations.

The article is here.

Sunday, April 8, 2018

Can Bots Help Us Deal with Grief?

Evan Selinger
Medium.com
Originally posted March 13, 2018

Here are two excerpts:

Muhammad is under no illusion that he’s speaking with the dead. To the contrary, Muhammad is quick to point out the simulation he created works well when generating scripts of predictable answers, but it has difficulty relating to current events, like a presidential election. In Muhammad’s eyes, this is a feature, not a bug.

Muhammad said that “out of good conscience” he didn’t program the simulation to be surprising, because that capability would deviate too far from the goal of “personality emulation.”

This constraint fascinates me. On the one hand, we’re all creatures of habit. Without habits, people would have to deliberate before acting every single time. This isn’t practically feasible, so habits can be beneficial when they function as shortcuts that spare us from paralysis resulting from overanalysis.

(cut)

The empty chair technique that I’m referring to was popularized by Friedrich Perls (more widely known as Fritz Perls), a founder of Gestalt therapy. The basic setup looks like this: Two chairs are placed near each other; a psychotherapy patient sits in one chair and talks to the other, unoccupied chair. When talking to the empty chair, the patient engages in role-playing and acts as if a person is seated right in front of her — someone to whom she has something to say. After making a statement, launching an accusation, or asking a question, the patient then responds to herself by taking on the absent interlocutor’s perspective.

In the case of unresolved parental issues, the dialog could have the scripted format of the patient saying something to her “mother,” and then having her “mother” respond to what she said, going back and forth in a dialog until something that seems meaningful happens. The prop of an actual chair isn’t always necessary, and the context of the conversations can vary. In a bereavement context, for example, a widow might ask the chair-as-deceased-spouse for advice about what to do in a troubling situation.

The article is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.

The article is here.

Friday, April 6, 2018

Complaint: Allina ignored intern’s sexual harassment allegations

Barbara L. Jones
Minnesota Lawyer
Originally published March 7, 2018

Here is an excerpt:

Abel’s complaint stems from the practicum at Abbott, partially under Gottlieb’s supervision.  She began her practicum in September 2015. According to the complaint, she immediately encountered sexualized conversation with Gottlieb and he attempted to control any conversations she and other students had with anybody other than him.

On her first day at the clinic, Gottlieb took students outside and instructed Abel to lie down in the street, ostensibly to measure a parking space. She refused and Gottlieb told her that “obeying” him would be an area for growth. When speaking with other people, he frequently referred to Abel, of Asian-Indian descent, as “the graduate student of color” or “the brown one.”  He also refused to provide her with access to the IT chart system, forcing her to ask him for “favors,” the complaint alleges. Gottlieb repeatedly threatened to fire Abel and other students from the practicum, the complaint said.

Gottlieb spent time in individual supervision sessions with Abel and also group sessions that involved role play. He told students to mimic having sex with him in his role as therapist and tell him he was good in bed, the complaint states. At these times he sometimes had a visible erection, the complaint also says. Abel raised these and other concerns but was brushed off by Abbott personnel, her complaint alleges.  Abel asked Dr. Michael Schmitz, the clinical director of hospital-based psychology services, for help but was told that she had to be “emotionally tough” and put up with Gottlieb, the complaint continues. She sought some assistance from Finch, whose job was to assist Gottlieb in the clinical psychology training program and supervise interns.  Gottlieb was displeased and threatening about her discussions with Schmitz and Finch, the complaint says.

The article is here.

Schools are a place for students to grow morally and emotionally — let's encourage them

William Eidtson
The Hill
Originally posted March 10, 2018

Here is an excerpt:

However, if schools were truly a place for students to grow “emotionally and morally,” wouldn’t engaging in a demonstration of solidarity to protest the all too recurrent slaughter of concertgoers, church assemblies, and schoolchildren be one of the most emotionally engaging and morally relevant activities they could undertake?

And if life is all about choices and consequences, wouldn’t the choice to allow students to engage in one of the most cherished traditions of our democracy — namely, political dissent — potentially result in a profound and historically significant educational experience?

The fact is that our educational institutions are often not places that foster emotional and moral growth within students. Why? Part of the reason is because while our schools are pretty good at teaching students how to do things, they fail at teaching why things matter.

School officials tend to assume that if you simply teach students how things work, the “why it’s important” will naturally follow. But this is precisely the opposite of how we learn and grow in the world. People need reasons, stories, and context to direct their skills.

We need the why to give us a context to understand and use the how. We need the why to give us good reasons to learn the how. The why makes the how relevant. The why makes the how endurable. The why makes the how possible.

The article is here.

Thursday, April 5, 2018

Would You Opt for Immortality?

Michael Shermer
Quillette
Originally posted March 2, 2018

Here is an excerpt:

The idea of living forever, in fact, is not such a radical idea when you consider the fact that the vast majority of people already believe that they will do so in the next life. Since the late 1990s Gallup has consistently found that between 72 and 83 percent of Americans believe in heaven. Globally, rates of belief in heaven in other countries typically lag behind those found in America, but they are nonetheless robust. A 2011 Ipsos/Reuters poll, for example, found that of 18,829 people surveyed across 23 countries, 51 percent said they were convinced that an afterlife exists, ranging from a high of 62 percent of Indonesians and 52 percent of South Africans and Turks, to a low of 28 percent of Brazilians and only 3 percent of the very secular Swedes.

So powerful and pervasive are such convictions that even a third of agnostics and atheists proclaim belief in an afterlife. Say what? A 2014 survey conducted by the Austin Institute for the Study of Family and Culture on 15,738 Americans between the ages of 18 and 60 found that 13.2 percent identify as atheist or agnostic, and 32 percent of those answered in the affirmative to the question: “Do you think there is life, or some sort of conscious existence, after death?”

Depending on what these people believe about what, exactly, is resurrected in the next life—just your soul, or both your body and your soul—the belief among religious people that “you” will continue indefinitely in some form in the hereafter is not so different in principle from what the scientific immortalists are trying to accomplish in the here and now.

The article is here.

Moral Injury and Religiosity in US Veterans With Posttraumatic Stress Disorder Symptoms

Harold Koenig and others
The Journal of Nervous and Mental Disease: February 28, 2018

Abstract

Moral injury (MI) involves feelings of shame, grief, meaninglessness, and remorse from having violated core moral beliefs related to traumatic experiences. This multisite cross-sectional study examined the association between religious involvement (RI) and MI symptoms, mediators of the relationship, and the modifying effects of posttraumatic stress disorder (PTSD) severity in 373 US veterans with PTSD symptoms who served in a combat theater. Assessed were demographic, military, religious, physical, social, behavioral, and psychological characteristics using standard measures of RI, MI symptoms, PTSD, depression, and anxiety. MI was widespread, with over 90% reporting high levels of at least one MI symptom and the majority reporting at least five symptoms or more. In the overall sample, religiosity was inversely related to MI in bivariate analyses (r = −0.25, p < 0.0001) and multivariate analyses (B = −0.40, p = 0.001); however, this relationship was present only among veterans with severe PTSD (B = −0.65, p = 0.0003). These findings have relevance for the care of veterans with PTSD.
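Note: the finding that the religiosity–moral injury association was "present only among veterans with severe PTSD" is a moderation (interaction) effect. The Python sketch below shows how such an analysis is typically run; the data are simulated and the variable names hypothetical — this is not the study's data or analysis code.

```python
# Illustrative moderation analysis: regress moral injury on religiosity,
# a PTSD-severity indicator, and their interaction (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 373
religiosity = rng.normal(size=n)
ptsd_severe = rng.integers(0, 2, size=n)                   # 1 = severe PTSD
# Simulate an inverse religiosity effect that exists only in the severe group.
moral_injury = -0.6 * religiosity * ptsd_severe + rng.normal(size=n)

X = sm.add_constant(np.column_stack(
    [religiosity, ptsd_severe, religiosity * ptsd_severe]))
fit = sm.OLS(moral_injury, X).fit()
print(fit.summary(xname=["const", "religiosity", "ptsd_severe", "interaction"]))
```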

The paper is here.

Wednesday, April 4, 2018

Musk and Zuckerberg are fighting over whether we rule technology—or it rules us

Michael Coren
Quartz.com
Originally posted April 1, 2018

Here is an excerpt:

Musk wants to rein in AI, which he calls “a fundamental risk to the existence of human civilization.” Zuckerberg has dismissed such views calling their proponents “naysayers.” During a Facebook live stream last July, he added, “In some ways I actually think it is pretty irresponsible.” Musk was quick to retort on Twitter. “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”

Both men’s views on the risks and rewards of technology are embodied in their respective companies. Zuckerberg has famously embraced the motto “Move fast and break things.” That served Facebook well as it exploded from a college campus experiment in 2004 to an aggregator of the internet for more than 2 billion users.

Facebook has treated the world as an infinite experiment, a game of low-stakes, high-volume tests that reliably generate profits, if not always progress. Zuckerberg’s main concern has been to deliver the fruits of digital technology to as many people as possible, as soon as possible. “I have pretty strong opinions on this,” Zuckerberg has said. “I am optimistic. I think you can build things and the world gets better.”

The information is here.

Simple moral code supports cooperation

Charles Efferson & Ernst Fehr
Nature
Originally posted March 7, 2018

The evolution of cooperation hinges on the benefits of cooperation being shared among those who cooperate. In a paper in Nature, Santos et al. investigate the evolution of cooperation using computer-based modelling analyses, and they identify a rule for moral judgements that provides an especially powerful system to drive cooperation.

Cooperation can be defined as a behaviour that is costly to the individual providing help, but which provides a greater overall societal benefit. For example, if Angela has a sandwich that is of greater value to Emmanuel than to her, Angela can increase total societal welfare by giving her sandwich to Emmanuel. This requires sacrifice on her part if she likes sandwiches. Reciprocity offers a way for benefactors to avoid helping uncooperative individuals in such situations. If Angela knows Emmanuel is cooperative because she and Emmanuel have interacted before, her reciprocity is direct. If she has heard from others that Emmanuel is a cooperative person, her reciprocity is indirect — a mechanism of particular relevance to human societies.

A strategy is a rule that a donor uses to decide whether or not to cooperate, and the evolution of reciprocal strategies that support cooperation depends crucially on the amount of information that individuals process. Santos and colleagues develop a model to assess the evolution of cooperation through indirect reciprocity. The individuals in their model can consider a relatively large amount of information compared with that used in previous studies.
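Note: Santos and colleagues' contribution concerns which judgement rules (social norms) best sustain cooperation; the Python toy below only sketches the basic indirect-reciprocity setup, "image scoring" style, in which discriminators help only partners in good standing. The population size, payoffs and judgement rule are illustrative assumptions, not the authors' model.

```python
# Toy indirect reciprocity: discriminators help recipients in good standing;
# an agent's standing is updated by how it behaves when it is the donor.
import random

N, ROUNDS, BENEFIT, COST = 50, 5000, 2.0, 1.0
is_discriminator = [i < 40 for i in range(N)]   # 40 discriminators, 10 defectors
good_standing = [True] * N
payoff = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    if is_discriminator[donor] and good_standing[recipient]:
        payoff[donor] -= COST                   # helping is costly to the donor...
        payoff[recipient] += BENEFIT            # ...but creates a larger benefit
        good_standing[donor] = True             # helping the good is judged good
    elif is_discriminator[donor]:
        good_standing[donor] = True             # refusing the bad is also judged good
    else:
        good_standing[donor] = False            # defectors never help, lose standing

disc = sum(p for p, d in zip(payoff, is_discriminator) if d) / 40
defe = sum(p for p, d in zip(payoff, is_discriminator) if not d) / 10
print(f"mean payoff  discriminators: {disc:.2f}   defectors: {defe:.2f}")
```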

The review is here.

Tuesday, April 3, 2018

Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu
Practical Ethics
Originally posted March 31, 2018

Here is an excerpt:

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. Cambridge researcher and creator of the third-party app in question, Aleksandr Kogan, violated the agreement only when the data was passed onto Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; they profit from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users regarding what precisely it is they are consenting to.

The blog post is here.

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”
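Note: the article only gestures at how these attacks work. The Python sketch below shows the core of one standard attack, the fast gradient sign method, applied to a tiny hand-coded linear "classifier" rather than a deep network so it stays self-contained; the weights, input and perturbation budget are made up.

```python
# FGSM-style adversarial perturbation of a toy logistic classifier.
import numpy as np

w = np.array([1.5, -2.0, 0.5])                 # classifier weights (hypothetical)
b = 0.1

def class1_score(x):
    return 1 / (1 + np.exp(-(w @ x + b)))      # probability assigned to class "1"

x = np.array([0.9, 0.1, 0.4])                  # input correctly scored as class 1
eps = 0.4                                       # max per-feature perturbation

# The score is sigmoid(w.x + b), so its gradient w.r.t. the input is proportional
# to w; stepping each feature against the sign of that gradient pushes the score
# down, here enough to flip the decision across the 0.5 boundary.
x_adv = x - eps * np.sign(w)

print(f"clean score:       {class1_score(x):.2f}")      # ~0.81
print(f"adversarial score: {class1_score(x_adv):.2f}")  # ~0.46
```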

The article is here.

Monday, April 2, 2018

Ethics and sport have long been strangers to one another

Kenan Malik
The Guardian
Originally posted March 8, 2018

Here is an excerpt:

Today’s great ethical debate is not about payment but drugs. Last week, the digital, culture, media and sport select committee accused Bradley Wiggins of “crossing the ethical line” for allegedly misusing drugs allowed for medical purposes to enhance performance.

The ethical lines over drug use are, however, as arbitrary and irrational as earlier ones about payment. Drugs are said to be “unnatural” and to provide athletes with an “unfair advantage”. But virtually everything an athlete does, from high-altitude training to high-protein dieting, is unnatural and seeks to gain an advantage.

EPO is a naturally produced hormone that stimulates red blood cell production, so helping endurance athletes. Injections of EPO are banned in sport. Yet Chris Froome is permitted to sleep in a hypoxic chamber, which reduces oxygen in the air, forcing his body to produce more red blood cells. It has the same effect as EPO, is equally unnatural and provides an advantage. Why is one banned but not the other?

The article is here.

The Grim Conclusions of the Largest-Ever Study of Fake News

Robinson Meyer
The Atlantic
Originally posted March 8, 2018

Here is an excerpt:

“It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”

The study has already prompted alarm from social scientists. “We must redesign our information ecosystem for the 21st century,” write a group of 16 political scientists and legal scholars in an essay also published Thursday in Science. They call for a new drive of interdisciplinary research “to reduce the spread of fake news and to address the underlying pathologies it has revealed.”

“How can we create a news ecosystem … that values and promotes truth?” they ask.

The new study suggests that it will not be easy. Though Vosoughi and his colleagues only focus on Twitter—the study was conducted using exclusive data which the company made available to MIT—their work has implications for Facebook, YouTube, and every major social network. Any platform that regularly amplifies engaging or provocative content runs the risk of amplifying fake news along with it.

The article is here.

Sunday, April 1, 2018

Sudden-Death Aversion: Avoiding Superior Options Because They Feel Riskier

Jesse Walker, Jane L. Risen, Thomas Gilovich, and Richard Thaler
Journal of Personality and Social Psychology, in press

Abstract

We present evidence of Sudden-Death Aversion (SDA) – the tendency to avoid “fast” strategies that provide a greater chance of success, but include the possibility of immediate defeat, in favor of “slow” strategies that reduce the possibility of losing quickly, but have lower odds of ultimate success. Using a combination of archival analyses and controlled experiments, we explore the psychology behind SDA. First, we provide evidence for SDA and its cost to decision makers by tabulating how often NFL teams send games into overtime by kicking an extra point rather than going for the 2-point conversion (Study 1) and how often NBA teams attempt potentially game-tying 2-point shots rather than potentially game-winning 3-pointers (Study 2). To confirm that SDA is not limited to sports, we demonstrate SDA in a military scenario (Study 3). We then explore two mechanisms that contribute to SDA: myopic loss aversion and concerns about “tempting fate.” Studies 4 and 5 show that SDA is due, in part, to myopic loss aversion, such that decision makers narrow the decision frame, paying attention to the prospect of immediate loss with the “fast” strategy, but not the downstream consequences of the “slow” strategy. Study 6 finds people are more pessimistic about a risky strategy that needn’t be pursued (opting for sudden death) than the same strategy that must be pursued. We end by discussing how these twin mechanisms lead to differential expectations of blame from the self and others, and how SDA influences decisions in several different walks of life.
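To see why the “fast” strategy can be superior even though it risks immediate defeat, here is a small back-of-the-envelope calculation in Python. The probabilities are illustrative assumptions, not figures from the paper: a team that scores a late touchdown to trail by one point can either go for a 2-point conversion to win outright or kick the extra point and hope to win in overtime.

```python
# Illustrative assumptions only; not figures from Walker et al.
p_two_point   = 0.48   # assumed chance of converting a 2-point try ("fast" strategy)
p_extra_point = 0.94   # assumed chance of making the extra point ("slow" strategy)
p_win_ot      = 0.50   # assumed chance of winning once the game reaches overtime

win_fast = p_two_point                 # convert and win in regulation
win_slow = p_extra_point * p_win_ot    # tie the game, then win overtime

print(f"Fast strategy win probability: {win_fast:.2f}")  # 0.48
print(f"Slow strategy win probability: {win_slow:.2f}")  # 0.47
```

Under these assumptions the riskier play wins slightly more often, yet, as the studies above suggest, decision makers tend to avoid it because the possibility of losing immediately looms larger than the downstream odds of the safer route.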

The research is here.

Saturday, March 31, 2018

Individual Moral Development and Moral Progress

Schinkel, A. & de Ruyter, D.J.
Ethical Theory and Moral Practice (2017) 20: 121.
https://doi.org/10.1007/s10677-016-9741-6

Abstract

At first glance, one of the most obvious places to look for moral progress is in individuals, in particular in moral development from childhood to adulthood. In fact, that moral progress is possible is a foundational assumption of moral education. Beyond the general agreement that moral progress is not only possible but even a common feature of human development things become blurry, however. For what do we mean by ‘progress’? And what constitutes moral progress? Does the idea of individual moral progress presuppose a predetermined end or goal of moral education and development, or not? In this article we analyze the concept of moral progress to shed light on the psychology of moral development and vice versa; these analyses are found to be mutually supportive. We suggest that: moral progress should be conceived of as development that is evaluated positively on the basis of relatively stable moral criteria that are the fruit and the subject of an ongoing conversation; moral progress does not imply the idea of an end-state; individual moral progress is best conceived of as the development of various components of moral functioning and their robust integration in a person’s identity; both children and adults can progress morally - even though we would probably not speak in terms of progress in the case of children - but adults’ moral progress is both more hard-won and to a greater extent a personal project rather than a collective effort.

Download the paper here.

Friday, March 30, 2018

Trump Wants More Asylums — and Some Psychiatrists Agree

Benedict Carey
The New York Times
Originally published March 5, 2018

Here is an excerpt:

The third, and perhaps most critical, point of agreement in the asylum debate is that money is lacking in a nation that puts mental health at the bottom of the health budget. These disorders are expensive to treat in any setting, and funds for hospital care and community supports often come out of the same budget.

In his paper arguing for the return of asylums, Dr. Sisti singled out the Worcester Recovery Center and Hospital in Massachusetts.

This $300 million state hospital, opened in 2012, has an annual budget of $80 million, 320 private rooms, a range of medical treatments and nonmedical supports, like family and group therapy, and vocational training. Its progress is closely watched among mental health experts.

The average length of stay for adolescents is 28 days, and the average for continuing care (for the more serious cases) is 85 days, according to Daniela Trammell, a spokeswoman for the Massachusetts Department of Mental Health.

“Some individuals are hospitalized for nine months to a year; a smaller number is hospitalized for one to three years,” she wrote in an email.

Proponents of modern asylums insist that this kind of money is well spent, considering the alternatives for people with mental disabilities in prison or on the streets. Opponents are not convinced.

The article is here.

Not Noble Savages After All: Limits to Early Altruism

Karen Wynn, Paul Bloom, Ashley Jordan, Julia Marshall, Mark Sheskin
Current Directions in Psychological Science 
Vol 27, Issue 1, pp. 3 - 8
First Published December 22, 2017

Abstract

Many scholars draw on evidence from evolutionary biology, behavioral economics, and infant research to argue that humans are “noble savages,” endowed with indiscriminate kindness. We believe this is mistaken. While there is evidence for an early-emerging moral sense—even infants recognize and favor instances of fairness and kindness among third parties—altruistic behaviors are selective from the start. Babies and young children favor people who have been kind to them in the past and favor familiar individuals over strangers. They hold strong biases for in-group over out-group members and for themselves over others, and indeed are more unequivocally selfish than older children and adults. Much of what is most impressive about adult morality arises not through inborn capacities but through a fraught developmental process that involves exposure to culture and the exercise of rationality.

The article is here.

Thursday, March 29, 2018

Government watchdog files 30 ethics complaints against Trump administration

Julia Manchester
The Hill
Originally posted March 26, 2018

Here is an excerpt:

"The bottom line is that neither Trump nor his administration take conflicts of interest and ethics seriously," Lisa Gilbert, the group's vice president of legislative affairs, told the network.

" 'Drain the swamp' was far more campaign rhetoric than a commitment to ethics, and the widespread lack of compliance and enforcement of Trump's ethics executive order shows that ethics do not matter in the Trump administration."

NBC News reports Public Citizen filed complaints with the White House Office of Management and Budget, the Environmental Protection Agency, and the departments of Defense, Homeland Security, Housing and Urban Development, Transportation, Health and Human Services, Commerce and Interior, among others.

Trump signed an executive order shortly after he took office in 2017 that was aimed at cracking down on lobbyists' influence in the U.S. government.

The order allowed officials who departed the administration to lobby the government, except the agency for which they worked, and permitted lobbyists to enter the administration as long as they didn't work on specific issues that would impact former clients or employers for two years.

The article is here.

Authors of premier medical textbook didn’t disclose $11 million in industry payments

Adam Marcus and Ivan Oransky
www.statnews.com
Originally published March 6, 2018

Here is an excerpt:

“These findings indicate that full transparency of [author conflicts] should become a standard practice among the authors of biomedical educational materials,” according to the authors, whose study appears in the journal AJOB Empirical Bioethics.

McGraw-Hill, which publishes Harrison’s, did not respond to STAT’s requests for comment.

Financial disclosures have become de rigueur in scientific journals, where many of Harrison’s authors also publish and are subject to guidelines for such disclosures. Textbooks, however, have typically not required disclosures — and that means they’ve fallen even more behind standard practices.

The researchers, led by Brian Piper, a neuroscientist at the Geisinger Commonwealth School of Medicine in Scranton, Pa., acknowledge that simply looking at patent awards and fees from biomedical companies doesn’t prove the existence of biased work. But they note that medical textbooks are enormously influential due to their perceived authority and the wide readership they receive.

The article is here.

Wednesday, March 28, 2018

Mental Health Crisis for Grad Students

Colleen Flaherty
Inside Higher Ed
Originally published March 6, 2018

Several studies suggest that graduate students are at greater risk for mental health issues than those in the general population. This is largely due to social isolation, the often abstract nature of the work and feelings of inadequacy -- not to mention the slim tenure-track job market. But a new study in Nature Biotechnology warns, in no uncertain terms, of a mental health “crisis” in graduate education.

“Our results show that graduate students are more than six times as likely to experience depression and anxiety as compared to the general population,” the study says, urging action on the part of institutions. “It is only with strong and validated interventions that academia will be able to provide help for those who are traveling through the bioscience workforce pipeline.”

The paper is based on a survey including clinically validated scales for anxiety and depression, deployed to students via email and social media. The survey’s 2,279 respondents were mostly Ph.D. candidates (90 percent), representing 26 countries and 234 institutions. Some 56 percent study humanities or social sciences, while 38 percent study the biological and physical sciences. Two percent are engineering students and 4 percent are enrolled in other fields.

Some 39 percent of respondents scored in the moderate-to-severe depression range, as compared to 6 percent of the general population measured previously with the same scale.

The article is here.

The Academic Mob and Its Fatal Toll

Brad Cran
Quillette.com
Originally published March 2, 2018

Here is an excerpt:

In her essay “The Anatomy of an Academic Mobbing,” Joan Friedenberg states that “most mobbers see their actions as perfectly justified by the perceived depravity of their target, at least until they are asked to account for it with some degree of thoughtfulness, such as in a court deposition, by a journalist or in a judicial hearing.”

The flip side to the depravity of the target is the righteousness of the mob. What makes members of the mob so passionately inhumane is that their position as righteous becomes instantly wrapped up in the successful destruction of the target. As Friedenberg writes, “An unsuccessful account leaves the mobber entirely morally culpable.”

Moral culpability creates fear and stokes irrational behavior, not within the target but within the mob itself. If a mob fails to cast out the target then eventually the mob will have to come to terms with the rights of the person they tried to destroy and the fact that all people, regardless of manufactured depravity, are deserving of humanity and basic fair treatment.

Every effort will be made to increase the allegation count, magnify the severity of each accusation, reinterpret any past actions of the target as malicious, and wipe away any sign that the target ever had a single redeemable quality that could point to the fact that they are undeserving of total destruction and shunning. For this reason “bullying” is a common accusation levelled against mobbing targets.

The article is here.

Tuesday, March 27, 2018

"My Brain Made Me Do It" Is Becoming a More Common Criminal Defense

Dina Fine Maron
Scientific American
Originally published March 5, 2018

Here is an excerpt:

But experts looking back at the 2007 case now say Hodges was part of a burgeoning trend: Criminal defense strategies are increasingly relying on neurological evidence—psychological evaluations, behavioral tests or brain scans—to potentially mitigate punishment. Defendants may cite earlier head traumas or brain disorders as underlying reasons for their behavior, hoping this will be factored into a court’s decisions. Such defenses have been employed for decades, mostly in death penalty cases. But as science has evolved in recent years, the practice has become more common in criminal cases ranging from drug offenses to robberies.

“The number of cases in which people try to introduce neurotechnological evidence in the trial or sentencing phase has gone up by leaps and bounds,” says Joshua Sanes, director of the Center for Brain Science at Harvard University. But such attempts may be outpacing the scientific evidence behind the technology, he adds.

“In 2012 alone over 250 judicial opinions—more than double the number in 2007—cited defendants arguing in some form or another that their ‘brains made them do it,’” according to an analysis by Nita Farahany, a law professor and director of Duke University’s Initiative for Science and Society. More recently, she says, that number has climbed to around 420 each year.

The article is here.

Neuroblame?

Stephen Rainey
Practical Ethics
Originally posted February 15, 2018

Here is an excerpt:

Rather than bio-mimetic prostheses, replacement limbs and so on, we can predict that technologies superior to the human body will be developed. Controlled by the brains of users, these enhancements will amount to extensions of the human body, and allow greater projection of human will and intentions in the world. We might imagine a cohort of brain controlled robots carrying out mundane tasks around the home, or buying groceries and so forth, all while the user gets on with something altogether more edifying (or does nothing at all but trigger and control their bots). Maybe a highly skilled, and well-practised, user could control legions of such bots, each carrying out separate tasks.

Before getting too carried away with this line of thought, it’s probably worth getting to the point. The issue worth looking at concerns what happens when things go wrong. It’s one thing to imagine someone sending out a neuro-controlled assassin-bot to kill a rival. Regardless of the unusual route taken, this would be a pretty simple case of causing harm. It would be akin to someone simply assassinating their rival with their own hands. However, it’s another thing to consider how sloppily framing the goal for a bot, such that it ends up causing harm, ought to be parsed.

The blog post is here.

Monday, March 26, 2018

Bill to Bar LGBTQ Discrimination Stokes New Nebraska Debate

Tess Williams
US News and World Report
Originally published February 22, 2018

A bill that would prevent psychologists from discriminating against patients based on their sexual orientation or gender identity is reviving a nearly decade-old dispute in Nebraska state government.

Sen. Patty Pansing Brooks of Lincoln said Thursday that her bill would adopt the code of conduct from the American Psychological Association, which prevents discrimination against protected classes of people but does not require professionals to treat patients if they lack expertise or if it conflicts with their personal beliefs. The professional would have to provide an adequate referral instead.

Pansing Brooks said the bill will likely not become law, but she hopes it will bring attention to the ongoing problem. She said she hopes it will be resolved internally, but if a conclusion is not reached, she plans to call for a hearing later this year and will "not let this issue die."

The state Board of Psychology proposed new regulations in 2008, and the following year, the Department of Health and Human Services sent the changes to the Nebraska Catholic Conference for review. Pansing Brooks said she is unsure why the religious organization was given special review.

The article is here.

Non cogito, ergo sum

Ian Leslie
The Economist
Originally published May/June 2012

Here is an excerpt:

Researchers from Columbia Business School, New York, conducted an experiment in which people were asked to predict outcomes across a range of fields, from politics to the weather to the winner of “American Idol”. They found that those who placed high trust in their feelings made better predictions than those who didn’t. The result only applied, however, when the participants had some prior knowledge.

This last point is vital. Unthinking is not the same as ignorance; you can’t unthink if you haven’t already thought. Djokovic was able to pull off his wonder shot because he had played a thousand variations on it in previous matches and practice; Dylan’s lyrical outpourings drew on his immersion in folk songs, French poetry and American legends. The unconscious minds of great artists and sportsmen are like dense rainforests, which send up spores of inspiration.

The higher the stakes, the more overthinking is a problem. Ed Smith, a cricketer and author of “Luck”, uses the analogy of walking along a kerbstone: easy enough, but what if there was a hundred-foot drop to the street—every step would be a trial. In high-performance fields it’s the older and more successful performers who are most prone to choke, because expectation is piled upon them. An opera singer launching into an aria at La Scala cannot afford to think how her technique might be improved. When Federer plays a match point these days, he may feel as if he’s standing on the cliff edge of his reputation.

The article is here.

Sunday, March 25, 2018

Did Iraq Ever Become A Just War?

Matt Peterson
The Atlantic
Originally posted March 24, 2018

Here is an excerpt:

There’s a broader sense of moral confusion about the conduct of America’s wars. In Iraq, what started as a war of choice came to resemble much more a war of necessity. Can a war that started unjustly ever become righteous? Or does the stain permanently taint anything that comes after it?

The answers to these questions come from the school of philosophy called “just war” theory, which tries to explain whether and when war is permissible, and under what circumstances. It offers two big ways to think about the justice of war. One is whether it’s appropriate to go to war in the first place. Take North Korea, for example. Is there a cause worth killing thousands—millions—of North and South Korean civilians over? Invoking “national security” isn’t enough to make a war just. Kim Jong Un’s nuclear weapons pose an obvious threat to South Korea, Japan, and the United States. But that alone doesn’t make war an acceptable choice, given the lives at stake. The ethics of war require the public to assess how certain it is that innocents will be killed if the military doesn’t act (Will Kim really use his nukes offensively?), whether there’s any way to remove the threat without violence (Has diplomacy been exhausted?), and whether the scale of the deaths that would come from intervention is truly in line with the danger war is meant to avert (If the peninsula has to be burned down to be saved, is it really worth it?)—among other considerations.

The other questions to ask are about the nature of the combat. Are soldiers taking care to target only North Korea’s military? Once the decision has been made that Kim’s nuclear weapons pose an imminent threat, hypothetically, that still wouldn’t make it acceptable to firebomb Pyongyang to turn the population against him. Similarly, American forces could not, say, blow up a bus full of children just because one of Kim’s generals was trying to escape on it.

The article is here.

Deadly gene mutations removed from human embryos in landmark study

Ian Sample
The Guardian
Originally published August 2, 2017

Scientists have modified human embryos to remove genetic mutations that cause heart failure in otherwise healthy young people in a landmark demonstration of the controversial procedure.

It is the first time that human embryos have had their genomes edited outside China, where researchers have performed a handful of small studies to see whether the approach could prevent inherited diseases from being passed on from one generation to the next.

While none of the research so far has created babies from modified embryos, a move that would be illegal in many countries, the work represents a milestone in scientists’ efforts to master the technique and brings the prospect of human clinical trials one step closer.

The work focused on an inherited form of heart disease, but scientists believe the same approach could work for other conditions caused by single gene mutations, such as cystic fibrosis and certain kinds of breast cancer.

The article is here.

Saturday, March 24, 2018

Facebook employs psychologist whose firm sold data to Cambridge Analytica

Paul Lewis and Julia Carrie Wong
The Guardian
Originally published March 18, 2018

Here are two excerpts:

The co-director of a company that harvested data from tens of millions of Facebook users before selling it to the controversial data analytics firm Cambridge Analytica is currently working for the tech giant as an in-house psychologist.

Joseph Chancellor was one of two founding directors of Global Science Research (GSR), the company that harvested Facebook data using a personality app under the guise of academic research and later shared the data with Cambridge Analytica.

He was hired to work at Facebook as a quantitative social psychologist around November 2015, roughly two months after leaving GSR, which had by then acquired data on millions of Facebook users.

Chancellor is still working as a researcher at Facebook’s Menlo Park headquarters in California, where psychologists frequently conduct research and experiments using the company’s vast trove of data on more than 2 billion users.

(cut)

In the months that followed the creation of GSR, the company worked in collaboration with Cambridge Analytica to pay hundreds of thousands of users to take the test as part of an agreement in which they agreed for their data to be collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions strong.

That data was sold to Cambridge Analytica as part of a commercial agreement.

Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app and barred it from being sold on or used for advertising.

The information is here.

Breakthrough as scientists grow sheep embryos containing human cells

Nicola Davis
The Guardian
Originally published February 17, 2018

Growing human organs inside other animals has taken another step away from science-fiction, with researchers announcing they have grown sheep embryos containing human cells.

Scientists say growing human organs inside animals could not only increase supply, but also offer the possibility of genetically tailoring the organs to be compatible with the immune system of the patient receiving them, by using the patient’s own cells in the procedure, removing the possibility of rejection.

According to NHS Blood and Transplant, almost 460 people died in 2016 waiting for organs, while those who do receive transplants sometimes see organs rejected.

“Even today the best matched organs, except if they come from identical twins, don’t last very long because with time the immune system continuously is attacking them,” said Dr Pablo Ross from the University of California, Davis, who is part of the team working towards growing human organs in other species.

Ross added that if it does become possible to grow human organs inside other species, it might be that organ transplants become a possibility beyond critical conditions.

The information is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Facebook Woes: Data Breach, Securities Fraud, or Something Else?

Matt Levine
Bloomberg.com
Originally posted March 21, 2018

Here is an excerpt:

But the result is always "securities fraud," whatever the nature of the underlying input. An undisclosed data breach is securities fraud, but an undisclosed sexual-harassment problem or chicken-mispricing conspiracy will get you to the same place. There is an important practical benefit to a legal regime that works like this: It makes it easy to punish bad behavior, at least by public companies, because every sort of bad behavior is also securities fraud. You don't have to prove that the underlying chicken-mispricing conspiracy was illegal, or that the data breach was due to bad security procedures. All you have to prove is that it happened, and it wasn't disclosed, and the stock went down when it was. The evaluation of the badness is in a sense outsourced to the market: We know that the behavior was illegal, not because there was a clear law against it, but because the stock went down. Securities law is an all-purpose tool for punishing corporate badness, a one-size-fits-all approach that makes all badness commensurable using the metric of stock price. It has a certain efficiency.

On the other hand it sometimes makes me a little uneasy that so much of our law ends up working this way. "In a world of dysfunctional government and pervasive financial capitalism," I once wrote, "more and more of our politics is contested in the form of securities regulation." And: "Our government's duty to its citizens is mediated by their ownership of our public companies." When you punish bad stuff because it is bad for shareholders, you are making a certain judgment about what sort of stuff is bad and who is entitled to be protected from it.

Anyway Facebook Inc. wants to make it very clear that it did not suffer a data breach. When a researcher got data about millions of Facebook users without those users' explicit permission, and when the researcher turned that data over to Cambridge Analytica for political targeting in violation of Facebook's terms, none of that was a data breach. Facebook wasn't hacked. What happened was somewhere between a contractual violation and ... you know ... just how Facebook works? There is some splitting of hairs over this, and you can understand why -- consider that SEC guidance about when companies have to disclose data breaches -- but in another sense it just doesn't matter. You don't need to know whether the thing was a "data breach" to know how bad it was. You can just look at the stock price. The stock went down...

The article is here.

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators proceeded to label this tendency as a concerning and alarming display of overtrust of robots—an overtrust that applied even to robots that showed indications of not being trustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.

Have our tribes become more important than our country?

Jonathan Rauch
The Washington Post
Originally published February 16, 2018

Here is an excerpt:

Moreover, tribalism is a dynamic force, not a static one. It exacerbates itself by making every group feel endangered by the others, inducing all to circle their wagons still more tightly. “Today, no group in America feels comfortably dominant,” Chua writes. “The Left believes that right-wing tribalism — bigotry, racism — is tearing the country apart. The Right believes that left-wing tribalism — identity politics, political correctness — is tearing the country apart. They are both right.” I wish I could disagree.

Remedies? Chua sees hopeful signs. Psychological research shows that tribalism can be countered and overcome by teamwork: by projects that join individuals in a common task on an equal footing. One such task, it turns out, can be to reduce tribalism. In other words, with conscious effort, humans can break the tribal spiral, and many are trying. “You’d never know it from cable news or social media,” Chua writes, “but all over the country there are signs of people trying to cross divides and break out of their political tribes.”

She lists examples, and I can add my own. My involvement with the Better Angels project, a grass-roots depolarization movement that is gaining traction in communities across the country, has convinced me that millions of Americans are hungry for conciliation and willing to work for it. Last summer, at a Better Angels workshop in Virginia, I watched as eight Trump supporters and eight Hillary Clinton supporters participated in a day of structured interactions.

The article is here.