Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, January 31, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

Caroline Lester
The New Yorker
Originally posted January 24, 2019

Here is an excerpt:

The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The info is here.

HHS issues voluntary guidelines amid rise of cyberattacks

Samantha Liss
Originally published January 2, 2019

Dive Brief:

  • To combat security threats in the health sector, HHS issued a voluminous report that details ways small, local clinics and large hospital systems alike can reduce their cybersecurity risks. The guidelines are voluntary, so providers will not be required to adopt the practices identified in the report. 
  • The four-volume report is the culmination of work by a task force, convened in May 2017, that worked to identify the five most common threats in the industry and 10 ways to prepare against those threats.  
  • The five most common threats are email phishing attacks, ransomware attacks, loss or theft of equipment or data, accidental or intentional data loss by an insider and attacks against connected medical devices.

Wednesday, January 30, 2019

Experts Reveal Their Tech Ethics Wishes For The New Year

Jessica Baron
Originally published December 30, 2018

Here is an excerpt:

"Face recognition technology is the technology to keep our eyes on in 2019.

The debates surrounding it have expressed our worst fears about surveillance and injustice and the tightly coupled links between corporate and state power. They’ve also triggered a battle among big tech companies, including Amazon, Microsoft, and Google, over how to define the parameters of corporate social responsibility at a time when external calls for greater accountability from civil rights groups, privacy activists, and scholars, as well as internal demands for greater moral leadership from employees and shareholders, are expressing concern that face surveillance could erode the basic fabric of democracy.

With aggressive competition fueling the global artificial intelligence race, it remains to be seen which values will guide innovation."

The info is here.

Trump Has Officially Made ‘Conservative Ethics’ an Oxymoron

Jonathan Chait
New York Magazine

The conservative intelligentsia initially greeted the rise of Donald Trump with revulsion. After some of them peeled off, a minority remained within the party tent on the grounds that they could support Trump’s policy goals without endorsing his grotesque character. Mitt Romney’s op-ed attacking Trump’s lack of virtue, however, has put this question squarely on the table. And the conservative response seems clear: Republicans will not abide attacks on Trump’s character, either.

A couple of recent columns nakedly illustrate the moral depravity into which conservatives have descended. It would be easy to mock some blow-dried Fox News bobblehead, but I’m going to focus on two samples from a pair of the more esteemed intellectuals the conservative movement has produced. The first is a column by Roger Kimball, and the second by Henry Olsen.

Kimball is an esteemed, long-standing conservative critic, who writes for a wide array of literary, scholarly, and pseudo-scholarly journals, and is frequently photographed in a bow tie. Like many conservative intellectuals, Kimball once devoted himself to the evils of moral relativism. “What a relativist really believes (or believes he believes) is that 1) there is no such thing as value and 2) there is no such thing as truth,” he wrote in one such essay, in 2009. Kimball explained that by attacking fixed truths, relativism allows the strongman to impose his own values. “Relativism and tyranny, far from being in opposition, are in fact regular collaborators,” he wrote. And also: “Relativism, which begins with a beckoning promise of liberation from ‘oppressive’ moral constraints, so often ends in the embrace of immoral constraints that are politically obnoxious.”

The info is here.

Tuesday, January 29, 2019

Must Bill Barr Abide Ethics Advice on Recusal? A Debate

Barbara McQuade and Chuck Rosenberg 
Originally posted January 22, 2019

Here is an excerpt:

But we respectfully disagree on an important point that surfaced during Attorney General-nominee Bill Barr’s confirmation hearing before the Senate Judiciary Committee on January 15 and 16: whether, if confirmed, he should agree to abide ethics advice from Justice Department officials, before he receives that advice, regarding whether to recuse himself from supervision of the Mueller investigation.

Barr previously criticized the Mueller probe, including in an unsolicited legal memo he circulated to the Justice Department and President Trump’s legal team in the spring of 2018, and he commented favorably on the merits of investigating Hillary Clinton for what seems to us to be a bogus accusation. During his hearing, Barr was asked whether he would seek ethics advice regarding recusal. He said he would. When asked whether he would follow that advice, he said that as “head of the agency,” he would make the decision as to his own recusal. He would not follow that ethics advice, he said, if he “disagreed” with it. Is that appropriate? McQuade says no; Rosenberg says yes.

The Justice Department has a strict set of rules and norms that govern recusals. In some cases—for instance, where a prosecutor has a political, financial, or familial interest in a matter—a recusal is mandatory. Other situations can give rise to an appearance of a conflict – a set of conditions that call into question a prosecutor’s impartiality. In those cases, a prosecutor might be advised to recuse, but it is not mandatory. We both believe it is crucial that the work of the Justice Department be impartial and that it appear to be impartial. Thus, we believe that these recusal rules should be scrupulously followed. So far, so good.

The blog post debate is here.

Even arbitrary norms influence moral decision-making

Campbell Pryor, Amy Perfors & Piers D. L. Howe
Nature Human Behaviour (2018)


It is well known that individuals tend to copy behaviours that are common among other people—a phenomenon known as the descriptive norm effect. This effect has been successfully used to encourage a range of real-world prosocial decisions, such as increasing organ donor registrations. However, it is still unclear why it occurs. Here, we show that people conform to social norms, even when they understand that the norms in question are arbitrary and do not reflect the actual preferences of other people. These results hold across multiple contexts and when controlling for confounds such as anchoring or mere-exposure effects. Moreover, we demonstrate that the degree to which participants conform to an arbitrary norm is determined by the degree to which they self-identify with the group that exhibits the norm. Two prominent explanations of norm adherence—the informational and social sanction accounts—cannot explain these results, suggesting that these theories need to be supplemented by an additional mechanism that takes into account self-identity.

The info is here.

Monday, January 28, 2019

Second woman carrying gene-edited baby, Chinese authorities confirm

Agence France-Presse
Originally posted January 21, 2019

A second woman became pregnant during the experiment to create the world’s first genetically edited babies, Chinese authorities have confirmed, as the researcher behind the claim faces a police investigation.

He Jiankui shocked the scientific community last year after announcing he had successfully altered the genes of twin girls born in November to prevent them contracting HIV.

He had told a human genome forum in Hong Kong there had been “another potential pregnancy” involving a second couple.

A provincial government investigation has since confirmed the existence of the second mother and that the woman was still pregnant, the official Xinhua news agency reported.

The expectant mother and the twin girls from the first pregnancy will be put under medical observation, an investigator told Xinhua.

The info is here.

Artificial intelligence turns brain activity into speech

Kelly Servick
Originally published January 2, 2019

Here is an excerpt:

Finally, neurosurgeon Edward Chang and his team at the University of California, San Francisco, reconstructed entire sentences from brain activity captured from speech and motor areas while three epilepsy patients read aloud. In an online test, 166 people heard one of the sentences and had to select it from among 10 written choices. Some sentences were correctly identified more than 80% of the time. The researchers also pushed the model further: They used it to re-create sentences from data recorded while people silently mouthed words. That's an important result, Herff says—"one step closer to the speech prosthesis that we all have in mind."

However, "What we're really waiting for is how [these methods] are going to do when the patients can't speak," says Stephanie Riès, a neuroscientist at San Diego State University in California who studies language production. The brain signals when a person silently "speaks" or "hears" their voice in their head aren't identical to signals of speech or hearing. Without external sound to match to brain activity, it may be hard for a computer even to sort out where inner speech starts and ends.

Decoding imagined speech will require "a huge jump," says Gerwin Schalk, a neuroengineer at the National Center for Adaptive Neurotechnologies at the New York State Department of Health in Albany. "It's really unclear how to do that at all."

One approach, Herff says, might be to give feedback to the user of the brain-computer interface: If they can hear the computer's speech interpretation in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.

The info is here.

Sunday, January 27, 2019

Expectations Bias Moral Evaluations

Derek Powell and Zachary Horne
PsyArXiv Preprints
Originally created on December 23, 2018


People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.

The preprint is here.

Saturday, January 26, 2019

People use less information than they think to make up their minds

Nadav Klein and Ed O’Brien
PNAS December 26, 2018 115 (52) 13222-13227


A world where information is abundant promises unprecedented opportunities for information exchange. Seven studies suggest these opportunities work better in theory than in practice: People fail to anticipate how quickly minds change, believing that they and others will evaluate more evidence before making up their minds than they and others actually do. From evaluating peers, marriage prospects, and political candidates to evaluating novel foods, goods, and services, people consume far less information than expected before deeming things good or bad. Accordingly, people acquire and share too much information in impression-formation contexts: People overvalue long-term trials, overpay for decision aids, and overwork to impress others, neglecting the speed at which conclusions will form. In today’s information age, people may intuitively believe that exchanging ever-more information will foster better-informed opinions and perspectives—but much of this information may be lost on minds long made up.


People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today’s information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.

Friday, January 25, 2019

Decision-Making and Self-Governing Systems

Adina L. Roskies
Neuroethics, October 2018, Volume 11, Issue 3, pp 245–257


Neuroscience has illuminated the neural basis of decision-making, providing evidence that supports specific models of decision-processes. These models typically are quite mechanical, the realization of abstract mathematical “diffusion to bound” models. While effective decision-making seems to be essential for sophisticated behavior, central to an account of freedom, and a necessary characteristic of self-governing systems, it is not clear how the simple models neuroscience inspires can underlie the notion of self-governance. Drawing from both philosophy and neuroscience, I explore ways in which the proposed decision-making architectures can play a role in systems that can reasonably be thought of as “self-governing”.

Here is an excerpt:

The importance of prospection for self-governance cannot be underestimated. One example in which it promises to play an important role is in the exercise of and failures of self-control. Philosophers have long been puzzled by the apparent possibility of akrasia or weakness of will: choosing to act in ways that one judges not to be in one’s best interest. Weakness of will is thought to be an example of irrational choice. If one’s theory of choice is that one always decides to pursue the option that has the highest value, and that it is rational to choose what one most values, it is hard to explain irrational choices. Apparent cases of weakness of will would really be cases of mistaken valuation: overvaluing an option that is in fact not the most valuable option. And indeed, if one cannot rationally criticize the strength of desires (see Hume’s famous observation that “it is not against reason that I should prefer the destruction of half the world to the pricking of my little finger”), we cannot explain irrational choice.

The article is here.

Study Links Drug Maker Gifts for Doctors to More Overdose Deaths

Abby Goodnough
The New York Times
Originally posted January 18, 2019

A new study offers some of the strongest evidence yet of the connection between the marketing of opioids to doctors and the nation’s addiction epidemic.

It found that counties where opioid manufacturers offered a large number of gifts and payments to doctors had more overdose deaths involving the drugs than counties where direct-to-physician marketing was less aggressive.

The study, published Friday in JAMA Network Open, said the industry spent about $40 million promoting opioid medications to nearly 68,000 doctors from 2013 through 2015, including by paying for meals, trips and consulting fees. And it found that for every three additional payments that companies made to doctors per 100,000 people in a county, overdose deaths involving prescription opioids there a year later were 18 percent higher.

Even as the opioid epidemic was killing more and more Americans, such marketing practices remained widespread. From 2013 through 2015, roughly 1 in 12 doctors received opioid-related marketing, according to the study, including 1 in 5 family practice doctors.

The info is here.

Thursday, January 24, 2019

Facebook’s Suicide Algorithms are Invasive

Michael Spencer
Originally published January 6, 2019

Here is an excerpt:

Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk. Sadly, Facebook has a long history of conducting “experiments” on its users. It’s hard to own a stock that itself isn’t trustworthy either for democracy or our personal data.

Facebook acts a bit like a social surveillance program, where it passes the information (suicide score) along to law enforcement for wellness checks. That’s pretty much like state surveillance; what’s the difference?

Privacy experts say Facebook’s failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse. Facebook has a history with sharing our personal data with other technology companies. So we are being profiled in the most intimate ways by third parties we didn’t even know had our data.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence, but what is the real reason they make these constructs? It’s to monetize our data, it’s not to “help humanity” or connect the world.

The info is here.

What Could Be Wrong with a Little ‘Moral Clarity’?

Frank Guan
The New York Times Magazine
Originally posted January 2, 2019

If, in politics, words are weapons, they often prove themselves double-edged. So it was when, on the summer night that Alexandria Ocasio-Cortez learned that she had won a Democratic congressional primary over a 10-term incumbent, she provided a resonant quote to a TV reporter. “I think what we’ve seen is that working-class Americans want a clear champion,” she said, “and there is nothing radical about moral clarity in 2018.” Dozens of news videos and articles would cite those words as journalists worked to interpret what Ocasio-Cortez’s triumph, repeated in November’s general election, might represent for the American left and its newest star.

Until recently, “moral clarity” was more likely to signal combativeness toward the left, not from it: It served for decades as a badge of membership among conservative hawks and cultural crusaders. But in the Trump era, militant certainty takes precedence across the political spectrum. On the left, “moral clarity” can mean taking an unyielding stand against economic inequality or social injustice, climate change or gun violence. Closer to the center, it can take on a sonorous, transpartisan tone, as when Senator Robert Menendez, a Democrat, and former Speaker Paul Ryan, a Republican, each called for “moral clarity” in the White House reaction to the murder of the journalist Jamal Khashoggi. And it can fly beyond politics altogether, as when the surgeon and author Atul Gawande writes that better health care “does not take genius. It takes diligence. It takes moral clarity.” We hear about moral clarity any time there is impatience with equivocation, delay, conciliation and confusion — whenever people long for rapid action based on truths they hold to be self-evident.

The info is here.

Wednesday, January 23, 2019

New tech doorbells can record video, and that's an ethics problem

Molly Wood
Originally posted January 17, 2019

Here is an excerpt:

Ring is pretty clear in its terms and conditions that people are allowing Ring employees to access videos, not live streams, but cached videos. And that's in order to train that artificial intelligence to be better at recognizing neighbors, because they're trying to roll out a feature where they use facial recognition to match with the people that are considered safe. So if I have the Ring cameras, I can say, "All these are safe people. Here's pictures of my kids, my neighbors. If it's not one of these people, consider them unsafe." So that's a new technology. They need to be able to train their algorithms to recognize who's a person, what's a car, what's a cat. Some subset of the videos that are being uploaded just for typical usage is then being shared with their research team in Ukraine.

The info is here.

What if consciousness is just a product of our non-conscious brain?

Peter Halligan and David A Oakley
The Conversation
Originally published December 20, 2018

Here is an excerpt:

The non-conscious nature of being

Previously, we argued that while undeniably real, the “experience of consciousness” or subjective awareness is precisely that – awareness. No more, no less. We proposed that while consciousness is created by brain systems, it has no causal relationship with or control over mental processes. The fact that personal awareness accompanies the contents of the personal narrative is causally compelling. But it is not necessarily relevant to understanding and explaining the psychological processes underpinning them.

This quote from George Miller – one of the founders of cognitive psychology – helps explain this idea. When one recalls something from memory, “consciousness gives no clue as to where the answer comes from; the processes that produce it are unconscious. It is the result of thinking, not the process of thinking, that appears spontaneously in consciousness”.

Taking this further, we propose that subjective awareness – the intimate signature experience of what it is like to be conscious – is itself a product of non-conscious processing. This observation was well captured by pioneering social psychologist Daniel Wegner when he wrote that “unconscious mechanisms create both conscious thought about action and the action, and also produce the sense of will we experience by perceiving the thought as the cause of the action”.

The info is here.

Tuesday, January 22, 2019

Proceedings Start Against ‘Sokal Squared’ Hoax Professor

Katherine Mangan
The Chronicle of Higher Education
Originally posted January 7, 2019

Here is an excerpt:

The Oregon university’s institutional review board concluded that Boghossian’s participation in the elaborate hoax had violated Portland State’s ethical guidelines, according to documents Boghossian posted online. The university is considering a further charge that he had falsified data, the documents indicate.

Last month Portland State’s vice president for research and graduate studies, Mark R. McLellan, ordered Boghossian to undergo training on human-subjects research as a condition for getting further studies approved. In addition, McLellan said he had referred the matter to the president and provost because Boghossian’s behavior "raises ethical issues of concern."

Boghossian and his supporters have gone on the offensive with an online press kit that links to emails from Portland State administrators. It also includes a video filmed by a documentary filmmaker that shows Boghossian reading an email that asks him to appear before the institutional review board in October. In the video, Boghossian discusses the implications of potentially being found responsible for professional misconduct. He’s speaking with his co-authors, Helen Pluckrose, a self-described "exile from the humanities" who studies medieval religious writings about women, and James A. Lindsay, an author and mathematician.

The info is here.

Kaiser settled 2014 patient-dumping class-action suit earlier this year

Michael McGough
The Sacramento Bee
Originally posted December 20, 2018

Kaiser Foundation Health Plan recently settled a 2014 class-action lawsuit stemming from two allegations that it dumped patients with severe mental illness.

Plaintiffs Douglas Kerr and Barbara Knighton alleged that in separate incidents, Kaiser psychiatrists told them their sons needed to be transferred to locked residential facilities called IMDs (institutions for mental disease) for treatment, according to court documents. Knighton and Kerr claimed they were both told they should remove their children from their Kaiser health plans in 2014 to be transferred to these county-run institutions — a change that shifted the costs of treatment from Kaiser to government-funded programs such as Medi-Cal.

Despite the settlement, Kaiser said in a statement it continues to dispute some of the claims included in the lawsuit.

“In certain relatively rare cases, Kaiser Permanente members entered a specialized type of locked mental health facility that often preferred Medi-Cal coverage to private insurance,” Kaiser Vice President of Communications John Nelson said in an emailed statement. “In some of these cases, cancellation of Kaiser Permanente coverage was required to enter the facility. However, this was not Kaiser Permanente’s requirement, and we cover many members’ care at such facilities. Any decision to cancel coverage was made by a court-appointed conservator.”

The info is here.

Monday, January 21, 2019

Do Recruiters Need a Code of Ethics?

Steve Bates
Society for Human Resource Management
Originally posted January 9, 2019

Here is an excerpt:

Most recruiters behave ethically, knowing that their reputation and their company's brand are on the line, said Joe Shaker Jr., president of Oak Park, Ill.-based Shaker Recruitment Marketing. "They're selling the organization."

But for some external recruiters attempting to beat their competitors, "there's a tremendous temptation to be unethical," said Kevin Wheeler, founder and president of the Future of Talent Institute, a think tank in Fremont, Calif.

"You'll hear about the good, the bad and the ugly," said Wanda Parker, president of The HealthField Alliance, a physician recruiting and consulting firm in Danbury, Conn. She is also president of the National Association of Physician Recruiters (NAPR), which is based in Altamonte Springs, Fla. "There are some recruiters who cut all kinds of corners and will do whatever they can to make a buck."

"It's very much like the Wild West," said Fred Coon, founder, chairman and CEO of Stewart, Cooper & Coon, a human capital strategies firm based in Phoenix. "It's a free-for-all."

The info is here.

The fallacy of obviousness

Teppo Felin
Originally posted July 5, 2018

Here is an excerpt:

The alternative interpretation says that what people are looking for – rather than what people are merely looking at – determines what is obvious. Obviousness is not self-evident. Or as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’ This isn’t an argument against facts or for ‘alternative facts’, or anything of the sort. It’s an argument about what qualifies as obvious, why and how. See, obviousness depends on what is deemed to be relevant for a particular question or task at hand. Rather than passively accounting for or recording everything directly in front of us, humans – and other organisms for that matter – instead actively look for things. The implication (contrary to psychophysics) is that mind-to-world processes drive perception rather than world-to-mind processes. The gorilla experiment itself can be reinterpreted to support this view of perception, showing that what we see depends on our expectations and questions – what we are looking for, what question we are trying to answer.

At first glance that might seem like a rather mundane interpretation, particularly when compared with the startling claim that humans are ‘blind to the obvious’. But it’s more radical than it might seem. This interpretation of the gorilla experiment puts humans centre-stage in perception, rather than relegating them to passively recording their surroundings and environments. It says that what we see is not so much a function of what is directly in front of us (Kahneman’s natural assessments), or what one is in camera-like fashion recording or passively looking at, but rather determined by what we have in our minds, for example, by the questions we have in mind. People miss the gorilla not because they are blind, but because they were prompted – in this case, by the scientists themselves – to pay attention to something else. The question – ‘How many basketball passes’ (just like any question: ‘Where are my keys?’) – primes us to see certain aspects of a visual scene, at the expense of any number of other things.

The info is here.

Sunday, January 20, 2019

The Ethics of Paternalism

Ingrid M. Paulin, Jenna Clark, & Julie O'Brien
Scientific American
Originally published on December 21, 2018

Here is an excerpt:

Choosing what to do and which approach to take requires making a decision about paternalism, or influencing someone’s behavior for their own good. Every time someone designs policies, products or services, they make a decision about paternalism, whether they are aware of it or not. They will inevitably influence how people behave; there's no such thing as a neutral choice.

Arguments about paternalism have traditionally focused on the extreme ends of the spectrum; you either let people have complete autonomy, or you completely restrict undesirable behaviors. In reality, however, there are many options in between, and there are few guidelines about how one should navigate the complex moral landscape of influence to decide which approach is justified in a given situation.

Traditional economists may argue for more autonomy on the grounds that people will always behave in line with their own best interest. In their view, people have stable preferences and are always weighing the costs and benefits of every option before making decisions. Because they know their preferences better than do others, they should be able to act autonomously to maximize their own positive outcomes.

The info is here.

Saturday, January 19, 2019

There Is No Such Thing as Conscious Thought

Steve Ayan
Scientific American
Originally posted December 20, 2018

Here is an excerpt:

What makes you think conscious thought is an illusion?

I believe that the whole idea of conscious thought is an error. I came to this conclusion by following out the implications of two of the main theories of consciousness. The first is what is called the Global Workspace Theory, which is associated with neuroscientists Stanislas Dehaene and Bernard Baars. Their theory states that to be considered conscious a mental state must be among the contents of working memory (the “user interface” of our minds) and thereby be available to other mental functions, such as decision-making and verbalization. Accordingly, conscious states are those that are “globally broadcast,” so to speak. The alternative view, proposed by Michael Graziano, David Rosenthal and others, holds that conscious mental states are simply those that you know of, that you are directly aware of in a way that doesn’t require you to interpret yourself. You do not have to read your own mind to know of them. Now, whichever view you adopt, it turns out that thoughts such as decisions and judgments should not be considered to be conscious. They are not accessible in working memory, nor are we directly aware of them. We merely have what I call “the illusion of immediacy”—the false impression that we know our thoughts directly.

The info is here.

Here is a link to Keith Frankish's chapter on the Illusion of Consciousness.

Friday, January 18, 2019

House Democrats Look to Crack Down on Feds With Conflicts of Interest, Ethics Violations

Eric Katz
Government Executive
Originally posted January 3, 2019

Federal employees who pass through the revolving door with the private sector and engage in other actions that could present conflicts of interest would come under intensified scrutiny in a slew of reforms House Democrats introduced on Friday aimed at boosting ethics oversight in government.

The new House majority put forward the For the People Act (H.R. 1) as its first legislative priority, after the more immediate concern of reopening the full government. The package involves an array of issues House Speaker Nancy Pelosi, D-Calif., said were critical to “restoring integrity in government,” such as voting rights access and campaign finance changes. It would also place new restrictions on federal workers before, during and after their government service, with special obligations for senior officials and the president.

“Over the last two years President Trump set the tone from the top of his administration that behaving ethically and complying with the law is optional,” said newly minted House Oversight and Reform Committee Chairman Rep. Elijah Cummings, D-Md. “That is why we are introducing the For the People Act. This bill contains a number of reforms that will strengthen our accountability for the executive branch officials, including the president.”

All federal employees would face a ban on using their official positions to participate in matters related to their former employers. Violators would face fines and one-to-five years in prison. Agency heads, in consultation with the director of the Office of Government Ethics, could issue waivers if it were deemed in the public interest.

The info is here.

CRISPR in China: Why Did the Parents Give Consent?

Dena Davis
The Hastings Center
Originally posted December 7, 2018

The global scientific community has been unanimous in condemning Chinese scientist He Jiankui, who announced last week that he used the gene-editing technology called CRISPR to make permanent, heritable changes to the genes of two baby girls who were born this month in China. Criticism has focused on Dr. He’s violation of worldwide acknowledgement that CRISPR has not been proven to be safe and ready to use in humans. Because CRISPR edits the actual germline, there are safety implications not only for these two girls, but for their progeny. There is also fear, expressed by the American Society for Reproductive Medicine, that this one renegade scientist could spark a backlash that would result in overly restrictive regulation.

Largely missing from the discussion is whether the twins’ parents understood what was happening and the unproven nature of the technology.  Was the informed consent process adequate, and if so, why on earth would they have given their consent?

The info is here.

Thursday, January 17, 2019

Americans' trust in honesty, ethics of clergy hits all-time low in Gallup ranking of professions

Stoyan Zaimov
Originally posted December 25, 2018

Americans' view of the honesty and ethics of clergy has fallen to an all-time low in a ranking of different professions released by Gallup.

The Gallup poll, conducted Dec. 3-12 among 1,025 U.S. adults, found that only 37 percent of respondents had a "very high" or "high" opinion of the honesty and ethical standards of clergy. Forty-three percent of people gave them an average rating, while 15 percent said they had a "low" or "very low" opinion, according to the poll that was released on Dec. 21.

The margin of sampling error for the survey was identified as plus or minus 4 percentage points at the 95 percent confidence level.
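As a rough sanity check on that figure: for a simple random sample of 1,025, the conservative (p = 0.5) margin of error at 95 percent confidence works out to about ±3.1 points. Gallup's published ±4 points is larger, presumably because it also accounts for design effects from weighting. A minimal sketch of the textbook calculation:

```python
import math

# Simple-random-sample margin of error at 95% confidence,
# using the conservative p = 0.5 that maximizes the variance term.
n = 1025   # sample size reported in the poll
z = 1.96   # z-score for a 95% confidence level

moe = z * math.sqrt(0.5 * 0.5 / n)
print(f"{moe * 100:.1f} percentage points")  # 3.1 percentage points
```

The gap between this naive ±3.1 and the reported ±4 is a useful reminder that published polling error margins typically reflect more than pure sampling variance.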

Gallup noted that the 37 percent "very high" or "high" score for clergy is the lowest since it began asking the question in 1977. The historical high of 67 percent occurred back in 1985, and the score has been dropping below the overall average positive rating of 54 percent since 2009.

"The public's views of the honesty and ethics of the clergy continue to decline after the Catholic Church was rocked again this year by more abuse scandals,” Gallup noted in its observations.

The info is here.

Neuroethics Guiding Principles for the NIH BRAIN Initiative

Henry T. Greely, Christine Grady, Khara M. Ramos, Winston Chiong and others
Journal of Neuroscience 12 December 2018, 38 (50) 10586-10588
DOI: https://doi.org/10.1523/JNEUROSCI.2077-18.2018


Neuroscience presents important neuroethical considerations. Human neuroscience demands focused application of the core research ethics guidelines set out in documents such as the Belmont Report. Various mechanisms, including institutional review boards (IRBs), privacy rules, and the Food and Drug Administration, regulate many aspects of neuroscience research, and many articles, books, workshops, and conferences address neuroethics (Farah, 2010). However, responsible neuroscience research requires continual dialogue among neuroscience researchers, ethicists, philosophers, lawyers, and other stakeholders to help assess its ethical, legal, and societal implications. The Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a group of experts providing neuroethics input to the NIH BRAIN Initiative Multi-Council Working Group, seeks to promote this dialogue by proposing the following Neuroethics Guiding Principles (Table 1).

Wednesday, January 16, 2019

What Is the Right to Privacy?

Andrei Marmor
(2015) Philosophy & Public Affairs, 43, 1, pp 3-26

The right to privacy is a curious kind of right. Most people think that we have a general right to privacy. But when you look at the kind of issues that lawyers and philosophers label as concerns about privacy, you see widely differing views about the scope of the right and the kind of cases that fall under its purview. Consequently, it has become difficult to articulate the underlying interest that the right to privacy is there to protect—so much so that some philosophers have come to doubt that there is any underlying interest protected by it. According to Judith Thomson, for example, privacy is a cluster of derivative rights, some of them derived from rights to own or use your property, others from the right to your person or your right to decide what to do with your body, and so on. Thomson’s position starts from a sound observation, and I will begin by explaining why. The conclusion I will reach, however, is very different. I will argue that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. I will strive to show that this underlying interest justifies the right to privacy and explains its proper scope, though the scope of the right might be narrower, and fuzzier in its boundaries, than is commonly understood.

The info is here.

Debate ethics of embryo models from stem cells

Nicolas Rivron, Martin Pera, Janet Rossant, Alfonso Martinez Arias, and others
Originally posted December 12, 2018

Here are some excerpts:

Four questions

Future progress depends on addressing now the ethical and policy issues that could arise.

Ultimately, individual jurisdictions will need to formulate their own policies and regulations, reflecting their values and priorities. However, we urge funding bodies, along with scientific and medical societies, to start an international discussion as a first step. Bioethicists, scientists, clinicians, legal and regulatory specialists, patient advocates and other citizens could offer at least some consensus on an appropriate trajectory for the field.

Two outputs are needed. First, guidelines for researchers; second, a reliable source of information about the current state of the research, its possible trajectory, its potential medical benefits and the key ethical and policy issues it raises. Both guidelines and information should be disseminated to journalists, ethics committees, regulatory bodies and policymakers.

Four questions in particular need attention.

Should embryo models be treated legally and ethically as human embryos, now or in the future?

Which research applications involving human embryo models are ethically acceptable?

How far should attempts to develop an intact human embryo in a dish be allowed to proceed?

Does a modelled part of a human embryo have an ethical and legal status similar to that of a complete embryo?

The info is here.

Tuesday, January 15, 2019

Cheyenne Psychologist And His Wife Sentenced To 37 Months In Prison For Health Care Fraud

Department of Justice
U.S. Attorney’s Office
District of Wyoming
Press Release of December 4, 2018

John Robert Sink, Jr., 68, and Diane Marie Sink, 63, of Cheyenne, Wyoming, were sentenced on December 3, 2018, to serve 37 months in prison for making false statements as part of a scheme to fraudulently bill Wyoming Medicaid for mental health services, which were never provided, announced United States Attorney Mark A. Klaassen. The Sinks, who are married, were also ordered to pay over $6.2 million in restitution to the Wyoming Department of Health and the United States Department of Health and Human Services, and to forfeit over $750,000 in assets traceable to the fraud, including cash, retirement accounts, vehicles, and a residence.

The Sinks were indicted in March 2018 by a federal grand jury for health care fraud, making false statements, and money laundering. At all times relevant to the indictment, John and Diane Sink operated a psychological practice in Cheyenne. John Sink, who was a licensed Ph.D. psychologist, directed mental health services. Diane Sink submitted bills to Wyoming Medicaid and managed the business and its employees. The Sinks provided services to developmentally disabled Medicaid beneficiaries and billed Medicaid for those services.

Between February 2012 and December 2016, the Sinks submitted bills to Wyoming Medicaid for $6.2 million in alleged group therapy. These bills were false and fraudulent because the services provided did not qualify as group therapy as defined by Wyoming Medicaid. The Sinks also falsely billed Medicaid for beneficiaries who were not participating in any activities, and therefore did not receive any of the claimed mental health services. When Wyoming Medicaid audited the Sinks in May 2016, the Sinks did not have necessary documentation to support their billing, so they ordered an employee to create backdated treatment plans. The Sinks then submitted these phony treatment plans to Wyoming Medicaid to justify the Sinks’ false group therapy bills, and to cover up their fraudulent billing scheme.

The press release is here.

The ends justify the meanness: An investigation of psychopathic traits and utilitarian moral endorsement

Justin Balash and Diana M. Falkenbach
Personality and Individual Differences
Volume 127, 1 June 2018, Pages 127-132


Although psychopathy has traditionally been synonymous with immorality, little research exists on the ethical reasoning of psychopathic individuals. Recent examination of psychopathy and utilitarianism suggests that psychopaths' moral decision-making differs from that of nonpsychopaths (Koenigs et al., 2012). The current study examined the relationship between psychopathic traits (PPI-R, Lilienfeld & Widows, 2005; TriPM, Patrick, 2010) and utilitarian endorsement (moral dilemmas, Greene et al., 2001) in a college sample (n = 316). The relationships between utilitarian decisions and triarchic dimensions were explored and empathy and aggression were examined as mediating factors. Hypotheses were partially supported, with Disinhibition and Meanness traits relating to personal utilitarian decisions; aggression partially mediated the relationship between psychopathic traits and utilitarian endorsements. Implications and future directions are further discussed.


• Authors examined the relationship between psychopathy and utilitarian decision-making.

• Empathy and aggression were explored as mediating factors.

• Disinhibition and Meanness were positively related to personal utilitarian decisions.

• Meanness, Coldheartedness, and PPI-R-II were associated with personal utilitarian decisions.

• Aggression partially mediated the relationship between psychopathy and utilitarian decisions.

The research can be found here.

Monday, January 14, 2019

Air Force Psychologist Found Guilty of Sexual Assault Under Guise of Exposure Therapy

Caitlin Foster
Business Insider
Originally published Dec. 10, 2018

A psychologist at Travis Air Force Base in California was found guilty on Friday of sexually assaulting military-officer patients who were seeking treatment for post-traumatic stress disorder, The Daily Republic reported.

Heath Sommer may face up to 11 years and eight months in prison after receiving a guilty verdict on six felony counts of sexual assault, according to the Republic.

Sommer used a treatment known as "exposure therapy" to lure his patients, who were military officers with previous sexual-assault experiences, into performing sexual activity, the Republic reported.

According to charges brought by Brian Roberts, the deputy district attorney who prosecuted the case, Sommer assaulted his patients through "fraudulent representation that the sexual penetration served a professional purpose when it served no professional purpose," the Republic reported.

The Amazing Ways Artificial Intelligence Is Transforming Genomics and Gene Editing

Bernard Marr
Originally posted November 16, 2018

Here is an excerpt:

Another thing experts are working to resolve in the process of gene editing is how to prevent off-target effects—when the tools mistakenly work on the wrong gene because it looks similar to the target gene.

Artificial intelligence and machine learning help make gene editing initiatives more accurate, cheaper and easier.

The future for AI and gene technology is expected to include pharmacogenomics, genetic screening tools for newborns, enhancements to agriculture and more. While we can't predict the future, one thing is for sure: AI and machine learning will accelerate our understanding of our own genetic makeup and those of other living organisms.

The info is here.

Sunday, January 13, 2019

The bad news on human nature, in 10 findings from psychology

Christian Jarrett
Originally published 

Here is an excerpt:

We are vain and overconfident. Our irrationality and dogmatism might not be so bad were they married to some humility and self-insight, but most of us walk about with inflated views of our abilities and qualities, such as our driving skills, intelligence and attractiveness – a phenomenon that’s been dubbed the Lake Wobegon Effect after the fictional town where ‘all the women are strong, all the men are good-looking, and all the children are above average’. Ironically, the least skilled among us are the most prone to overconfidence (the so-called Dunning-Kruger effect). This vain self-enhancement seems to be most extreme and irrational in the case of our morality, such as in how principled and fair we think we are. In fact, even jailed criminals think they are kinder, more trustworthy and honest than the average member of the public.

We are moral hypocrites. It pays to be wary of those who are the quickest and loudest in condemning the moral failings of others – the chances are that moral preachers are as guilty themselves, but take a far lighter view of their own transgressions. In one study, researchers found that people rated the exact same selfish behaviour (giving themselves the quicker and easier of two experimental tasks on offer) as being far less fair when perpetrated by others. Similarly, there is a long-studied phenomenon known as actor-observer asymmetry, which in part describes our tendency to attribute other people’s bad deeds, such as our partner’s infidelities, to their character, while attributing the same deeds performed by ourselves to the situation at hand. These self-serving double standards could even explain the common feeling that incivility is on the increase – recent research shows that we view the same acts of rudeness far more harshly when they are committed by strangers than by our friends or ourselves.

Saturday, January 12, 2019

Monitoring Moral Virtue: When the Moral Transgressions of In-Group Members Are Judged More Severely

Karim Bettache, Takeshi Hamamura, J.A. Idrissi, R.G.J. Amenyogbo, & C. Chiu
Journal of Cross-Cultural Psychology
First Published December 5, 2018


Literature indicates that people tend to judge the moral transgressions committed by out-group members more severely than those of in-group members. However, these transgressions often conflate a moral transgression with some form of intergroup harm. There is little research examining in-group versus out-group transgressions of harmless offenses, which violate moral standards that bind people together (binding foundations). As these moral standards center around group cohesiveness, a transgression committed by an in-group member may be judged more severely. The current research presented Dutch Muslims (Study 1), American Christians (Study 2), and Indian Hindus (Study 3) with a set of fictitious stories depicting harmless and harmful moral transgressions. Consistent with our expectations, participants who strongly identified with their religious community judged harmless moral offenses committed by in-group members, relative to out-group members, more severely. In contrast, this effect was absent when participants judged harmful moral transgressions. We discuss the implications of these results.

Friday, January 11, 2019

10 ways to detect health-care lies

Lawton R. Burns and Mark V. Pauly
Originally posted December 9, 2018

Here is an excerpt:

Why does this kind of behavior occur? While flat-out dishonesty for short-term financial gains is an obvious answer, a more common explanation is the need to say something positive when there is nothing positive to say.

This problem is acute in health care. Suppose you are faced with the assignment of solving the ageless dilemma of reducing costs while simultaneously raising quality of care. You could respond with a message of failure or a discussion of inevitable tradeoffs.

But you could also pick an idea with some internal plausibility and political appeal, fashion some careful but conditional language and announce the launch of your program. Of course, you will add that it will take a number of years before success appears, but you and your experts will argue for the idea in concept, with the details to be worked out later.

At minimum, unqualified acceptance of such proposed ideas, even (and especially) by apparently qualified people, will waste resources and will lead to enormous frustration for your audience of politicians and outraged critics of the current system. The incentives to generate falsehoods are not likely to diminish — if anything, rising spending and stagnant health outcomes strengthen them — so it is all the more important to have an accurate and fast way to detect and deter lies in health care.

The info is here.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence

Julia Powles
Originally posted December 7, 2018

Here is an excerpt:

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

The info is here.

Thursday, January 10, 2019

China Uses "Ethics" as Censorship

China sets up a video game ethics panel in its new approval process

Owen S. Good
Originally posted December 8, 2018

In China, it’s about ethics in video games.

The South China Morning Post reports that the nation now has an “Online Game Ethics Committee,” as a part of the government’s laborious process for game censorship approvals. China Central Television, the state’s broadcaster, said this ethics-in-games committee was formed to address national concerns over internet addiction, “unsuitable content” and childhood myopia (nearsightedness, apparently with video games as a cause?).

The state TV report said the committee has already looked at 20 games, rejecting nine and ruling that the other 11 have to change “certain content.” The titles of the games were not revealed.

The info is here.

Every Leader’s Guide to the Ethics of AI

Thomas H. Davenport and Vivek Katyal
MIT Sloan Management Review Blog
Originally published

Here is an excerpt:

Leaders should ask themselves whether the AI applications they use treat all groups equally. Unfortunately, some AI applications, including machine learning algorithms, put certain groups at a disadvantage. This issue, called algorithmic bias, has been identified in diverse contexts, including judicial sentencing, credit scoring, education curriculum design, and hiring decisions. Even when the creators of an algorithm have not intended any bias or discrimination, they and their companies have an obligation to try to identify and prevent such problems and to correct them upon discovery.

Ad targeting in digital marketing, for example, uses machine learning to make many rapid decisions about what ad is shown to which consumer. Most companies don’t even know how the algorithms work, and the cost of an inappropriately targeted ad is typically only a few cents. However, some algorithms have been found to target high-paying job ads more to men, and others target ads for bail bondsmen to people with names more commonly held by African Americans. The ethical and reputational costs of biased ad-targeting algorithms, in such cases, can potentially be very high.

Of course, bias isn’t a new problem. Companies using traditional decision-making processes have made these judgment errors, and algorithms created by humans are sometimes biased as well. But AI applications, which can create and apply models much faster than traditional analytics, are more likely to exacerbate the issue. The problem becomes even more complex when black box AI approaches make interpreting or explaining the model’s logic difficult or impossible. While full transparency of models can help, leaders who consider their algorithms a competitive asset will quite likely resist sharing them.

The info is here.

Wednesday, January 9, 2019

Why It’s Easier to Make Decisions for Someone Else

Evan Polman
Harvard Business Review
Originally posted November 13, 2018

Here is an excerpt:

What we found was two-fold: Not only did participants choose differently when it was for themselves rather than for someone else, but the way they chose was different. When choosing for themselves, participants focused more on a granular level, zeroing in on the minutiae, something we described in our research as a cautious mindset. Employing a cautious mindset when making a choice means being more reserved, deliberate, and risk averse. Rather than exploring and collecting a plethora of options, the cautious mindset prefers to consider a few at a time on a deeper level, examining a cross-section of the larger whole.

But when it came to deciding for others, study participants looked more at the array of options and focused on their overall impression. They were bolder, operating from what we called an adventurous mindset. An adventurous mindset prioritizes novelty over a deeper dive into what those options actually consist of; the availability of numerous choices is more appealing than their viability. Simply put, they preferred and examined more information before making a choice, and as my previous research has shown, they recommended their choice to others with more gusto.

These findings align with my earlier work with Kyle Emich of University of Delaware on how people are more creative on behalf of others. When we are brainstorming ideas to other people’s problems, we’re inspired; we have a free flow of ideas to spread out on the table without judgment, second-guessing, or overthinking.

The info is here.

'Should we even consider this?' WHO starts work on gene editing ethics

Agence France-Presse
Originally published 3 Dec 2018

The World Health Organization is creating a panel to study the implications of gene editing after a Chinese scientist controversially claimed to have created the world’s first genetically edited babies.

“It cannot just be done without clear guidelines,” Tedros Adhanom Ghebreyesus, the head of the UN health agency, said in Geneva.

The organisation was gathering experts to discuss rules and guidelines on “ethical and social safety issues”, added Tedros, a former Ethiopian health minister.

Tedros made the comments after a medical trial led by Chinese scientist He Jiankui claimed to have successfully altered the DNA of twin girls, whose father is HIV-positive, to prevent them from contracting the virus.

His experiment has prompted widespread condemnation from the scientific community in China and abroad, as well as a backlash from the Chinese government.

The info is here.

Tuesday, January 8, 2019

The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies.

Sandra Monteiro, Geoff Norman, & Jonathan Sherbino
J Eval Clin Pract. 2018 Jun;24(3):666-673.


There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack of, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine in every day clinical reasoning. The second builds on the heuristic and bias research programme, appealing to a special class of dual process reasoning models that theorizes a rapid error prone cognitive process for problem solving with a slower more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy.

A pdf can be downloaded here.

Algorithmic governance: Developing a research agenda through the power of collective intelligence

John Danaher, Michael J. Hogan, Chris Noone, Ronan Kennedy, et al.
Big Data & Society
July–December 2017: 1–21


We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal that are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

The paper is here.

Monday, January 7, 2019

Ethics of missionary work called into question after death of American missionary John Allen Chau

Holly Meyer
Nashville Tennessean
Originally published December 2, 2018

Christians are facing scrutiny for evangelizing in remote parts of the world after members of an isolated tribe in the Bay of Bengal killed a U.S. missionary who was trying to tell them about Jesus.

The death of John Allen Chau raises questions about the ethics of missionary work and whether he acted appropriately by contacting the Sentinelese, a self-sequestered Indian tribe that has resisted outside contact for thousands of years.

It is tragic, but figuring out what can be learned from Chau's death honors his memory and passion, said Scott Harris, the missions minister at Brentwood Baptist Church and a former trustee chairman of the Southern Baptist Convention's International Mission Board.

"In general, evaluation and accountability is so needed," Harris said. "Maturing fieldworkers that have a heart for the cultures of the world will welcome honest, hard questions." 

The info is here.

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.

The info is here.

Sunday, January 6, 2019

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philosophy and Technology, 1–25 (forthcoming)


Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The paper is here.

Saturday, January 5, 2019

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS, 114(28), 7313–7318 (July 11, 2017); published ahead of print June 26, 2017. https://doi.org/10.1073/pnas.1618923114


Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.


Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.
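The abstract's headline figure can be made concrete. If each additional moral-emotional word multiplies a message's expected diffusion by roughly 1.20, the effect compounds per word. The sketch below is an illustration of that compounding only, not the authors' statistical model:

```python
# Illustrative only: the abstract reports diffusion increasing "by a factor
# of 20% for each additional word," i.e. a per-word multiplier of ~1.20.

def diffusion_multiplier(n_moral_emotional_words, per_word_factor=1.20):
    """Expected diffusion relative to a message with zero such words."""
    return per_word_factor ** n_moral_emotional_words

for k in range(4):
    print(k, round(diffusion_multiplier(k), 2))  # 0 1.0 / 1 1.2 / 2 1.44 / 3 1.73
```

So a tweet with three moral-emotional words would be expected to diffuse about 1.7 times as widely as an otherwise similar tweet with none, under this reading of the reported effect size.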

Friday, January 4, 2019

The Objectivity Illusion in Medical Practice

Donald Redelmeier & Lee Ross
The Association for Psychological Science
Published November 2018

Insights into pitfalls in judgment and decision-making are essential for the practice of medicine. However, only the most exceptional physicians recognize their own personal biases and blind spots. More typically, they are like most humans in believing that they see objects, events, or issues “as they really are” and, accordingly, that others who see things differently are mistaken. This illusion of personal objectivity reflects the implicit conviction of a one-to-one correspondence between the perceived properties and the real nature of an object or event. For patients, such naïve realism means a world of red apples, loud sounds, and solid chairs. For practitioners, it means a world of red rashes, loud murmurs, and solid lymph nodes. However, a lymph node that feels normal to one physician may seem suspiciously enlarged and hard to another physician, with a resulting disagreement about the indications for a lymph node biopsy. A research study supporting a new drug or procedure may seem similarly convincing to one physician but flawed to another.

Convictions about whose perceptions are more closely attuned to reality can be a source of endless interpersonal friction. Spouses, for example, may disagree about appropriate thermostat settings, with one perceiving the room as too cold while the other finds the temperature just right. Moreover, each attributes the other’s perceptions to some pathology or idiosyncrasy.

The info is here.

Beyond safety questions, gene editing will force us to deal with a moral quandary

Josephine Johnston
Originally published November 29, 2018

Here is an excerpt:

The majority of this criticism is motivated by major concerns about safety — we simply do not yet know enough about the impact of CRISPR-Cas9, the powerful new gene-editing tool, to use it to create children. But there’s a second, equally pressing concern mixed into many of these condemnations: that gene-editing human eggs, sperm, or embryos is morally wrong.

That moral claim may prove more difficult to resolve than the safety questions, because altering the genomes of future persons — especially in ways that can be passed on generation after generation — goes against international declarations and conventions, national laws, and the ethics codes of many scientific organizations. It also just feels wrong to many people, akin to playing God.

As a bioethicist and a lawyer, I am in no position to say whether CRISPR will at some point prove safe and effective enough to justify its use in human reproductive cells or embryos. But I am willing to predict that blanket prohibitions on permanent changes to the human genome will not stand. When those prohibitions fall — as today’s announcement from the Second International Summit on Human Genome Editing suggests they will — what ethical guideposts or moral norms should replace them?

The info is here.

Thursday, January 3, 2019

As China Seeks Scientific Greatness, Some Say Ethics Are an Afterthought

Sui-Lee Wee and Elsie Chen
The New York Times
Originally published November 30, 2018

First it was a proposal to transplant a head to a new body. Then it was the world’s first cloned primates. Now it is genetically edited babies.

Those recent scientific announcements, generating reactions that went from unease to shock, had one thing in common: All involved scientists from China.

China has set its sights on becoming a leader in science, pouring millions of dollars into research projects and luring back top Western-educated Chinese talent. The country’s scientists are accustomed to attention-grabbing headlines by their colleagues as they race to dominate their fields.

But when He Jiankui announced on Monday that he had created the world’s first genetically edited babies, Chinese scientists — like those elsewhere — denounced it as a step too far. Now many are asking whether their country’s intense focus on scientific achievement has come at the expense of ethical standards.

The info is here.

Why We Need to Audit Algorithms

James Guszcza, Iyad Rahwan, Will Knight, Manuel Cebrian, & Vic Katyal
Harvard Business Review
Originally published November 28, 2018

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?
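One concrete form such an audit could take is a disparity check on a model's decisions. The sketch below is a minimal illustration in that spirit, not a procedure from the article; the group labels, the audit log, and the 0.8 threshold (borrowed from the "four-fifths" rule of thumb in US employment-selection guidelines) are all assumptions for the example:

```python
# Minimal illustrative audit: compare a black-box model's positive-decision
# rates across groups and flag large disparities.

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of (group, decision) pairs.
audit_log = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(audit_log)
print(f"disparate-impact ratio: {ratio:.2f}, flagged: {ratio < 0.8}")
```

An external auditor running checks like this against a model's decision records needs no access to the model's internals, which is precisely the point of the financial-audit analogy.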

The info is here.

Wednesday, January 2, 2019

When Fox News staffers break ethics rules, discipline follows — or does it?

Margaret Sullivan
The Washington Post
Originally published November 29, 2018

There are ethical standards at Fox News, we’re told.

But just what they are, or how they’re enforced, is an enduring mystery.

When Sean Hannity and Jeanine Pirro appeared onstage with President Trump at a Missouri campaign rally, the network publicly acknowledged that this ran counter to its practices.

“Fox News does not condone any talent participating in campaign events,” the network said in a statement. “This was an unfortunate distraction and has been addressed.”

Or take what happened this week.

When the staff of “Fox & Friends” was found to have provided a pre-interview script for Scott Pruitt, then the Environmental Protection Agency head, the network frowned: “This is not standard practice whatsoever and the matter is being addressed internally with those involved.”

“Not standard practice” is putting it mildly, as the Daily Beast’s Maxwell Tani — who broke the story — noted, quoting David Hawkins, formerly of CBS News and CNN, who teaches journalism at Fordham University...

The info is here.

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.

In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
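The inscrutability problem the authors describe has a familiar remedy in credit law: FCRA-style "adverse action" reason codes, which report the factors that most hurt an applicant's score. The sketch below illustrates that kind of description for a transparent linear model; the feature names and weights are hypothetical, and note that (as the Article argues) this describes *what* the rules are without justifying *why* they are what they are:

```python
# Hypothetical linear scoring model; weights and features are illustrative.
WEIGHTS = {"payment_history": 2.0, "utilization": -1.5, "recent_inquiries": -0.5}

def score(applicant):
    """Simple weighted sum over the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def adverse_reasons(applicant, top_n=2):
    """FCRA-style reason codes: the features contributing most negatively."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:top_n]

applicant = {"payment_history": 0.2, "utilization": 0.9, "recent_inquiries": 3}
print(adverse_reasons(applicant))  # most damaging features first
```

For a genuinely inscrutable model the contributions cannot be read off the weights like this, which is why explanation techniques approximate them instead; and even a perfect readout leaves the nonintuitiveness question, of whether those weights are normatively defensible, untouched.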

The info is here.

Tuesday, January 1, 2019

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Floridi, L., Cowls, J., Beltrametti, M. et al.
Minds & Machines (2018).


This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.