Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, August 31, 2019

Unraveling the Ethics of New Neurotechnologies

Nicholas Weiler
www.ucsf.edu
Originally posted July 30, 2019

Here is an excerpt:

“In unearthing these ethical issues, we try as much as possible to get out of our armchairs and actually observe how people are interacting with these new technologies. We interview everyone from patients and family members to clinicians and researchers,” Chiong said. “We also work with philosophers, lawyers, and others with experience in biomedicine, as well as anthropologists, sociologists and others who can help us understand the clinical challenges people are actually facing as well as their concerns about new technologies.”

Some of the top issues on Chiong’s mind include ensuring patients understand how the data recorded from their brains are being used by researchers; protecting the privacy of this data; and determining what kind of control patients will ultimately have over their brain data.

“As with all technology, ethical questions about neurotechnology are embedded not just in the technology or science itself, but also in the social structure in which the technology is used,” Chiong added. “These questions are not just the domain of scientists, engineers, or even professional ethicists, but are part of a larger societal conversation we’re beginning to have about the appropriate applications of technology and personal data, and about when it’s important for people to be able to opt out or say no.”

The info is here.

Friday, August 30, 2019

The Technology of Kindness—How social media can rebuild our empathy—and why it must.

Jamil Zaki
Scientific American
Originally posted August 6, 2019

Here is an excerpt:

Technology also builds new communities around kindness. Consider the paradox of rare illnesses such as cystic fibrosis or myasthenia gravis. Each affects fewer than one in 1,000 people but there are many such conditions, meaning there are many people who suffer in ways their friends and neighbors don’t understand. Millions have turned to online forums, such as Facebook groups or the site RareConnect. In 2011 Priya Nambisan, a health policy expert, surveyed about 800 members of online health forums. Users reported that these groups offer helpful tips and information but also described them as heartfelt communities, full of compassion and commiseration.

Other platforms, such as 7 Cups and Koko, allow anyone to count on the kindness of strangers. These sites train users to provide empathetic social support and then unleash their goodwill on one another. Some express their struggles; others step in to provide support. Users find these platforms deeply soothing. In a 2015 survey, 7 Cups users described the kindness they received on the site as being as helpful as professional psychotherapy. Users on these sites also benefit from helping others. In a 2017 study, the psychologist Bruce Doré and his colleagues assigned people to use either Koko or another website and tested their subsequent well-being. Koko users’ levels of depression dropped after spending time on the site, especially when they used it to support others.

The info is here.

Cryonics: Medicine, Or The Modern Mummy?

Patrick Lin
Forbes.com
Originally posted July 8, 2019

Here is an excerpt:

Meanwhile, others argued that death is a natural and necessary part of the circle of life.  Ecologically, keeping people around long past their “natural lives” may upset an already fragile balance, potentially exacerbating overpopulation, resource consumption, waste, and so on.

This suggests that cryonics isn’t just a difference in degree from, say, saving heart-attack victims; it becomes a difference in kind.  It’s not an incremental improvement of the sort medicine makes in slowly raising average lifespans, but potentially a radical disruption with major systemic effects.

Culturally, Joseph Weizenbaum, an MIT computer science professor and the creator of ELIZA, wrote: “Our death is the last service we can provide to the world:  Were we not to go out of the way, the following generations would not need to re-create human culture.  Culture would become fixed, unchangeable, and die.  And with the death of culture, humanity would also perish.”

Beyond external effects, the desire for more life may express bad character.  Wanting more than one’s fair share—of life or anything else—seems egotistical and expresses ingratitude for what we already have.  If not for death, we might not appreciate our time on earth.  We appreciate many things, such as beauty and flowers, not despite their impermanence but because of it.

Thursday, August 29, 2019

Wireless optofluidic brain probes for chronic neuropharmacology and photostimulation

Raza Qazi, Adrian M. Gomez, Daniel C. Castro, and others
Nature Biomedical Engineering, volume 3, pages 655–669 (2019)

Abstract

Both in vivo neuropharmacology and optogenetic stimulation can be used to decode neural circuitry, and can provide therapeutic strategies for brain disorders. However, current neuronal interfaces hinder long-term studies in awake and freely behaving animals, as they are limited in their ability to provide simultaneous and prolonged delivery of multiple drugs, are often bulky and lack multifunctionality, and employ custom control systems with insufficiently versatile selectivity for output mode, animal selection and target brain circuits. Here, we describe smartphone-controlled, minimally invasive, soft optofluidic probes with replaceable plug-like drug cartridges for chronic in vivo pharmacology and optogenetics with selective manipulation of brain circuits. We demonstrate the use of the probes for the control of the locomotor activity of mice for over four weeks via programmable wireless drug delivery and photostimulation. Owing to their ability to deliver both drugs and photopharmacology into the brain repeatedly over long time periods, the probes may contribute to uncovering the basis of neuropsychiatric diseases.

The paper is here.

Why Businesses Need Ethics to Survive Disruption

Mathew Donald
HR Technologist
Originally posted July 29, 2019

Here is an excerpt:

Using Ethics as the Guideline

An alternative model for an organization in disruption may be to connect staff and their organization to societal values. Whilst these standards may not all be written, staff will generally know right from wrong where they live in harmony with the broad rules of society. People do not normally steal, drive on the wrong side of the road, or take advantage of the poor. Whilst written laws may prevail and guide society, it is clear that most people follow unwritten societal values. People make decisions on moral grounds daily, each based on their beliefs, refraining from actions that may be frowned upon by their friends and neighbors.

Ethics may be a key ingredient to add to your organization in a disruptive environment, as it may guide your staff through new situations without the necessity for a written rule or government law. It would seem that ethics based on a sense of fair play, not taking undue advantage, not overusing power and control, and alignment with everyday societal values may address some of this heightened risk in the disruption. Once the set of ethics is agreed upon and imbibed by the staff, it may be possible for them to review new transactions, new situations, and potential opportunities without necessarily needing to see written guidelines.

The info is here.

Wednesday, August 28, 2019

Profit Versus Prejudice: Harnessing Self-Interest to Reduce In-Group Bias

Stagnaro, M. N., Dunham, Y., & Rand, D. G. (2018).
Social Psychological and Personality Science, 9(1), 50–58.
https://doi.org/10.1177/1948550617699254

Abstract

We examine the possibility that self-interest, typically thought to undermine social welfare, might reduce in-group bias. We compared the dictator game (DG), where participants unilaterally divide money between themselves and a recipient, and the ultimatum game (UG), where the recipient can reject these offers. Unlike the DG, there is a self-interested motive for UG giving: If participants expect the rejection of unfair offers, they have a monetary incentive to be fair even to out-group members. Thus, we predicted substantial bias in the DG but little bias in the UG. We tested this hypothesis in two studies (N = 3,546) employing a 2 (in-group/out-group, based on abortion position) × 2 (DG/UG) design. We observed the predicted significant group by game interaction, such that the substantial in-group favoritism observed in the DG was almost entirely eliminated in the UG: Giving the recipient bargaining power reduced the premium offered to in-group members by 77.5%.

Discussion
Here we have provided evidence that self-interest has the potential to override in-group bias based on a salient and highly charged real-world grouping (abortion stance). In the DG, where participants had the power to offer whatever they liked, we saw clear evidence of behavior favoring in-group members. In the UG, where the recipient could reject the offer, acting on such biases had the potential to severely reduce earnings. Participants anticipated this, as shown by their expectations of partner behavior, and made fair offers to both in-group and out-group participants.

Traditionally, self-interest is considered a negative force in intergroup relations. For example, an individual might give free reign to a preference for interacting with similar others, and even be willing to pay a cost to satisfy those preferences, resulting in what has been called “taste-based” discrimination (Becker, 1957). Although we do not deny that such discrimination can (and often does) occur, we suggest that in the right context, the costs it can impose serve as a disincentive. In particular, when strategic concerns are heightened, as they are in multilateral interactions where the parties must come to an agreement and failing to do so is both salient and costly (such as the UG), self-interest has the opportunity to mitigate biased behavior. Here, we provide one example of such a situation: We find that participants successfully withheld bias in the UG, making equally fair offers to both in-group and out-group recipients.
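The strategic logic the study describes can be sketched in a toy model. The offer levels and the rejection threshold below are illustrative assumptions, not estimates from the paper: the point is only that a biased dictator can discriminate at no cost, while a self-interested ultimatum proposer who anticipates rejection makes the same offer regardless of the recipient's group.

```python
def dictator_offer(in_group: bool) -> float:
    # In the DG the recipient has no recourse, so a biased proposer
    # can pay an in-group premium at zero strategic cost.
    # (Illustrative offer levels, not the paper's estimates.)
    return 0.40 if in_group else 0.25

def ultimatum_offer(in_group: bool, rejection_threshold: float) -> float:
    # In the UG a self-interested proposer offers just enough to avoid
    # rejection -- the recipient's group membership becomes irrelevant.
    return rejection_threshold

def proposer_payoff(offer: float, rejection_threshold: float) -> float:
    # The recipient rejects any offer below the threshold; both get 0.
    return (1.0 - offer) if offer >= rejection_threshold else 0.0

threshold = 0.30  # assumed minimum acceptable offer
for in_group, label in [(True, "in-group"), (False, "out-group")]:
    dg = dictator_offer(in_group)
    ug = ultimatum_offer(in_group, threshold)
    print(f"{label}: DG offer {dg:.2f}, UG offer {ug:.2f}, "
          f"UG proposer payoff {proposer_payoff(ug, threshold):.2f}")
```

In this sketch the DG offers differ by group while the UG offers are identical, mirroring the interaction the authors report: giving the recipient bargaining power is what removes the bias.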

Asymmetrical genetic attributions for prosocial versus antisocial behaviour

Matthew S. Lebowitz, Kathryn Tabb &
Paul S. Appelbaum
Nature Human Behaviour (2019)

Abstract

Genetic explanations of human behaviour are increasingly common. While genetic attributions for behaviour are often considered relevant for assessing blameworthiness, it has not yet been established whether judgements about blameworthiness can themselves impact genetic attributions. Across six studies, participants read about individuals engaging in prosocial or antisocial behaviour, and rated the extent to which they believed that genetics played a role in causing the behaviour. Antisocial behaviour was consistently rated as less genetically influenced than prosocial behaviour. This was true regardless of whether genetic explanations were explicitly provided or refuted. Mediation analyses suggested that this asymmetry may stem from people’s motivating desire to hold wrongdoers responsible for their actions. These findings suggest that those who seek to study or make use of genetic explanations’ influence on evaluations of, for example, antisocial behaviour should consider whether such explanations are accepted in the first place, given the possibility of motivated causal reasoning.

The research is here.

Tuesday, August 27, 2019

Engineering Ethics Isn't Always Black And White

Elizabeth Fernandez
Forbes.com
Originally posted August 6, 2019

Here is an excerpt:

Dr. Stephan has thought a lot about engineering ethics. He goes on to say that, while there are not many courses completely devoted to engineering ethics, many students now at least have some exposure to it before graduating.

Education may fall into one of several categories. Students may encounter a conflict of interest or why it may be unethical to accept gifts as an engineer. Some examples may be clear. For example, a toy may be found to have a defective part which could harm a child. Ethically, the toy should be pulled from the market, even if it causes the company loss of revenue.

But other times, the ethical choice may be less clear. For example, how should a civil engineer make a decision about which intersection should receive funds for a safety upgrade, which may come down to weighing some lives against others? Or what ethical decisions are involved in creating a device that eliminates second-hand smoke from cigarettes, but might reinforce addiction or increase the incidence of children who smoke?

Now engineering ethics may even be more important. "The advances in artificial intelligence that have occurred over the last decade are raising serious questions about how this technology should be controlled with respect to privacy, politics, and even personal safety," says Dr. Stephan.

The info is here.

Neuroscience and mental state issues in forensic assessment

David Freedman and Simona Zaami
International Journal of Law and Psychiatry
Available online 2 April 2019

Abstract

Neuroscience has already changed how the law understands an individual's cognitive processes, how those processes shape behavior, and how bio-psychosocial history and neurodevelopmental approaches provide information, which is critical to understanding mental states underlying behavior, including criminal behavior. In this paper, we briefly review the state of forensic assessment of mental conditions in the relative culpability of criminal defendants, focused primarily on the weaknesses of current approaches. We then turn to focus on neuroscience approaches and how they have the potential to improve assessment, but with significant risks and limitations.

From the Conclusion:

This approach is not a cure-all. Understanding and explaining specific behaviors is a difficult undertaking, and explaining the mental condition of the person engaged in those behaviors at the time the behaviors took place is even more difficult. Yet the law requires some degree of reliability and a rigorous, honest presentation of the strengths and weaknesses of the science being relied upon to form opinions.  Despite the dramatic advances in understanding the neural bases of cognition and functioning, neuroscience does not yet reliably describe how those processes emerge in a specific environmental context (Poldrack et al., 2018), nor what an individual was thinking, feeling, experiencing, understanding, or intending at a particular moment in time (Freedman & Woods, 2018; Greely & Farahany, 2019).

The info is here.

Monday, August 26, 2019

Psychological reactions to human versus robotic job replacement

Armin Granulo, Christoph Fuchs & Stefano Puntoni
Nature.com
Originally posted August 5, 2019

Abstract

Advances in robotics and artificial intelligence are increasingly enabling organizations to replace humans with intelligent machines and algorithms. Forecasts predict that, in the coming years, these new technologies will affect millions of workers in a wide range of occupations, replacing human workers in numerous tasks, but potentially also in whole occupations. Despite the intense debate about these developments in economics, sociology and other social sciences, research has not examined how people react to the technological replacement of human labour. We begin to address this gap by examining the psychology of technological replacement. Our investigation reveals that people tend to prefer workers to be replaced by other human workers (versus robots); however, paradoxically, this preference reverses when people consider the prospect of their own job loss. We further demonstrate that this preference reversal occurs because being replaced by machines, robots or software (versus other humans) is associated with reduced self-threat. In contrast, being replaced by robots is associated with a greater perceived threat to one’s economic future. These findings suggest that technological replacement of human labour has unique psychological consequences that should be taken into account by policy measures (for example, appropriately tailoring support programmes for the unemployed).

The info is here.

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J. W. Janssens
Genes 2019, 10(6), 448
https://doi.org/10.3390/genes10060448

Abstract

Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.
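As a rough sketch of what the recommended disclosure could look like: a polygenic score is a weighted sum of risk-allele counts, so the "expected risk distribution in the population" is simply the distribution of those scores over a benchmark sample of genotypes. All weights, allele frequencies, and sample sizes below are invented for illustration; a real disclosure would apply the company's (undisclosed) weights to a public benchmark dataset.

```python
import random

random.seed(42)

# Invented per-variant effect sizes and risk-allele frequencies.
weights = [0.12, 0.30, 0.05, 0.22, 0.18]
freqs   = [0.10, 0.25, 0.40, 0.05, 0.30]

def simulate_genotype() -> list[int]:
    # Each variant contributes 0, 1, or 2 copies of the risk allele.
    return [sum(random.random() < f for _ in range(2)) for f in freqs]

def polygenic_score(genotype: list[int]) -> float:
    # A polygenic score: allele counts weighted by effect sizes.
    return sum(w * g for w, g in zip(weights, genotype))

# The population-level summary the authors argue should be disclosed:
# the score distribution over a benchmark sample.
scores = sorted(polygenic_score(simulate_genotype()) for _ in range(10_000))
print("median score:", round(scores[len(scores) // 2], 3))
print("95th percentile:", round(scores[int(0.95 * len(scores))], 3))
```

Publishing summaries like these for a standard benchmark dataset would let customers compare offers and let scientists check whether an undisclosed algorithm could plausibly be valid, without revealing the weights themselves.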

Sunday, August 25, 2019

Chances are, you’re not as open-minded as you think

David Epstein
The Washington Post
Originally published July 20, 2019

Here is an excerpt:

The lesson is clear enough: Most of us are probably not as open-minded as we think. That is unfortunate and something we can change. A hallmark of teams that make good predictions about the world around them is something psychologists call “active open-mindedness.” People who exhibit this trait do something, alone or together, as a matter of routine that rarely occurs to most of us: They imagine their own views as hypotheses in need of testing.

They aim not to bring people around to their perspective but to encourage others to help them disprove what they already believe. This is not instinctive behavior. Most of us, armed with a Web browser, do not start most days by searching for why we are wrong.

As our divisive politics daily feed our tilt toward confirmation bias, it is worth asking if this instinct to think we know enough is hardening into a habit of poor judgment. Consider that, in a study during the run-up to the Brexit vote, a small majority of both Remainers and Brexiters could correctly interpret made-up statistics about the efficacy of a rash-curing skin cream. But when the same voters were given similarly false data presented as if it indicated that immigration either increased or decreased crime, hordes of Brits suddenly became innumerate and misinterpreted statistics that disagreed with their beliefs.

The info is here.

Saturday, August 24, 2019

Decoding the neuroscience of consciousness

Emily Sohn
Nature.com
Originally published July 24, 2019

Here is an excerpt:

That disconnect might also offer insight into why current medications for anxiety do not always work as well as people hope, LeDoux says. Developed through animal studies, these medications might target circuits in the amygdala and affect a person’s behaviours, such as their level of timidity — making it easier for them to go to social events. But such drugs don’t necessarily affect the conscious experience of fear, which suggests that future treatments might need to address both unconscious and conscious processes separately. “We can take a brain-based approach that sees these different kinds of symptoms as products of different circuits, and design therapies that target the different circuits systematically,” he says. “Turning down the volume doesn’t change the song — only its level.”

Psychiatric disorders are another area of interest for consciousness researchers, Lau says, on the basis that some mental-health conditions, including schizophrenia, obsessive–compulsive disorder and depression, might be caused by problems at the unconscious level — or even by conflicts between conscious and unconscious pathways. The link is only hypothetical so far, but Seth has been probing the neural basis of hallucinations with a ‘hallucination machine’ — a virtual-reality program that uses machine learning to simulate visual hallucinatory experiences in people with healthy brains. Through experiments, he and his colleagues have shown that these hallucinations resemble the types of visions that people experience while taking psychedelic drugs, which have increasingly been used as a tool to investigate the neural underpinnings of consciousness.

If researchers can uncover the mechanisms behind hallucinations, they might be able to manipulate the relevant areas of the brain and, in turn, treat the underlying cause of psychosis — rather than just address the symptoms. By demonstrating how easy it is to manipulate people’s perceptions, Seth adds, the work suggests that our sense of reality is just another facet of how we experience the world.

The info is here.

Friday, August 23, 2019

Medical Acts and Conscientious Objection: What Can a Physician be Compelled to Do?

Nathan K. Gamble and Michael Pruski
The New Bioethics
DOI: 10.1080/20502877.2019.1649871

Abstract

A key question has been underexplored in the literature on conscientious objection: if a physician is required to perform ‘medical activities,’ what is a medical activity? This paper explores the question by employing a teleological evaluation of medicine and examining the analogy of military conscripts, commonly cited in the conscientious objection debate. It argues that physicians (and other healthcare professionals) can only be expected to perform and support medical acts – acts directed towards their patients’ health. That is, physicians cannot be forced to provide or support services that are not medical in nature, even if such activities support other socially desirable pursuits. This does not necessarily mean that medical professionals cannot or should not provide non-medical services, but only that they are under no obligation to provide them.

Moral Grandstanding

Justin Tosi and Brandon Warmke
Philosophy & Public Affairs
First published: 27 December 2016
https://doi.org/10.1111/papa.12075

Here is an excerpt:

We suspect that most people would agree that grandstanding is annoying. We think that it is also morally problematic. In our view, the vast majority of moral grandstanding is bad, and, in general, one should not grandstand. We will adduce some reasons for this view shortly, but we should make a few preliminary points.

First, we will not argue that grandstanding should never be done. We are open to the possibility that there are circumstances in which either an instance of grandstanding possesses no bad‐making features or, even if an instance does have bad‐making features, the option of not grandstanding will be even worse.

Second, we will not claim that people who grandstand are bad people in virtue of engaging in grandstanding. We all have flaws that are on occasion revealed in the public square. Engaging in grandstanding is not obviously worse than many other flaws, and a propensity to grandstand is not indefeasible evidence that someone lacks good character.

Third, although we do believe that grandstanding is typically bad and should not be done, we are not prescribing any particular social enforcement mechanisms to deal with it. Presently, our concerns are the nature of grandstanding and its moral status. It does not follow, at least in any straightforward way, that people should intervene in public moral discourse to discourage others from grandstanding, or to blame them for grandstanding.

The info is here.

Thursday, August 22, 2019

New Jersey will allow terminally ill patients to end their lives

Taylor Romine
CNN.com
Originally posted July 1, 2019

Terminally ill adults in New Jersey will now be able to ask for medical help to end their lives.

In April, Gov. Phil Murphy signed the Medical Aid in Dying for the Terminally Ill Act. It goes into effect Thursday.

It allows adults with a prognosis of six months or less to live to get a prescription for life-ending medication.

Other jurisdictions that allow physician-assisted suicide are: California, Colorado, Oregon, Vermont, Washington, Hawaii, Montana and the District of Columbia.

The law requires that either a psychiatrist or a psychologist determine that the patient has the mental capacity to make the decision. The prescription is a series of self-administered pills that can be taken at home.

"Allowing residents with terminal illnesses to make end-of-life choices for themselves is the right thing to do," Murphy said in a statement.

The info is here.

Repetition increases perceived truth equally for plausible and implausible statements

Lisa Fazio, David Rand, and Gordon Pennycook
PsyArXiv
Originally created February 28, 2019

Abstract

Repetition increases the likelihood that a statement will be judged as true. This illusory truth effect is well-established; however, it has been argued that repetition will not affect belief in unambiguous statements. When individuals are faced with obviously true or false statements, repetition should have no impact. We report a simulation study and a preregistered experiment that investigate this idea. Contrary to many intuitions, our results suggest that belief in all statements is increased by repetition. The observed illusory truth effect is largest for ambiguous items, but this can be explained by the psychometric properties of the task, rather than an underlying psychological mechanism that blocks the impact of repetition for implausible items. Our results indicate that the illusory truth effect is highly robust and occurs across all levels of plausibility. Therefore, even highly implausible statements will become more plausible with enough repetition.

The research is here.

The conclusion:

In conclusion, our findings are consistent with the hypothesis that repetition increases belief in all statements equally, regardless of their plausibility. However, there is an important difference between this internal mechanism (equal increase across plausibility) and the observable effect. The observable effect of repetition on truth ratings is greatest for items near the midpoint of perceived truth, and small or nonexistent for items at the extremes. While repetition effects are difficult to observe for very high and very low levels of perceived truth, our results suggest that repetition increases participants’ internal representation of truth equally for all statements. These findings have large implications for daily life where people are often repeatedly exposed to both plausible and implausible falsehoods. Even implausible falsehoods may slowly become more plausible with repetition.
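The psychometric point here, that an equal shift in the internal truth representation produces the largest observable change near the scale's midpoint and almost none at the extremes, can be illustrated with a simple model. The logistic link and the size of the repetition shift are assumptions chosen for illustration, not quantities from the paper.

```python
import math

def observed_truth(latent: float) -> float:
    # Map an unbounded internal truth representation onto a bounded
    # 0-1 rating scale (an assumed logistic psychometric link).
    return 1.0 / (1.0 + math.exp(-latent))

def rating_change(latent: float, shift: float = 1.0) -> float:
    # Repetition adds the same constant shift to every statement's
    # latent representation; the observable change depends on where
    # the statement sits on the bounded scale.
    return observed_truth(latent + shift) - observed_truth(latent)

for latent, label in [(-6.0, "highly implausible"),
                      (0.0, "ambiguous"),
                      (6.0, "highly plausible")]:
    print(f"{label:18s} observed change: {rating_change(latent):.3f}")
```

Under this model the internal effect of repetition is identical everywhere, yet the measured rating change is large only for ambiguous items, which is exactly the pattern the authors argue can masquerade as "repetition doesn't affect implausible statements."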

Wednesday, August 21, 2019

Personal infidelity and professional conduct in 4 settings

John M. Griffin, Samuel Kruger, and Gonzalo Maturana
PNAS first published July 30, 2019
https://doi.org/10.1073/pnas.1905329116

Abstract

We study the connection between personal and professional behavior by introducing usage of a marital infidelity website as a measure of personal conduct. Police officers and financial advisors who use the infidelity website are significantly more likely to engage in professional misconduct. Results are similar for US Securities and Exchange Commission (SEC) defendants accused of white-collar crimes, and companies with chief executive officers (CEOs) or chief financial officers (CFOs) who use the website are more than twice as likely to engage in corporate misconduct. The relation is not explained by a wide range of regional, firm, executive, and cultural variables. These findings suggest that personal and workplace behavior are closely related.

Significance

The relative importance of personal traits compared with context for predicting behavior is a long-standing issue in psychology. This debate plays out in a practical way every time an employer, voter, or other decision maker has to infer expected professional conduct based on observed personal behavior. Despite its theoretical and practical importance, there is little academic consensus on this question. We fill this void with evidence connecting personal infidelity to professional behavior in 4 different settings.

The Conclusion:

More broadly, our findings suggest that personal and professional lives are connected and cut against the common view that ethics are predominantly situational. This supports the classical view that virtues such as honesty and integrity influence a person’s thoughts and actions across diverse contexts and has potentially important implications for corporate recruiting and codes of conduct. A possible implication of our findings is that the recent focus on eliminating sexual misconduct in the workplace may have the auxiliary effect of reducing fraudulent workplace activity.

Tech Is Already Reading Your Emotions - But Do Algorithms Get It Right?

Jessica Baron
Forbes.com
Originally published July 18, 2019

From measuring shopper satisfaction to detecting signs of depression, companies are employing emotion-sensing facial recognition technology that is based on flawed science, according to a new study.

If the idea of having your face recorded and then analyzed for mood so that someone can intervene in your life sounds creepy, that’s because it is. But that hasn’t stopped companies like Walmart from promising to implement the technology to improve customer satisfaction, despite numerous challenges from ethicists and other consumer advocates.

At the end of the day, this flavor of facial recognition software is probably all about making you safer and happier – it wants to let you know if you’re angry or depressed so you can calm down or get help; it wants to see what kind of mood you’re in when you shop so it can help stores keep you as a customer; it wants to measure your mood while driving, playing video games, or just browsing the Internet to see what goods and services you might like to buy to improve your life.

The problem is – well, aside from the obvious privacy issues and general creep factor – that computers aren’t really that good at judging our moods based on the information they get from facial recognition technology. To top it off, this technology exhibits that same kind of racial bias that other AI programs do, assigning more negative emotions to black faces, for example. That’s probably because it’s based on flawed science.

The info is here.

Tuesday, August 20, 2019

What Alan Dershowitz taught me about morality

Molly Roberts
The Washington Post
Originally posted August 2, 2019

Here are two excerpts:

Dershowitz has been defending Donald Trump on television for years, casting himself as a warrior for due process. Now, Dershowitz is defending himself on TV, too, against accusations at the least that he knew about Epstein allegedly trafficking underage girls for sex with men, and at the worst that he was one of the men.

These cases have much in common, and they both bring me back to the classroom that day when no one around the table — not the girl who invoked Ernest Hemingway’s hedonism, nor the boy who invoked God’s commandments — seemed to know where our morality came from. Which was probably the point of the exercise.

(cut)

You can make a convoluted argument that investigations of the president constitute irresponsible congressional overreach, but contorting the Constitution is your choice, and the consequences to the country of your contortion are yours to own, too. Everyone deserves a defense, but lawyers in private practice choose their clients — and putting a particular focus on championing those Dershowitz calls the “most unpopular, most despised” requires grappling with what it means for victims when an abuser ends up with a cozy plea deal.

When the alleged abuser is your friend Jeffrey, whose case you could have avoided precisely because you have a personal relationship, that grappling is even more difficult. Maybe it’s still all worth it to keep the system from falling apart, because next time it might not be a billionaire financier who wanted to seed the human race with his DNA on the stand, but a poor teenager framed for a crime he didn’t commit.

Dershowitz once told the New York Times he regretted taking Epstein’s case. He told me, “I would do it again.”

The info is here.

Can Neuroscience Understand Free Will?

Brian Gallagher
nautil.us
Originally posted on July 19, 2019

Here is an excerpt:

Clinical neuroscientists and neurologists have identified the brain networks responsible for this sense of free will. There seem to be two: the network governing the desire to act, and the network governing the feeling of responsibility for acting. Brain-damaged patients show that these can come apart—you can have one without the other.

Lacking essentially all motivation to move or speak has a name: akinetic mutism. The researchers, led by neurologists Michael Fox, of Harvard Medical School, and Ryan Darby, of Vanderbilt University, analyzed 28 cases of this condition, not all of them involving damage in the same departments. “We found that brain lesions that disrupt volition occur in many different locations, but fall within a single brain network, defined by connectivity to the anterior cingulate,” which has links to both the “emotional” limbic system and the “cognitive” prefrontal cortex, the researchers wrote. Feeling like you’re moving under the direction of outside forces has a name, too: alien limb syndrome. The researchers analyzed 50 cases of this condition, which again involved brain damage in different spots. “Lesions that disrupt agency also occur in many different locations, but fall within a separate network, defined by connectivity to the precuneus,” which is involved, among other things, in the experience of agency.

The results may not map onto “free will” as we understand it ethically—the ability to choose between right and wrong. “It remains unknown whether the network of brain regions we identify as related to free will for movements is the same as those important for moral decision-making, as prior studies have suggested important differences,” the researchers wrote. For instance, in a 2017 study, Fox and Darby analyzed many cases of brain lesions in various regions predisposing people to criminal behavior, and found that “these lesions all fall within a unique functionally connected brain network involved in moral decision making.”

The info is here.

Monday, August 19, 2019

The Case Against A.I. Controlling Our Moral Compass

Brian Gallagher
ethicalsystems.org
Originally published June 25, 2019

Here is an excerpt:

Morality, the researchers found, isn’t like any other decision space. People were averse to machines having the power to choose what to do in life and death situations—specifically in driving, legal, medical, and military contexts. This hinged on their perception of machine minds as incomplete, or lacking in agency (the capacity to reason, plan, and communicate effectively) and subjective experience (the possession of a human-like consciousness, with the ability to empathize and to feel pain and other emotions).

For example, when the researchers presented subjects with hypothetical medical and military situations—where a human or machine would decide on a surgery as well as a missile strike, and the surgery and strike succeeded—subjects still found the machine’s decision less permissible, due to its lack of agency and subjective experience relative to the human. Not having the appropriate sort of mind, it seems, disqualifies machines, in the judgement of these subjects, from making moral decisions even if they are the same decisions that a human made. Having a machine sound human, with an emotional and expressive voice, and claim to experience emotion, doesn’t help—people found a compassionate-sounding machine just as unqualified for moral choice as one that spoke robotically.

Only in certain circumstances would a machine’s moral choice trump a human’s. People preferred an expert machine’s decision over an average doctor’s, for instance, but just barely. Bigman and Gray also found that some people are willing to have machines support human moral decision-making as advisors. A substantial portion of subjects, 32 percent, were even against that, though, “demonstrating the tenacious aversion to machine moral decision-making,” the researchers wrote. The results “suggest that reducing the aversion to machine moral decision-making is not easy, and depends upon making very salient the expertise of machines and the overriding authority of humans—and even then, it still lingers.”

The info is here.

The evolution of moral cognition

Leda Cosmides, Ricardo Guzmán, and John Tooby
The Routledge Handbook of Moral Epistemology - Chapter 9

1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions.

This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists, primatologists, economists, sociologists, anthropologists, and political scientists.

The chapter can be found here.

Sunday, August 18, 2019

Social physics

Despite the vagaries of free will and circumstance, human behaviour in bulk is far more predictable than we like to imagine

Ian Stewart
www.aeon.co
Originally posted July 9, 2019

Here is an excerpt:

Polling organisations use a variety of methods to try to minimise these sources of error. Many of these methods are mathematical, but psychological and other factors also come into consideration. Most of us know of stories where polls have confidently indicated the wrong result, and it seems to be happening more often. Special factors are sometimes invoked to ‘explain’ why, such as a sudden late swing in opinion, or people deliberately lying to make the opposition think it’s going to win and become complacent. Nevertheless, when performed competently, polling has a fairly good track record overall. It provides a useful tool for reducing uncertainty. Exit polls, where people are asked whom they voted for soon after they cast their vote, are often very accurate, giving the correct result long before the official vote count reveals it, and can’t influence the result.

Today, the term ‘social physics’ has acquired a less metaphorical meaning. Rapid progress in information technology has led to the ‘big data’ revolution, in which gigantic quantities of information can be obtained and processed. Patterns of human behaviour can be extracted from records of credit-card purchases, telephone calls and emails. Words suddenly becoming more common on social media, such as ‘demagogue’ during the 2016 US presidential election, can be clues to hot political issues.

The mathematical challenge is to find effective ways to extract meaningful patterns from masses of unstructured information, and many new methods are being developed to meet it.

The info is here.

Saturday, August 17, 2019

DC Types Have Been Flocking to Shrinks Ever Since Trump Won.

And a Lot of the Therapists Are Miserable.

Britt Peterson
www.washingtonian.com
Originally published July 14, 2019

Here are two excerpts:

In Washington, the malaise appears especially pronounced. I spent the last several months talking to nearly two dozen local therapists who described skyrocketing levels of interest in their services. They told me about cases of ordinary stress blossoming into clinical conditions, patients who can’t get through a session without invoking the President’s name, couples and families falling apart over politics—a broad category of concerns that one practitioner, Beth Sperber Richie, says she and her colleagues have come to categorize as “Trump trauma.”

In one sense, that’s been good news for the people who help keep us sane: Their calendars are full. But Trump trauma has also created particular clinical challenges for therapists like Guttman and her students. It’s one thing to listen to a client discuss a horrible personal incident. It’s another when you’re experiencing the same collective trauma.

“I’ve been a therapist for a long time,” says Delishia Pittman, an assistant professor at George Washington University who has been in private practice for 14 years. “And this has been the most taxing two years of my entire career.”

(cut)

For many, in other words, Trump-related anxieties originate from something more serious than mere differences about policy. The therapists I spoke to are equally upset—living through one unnerving news cycle after another, personally experiencing the same issues as their patients in real time while being expected to offer solace and guidance. As Bindeman told her clients the day after Trump’s election, “I’m processing it just as you are, so I’m not sure I can give you the distance that might be useful.”

This is a unique situation in therapy, where you’re normally discussing events in the client’s private life. How do you counsel a sexual-assault victim agitated by the Access Hollywood tape, for example, when the tape has also disturbed you—and when talking about it all day only upsets you further? How about a client who echoes your own fears about climate change or the treatment of minorities or the government shutdown, which had a financial impact on therapists just as it did everyone else?

Again and again, practitioners described different versions of this problem.

The info is here.

Friday, August 16, 2019

Physicians struggle with their own self-care, survey finds

Jeff Lagasse
Healthcare Finance
Originally published July 26, 2019

Despite believing that self-care is a vitally important part of health and overall well-being, many physicians overlook their own self-care, according to a new survey conducted by The Harris Poll on behalf of Samueli Integrative Health Programs. Lack of time, job demands, family demands, being too tired and burnout are the most common reasons for not practicing their desired amount of self-care.

The authors said that while most doctors acknowledge the physical, mental and social importance of self-care, many are falling short, perhaps contributing to the epidemic of physician burnout currently permeating the nation's healthcare system.

What's The Impact

The survey -- involving more than 300 family medicine and internal medicine physicians as well as more than 1,000 U.S. adults ages 18 and older -- found that although 80 percent of physicians say practicing self-care is "very important" to them personally, only 57 percent practice it "often" and about one-third (36%) do so only "sometimes."

Lack of time is the primary reason physicians say they aren't able to practice their desired amount of self-care (72%). Other barriers include mounting job demands (59%) and burnout (25%). Additionally, almost half of physicians (45%) say family demands interfere with their ability to practice self-care, and 20 percent say they feel guilty taking time for themselves.

The info is here.

Federal Watchdog Reports EPA Ignored Ethics Rules

Alyssa Danigelis
www.environmentalleader.com
Originally published July 17, 2019

The Environmental Protection Agency failed to comply with federal ethics rules for appointing advisory committee members, the Government Accountability Office (GAO) concluded this week. President Trump’s EPA skipped disclosure requirements for new committee members last year, according to the federal watchdog.

Led by Andrew Wheeler, the EPA currently manages 22 committees that advise the agency on a wide range of issues, including developing regulations and managing research programs.

However, in fiscal year 2018, the agency didn’t follow a key step in its process for appointing 20 committee members to the Science Advisory Board (SAB) and Clean Air Scientific Advisory Committee (CASAC), the report says.

“SAB is the agency’s largest committee and CASAC is responsible for, among other things, reviewing national ambient air-quality standards,” the report noted. “In addition, when reviewing the step in EPA’s appointment process related specifically to financial disclosure reporting, we found that EPA did not consistently ensure that [special government employees] appointed to advisory committees met federal financial disclosure requirements.”

The GAO also pointed out that the number of committee members affiliated with academic institutions shrank.

The info is here.

Thursday, August 15, 2019

World’s first ever human-monkey hybrid grown in lab in China

Henry Holloway
www.dailystar.co.uk
Originally posted August 1, 2019

Here is an excerpt:

Scientists have successfully formed a hybrid human-monkey embryo – with the experiment taking place in China to avoid “legal issues”.

Researchers led by scientist Juan Carlos Izpisúa spliced together the genes to grow a monkey with human cells.

It is said the creature could have grown and been born, but scientists aborted the process.

The team, made up of members of the Salk Institute in the United States and the Murcia Catholic University, genetically modified the monkey embryos.

Researchers deactivated the genes that form organs and replaced them with human stem cells.

And it is hoped that one day these hybrid-grown organs will be able to be transplanted into humans.

The info is here.

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them

Elizabeth Lopatto
www.theverge.com
Originally published July 16, 2019

Here is an excerpt:

“It’s not going to be suddenly Neuralink will have this neural lace and start taking over people’s brains,” Musk said. “Ultimately” he wants “to achieve a symbiosis with artificial intelligence.” And that even in a “benign scenario,” humans would be “left behind.” Hence, he wants to create technology that allows a “merging with AI.” He later added “we are a brain in a vat, and that vat is our skull,” and so the goal is to read neural spikes from that brain.

The first paralyzed person to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle, who had a spinal cord injury, played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

“Neuralink didn’t come out of nowhere, there’s a long history of academic research here,” Hodak said at the presentation on Tuesday. “We’re, in the greatest sense, building on the shoulders of giants.” However, none of the existing technologies fit Neuralink’s goal of directly reading neural spikes in a minimally invasive way.

The system presented today, if it’s functional, may be a substantial advance over older technology. BrainGate relied on the Utah Array, a series of stiff needles that allows for up to 128 electrode channels. Not only is that fewer channels than Neuralink is promising — meaning less data from the brain is being picked up — it’s also stiffer than Neuralink’s threads. That’s a problem for long-term functionality: the brain shifts in the skull but the needles of the array don’t, leading to damage. The thin polymers Neuralink is using may solve that problem.

The info is here.

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

The info is here.

Why You Should Develop a Personal Ethics Statement

Charlene Walters
www.entrepreneur.com
Originally posted July 16, 2019

As an entrepreneur, it can be helpful to create a personal ethics statement. A personal ethics statement is an assertion that defines your core ethical values and beliefs. It also delivers a strong testimonial about your code of conduct when dealing with people.

This statement can differentiate you from other businesses and entrepreneurs in your space. It should include information regarding your position on honesty and be reflective of how you interact with others. You can use your personal ethics statement or video on your website or when speaking with clients.

When you create it, you should include information about your fundamental beliefs, opinions and values. Your statement will give potential customers some insight into what it’s like to do business with you. You should also talk about anything that’s happened in your life that has impacted your ethical stance. Were you wronged in the past or affected by some injustice you witnessed? How did that shape and define you?

Remember that you’re basically telling clients why it’s better to do business with you than other entrepreneurs and communicating what you value as a person. Give creating a personal ethics statement a try. It’s a wonderful exercise and can provide value to your customers.

The info is here.

Tuesday, August 13, 2019

The arc of the moral universe won't bend on its own

Adam Fondren
Rapid City Journal
Originally posted August 11, 2019

Here are two excerpts:

My favorite Martin Luther King Jr. quote -- one of 14 engraved on a monument to his legacy in Washington, D.C. -- is, "We shall overcome because the arc of the moral universe is long, but it bends toward justice."

I like that quote because I hope he was right. But do we have evidence to support that?

A man just drove for hours in order to kill people whose skin is a little darker and food a little spicier than his culture's. He opened fire in a mass shooting inspired by the words of politicians and pundits who stoke racist fears in order to win votes for their side. Calling groups of refugees invasions, infestations, or criminals and worrying about racial replacement are not the sentiments of a society whose moral arc is bending toward justice.

(cut)

Racism isn't solved. White nationalists are not a hoax, and they are a big problem.

There are no spectators in this fight. You either condemn, condone or contribute to the problem.

Racism isn't a partisan issue. Both parties can come together to make these beliefs unacceptable in our society.

Another King quote from his Letter from a Birmingham Jail sums it up, "Injustice anywhere is a threat to justice everywhere. We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly."

Rev. King was right about the moral arc of the universe bending toward justice, but it won't bend on its own. That's where we come in. We all have to do our part to make sure that our words and actions make racists uncomfortable.

The info is here.

UNRWA Leaders Accused of Sexual Misconduct, Ethics Violations

jns.org
Originally published July 29, 2019

An internal ethics report sent to the UN secretary-general in December alleges that the commissioner-general of the United Nations Relief and Works Agency (UNRWA) and other officials at the highest levels of the UN agency have committed a series of serious ethics violations, AFP has reported.

According to AFP, Commissioner-General Pierre Krähenbühl and other top officials at the UN agency are being accused of abuses including “sexual misconduct, nepotism, retaliation, discrimination and other abuses of authority, for personal gain, to suppress legitimate dissent, and to otherwise achieve their personal objectives.”

The allegations are currently being probed by UN investigators.

In one instance, Krähenbühl, a married father of three from Switzerland, is accused of having a lover appointed to a newly-created role of senior adviser to the commissioner-general after an “extreme fast-track” process in 2015, which also entitled her to travel with him around the world with top accommodations.

The info is here.