Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, August 20, 2019

What Alan Dershowitz taught me about morality

Molly Roberts
The Washington Post
Originally posted August 2, 2019

Here are two excerpts:

Dershowitz has been defending Donald Trump on television for years, casting himself as a warrior for due process. Now, Dershowitz is defending himself on TV, too, against accusations at the least that he knew about Epstein allegedly trafficking underage girls for sex with men, and at the worst that he was one of the men.

These cases have much in common, and they both bring me back to the classroom that day when no one around the table — not the girl who invoked Ernest Hemingway’s hedonism, nor the boy who invoked God’s commandments — seemed to know where our morality came from. Which was probably the point of the exercise.

(cut)

You can make a convoluted argument that investigations of the president constitute irresponsible congressional overreach, but contorting the Constitution is your choice, and the consequences to the country of your contortion are yours to own, too. Everyone deserves a defense, but lawyers in private practice choose their clients — and putting a particular focus on championing those Dershowitz calls the “most unpopular, most despised” requires grappling with what it means for victims when an abuser ends up with a cozy plea deal.

When the alleged abuser is your friend Jeffrey, whose case you could have avoided precisely because you have a personal relationship, that grappling is even more difficult. Maybe it’s still all worth it to keep the system from falling apart, because next time it might not be a billionaire financier who wanted to seed the human race with his DNA on the stand, but a poor teenager framed for a crime he didn’t commit.

Dershowitz once told the New York Times he regretted taking Epstein’s case. He told me, “I would do it again.”

The info is here.

Can Neuroscience Understand Free Will?

Brian Gallagher
nautil.us
Originally posted on July 19, 2019

Here is an excerpt:

Clinical neuroscientists and neurologists have identified the brain networks responsible for this sense of free will. There seem to be two: the network governing the desire to act, and the network governing the feeling of responsibility for acting. Brain-damaged patients show that these can come apart—you can have one without the other.

Lacking essentially all motivation to move or speak has a name: akinetic mutism. The researchers, led by neurologists Michael Fox, of Harvard Medical School, and Ryan Darby, of Vanderbilt University, analyzed 28 cases of this condition, not all of them involving damage in the same regions. “We found that brain lesions that disrupt volition occur in many different locations, but fall within a single brain network, defined by connectivity to the anterior cingulate,” which has links to both the “emotional” limbic system and the “cognitive” prefrontal cortex, the researchers wrote. Feeling like you’re moving under the direction of outside forces has a name, too: alien limb syndrome. The researchers analyzed 50 cases of this condition, which again involved brain damage in different spots. “Lesions that disrupt agency also occur in many different locations, but fall within a separate network, defined by connectivity to the precuneus,” which is involved, among other things, in the experience of agency.

The results may not map onto “free will” as we understand it ethically—the ability to choose between right and wrong. “It remains unknown whether the network of brain regions we identify as related to free will for movements is the same as those important for moral decision-making, as prior studies have suggested important differences,” the researchers wrote. For instance, in a 2017 study, Fox and Darby analyzed many cases of brain lesions in various regions predisposing people to criminal behavior, and found that “these lesions all fall within a unique functionally connected brain network involved in moral decision making.”

The info is here.

Monday, August 19, 2019

The Case Against A.I. Controlling Our Moral Compass

Brian Gallagher
ethicalsystems.org
Originally published June 25, 2019


Here is an excerpt:

Morality, the researchers found, isn’t like any other decision space. People were averse to machines having the power to choose what to do in life and death situations—specifically in driving, legal, medical, and military contexts. This hinged on their perception of machine minds as incomplete, or lacking in agency (the capacity to reason, plan, and communicate effectively) and subjective experience (the possession of a human-like consciousness, with the ability to empathize and to feel pain and other emotions).

For example, when the researchers presented subjects with hypothetical medical and military situations—where a human or machine would decide on a surgery as well as a missile strike, and the surgery and strike succeeded—subjects still found the machine’s decision less permissible, due to its lack of agency and subjective experience relative to the human. Not having the appropriate sort of mind, it seems, disqualifies machines, in the judgement of these subjects, from making moral decisions even if they are the same decisions that a human made. Having a machine sound human, with an emotional and expressive voice, and claim to experience emotion, doesn’t help—people found a compassionate-sounding machine just as unqualified for moral choice as one that spoke robotically.

Only in certain circumstances would a machine’s moral choice trump a human’s. People preferred an expert machine’s decision over an average doctor’s, for instance, but just barely. Bigman and Gray also found that some people are willing to have machines support human moral decision-making as advisors. A substantial portion of subjects, 32 percent, were even against that, though, “demonstrating the tenacious aversion to machine moral decision-making,” the researchers wrote. The results “suggest that reducing the aversion to machine moral decision-making is not easy, and depends upon making very salient the expertise of machines and the overriding authority of humans—and even then, it still lingers.”

The info is here.

The evolution of moral cognition

Leda Cosmides, Ricardo Guzmán, and John Tooby
The Routledge Handbook of Moral Epistemology - Chapter 9

1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions.

This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists, primatologists, economists, sociologists, anthropologists, and political scientists.

The chapter can be found here.

Sunday, August 18, 2019

Social physics

Despite the vagaries of free will and circumstance, human behaviour in bulk is far more predictable than we like to imagine

Ian Stewart
www.aeon.co
Originally posted July 9, 2019

Here is an excerpt:

Polling organisations use a variety of methods to try to minimise these sources of error. Many of these methods are mathematical, but psychological and other factors also come into consideration. Most of us know of stories where polls have confidently indicated the wrong result, and it seems to be happening more often. Special factors are sometimes invoked to ‘explain’ why, such as a sudden late swing in opinion, or people deliberately lying to make the opposition think it’s going to win and become complacent. Nevertheless, when performed competently, polling has a fairly good track-record overall. It provides a useful tool for reducing uncertainty. Exit polls, where people are asked whom they voted for soon after they cast their vote, are often very accurate, giving the correct result long before the official vote count reveals it, and can’t influence the result.

Today, the term ‘social physics’ has acquired a less metaphorical meaning. Rapid progress in information technology has led to the ‘big data’ revolution, in which gigantic quantities of information can be obtained and processed. Patterns of human behaviour can be extracted from records of credit-card purchases, telephone calls and emails. Words suddenly becoming more common on social media, such as ‘demagogue’ during the 2016 US presidential election, can be clues to hot political issues.

The mathematical challenge is to find effective ways to extract meaningful patterns from masses of unstructured information, and many new methods are being developed to meet it.
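
As a concrete, if toy, illustration of that challenge, the Python snippet below flags words whose frequency jumps in the most recent time window, the kind of signal described above for “demagogue” during the 2016 election. This sketch is invented for this post (it is not from Stewart's essay), and the data and threshold are made up.

    from collections import Counter

    # Each inner list stands for the words seen in one week of posts
    # (synthetic data for illustration only).
    weekly_posts = [
        ["election", "debate", "economy", "election"],
        ["election", "debate", "demagogue"],
        ["demagogue", "demagogue", "election", "demagogue"],
    ]

    def trending(weeks, factor=2.0):
        """Words whose count in the last week is at least `factor` times
        their average weekly count before that (0.5 for unseen words)."""
        last = Counter(weeks[-1])
        prior = Counter(w for week in weeks[:-1] for w in week)
        n_prior = len(weeks) - 1
        return {w for w, c in last.items()
                if c >= factor * max(prior[w] / n_prior, 0.5)}

    print(trending(weekly_posts))  # {'demagogue'}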

The info is here.

Saturday, August 17, 2019

DC Types Have Been Flocking to Shrinks Ever Since Trump Won.

And a Lot of the Therapists Are Miserable.

Britt Peterson
www.washingtonian.com
Originally published July 14, 2019

Here are two excerpts:

In Washington, the malaise appears especially pronounced. I spent the last several months talking to nearly two dozen local therapists who described skyrocketing levels of interest in their services. They told me about cases of ordinary stress blossoming into clinical conditions, patients who can’t get through a session without invoking the President’s name, couples and families falling apart over politics—a broad category of concerns that one practitioner, Beth Sperber Richie, says she and her colleagues have come to categorize as “Trump trauma.”

In one sense, that’s been good news for the people who help keep us sane: Their calendars are full. But Trump trauma has also created particular clinical challenges for therapists like Guttman and her students. It’s one thing to listen to a client discuss a horrible personal incident. It’s another when you’re experiencing the same collective trauma.

“I’ve been a therapist for a long time,” says Delishia Pittman, an assistant professor at George Washington University who has been in private practice for 14 years. “And this has been the most taxing two years of my entire career.”

(cut)

For many, in other words, Trump-related anxieties originate from something more serious than mere differences about policy. The therapists I spoke to are equally upset—living through one unnerving news cycle after another, personally experiencing the same issues as their patients in real time while being expected to offer solace and guidance. As Bindeman told her clients the day after Trump’s election, “I’m processing it just as you are, so I’m not sure I can give you the distance that might be useful.”

This is a unique situation in therapy, where you’re normally discussing events in the client’s private life. How do you counsel a sexual-assault victim agitated by the Access Hollywood tape, for example, when the tape has also disturbed you—and when talking about it all day only upsets you further? How about a client who echoes your own fears about climate change or the treatment of minorities or the government shutdown, which had a financial impact on therapists just as it did everyone else?

Again and again, practitioners described different versions of this problem.

The info is here.

Friday, August 16, 2019

Physicians struggle with their own self-care, survey finds

Jeff Lagasse
Healthcare Finance
Originally published July 26, 2019

Despite believing that self-care is a vitally important part of health and overall well-being, many physicians overlook their own self-care, according to a new survey conducted by The Harris Poll on behalf of Samueli Integrative Health Programs. Lack of time, job demands, family demands, being too tired and burnout are the most common reasons for not practicing their desired amount of self-care.

The authors said that while most doctors acknowledge the physical, mental and social importance of self-care, many are falling short, perhaps contributing to the epidemic of physician burnout currently permeating the nation's healthcare system.

What's The Impact

The survey -- involving more than 300 family medicine and internal medicine physicians as well as more than 1,000 U.S. adults ages 18 and older -- found that although 80 percent of physicians say practicing self-care is "very important" to them personally, only 57 percent practice it "often" and about one-third (36%) do so only "sometimes."

Lack of time is the primary reason physicians say they aren't able to practice their desired amount of self-care (72%). Other barriers include mounting job demands (59%) and burnout (25%). Additionally, almost half of physicians (45%) say family demands interfere with their ability to practice self-care, and 20 percent say they feel guilty taking time for themselves.

The info is here.

Federal Watchdog Reports EPA Ignored Ethics Rules

Alyssa Danigelis
www.environmentalleader.com
Originally published July 17, 2019

The Environmental Protection Agency failed to comply with federal ethics rules for appointing advisory committee members, the Government Accountability Office concluded this week. President Trump’s EPA skipped disclosure requirements for new committee members last year, according to the federal watchdog.

Led by Andrew Wheeler, the EPA currently manages 22 committees that advise the agency on a wide range of issues, including developing regulations and managing research programs.

However, in fiscal year 2018, the agency didn’t follow a key step in its process for appointing 20 committee members to the Science Advisory Board (SAB) and Clean Air Scientific Advisory Committee (CASAC), the report says.

“SAB is the agency’s largest committee and CASAC is responsible for, among other things, reviewing national ambient air-quality standards,” the report noted. “In addition, when reviewing the step in EPA’s appointment process related specifically to financial disclosure reporting, we found that EPA did not consistently ensure that [special government employees] appointed to advisory committees met federal financial disclosure requirements.”

The GAO also pointed out that the number of committee members affiliated with academic institutions shrank.

The info is here.

Thursday, August 15, 2019

World’s first ever human-monkey hybrid grown in lab in China

Henry Holloway
www.dailystar.co.uk
Originally posted August 1, 2019

Here is an excerpt:

Scientists have successfully formed a hybrid human-monkey embryo – with the experiment taking place in China to avoid “legal issues”.

Researchers led by scientist Juan Carlos Izpisúa spliced together the genes to grow a monkey with human cells.

It is said the creature could have grown and been born, but scientists aborted the process.

The team, made up of members of the Salk Institute in the United States and the Murcia Catholic University, genetically modified the monkey embryos.

Researchers deactivated the genes which form organs and replaced them with human stem cells.

And it is hoped that one day these hybrid-grown organs will be able to be transplanted into humans.

The info is here.

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them

Elizabeth Lopatto
www.theverge.com
Originally published July 16, 2019

Here is an excerpt:

“It’s not going to be suddenly Neuralink will have this neural lace and start taking over people’s brains,” Musk said. “Ultimately” he wants “to achieve a symbiosis with artificial intelligence.” And that even in a “benign scenario,” humans would be “left behind.” Hence, he wants to create technology that allows a “merging with AI.” He later added “we are a brain in a vat, and that vat is our skull,” and so the goal is to read neural spikes from that brain.

The first paralyzed person to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle, who had a spinal cord injury, played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

“Neuralink didn’t come out of nowhere, there’s a long history of academic research here,” Hodak said at the presentation on Tuesday. “We’re, in the greatest sense, building on the shoulders of giants.” However, none of the existing technologies fit Neuralink’s goal of directly reading neural spikes in a minimally invasive way.

The system presented today, if it’s functional, may be a substantial advance over older technology. BrainGate relied on the Utah Array, a series of stiff needles that allows for up to 128 electrode channels. Not only is that fewer channels than Neuralink is promising — meaning less data from the brain is being picked up — it’s also stiffer than Neuralink’s threads. That’s a problem for long-term functionality: the brain shifts in the skull but the needles of the array don’t, leading to damage. The thin polymers Neuralink is using may solve that problem.

The info is here.

Wednesday, August 14, 2019

Getting AI ethics wrong could 'annihilate technical progress'

Richard Gray
TechXplore
Originally published July 30, 2019

Here is an excerpt:

Biases

But these algorithms can also learn the biases that already exist in data sets. If a police database shows that mainly young, black men are arrested for a certain crime, it may not be a fair reflection of the actual offender profile and instead reflect historic racism within a force. Using AI taught on this kind of data could exacerbate problems such as racism and other forms of discrimination.
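
As a hedged illustration of that mechanism, the toy model below learns to assign extra risk purely for group membership, because the historical arrest labels it is trained on are themselves skewed. The data is synthetic and the feature names are invented; this is not anything from the project described in the article.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)     # 0 = majority, 1 = minority (synthetic)
    behaviour = rng.normal(0, 1, n)   # true propensity, identical across groups

    # Biased history: at the same behaviour, minority individuals were
    # arrested more often, so the training labels are skewed.
    p_arrest = 1 / (1 + np.exp(-(behaviour + 1.0 * group - 1.5)))
    arrested = rng.random(n) < p_arrest

    model = LogisticRegression().fit(np.column_stack([group, behaviour]), arrested)
    print("weight on group flag:", model.coef_[0][0])  # clearly > 0: inherited bias
    print("weight on behaviour :", model.coef_[0][1])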

"Transparency of these algorithms is also a problem," said Prof. Stahl. "These algorithms do statistical classification of data in a way that makes it almost impossible to see how exactly that happened." This raises important questions about how legal systems, for example, can remain fair and just if they start to rely upon opaque 'black box' AI algorithms to inform sentencing decisions or judgements about a person's guilt.

The next step for the project will be to look at potential interventions that can be used to address some of these issues. It will look at where guidelines can help ensure AI researchers build fairness into their algorithms, where new laws can govern their use and if a regulator can keep negative aspects of the technology in check.

But one of the problems many governments and regulators face is keeping up with the fast pace of change in new technologies like AI, according to Professor Philip Brey, who studies the philosophy of technology at the University of Twente, in the Netherlands.

"Most people today don't understand the technology because it is very complex, opaque and fast moving," he said. "For that reason it is hard to anticipate and assess the impacts on society, and to have adequate regulatory and legislative responses to that. Policy is usually significantly behind."

The info is here.

Why You Should Develop a Personal Ethics Statement

Charlene Walters
www.entrepreneur.com
Originally posted July 16, 2019

As an entrepreneur, it can be helpful to create a personal ethics statement. A personal ethics statement is an assertion that defines your core ethical values and beliefs. It also delivers a strong testimonial about your code of conduct when dealing with people.

This statement can differentiate you from other businesses and entrepreneurs in your space. It should include information regarding your position on honesty and be reflective of how you interact with others. You can use your personal ethics statement or video on your website or when speaking with clients.

When you create it, you should include information about your fundamental beliefs, opinions and values. Your statement will give potential customers some insight into what it’s like to do business with you. You should also talk about anything that’s happened in your life that has impacted your ethical stance. Were you wronged in the past or affected by some injustice you witnessed? How did that shape and define you?

Remember that you’re basically telling clients why it’s better to do business with you than other entrepreneurs and communicating what you value as a person. Give creating a personal ethics statement a try. It’s a wonderful exercise and can provide value to your customers.

The info is here.

Tuesday, August 13, 2019

The arc of the moral universe won't bend on its own

Adam Fondren
Rapid City Journal
Originally posted August 11, 2019

Here are two excerpts:

My favorite Martin Luther King Jr. quote -- one of 14 engraved on a monument to his legacy in Washington, D.C. -- is, "We shall overcome because the arc of the moral universe is long, but it bends toward justice."

I like that quote because I hope he was right. But do we have evidence to support that?

A man just drove for hours in order to kill people whose skin is a little darker and food a little spicier than his culture's. He opened fire in a mass shooting inspired by the words of politicians and pundits who stoke racist fears in order to win votes for their side. Calling groups of refugees invasions, infestations, or criminals and worrying about racial replacement are not the sentiments of a society whose moral arc is bending toward justice.

(cut)

Racism isn't solved. White nationalists are not a hoax, and they are a big problem.

There are no spectators in this fight. You either condemn, condone or contribute to the problem.

Racism isn't a partisan issue. Both parties can come together to make these beliefs unacceptable in our society.

Another King quote from his Letter from a Birmingham Jail sums it up, "Injustice anywhere is a threat to justice everywhere. We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly."

Rev. King was right about the moral arc of the universe bending toward justice, but it won't bend on its own. That's where we come in. We all have to do our part to make sure that our words and actions make racists uncomfortable.

The info is here.

UNRWA Leaders Accused of Sexual Misconduct, Ethics’ Violations

jns.org
Originally published July 29, 2019

An internal ethics report sent to the UN secretary-general in December alleges that the commissioner-general of the United Nations Relief and Works Agency (UNRWA) and other officials at the highest levels of the UN agency have committed a series of serious ethics violations, AFP has reported.

According to AFP, Commissioner-General Pierre Krähenbühl and other top officials at the UN agency are being accused of abuses including “sexual misconduct, nepotism, retaliation, discrimination and other abuses of authority, for personal gain, to suppress legitimate dissent, and to otherwise achieve their personal objectives.”

The allegations are currently being probed by UN investigators.

In one instance, Krähenbühl, a married father of three from Switzerland, is accused of having a lover appointed to a newly-created role of senior adviser to the commissioner-general after an “extreme fast-track” process in 2015, which also entitled her to travel with him around the world with top accommodations.

The info is here.

Monday, August 12, 2019

Rural hospitals foundering in states that declined Obamacare

Michael Braga, Jennifer F. A. Borresen, Dak Le and Jonathan Riley
GateHouse Media
Originally published July 28, 2019

Here is an excerpt:

While experts agree embracing Obamacare is not a cure-all for rural hospitals and would not have saved many of those that closed, few believe it was wise to turn the money down.

The crisis facing rural America has been raging for decades and the carnage is not expected to end any time soon.

High rates of poverty in rural areas, combined with the loss of jobs, aging populations, lack of health insurance and competition from other struggling institutions will make it difficult for some rural hospitals to survive regardless of what government policies are implemented.

For some, there’s no point in trying. They say the widespread closures are the result of the free market economy doing its job and a continued shakeout would be helpful. But no rural community wants that shakeout to happen in its backyard.

“A hospital closure is a frightening thing for a small town,” said Patti Davis, president of the Oklahoma Hospital Association. “It places lives in jeopardy and has a domino effect on the community. Health care professionals leave, pharmacies can’t stay open, nursing homes have to close and residents are forced to rely on ambulances to take them to the next closest facility in their most vulnerable hours.”

The info is here.

Why it now pays for businesses to put ethics before economics

John Drummond
The National
Originally published July 14, 2019

Here is an excerpt:

All major companies today have an ethics code or a statement of business principles. I know this because at one time my company designed such codes for many FTSE companies. And all of these codes enshrine a commitment to moral standards. And these standards are often higher than those required by law.

When the boards of companies agree to these principles they largely do so because they believe in them – at the time. However, time moves on. People move on. The business changes. Along the way, company people forget.

So how can you tell if a business still believes in its stated principles? Actually, it is very simple. When an ethical problem, such as Mossmorran, happens, look to see who turns up to answer concerns. If it is a public relations man or woman, the company has lost the plot. By contrast, if it is the executive who runs the business, then the company is likely still in close touch with its ethical standards.

Economics and ethics can be seen as a spectrum. Ethics is at one side of the spectrum and economics at the other. Few organisations, or individuals for that matter, can operate on purely ethical lines alone, and few operate on solely economic considerations. Most organisations can be placed somewhere along this spectrum.

So, if a business uses public relations to shield top management from a problem, it occupies a position closer to economics than to ethics. On the other hand, where corporate executives face their critics directly, then the company would be located nearer to ethics.

The info is here.

Sunday, August 11, 2019

Challenges to capture the big five personality traits in non-WEIRD populations

Rachid Laajaj, Karen Macours, and others
Science Advances, Vol. 5, No. 7 (10 July 2019)
DOI: 10.1126/sciadv.aaw5226

Abstract

Can personality traits be measured and interpreted reliably across the world? While the use of Big Five personality measures is increasingly common across social sciences, their validity outside of western, educated, industrialized, rich, and democratic (WEIRD) populations is unclear. Adopting a comprehensive psychometric approach to analyze 29 face-to-face surveys from 94,751 respondents in 23 low- and middle-income countries, we show that commonly used personality questions generally fail to measure the intended personality traits and show low validity. These findings contrast with the much higher validity of these measures attained in internet surveys of 198,356 self-selected respondents from the same countries. We discuss how systematic response patterns, enumerator interactions, and low education levels can collectively distort personality measures when assessed in large-scale surveys. Our results highlight the risk of misinterpreting Big Five survey data and provide a warning against naïve interpretations of personality traits without evidence of their validity.
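
For readers outside psychometrics, one of the simplest validity checks involved in work like this is internal consistency: whether the items meant to measure one trait actually hang together. Below is a minimal sketch with synthetic data (not the authors' code or data), computing Cronbach's alpha for a coherent and an incoherent six-item scale.

    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items matrix for one trait's scale."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(500, 1))                         # one latent trait
    coherent = latent + rng.normal(scale=0.8, size=(500, 6))   # items share the trait
    incoherent = rng.normal(size=(500, 6))                     # items share nothing

    print(f"coherent scale alpha:   {cronbach_alpha(coherent):.2f}")   # high (~0.9)
    print(f"incoherent scale alpha: {cronbach_alpha(incoherent):.2f}") # near 0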

The research is here.

Saturday, August 10, 2019

Emotions and beliefs about morality can change one another

Monica Bucciarelli and P.N. Johnson-Laird
Acta Psychologica
Volume 198, July 2019

Abstract

A dual-process theory postulates that belief and emotions about moral assertions can affect one another. The present study corroborated this prediction. Experiments 1, 2 and 3 showed that the pleasantness of a moral assertion – from loathing it to loving it – correlated with how strongly individuals believed it, i.e., its subjective probability. But, despite repeated testing, this relation did not occur for factual assertions. To create the correlation, it sufficed to change factual assertions, such as, “Advanced countries are democracies,” into moral assertions, “Advanced countries should be democracies”. Two further experiments corroborated the two-way causal relations for moral assertions. Experiment 4 showed that recall of pleasant memories about moral assertions increased their believability, and that the recall of unpleasant memories had the opposite effect. Experiment 5 showed that the creation of reasons to believe moral assertions increased the pleasantness of the emotions they evoked, and that the creation of reasons to disbelieve moral assertions had the opposite effect. Hence, emotions can change beliefs about moral assertions; and reasons can change emotions about moral assertions. We discuss the implications of these results for alternative theories of morality.

The research is here.

Here is a portion of the Discussion:

In sum, emotions and beliefs correlate for moral assertions, and a change in one can cause a change in the other. The main theoretical problem is to explain these results. They should hardly surprise Utilitarians. As we mentioned in the Introduction, one interpretation of their views (Jon Baron, p.c.) is that it is tautological to predict that if you believe a moral assertion then you will like it. And this interpretation implies that our experiments are studies in semantics, which corroborate the existence of tautologies depending on the meanings of words (contra Quine, 1953; cf. Quelhas, Rasga, & Johnson-Laird, 2017). But, the degrees to which participants believed the moral assertions varied from certain to impossible. An assertion that they rated ‘as probable as not’ is hardly a tautology, and it tended to occur with an emotional reaction of indifference. The hypothesis of a tautological interpretation cannot explain this aspect of an overall correlation in ratings on scales.

Friday, August 9, 2019

The Human Brain Project Hasn’t Lived Up to Its Promise

Ed Yong
www.theatlantic.com
Originally published July 22, 2019

Here is an excerpt:

Markram explained that, contra his TED Talk, he had never intended for the simulation to do much of anything. He wasn’t out to make an artificial intelligence, or beat a Turing test. Instead, he pitched it as an experimental test bed—a way for scientists to test their hypotheses without having to prod an animal’s head. “That would be incredibly valuable,” Lindsay says, but it’s based on circular logic. A simulation might well allow researchers to test ideas about the brain, but those ideas would already have to be very advanced to pull off the simulation in the first place. “Once neuroscience is ‘finished,’ we should be able to do it, but to have it as an intermediate step along the way seems difficult.”

“It’s not obvious to me what the very large-scale nature of the simulation would accomplish,” adds Anne Churchland from Cold Spring Harbor Laboratory. Her team, for example, simulates networks of neurons to study how brains combine visual and auditory information. “I could implement that with hundreds of thousands of neurons, and it’s not clear what it would buy me if I had 70 billion.”

In a recent paper titled “The Scientific Case for Brain Simulations,” several HBP scientists argued that big simulations “will likely be indispensable for bridging the scales between the neuron and system levels in the brain.” In other words: Scientists can look at the nuts and bolts of how neurons work, and they can study the behavior of entire organisms, but they need simulations to show how the former create the latter. The paper’s authors drew a comparison to weather forecasts, in which an understanding of physics and chemistry at the scale of neighborhoods allows us to accurately predict temperature, rainfall, and wind across the whole globe.

The info is here.

Advice for technologists on promoting AI ethics

Joe McKendrick
www.zdnet.com
Originally posted July 13, 2019

Ethics looms as a vexing issue when it comes to artificial intelligence (AI). Where does AI bias spring from, especially when it's unintentional? Are companies paying enough attention to it as they plunge full-force into AI development and deployment? Are they doing anything about it? Do they even know what to do about it?

Wringing bias and unintended consequences out of AI is making its way into the job descriptions of technology managers and professionals, especially as business leaders turn to them for guidance and judgement. The drive to ethical AI means an increased role for technologists in the business, as described in a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. The survey was able to make direct connections between AI ethics and business growth: if consumers sense a company is employing AI ethically, they'll keep coming back; if they sense unethical AI practices, their business is gone.

Competitive pressure is the reason businesses are pushing AI to its limits and risking crossing ethical lines. "The pressure to implement AI is fueling ethical issues," the Capgemini authors, led by Anne-Laure Thieullent, managing director of Capgemini's Artificial Intelligence & Analytics Group, state. "When we asked executives why ethical issues resulting from AI are an increasing problem, the top-ranked reason was the pressure to implement AI." Thirty-four percent cited this pressure to stay ahead with AI trends.

The info is here.

Thursday, August 8, 2019

Microsoft wants to build artificial general intelligence: an AI better than humans at everything

Kelsey Piper
www.vox.com
Originally published July 22, 2019

Here is an excerpt:

Existing AI systems beat humans at lots of narrow tasks — chess, Go, Starcraft, image generation — and they’re catching up to humans at others, like translation and news reporting. But an artificial general intelligence would be one system with the capacity to surpass us at all of those things. Enthusiasts argue that it would enable centuries of technological advances to arrive, effectively, all at once — transforming medicine, food production, green technologies, and everything else in sight.

Others warn that, if poorly designed, it could be a catastrophe for humans in a few different ways. A sufficiently advanced AI could pursue a goal that we hadn’t intended — a recipe for catastrophe. It could turn out unexpectedly impossible to correct once running. Or it could be maliciously used by a small group of people to harm others. Or it could just make the rich richer and leave the rest of humanity even further in the dust.

Getting AGI right may be one of the most important challenges ahead for humanity. Microsoft’s billion dollar investment has the potential to push the frontiers forward for AI development, but to get AGI right, investors have to be willing to prioritize safety concerns that might slow commercial development.

The info is here.

Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies

Nick Byrd and Paul Conway
Cognition
https://doi.org/10.1016/j.cognition.2019.06.007

Abstract

Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposite. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual-process theorists’ claims about the link between reflection and dilemma judgments.
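
Process dissociation, mentioned in the abstract, scores utilitarian (U) and deontological (D) inclinations as independent parameters rather than as opposite ends of a single scale. Here is a minimal sketch of the standard scoring equations, following Conway and Gawronski's (2013) formulation; that attribution is my assumption, and this is not code from the paper.

    def process_dissociation(p_reject_congruent, p_reject_incongruent):
        """Congruent dilemmas: harm does not maximize outcomes, so both
        deontology and utilitarianism reject it. Incongruent dilemmas:
        harm does maximize outcomes, so only deontology rejects it.

            P(reject | congruent)   = U + (1 - U) * D
            P(reject | incongruent) = (1 - U) * D
        """
        u = p_reject_congruent - p_reject_incongruent
        d = p_reject_incongruent / (1 - u) if u < 1 else float("nan")
        return u, d

    # A participant who rejects harm in 90% of congruent dilemmas but only
    # 40% of incongruent ones:
    u, d = process_dissociation(0.90, 0.40)
    print(f"U = {u:.2f}, D = {d:.2f}")  # U = 0.50, D = 0.80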

A copy of the paper is here.

Wednesday, August 7, 2019

First do no harm: the impossible oath

Kamran Abbasi
BMJ 2019; 366
doi: https://doi.org/10.1136/bmj.l4734

Here is the beginning:

Discussions about patient safety describe healthcare as an industry. If that’s the case then what is healthcare’s business? What does it manufacture? Health and wellbeing? Possibly. But we know for certain that healthcare manufactures harm. Look at the data from our new research paper on the prevalence, severity, and nature of preventable harm (doi:10.1136/bmj.l4185). Maria Panagioti and colleagues find that the prevalence of overall harm, preventable and non-preventable, is 12% across medical care settings. Around half of this is preventable.

These data make something of a mockery of our principal professional oath to first do no harm. Working in clinical practice, we do harm that we cannot prevent or avoid, such as by appropriately prescribing a drug that causes an adverse drug reaction. As our experience, evidence, and knowledge improve, what isn’t preventable today may well be preventable in the future.

The argument, then, isn’t over whether healthcare causes harm but about the exact estimates of harm and how much of it is preventable. The answer that Panagioti and colleagues deliver from their systematic review of the available evidence is the best we have at the moment, though it isn’t perfect. The definitions of preventable harm differ. Existing studies are heterogeneous and focused more on overall rather than preventable harm. The standard method is the retrospective case record review. The need, say the authors, is for better research in all fields and more research on preventable harms in primary care, psychiatry, and developing countries, and among children and older adults.

Veil-of-Ignorance Reasoning Favors the Greater Good

Karen Huang, Joshua D. Greene, and Max Bazerman
PsyArXiv
Originally posted July 2, 2019

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.
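
One way to see why ignorance of one's position pushes choices toward the greater good is simple expected-value arithmetic. This is offered only as an illustration of the device (the authors report that their effects are not reducible to probabilistic reasoning), and the numbers are hypothetical.

    def survival_chance(saved, sacrificed, choose_greater_good):
        """Your chance of surviving if you are equally likely to be any
        one of the people affected by the decision."""
        total = saved + sacrificed
        return (saved if choose_greater_good else sacrificed) / total

    # Behind the veil, in a dilemma where sacrificing one person saves nine:
    print(survival_chance(9, 1, True))   # 0.9 (the greater-good option)
    print(survival_chance(9, 1, False))  # 0.1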

The research is here.

Tuesday, August 6, 2019

Dante, Trump and the moral cowardice of the G.O.P.

Charlie Sykes
www.americamagazine.com
Originally published July 21, 2019

One of John F. Kennedy’s favorite quotes was something he thought came from Dante: “The hottest places in Hell are reserved for those who in time of moral crisis preserve their neutrality.”

As it turns out, the quote is apocryphal. But what Dante did write was far better, and it came vividly to mind last week as Republicans failed to take a stand after President Trump’s racist tweets and chants of “Send her back,” directed at Representative Ilhan Omar of Minnesota, who immigrated here from Somalia, at a Trump rally in North Carolina.

In Dante’s Inferno, the moral cowards are not granted admission to Hell; they are consigned to the vestibule, where they are doomed to follow a rushing banner that is blown about by the wind.

(cut)

Despite some feeble attempts at rationalization, there was clarity to the president’s language and his larger intent. Mr. Trump was not merely using racist tropes; he was calling forth something dark and dangerous.

The president did not invent or create the racism, xenophobia and ugliness on display last week; they were all pre-existing conditions. But simply because something is latent does not mean it will metastasize into something malignant or fatal. Just because there is a hot glowing ember does not mean that it will explode into a raging conflagration.

The info is here.

Ethics and automation: What to do when workers are displaced

Tracy Mayor
MIT School of Management
Originally published July 8, 2019

As companies embrace automation and artificial intelligence, some jobs will be created or enhanced, but many more are likely to go away. What obligation do organizations have to displaced workers in such situations? Is there an ethical way for business leaders to usher their workforces through digital disruption?

Researchers wrestled with those questions recently at MIT Technology Review’s EmTech Next conference. Their conclusion: Company leaders need to better understand the negative repercussions of the technologies they adopt and commit to building systems that drive economic growth and social cohesion.

Pramod Khargonekar, vice chancellor for research at University of California, Irvine, and Meera Sampath, associate vice chancellor for research at the State University of New York, presented findings from their paper, “Socially Responsible Automation: A Framework for Shaping the Future.”

The research makes the case that “humans will and should remain critical and central to the workplace of the future, controlling, complementing and augmenting the strengths of technological solutions.” In this scenario, automation, artificial intelligence, and related technologies are tools that should be used to enrich human lives and livelihoods.

Aspirational, yes, but how do we get there?

The info is here.

Monday, August 5, 2019

Ethics working group to hash out what kind of company service is off limits

Chris Marquette
www.rollcall.com
Originally published July 22, 2019

A House Ethics Committee working group on Thursday will discuss proposed regulations to govern what kind of roles lawmakers may perform in companies, part of a push to head off the kind of ethical issues that led to the federal indictment of Rep. Chris Collins, who is accused of trading insider information while simultaneously serving as a company board member and public official.

(cut)

House Resolution 6 created a new clause in the Code of Official Conduct — set to take effect Jan. 1, 2020 — that prohibits members, delegates, resident commissioners, officers or employees in the House from serving as an officer or director of any public company.

The clause required the Ethics Committee to develop by Dec. 31 regulations addressing other prohibited service or positions that could lead to conflicts of interest.

The info is here.

Ethical considerations in assessment and behavioral treatment of obesity: Issues and practice implications for clinical health psychologists

Williamson, T. M., Rash, J. A., Campbell, T. S., & Mothersill, K. (2019).
Professional Psychology: Research and Practice. Advance online publication.
http://dx.doi.org/10.1037/pro0000249

Abstract

The obesity epidemic in the United States and Canada has been accompanied by an increased demand on behavioral health specialists to provide comprehensive behavior therapy for weight loss (BTWL) to individuals with obesity. Clinical health psychologists are optimally positioned to deliver BTWL because of their advanced competencies in multimodal assessment, training in evidence-based methods of behavior change, and proficiencies in interdisciplinary collaboration. Although published guidelines provide recommendations for optimal design and delivery of BTWL (e.g., behavior modification, cognitive restructuring, and mindfulness practice; group-based vs. individual therapy), guidelines on ethical issues that may arise during assessment and treatment remain conspicuously absent. This article reviews clinical practice guidelines, ethical codes (i.e., the Canadian Code of Ethics for Psychologists and the American Psychological Association Ethical Principles of Psychologists), and the extant literature to highlight obesity-specific ethical considerations for psychologists who provide assessment and BTWL in health care settings. Five key themes emerge from the literature: (a) informed consent (instilling realistic treatment expectations; reasonable alternatives to BTWL; privacy and confidentiality); (b) assessment (using a biopsychosocial approach; selecting psychological tests); (c) competence and scope of practice (self-assessment; collaborative care); (d) recognition of personal bias and discrimination (self-examination, diversity); and (e) maximizing treatment benefit while minimizing harm. Practical recommendations grounded in the American Psychological Association’s competency training model for clinical health psychologists are discussed to assist practitioners in addressing and mitigating ethical issues in practice.

Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that three levels of ethical regulation inter-relate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.

The paper is here.