Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, December 31, 2015

How do you teach a machine to be moral?

By Francesca Rossi
The Washington Post
Originally published November 5, 2015

Here is an excerpt:

For this cooperation to work safely and beneficially for both humans and machines, artificial agents should follow moral values and ethical principles (appropriate to where they will act), as well as safety constraints. When directed to achieve a set of goals, agents should ensure that their actions do not violate these principles and values, either overtly or through negligence in performing risky actions.

It would be easier for humans to accept and trust machines that behave as ethically as we do, and these principles would make it easier for artificial agents to determine their actions and explain their behavior in terms understandable by humans. Moreover, if machines and humans needed to make decisions together, shared moral values and ethical principles would facilitate consensus and compromise. Imagine a room full of physicians trying to decide on the best treatment for a patient with a difficult case. Now add an artificial agent that has read everything that has been written about the patient’s disease and similar cases, and thus can help the physicians compare the options and make a much more informed choice. To be trustworthy, the agent should care about the same values as the physicians: curing the disease should not come at the detriment of the patient’s well-being.

The entire article is here.

Wednesday, December 30, 2015

Why natural science needs phenomenological philosophy

Steven M. Rosen
Prog Biophys Mol Biol. 2015 Jul 2. pii: S0079-6107(15)00083-8.

Abstract

Through an exploration of theoretical physics, this paper suggests the need for regrounding natural science in phenomenological philosophy. To begin, the philosophical roots of the prevailing scientific paradigm are traced to the thinking of Plato, Descartes, and Newton. The crisis in modern science is then investigated, tracking developments in physics, science's premier discipline. Einsteinian special relativity is interpreted as a response to the threat of discontinuity implied by the Michelson-Morley experiment, a challenge to classical objectivism that Einstein sought to counteract. We see that Einstein's efforts to banish discontinuity ultimately fall into the "black hole" predicted in his general theory of relativity. The unavoidable discontinuity that haunts Einstein's theory is also central to quantum mechanics. Here too the attempt has been made to manage discontinuity, only to have this strategy thwarted in the end by the intractable problem of quantum gravity. The irrepressible discontinuity manifested in the phenomena of modern physics proves to be linked to a merging of subject and object that flies in the face of Cartesian philosophy. To accommodate these radically non-classical phenomena, a new philosophical foundation is called for: phenomenology. Phenomenological philosophy is elaborated through Merleau-Ponty's concept of depth and is then brought into focus for use in theoretical physics via qualitative work with topology and hypercomplex numbers. In the final part of this paper, a detailed summary is offered of the specific application of topological phenomenology to quantum gravity that was systematically articulated in The Self-Evolving Cosmos (Rosen, 2008a).

The article is here.

Tuesday, December 29, 2015

AI is different because it lets machines weld the emotional with the physical

By Peter McOwen
The Conversation
Originally published December 10, 2015

Here is an excerpt:

Creative intelligence

However, many are sensitive to the idea of artificial intelligence being artistic – entering the sphere of human intelligence and creativity. AI can learn to mimic the artistic process of painting, literature, poetry and music, but it does so by learning the rules, often from access to large datasets of existing work from which it extracts patterns and applies them. Robots may be able to paint – applying a brush to canvas, deciding on shapes and colours – but based on processing the example of human experts. Is this creating, or copying? (The same question has been asked of humans too.)

The entire article is here.

Is Anyone Competent to Regulate Artificial Intelligence?

By John Danaher
Philosophical Disquisitions
Posted November 21, 2015

Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in the diagram above.

The entire blog post is here.

Monday, December 28, 2015

The role of emotion in ethics and bioethics: dealing with repugnance and disgust

Mark Sheehan
J Med Ethics 2016;42:1-2
doi:10.1136/medethics-2015-103294

Here is an excerpt:

But what generally are we to say about the role of emotions in ethics and in ethical judgement? We tend to sharply distinguish ‘mere’ emotions or emotional responses from reasoned or rational argument. Clearly, it would seem, if we are to make claims about rightness or wrongness they should be made on the basis of reasons and rational argument. Emotions look to be outside of this paradigm, concerned as they are with our responses to the world rather than with the world itself and the clear articulation of inferential relationships within it. Most importantly, emotions are felt subjectively and so cannot lay any generalised claim on others (particularly others who do not feel as the arguer does). The subjectivity of emotions means that they cannot function in arguments because, unless they are universal, they cannot form the basis of a claim on another person. The reason they cannot form this basis is that the other person may not have that emotion: relying on it means the argument can only apply to those who do. An argument that relies on feeling particular emotions, particularly emotions that we don't all feel in the same way, is weak to that extent and certainly weaker than one that does not.

In the case at hand, repugnance or disgust only have persuasive power over those who feel these emotions in response to human reproductive cloning. If all people felt one or the other, then claims based on an appeal to repugnance or disgust would have persuasive power over all of us. But even if these were generally or commonly felt emotions here, such persuasive power would be distinct from an argument's having persuasive power over us because of the reasons it provides for us independently of contingently felt emotions. An argument, then, that is based on an appeal to emotion, as Kass' and Kekes' apparently are, can, at best, be only as strong as the generalisability of the empirical claim about the relevant emotion.

The article is here.

Computer-based personality judgments are more accurate than those made by humans

By Wu Youyou, Michal Kosinski, and David Stillwell
PNAS January 27, 2015 vol. 112 no. 4 1036-1040

Abstract

Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
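The accuracy figures quoted in the abstract (r = 0.56 for computer models versus r = 0.49 for Facebook friends) are Pearson correlations between a judge's ratings and the criterion, the participants' self-reported questionnaire scores. A minimal sketch of that comparison, using synthetic data — the sample size, noise levels, and both "judges" below are invented for illustration and are not the study's data or method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic criterion: self-reported personality scores.
self_report = rng.normal(size=n)

# Two hypothetical judges whose ratings track the self-reports
# with different amounts of noise (more noise -> lower accuracy).
computer_judge = self_report + rng.normal(scale=1.2, size=n)
human_judge = self_report + rng.normal(scale=1.5, size=n)

def accuracy(judgment, criterion):
    """Judgment accuracy as the Pearson correlation with the criterion."""
    return np.corrcoef(judgment, criterion)[0, 1]

r_computer = accuracy(computer_judge, self_report)
r_human = accuracy(human_judge, self_report)
print(f"computer r = {r_computer:.2f}, human r = {r_human:.2f}")
```

Because the simulated computer judge carries less noise, its correlation with the self-reports comes out higher — the same sense in which the study calls the computer models "more accurate."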

The article is here.

Sunday, December 27, 2015

Survey: 8 in 10 US doctors feel unprepared to treat mentally ill

By Sarah Ferris
The Hill
Originally published December 7, 2015

More than eight in 10 family doctors in the U.S. say they are not adequately prepared to care for severely mentally ill patients, according to a survey released Monday by the Commonwealth Fund.

Just 16 percent of U.S. doctors said their offices had the capacity to care for those with serious mental illnesses, the lowest of any developed country surveyed except Sweden, according to the annual international study.

Diagnosing and treating mental illnesses has come increasingly into focus this year as the number of mass shootings committed by mentally unstable individuals continues to rise. GOP leaders in Congress have repeatedly pointed to mental health reform as their best response to the nation's epidemic of shootings.

The entire article is here.

Saturday, December 26, 2015

'Highly ethical' business students don't like Wall Street

CNN Money
Originally published November 18, 2015

Students at business schools who think of themselves as "highly ethical" aren't interested in a career on Wall Street. They don't see the big banks as moral enough for their standards.

That's according to William Dudley, the president of the New York Federal Reserve. Dudley knows a thing or two about ethics at big banks. He used to work at Goldman Sachs and now Dudley leads one of the watchdogs in charge of overseeing Wall Street's activities.

Dudley was bothered by a recent conversation with business school deans. They told him that business school students who consider themselves "highly ethical" are choosing not to work in financial services.

"As long as we have that self selection out of the financial industry by people who view themselves as highly ethical...that tells you we have a problem," Dudley said at the Economic Club of New York Thursday.

The entire article is here.

Friday, December 25, 2015

Scientific Faith Is Different From Religious Faith

By Paul Bloom
The Atlantic
Originally published November 24, 2015

Here is an excerpt:

It’s better to get a cancer diagnosis from a radiologist than from a Ouija Board. It’s better to learn about the age of the universe from an astrophysicist than from a Rabbi. The New England Journal of Medicine is a more reliable source about vaccines than the actress Jenny McCarthy. These preferences are not ideological. We’re not talking about Fox News versus The Nation. They are rational, because the methods of science are demonstrably superior at getting at truths about the natural world.

I don’t want to fetishize science. Sociologists and philosophers deserve a lot of credit in reminding us that scientific practice is permeated by groupthink, bias, and financial, political, and personal motivations. The physicist Richard Feynman once wrote that the essence of science was “bending over backwards to prove ourselves wrong.” But he was talking about the collective cultural activity of science, not scientists as individuals, most of whom prefer to be proven right, and who are highly biased to see the evidence in whatever light most favors their preferred theory.

The entire article is here.