Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, May 21, 2018

A Mathematical Framework for Superintelligent Machines

Daniel J. Buehrer
IEEE Access

Here is an excerpt:

Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world. With this definition, if the programs, neural networks, and Bayesian networks are put into read-only hardware, the machines will not be conscious since they cannot learn. We would not have to feel guilty about recycling these sims or robots (e.g., driverless cars) by melting them in incinerators or throwing them into acid baths, since they are only machines. However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.

Unsupervised hierarchical adversarially learned inference has already been shown to perform much better than human handcrafted features. The feedback mechanism tries to minimize the Jensen-Shannon information divergence between the many levels of a generative adversarial network and the corresponding inference network, which can correspond to a stack of part-of levels of a fuzzy class calculus IS-A hierarchy.

From the viewpoint of humans, a sim should probably have an objective function for its reinforcement learning that allows it to become an excellent mathematician and scientist in order to “carry forth an ever-advancing civilization”. But such a conscious superintelligence “should” probably also make use of parameters to try to emulate the well-recognized “virtues” such as empathy, friendship, generosity, humility, justice, love, mercy, responsibility, respect, truthfulness, trustworthiness, etc.
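
A quick note on the Jensen-Shannon divergence mentioned above: it is the symmetrized, always-finite cousin of the Kullback-Leibler divergence, computed against the midpoint of the two distributions. Here is a minimal sketch for two discrete distributions (plain NumPy; the toy distributions are mine, not anything from Buehrer's paper):

    import numpy as np

    def js_divergence(p, q, eps=1e-12):
        # Normalize inputs to proper probability distributions.
        p = np.asarray(p, dtype=float) / np.sum(p)
        q = np.asarray(q, dtype=float) / np.sum(q)
        m = 0.5 * (p + q)  # midpoint (mixture) distribution
        kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
        # JS(p, q) = (KL(p || m) + KL(q || m)) / 2
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Two toy distributions over four outcomes.
    print(js_divergence([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]))

In an adversarially learned inference setup, an analogous quantity is minimized between the generative network's distribution and the inference network's distribution at each level of the hierarchy.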

The information is here.

A ‘Master Algorithm’ may emerge sooner than you think

Tristan Greene
thenextweb.com
Originally posted April 18, 2018

Here is an excerpt:

It’s a revolutionary idea, even in a field like artificial intelligence where breakthroughs are as regular as the sunrise. The creation of a self-teaching class calculus that could learn from (and control) any number of connected AI agents – basically a CEO for all artificially intelligent machines – would theoretically grow exponentially more intelligent every time any of the various learning systems it controls was updated.

Perhaps most interesting is the idea that this control and update system will provide a sort of feedback loop. And this feedback loop is, according to Buehrer, how machine consciousness will emerge:
Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.
Buehrer also states it may be necessary to develop these kinds of systems on read-only hardware, thus negating the potential for machines to write new code and become sentient. He goes on to warn, “However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.”

The information is here.

Sunday, May 20, 2018

Robot cognition requires machines that both think and feel

Luiz Pessoa
aeon.co
Originally published April 13, 2018

Here is an excerpt:

Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.

Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
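
What might such a module look like in practice? Here is a deliberately simplistic sketch (the class names and the urgency heuristic are hypothetical illustrations, not anything proposed by Pessoa): an affective component appraises stressors and exposes an urgency signal, and a planner uses that signal to shrink its deliberation time, trading the optimal solution for a good-enough one, as in the desert example above.

    from dataclasses import dataclass

    @dataclass
    class Stressor:
        name: str
        severity: float  # 0.0 (negligible) to 1.0 (critical)

    class EmotionModule:
        # Hypothetical bolted-on affective module.
        def __init__(self):
            self.urgency = 0.0

        def appraise(self, stressors):
            # Urgency tracks the most severe perceived stressor.
            self.urgency = max([0.0] + [s.severity for s in stressors])

    def deliberation_budget(urgency, max_seconds=60.0):
        # Higher urgency leaves less time to search for the optimal plan.
        return max_seconds * (1.0 - urgency)

    emotion = EmotionModule()
    emotion.appraise([Stressor("extreme heat", 0.9), Stressor("low fuel", 0.4)])
    print(deliberation_budget(emotion.urgency))  # roughly 6 seconds instead of 60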

The information is here.

Friendly note: I don't agree with everything I post.  In this case, I do not believe that AI needs emotions and feelings.  Rather, AI will have a different form of consciousness.  We don't need to try to reproduce our experiences exactly.  AI consciousness will likely have flaws, like we do.  We need to be able to manage AI given the limitations we create.

Saturday, May 19, 2018

County Jail or Psychiatric Hospital? Ethical Challenges in Correctional Mental Health Care

Andrea G. Segal, Rosemary Frasso, Dominic A. Sisti
Qualitative Health Research
First published March 21, 2018

Abstract

Approximately 20% of the roughly 2.5 million individuals incarcerated in the United States have a serious mental illness (SMI). As a result of their illnesses, these individuals are often more likely to commit a crime, end up incarcerated, and languish in correctional settings without appropriate treatment. The objective of the present study was to investigate how correctional facility personnel reconcile the ethical challenges that arise when housing and treating individuals with SMI. Four focus groups and one group interview were conducted with employees (n = 24), including nurses, clinicians, correctional officers, administrators, and sergeants at a county jail in Pennsylvania. Results show that jail employees felt there were too many inmates with SMI in jail who would benefit from more comprehensive treatment elsewhere; however, given limited resources, employees felt they were doing the best they could. These findings can inform mental health management and policy in a correctional setting.

The information is here.

Friday, May 18, 2018

You don’t have a right to believe whatever you want to

Daniel DeNicola
aeon.co
Originally published May 14, 2018

Here is the conclusion:

Unfortunately, many people today seem to take great licence with the right to believe, flouting their responsibility. The wilful ignorance and false knowledge that are commonly defended by the assertion ‘I have a right to my belief’ do not meet James’s requirements. Consider those who believe that the lunar landings or the Sandy Hook school shooting were unreal, government-created dramas; that Barack Obama is Muslim; that the Earth is flat; or that climate change is a hoax. In such cases, the right to believe is proclaimed as a negative right; that is, its intent is to foreclose dialogue, to deflect all challenges, and to enjoin others from interfering with one’s belief-commitment. The mind is closed, not open for learning. They might be ‘true believers’, but they are not believers in the truth.

Believing, like willing, seems fundamental to autonomy, the ultimate ground of one’s freedom. But, as Clifford also remarked: ‘No one man’s belief is in any case a private matter which concerns himself alone.’ Beliefs shape attitudes and motives, guide choices and actions. Believing and knowing are formed within an epistemic community, which also bears their effects. There is an ethic of believing, of acquiring, sustaining, and relinquishing beliefs – and that ethic both generates and limits our right to believe. If some beliefs are false, or morally repugnant, or irresponsible, some beliefs are also dangerous. And to those, we have no right.

The information is here.

Increasing patient engagement in healthcare decision-making

Jennifer Blumenthal-Barby
Baylor College of Medicine Blogs
Originally posted March 10, 2017

Making decisions is hard. Anyone who has ever struggled to pick a restaurant for dinner knows this well: choosing between options is difficult even when the stakes are low and you have full access to information.

But what happens when the information is incomplete or difficult to comprehend? How does navigating a health crisis impact our ability to choose between different treatment options?

The Wall Street Journal published an article about something I have spent considerable time studying: the importance of decision aids in helping patients make difficult medical decisions. They note correctly that simplifying medical jargon and complicated statistics helps patients take more control over their care.

But that is only part of the equation.

The blog post is here.

Thursday, May 17, 2018

Empathy and outcome meta-analysis

Robert Elliott, Arthur C. Bohart, Jeanne C. Watson, and David Murphy
Psychotherapy (2018)

Abstract

Put simply, empathy refers to understanding what another person is experiencing or trying to express. Therapist empathy has a long history as a hypothesized key change process in psychotherapy. We begin by discussing definitional issues and presenting an integrative definition. We then review measures of therapist empathy, including the conceptual problem of separating empathy from other relationship variables. We follow this with clinical examples illustrating different forms of therapist empathy and empathic response modes. The core of our review is a meta-analysis of research on the relation between therapist empathy and client outcome. Results indicated that empathy is a moderately strong predictor of therapy outcome: mean weighted r = .28 (p < .001; 95% confidence interval: .23–.33; equivalent of d = .58) for 82 independent samples and 6,138 clients. In general, the empathy-outcome relation held for different theoretical orientations and client presenting problems; however, there was considerable heterogeneity in the effects. Client, observer, and therapist perception measures predicted client outcome better than empathic accuracy measures. We then consider the limitations of the current data. We conclude with diversity considerations and practice recommendations, including endorsing the different forms that empathy may take in therapy.
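
As a sanity check on the reported effect sizes: assuming the authors used the standard conversion from a correlation coefficient to a standardized mean difference, d = 2r / sqrt(1 - r^2), the quoted d follows directly from the quoted r. The snippet below just reproduces that arithmetic from the figures in the abstract:

    import math

    r = 0.28                           # mean weighted correlation from the abstract
    d = 2 * r / math.sqrt(1 - r ** 2)  # standard r-to-d conversion
    print(round(d, 2))                 # 0.58, matching the reported equivalent d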

You can request a copy of the article here.

Ethics must be at heart of Artificial Intelligence technology

The Irish Times
Originally posted April 16, 2018

Artificial Intelligence (AI) must never be given autonomous power to hurt, destroy or deceive humans, a parliamentary report has said.

Ethics need to be put at the centre of the development of the emerging technology, according to the House of Lords Artificial Intelligence Committee.

With Britain poised to become a world leader in the controversial technological field, international safeguards need to be set in place, the study said.

Peers state that AI needs to be developed for the common good and that the “autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”.

The information is here.

Wednesday, May 16, 2018

Escape the Echo Chamber

C Thi Nguyen
www.medium.com
Originally posted April 12, 2018

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making — wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subverts the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.

The information is here.