Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, May 24, 2018

Is there a universal morality?

Massimo Pigliucci
The Evolution Institute
Originally posted March 2018

Here is the conclusion:

The first bit means that we are all deeply inter-dependent on other people. Despite the fashionable nonsense, especially in the United States, about “self-made men” (they are usually men), there actually is no such thing. Without social bonds and support our lives would be, as Thomas Hobbes famously put it, poor, nasty, brutish, and short. The second bit, the one about intelligence, does not mean that we always, or even often, act rationally. Only that we have the capability to do so. Ethics, then, especially (but not only) for the Stoics becomes a matter of “living according to nature,” meaning not to endorse whatever is natural (that’s an elementary logical fallacy), but rather to take seriously the two pillars of human nature: sociality and reason. As Marcus Aurelius put it, “Do what is necessary, and whatever the reason of a social animal naturally requires, and as it requires.” (Meditations, IV.24)

There is something, of course, the ancients did get wrong: they, especially Aristotle, thought that human nature was the result of a teleological process, that everything has a proper function, determined by the very nature of the cosmos. We don’t believe that anymore, not after Copernicus and especially Darwin. But we do know that human beings are indeed a particular product of complex and ongoing evolutionary processes. These processes do not determine a human essence, but they do shape a statistical cluster of characters that define what it means to be human. That cluster, in turn, constrains — without determining — what sort of behaviors are pro-social and lead to human flourishing, and what sort of behaviors don’t. And ethics is the empirically informed philosophical enterprise that attempts to understand and articulate that distinction.

The information is here.

Determined to be humble? Exploring the relationship between belief in free will and humility

Earp, B. D., Everett, J. A., Nadelhoffer, T., Caruso, G. D., Shariff, A., & Sinnott-Armstrong, W. (2018, April 24).
 
Abstract

In recent years, diminished belief in free will or increased belief in determinism has been associated with a range of antisocial or otherwise negative outcomes: unjustified aggression, cheating, prejudice, less helping behavior, and so on. Only a few studies have entertained the possibility of prosocial or otherwise positive outcomes, such as greater willingness to forgive and less motivation to punish retributively. Here, five studies explore the relationship between belief in determinism and another positive outcome or attribute, namely, humility. The reported findings suggest that relative disbelief in free will is reliably associated with at least one type of humility—what we call ‘Einsteinian’ humility—but is not associated with, or even negatively associated with, other types of humility described in the literature.

The preprint is here.

Wednesday, May 23, 2018

Double warning on impact of overworking on academic mental health

Sophie Inge
Times Higher Education
Originally published on April 4, 2018

Fresh calls have been made to tackle a crisis of overwork and poor mental health in academia in the wake of two worrying new studies.

US academics who conducted a global survey found that postgraduate students were more than six times more likely to experience depression or anxiety than members of the general population, with female researchers being worst affected.

Meanwhile, a survey of more than 5,500 staff at Norwegian universities found that academics reported higher levels of workaholism than their administrative colleagues, and suggested that academics are among the occupational groups most prone to workaholism in society as a whole. Young and female academics were more likely than their senior colleagues to report that this had an impact on their family life.

The information is here.

Growing brains in labs: why it's time for an ethical debate

Ian Sample
The Guardian
Originally published April 24, 2018

Here is an excerpt:

The call for debate has been prompted by a raft of studies in which scientists have made “brain organoids”, or lumps of human brain grown from stem cells; grown bits of human brain in rodents; and kept slivers of human brain alive for weeks after surgeons have removed the tissue from patients. In one case, scientists recorded a surge of electrical activity from a ball of brain and retinal cells when they shone a light on it, though this does not in itself indicate consciousness.

The research is driven by a need to understand how the brain works and how it fails in neurological disorders and mental illness. Brain organoids have already been used to study autism spectrum disorders, schizophrenia and the unusually small brain size seen in some babies infected with Zika virus in the womb.

“This research is essential to alleviate human suffering. It would be unethical to halt the work,” said Nita Farahany, professor of law and philosophy at Duke University in North Carolina. “What we want is a discussion about how to enable responsible progress in the field.”

The article is here.

Tuesday, May 22, 2018

Truckers Line Up Under Bridge To Save Man Threatening Suicide

Vanessa Romo
www.npr.org
Originally published April 24, 2018

Here is an excerpt:

"It provides a safety net for the person in case they happen to lose their grip and fall or if they decide to jump," Shaw said. "With the trucks lined up underneath they're only falling about five to six feet as opposed 15 or 16."

After about two hours of engaging with officials, the distressed man willingly backed off the edge and is receiving help, Shaw said.

"He was looking to take his own life but we were able to talk to him and find out what his specific trigger was and helped correct it," Shaw said.

In all, the ordeal lasted about three hours.

The article is here.

Institutional Betrayal: Inequity, Discrimination, Bullying, and Retaliation in Academia

Karen Pyke
Sociological Perspectives
Volume: 61 issue: 1, page(s): 5-13
Article first published online: January 9, 2018

Abstract

Institutions of higher learning dedicated to the pursuit of knowledge and committed to diversity should be exemplars of workplace equity. Sadly, they are not. Their failure to take appropriate action to protect employees from inequity, discrimination, bullying, and retaliation amounts to institutional betrayal. The professional code of ethics for sociology, a discipline committed to the study of inequality, instructs sociologists to “strive to eliminate bias in their professional activities” and not to “tolerate any forms of discrimination.” As such, sociologists should be the leaders on our campuses in recognizing institutional betrayals by academic administrators and in promoting workplace equity. Regrettably, we have not accepted this charge. In this address, I call for sociologists to embrace our professional responsibilities and apply our scholarly knowledge and commitments to the reduction of inequality in our own workplace. If we can’t do it here, can we do it anywhere?

The article is here.

Monday, May 21, 2018

A Mathematical Framework for Superintelligent Machines

Daniel J. Buehrer
IEEE Access

Here is an excerpt:

Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world. With this definition, if the programs, neural networks, and Bayesian networks are put into read-only hardware, the machines will not be conscious since they cannot learn. We would not have to feel guilty of recycling these sims or robots (e.g. driverless cars) by melting them in incinerators or throwing them into acid baths, since they are only machines. However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.
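
To make the proposed measure concrete, here is a toy sketch in Python. Every name in it is my own illustrative invention, not anything from Buehrer's paper; it simply counts model-versus-world feedback cycles and shows why an agent on read-only hardware scores zero.

```python
# Toy sketch of the informal measure quoted above: "consciousness" as a count
# of feedback-loop uses between an agent's world model and observed outcomes.
# All class and method names are hypothetical illustrations, not from the paper.

class Agent:
    def __init__(self, read_only=False):
        self.read_only = read_only       # model burned into read-only hardware?
        self.world_model = {}            # predicted effects of actions
        self.feedback_loop_uses = 0      # the proposed "measure of consciousness"

    def act_and_observe(self, action, observed_outcome):
        if self.read_only:
            return                       # cannot revise its model: no loop is closed
        # Compare the prediction with what actually happened and revise the model.
        predicted = self.world_model.get(action)
        if predicted != observed_outcome:
            self.world_model[action] = observed_outcome
        self.feedback_loop_uses += 1     # one more model <-> world feedback cycle


learner = Agent(read_only=False)
frozen = Agent(read_only=True)
for outcome in ["door opens", "door stuck", "door opens"]:
    learner.act_and_observe("push door", outcome)
    frozen.act_and_observe("push door", outcome)

print(learner.feedback_loop_uses)  # 3: closes a feedback loop on each trial
print(frozen.feedback_loop_uses)   # 0: read-only, so "not conscious" by this measure
```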

Unsupervised hierarchical adversarially learned inference has already been shown to perform much better than human handcrafted features. The feedback mechanism tries to minimize the Jensen-Shannon information divergence between the many levels of a generative adversarial network and the corresponding inference network, which can correspond to a stack of part-of levels of a fuzzy class calculus IS-A hierarchy.
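
For reference, the Jensen-Shannon divergence named here is a standard, symmetric measure built from two Kullback-Leibler terms: JSD(P, Q) = ½ KL(P‖M) + ½ KL(Q‖M), where M = ½(P + Q). A minimal NumPy sketch (my own, not the paper's code) for discrete distributions:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    mask = p > 0                        # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric, and bounded above by log(2)."""
    m = 0.5 * (p + q)                   # mixture is nonzero wherever p or q is
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
print(js_divergence(p, q))  # 0 iff p == q; adversarial training drives this down
```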

From the viewpoint of humans, a sim should probably have an objective function for its reinforcement learning that allows it to become an excellent mathematician and scientist in order to “carry forth an ever-advancing civilization”. But such a conscious superintelligence “should” probably also make use of parameters to try to emulate the well-recognized “virtues” such as empathy, friendship, generosity, humility, justice, love, mercy, responsibility, respect, truthfulness, trustworthiness, etc.
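
As a thought experiment only: one crude way to read "parameters to emulate the virtues" is as weights in a shaped reinforcement-learning reward. The virtue terms, scores, and weights below are placeholders I made up; nothing in the paper specifies them.

```python
# Hypothetical sketch: task reward blended with weighted "virtue" scores.
# The weights and score scale are illustrative assumptions, not the paper's.

VIRTUE_WEIGHTS = {
    "honesty": 0.5,
    "empathy": 0.3,
    "humility": 0.2,
}

def shaped_reward(task_reward, virtue_scores):
    """Combine raw task reward with virtue scores, each assumed in [0, 1]."""
    virtue_bonus = sum(VIRTUE_WEIGHTS[v] * s for v, s in virtue_scores.items())
    return task_reward + virtue_bonus

print(shaped_reward(1.0, {"honesty": 0.9, "empathy": 0.4, "humility": 0.7}))
```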

The information is here.

A ‘Master Algorithm’ may emerge sooner than you think

Tristan Greene
thenextweb.com
Originally posted April 18, 2018

Here is an excerpt:

It’s a revolutionary idea, even in a field like artificial intelligence where breakthroughs are as regular as the sunrise. The creation of a self-teaching class of calculus that could learn from (and control) any number of connected AI agents – basically a CEO for all artificially intelligent machines – would theoretically grow exponentially more intelligent every time any of the various learning systems it controls were updated.

Perhaps most interesting is the idea that this control and update system will provide a sort of feedback loop. And this feedback loop is, according to Buehrer, how machine consciousness will emerge:
Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.

Buehrer also states it may be necessary to develop these kinds of systems on read-only hardware, thus negating the potential for machines to write new code and become sentient. He goes on to warn: “However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.”

The information is here.

Sunday, May 20, 2018

Robot cognition requires machines that both think and feel

Luiz Pessoa
www.aeon.com
Originally published April 13, 2018

Here is an excerpt:

Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.

Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
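
Here is a minimal sketch of what such a bolted-on module might look like, with all names and thresholds being my own illustrative assumptions: an urgency signal narrows attention and shortens the deliberation budget, so "good enough now" wins over "optimal eventually", as in the desert example above.

```python
# Illustrative sketch of an "emotion module" influencing perception and
# decision-making. Names, formulas, and thresholds are hypothetical.

def prioritize(percepts, urgency):
    """Rank percepts by relevance; higher urgency narrows focus and time.

    percepts: list of (label, relevance) pairs, relevance in [0, 1]
    urgency:  emotional pressure in [0, 1] supplied by the affect module
    """
    budget_seconds = max(0.5, 5.0 * (1.0 - urgency))             # less time to deliberate
    ranked = sorted(percepts, key=lambda p: p[1], reverse=True)  # most relevant first
    n_keep = max(1, round(len(ranked) * (1.0 - 0.5 * urgency)))  # attentional narrowing
    return ranked[:n_keep], budget_seconds

# Roughly the stranded-in-the-desert scenario from the excerpt:
percepts = [("shade ahead", 0.4), ("engine smoke", 0.9), ("scenic dune", 0.1)]
print(prioritize(percepts, urgency=0.8))
# -> ([('engine smoke', 0.9), ('shade ahead', 0.4)], 1.0)
```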

The information is here.

Friendly note: I don't agree with everything I post.  In this case, I do not believe that AI needs emotions and feelings.  Rather, AI will have a different form of consciousness.  We don't need to try to reproduce our experiences exactly.  AI consciousness will likely have flaws, like we do.  We need to be able to manage AI given the limitations we create.