Jon Tilburt, Megan Allyse, and Frederic Hafferty
AMA Journal of Ethics
February 2017, Volume 19, Number 2: 199-206.
Abstract
Dr. Mehmet Oz is widely known not just as a successful media personality donning the title “America’s Doctor®,” but, we suggest, also as a physician visibly out of step with his profession. A recent, unsuccessful attempt to censure Dr. Oz raises the issue of whether the medical profession can effectively self-regulate at all. It also raises concern that the medical profession’s self-regulation might be selectively activated, perhaps only when the subject of professional censure has achieved a level of public visibility. We argue here that the medical profession must look at itself with a healthy dose of self-doubt about whether it has sufficient knowledge of or handle on the less visible Dr. “Ozes” quietly operating under the profession’s presumptive endorsement.
The information is here.
Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Thursday, May 31, 2018
What did Hannah Arendt really mean by the banality of evil?
Thomas White
Aeon.co
Originally published April 23, 2018
Here is an excerpt:
Arendt dubbed these collective characteristics of Eichmann ‘the banality of evil’: he was not inherently evil, but merely shallow and clueless, a ‘joiner’, in the words of one contemporary interpreter of Arendt’s thesis: he was a man who drifted into the Nazi Party, in search of purpose and direction, not out of deep ideological belief. In Arendt’s telling, Eichmann reminds us of the protagonist in Albert Camus’s novel The Stranger (1942), who randomly and casually kills a man, but then afterwards feels no remorse. There was no particular intention or obvious evil motive: the deed just ‘happened’.
This wasn’t Arendt’s first, somewhat superficial impression of Eichmann. Even 10 years after his trial in Israel, she wrote in 1971:
I was struck by the manifest shallowness in the doer [ie Eichmann] which made it impossible to trace the uncontestable evil of his deeds to any deeper level of roots or motives. The deeds were monstrous, but the doer – at least the very effective one now on trial – was quite ordinary, commonplace, and neither demonic nor monstrous.
The banality-of-evil thesis was a flashpoint for controversy. To Arendt’s critics, it seemed absolutely inexplicable that Eichmann could have played a key role in the Nazi genocide yet have no evil intentions. Gershom Scholem, a fellow philosopher (and theologian), wrote to Arendt in 1963 that her banality-of-evil thesis was merely a slogan that ‘does not impress me, certainly, as the product of profound analysis’. Mary McCarthy, a novelist and good friend of Arendt, voiced sheer incomprehension: ‘[I]t seems to me that what you are saying is that Eichmann lacks an inherent human quality: the capacity for thought, consciousness – conscience. But then isn’t he a monster simply?’
The information is here.
Wednesday, May 30, 2018
Reining It In: Making Ethical Decisions in a Forensic Practice
Donna M. Veraldi and Lorna Veraldi
A Paper Presented to American College of Forensic Psychology
34th Annual Symposium, San Diego, CA
Here is an excerpt:
Ethical dilemmas sometimes require making difficult choices among competing ethical principles and values. This presentation will discuss ethical dilemmas arising from the use of coercion and deception in forensic practice. In a forensic practice, the choice is not as simple as “do no harm” or “tell the truth.” What is and is not acceptable in terms of using various forms of pressure on individuals or of assisting agencies that put pressure on individuals? How much information should forensic psychologists share with individuals about evaluation techniques? What does informed consent mean in the context of a forensic practice where many of the individuals with whom we interact are not there by choice?
The information is here.
Google's Mysterious AI Ethics Board Should Be Transparent Like Axon's
Sam Shead
Forbes.com
Originally published April 27, 2018
A new artificial intelligence (AI) ethics board was announced this week by Axon — the US company behind the Taser weapon — but the AI ethics board many people still want to know about remains shrouded in mystery.
Google quietly set up an AI ethics board in 2014 following the £400 million acquisition of a London AI lab called DeepMind, which hopes to one day build machines with human-level intelligence that will have a profound impact on the society we live in. Who sits on that board, how often that board meets, or what that board discusses, has remained a closely guarded company secret, despite DeepMind cofounder Mustafa Suleyman (who lobbied for the creation of the board) saying in 2016 that Google will publicise the names of those on it.
This week, Axon, a US company that develops body cameras for police officers and weapons for the law enforcement market, demonstrated the kind of transparency that Google should aspire towards when it announced an AI ethics board to "help guide the development of Axon's AI-powered devices and services".
The information is here.
Tuesday, May 29, 2018
Ethics debate as pig brains kept alive without a body
Pallab Ghosh
BBC.com
Originally published April 27, 2018
Researchers at Yale University have restored circulation to the brains of decapitated pigs, and kept the organs alive for several hours.
Their aim is to develop a way of studying intact human brains in the lab for medical research.
Although there is no evidence that the animals were aware, there is concern that some degree of consciousness might have remained.
Details of the study were presented at a brain science ethics meeting held at the National Institutes of Health (NIH) in Bethesda in Maryland on 28 March.
The work, by Prof Nenad Sestan of Yale University, was discussed as part of an NIH investigation of ethical issues arising from neuroscience research in the US.
Prof Sestan explained that he and his team experimented on more than 100 pig brains.
The information is here.
Choosing partners or rivals
The Harvard Gazette
Originally published April 27, 2018
Here is the conclusion:
“The interesting observation is that natural selection always chooses either partners or rivals,” Nowak said. “If it chooses partners, the system naturally moves to cooperation. If it chooses rivals, it goes to defection, and is doomed. An approach like ‘America First’ embodies a rival strategy which guarantees the demise of cooperation.”
In addition to shedding light on how cooperation might evolve in a society, Nowak believes the study offers an instructive example of how to foster cooperation among individuals.
“With the partner strategy, I have to accept that sometimes I’m in a relationship where the other person gets more than me,” he said. “But I can nevertheless provide an incentive structure where the best thing the other person can do is to cooperate with me.
“So the best I can do in this world is to play a strategy such that the other person gets the maximum payoff if they always cooperate,” he continued. “That strategy does not prevent a situation where the other person, to some extent, exploits me. But if they exploit me, they get a lower payoff than if they fully cooperated.”
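To make the payoff logic concrete, here is a minimal, illustrative sketch in Python (not code from the study) of a repeated prisoner's dilemma played against a tit-for-tat-style "partner". The payoff values and the 20 percent defection rate are arbitrary assumptions, chosen only to show that occasional exploitation earns less, on average, than full cooperation.

import random

# Illustrative one-round payoffs: (T)emptation > (R)eward > (P)unishment > (S)ucker.
T, R, P, S = 5, 3, 1, 0

def payoff(my_move, their_move):
    """My payoff for one round; 'C' = cooperate, 'D' = defect."""
    if my_move == 'C':
        return R if their_move == 'C' else S
    return T if their_move == 'C' else P

def average_payoff(defect_rate, rounds=100_000, seed=0):
    """Average payoff of a player who defects at the given rate while
    facing a tit-for-tat 'partner' that simply echoes the player's last move."""
    rng = random.Random(seed)
    partner_move = 'C'              # the partner opens by cooperating
    total = 0
    for _ in range(rounds):
        my_move = 'D' if rng.random() < defect_rate else 'C'
        total += payoff(my_move, partner_move)
        partner_move = my_move      # tit-for-tat: copy my move next round
    return total / rounds

print(average_payoff(0.0))   # ~3.0: mutual cooperation every round
print(average_payoff(0.2))   # below 3.0: occasional exploitation lowers the exploiter's payoff
print(average_payoff(1.0))   # ~1.0: mutual defection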
The information is here.
Monday, May 28, 2018
This Suicide Pod Dubbed 'the Tesla of Death' Lets You Kill Yourself Peacefully
Loukia Papadopoulos
Interesting Engineering
Originally posted April 27, 2018
A controversial new pod for ending one’s life is on the market; it is being dubbed the Tesla of death, and its creator the Elon Musk of suicide. The pod, developed by euthanasia campaigner Dr. Philip Nitschke, is called the Sarco, and it seeks to revolutionize the way we die.
The Sarco's website features a thought-provoking question on its landing page. “What if we had more than mere dignity to look forward to on our last day on this planet?” reads the site.
A description of the pod goes on to explain that “the elegant design was intended to suggest a sense of occasion: of travel to a ‘new destination’, and to dispel the ‘yuk’ factor.” If this sounds like a macabre joke, rest assured it is not.
The article is here.
The ethics of experimenting with human brain tissue
Nita Farahany and others
Nature
Originally published April 25, 2018
If researchers could create brain tissue in the laboratory that might appear to have conscious experiences or subjective phenomenal states, would that tissue deserve any of the protections routinely given to human or animal research subjects?
This question might seem outlandish. Certainly, today’s experimental models are far from having such capabilities. But various models are now being developed to better understand the human brain, including miniaturized, simplified versions of brain tissue grown in a dish from stem cells — brain organoids. And advances keep being made.
These models could provide a much more accurate representation of normal and abnormal human brain function and development than animal models can (although animal models will remain useful for many goals). In fact, the promise of brain surrogates is such that abandoning them seems itself unethical, given the vast amount of human suffering caused by neurological and psychiatric disorders, and given that most therapies for these diseases developed in animal models fail to work in people. Yet the closer the proxy gets to a functioning human brain, the more ethically problematic it becomes.
The information is here.
Sunday, May 27, 2018
The Ethics of Neuroscience - A Different Lens
New technologies are allowing us to have control over the human brain like never before. As we push the possibilities we must ask ourselves, what is neuroscience today and how far is too far?
The world’s best neurosurgeons can now provide treatments for conditions that were previously untreatable, such as Parkinson’s disease and clinical depression. Many patients are cured, while others develop side effects such as erratic behaviour and changes in their personality.
Not only do we have greater understanding of clinical psychology, forensic psychology and criminal psychology, we also have more control. Professional athletes and gamers are now using this technology – some of it untested – to improve performance. However, with these amazing possibilities come great ethical concerns.
This manipulation of the brain has far-reaching effects, impacting the law, marketing, health industries and beyond. We need to investigate the capabilities of neuroscience and ask the ethical questions that will determine how far we can push the science of mind and behaviour.
Saturday, May 26, 2018
Illusionism as a Theory of Consciousness
Keith Frankish
Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist. We might call this eliminativism about phenomenal consciousness. The term is not ideal, however, suggesting as it does that belief in phenomenal consciousness is simply a theoretical error, that rejection of phenomenal realism is part of a wider rejection of folk psychology, and that there is no role at all for talk of phenomenal properties — claims that are not essential to the approach. Another label is ‘irrealism’, but that too has unwanted connotations; illusions themselves are real and may have considerable power. I propose ‘illusionism’ as a more accurate and inclusive name, and I shall refer to the problem of explaining why experiences seem to have phenomenal properties as the illusion problem.
Although it has powerful defenders — pre-eminently Daniel Dennett — illusionism remains a minority position, and it is often dismissed out of hand as failing to ‘take consciousness seriously’ (Chalmers, 1996). The aim of this article is to present the case for illusionism. It will not propose a detailed illusionist theory, but will seek to persuade the reader that the illusionist research programme is worth pursuing and that illusionists do take consciousness seriously — in some ways, more seriously than realists do.
The article/book chapter is here.
Friday, May 25, 2018
What does it take to be a brain disorder?
Anneli Jefferson
Synthese (2018).
https://doi.org/10.1007/s11229-018-1784-x
Abstract
In this paper, I address the question whether mental disorders should be understood to be brain disorders and what conditions need to be met for a disorder to be rightly described as a brain disorder. I defend the view that mental disorders are autonomous and that a condition can be a mental disorder without at the same time being a brain disorder. I then show the consequences of this view. The most important of these is that brain differences underlying mental disorders derive their status as disordered from the fact that they realize mental dysfunction and are therefore non-autonomous or dependent on the level of the mental. I defend this view of brain disorders against the objection that only conditions whose pathological character can be identified independently of the mental level of description count as brain disorders. The understanding of brain disorders I propose requires a certain amount of conceptual revision and is at odds with approaches which take the notion of brain disorder to be fundamental or look to neuroscience to provide us with a purely physiological understanding of mental illness. It also entails a pluralistic understanding of psychiatric illness, according to which a condition can be both a mental disorder and a brain disorder.
The research is here.
The $3-Million Research Breakdown
Jodi Cohen
www.propublica.org
Originally published April 26, 2018
Here is an excerpt:
In December, the university quietly paid a severe penalty for Pavuluri’s misconduct and its own lax oversight, after the National Institute of Mental Health demanded weeks earlier that the public institution — which has struggled with declining state funding — repay all $3.1 million it had received for Pavuluri’s study.
In issuing the rare rebuke, federal officials concluded that Pavuluri’s “serious and continuing noncompliance” with rules to protect human subjects violated the terms of the grant. NIMH said she had “increased risk to the study subjects” and made any outcomes scientifically meaningless, according to documents obtained by ProPublica Illinois.
Pavuluri’s research is also under investigation by two offices in the U.S. Department of Health and Human Services: the inspector general’s office, which examines waste, fraud and abuse in government programs, according to subpoenas obtained by ProPublica Illinois, and the Office of Research Integrity, according to university officials.
The article is here.
Thursday, May 24, 2018
Is there a universal morality?
Massimo Pigliucci
The Evolution Institute
Originally posted March 2018
Here is the conclusion:
The first bit means that we are all deeply inter-dependent on other people. Despite the fashionable nonsense, especially in the United States, about “self-made men” (they are usually men), there actually is no such thing. Without social bonds and support our lives would be, as Thomas Hobbes famously put it, poor, nasty, brutish, and short. The second bit, the one about intelligence, does not mean that we always, or even often, act rationally. Only that we have the capability to do so. Ethics, then, especially (but not only) for the Stoics becomes a matter of “living according to nature,” meaning not to endorse whatever is natural (that’s an elementary logical fallacy), but rather to take seriously the two pillars of human nature: sociality and reason. As Marcus Aurelius put it, “Do what is necessary, and whatever the reason of a social animal naturally requires, and as it requires.” (Meditations, IV.24)
There is something, of course, the ancients did get wrong: they, especially Aristotle, thought that human nature was the result of a teleological process, that everything has a proper function, determined by the very nature of the cosmos. We don’t believe that anymore, not after Copernicus and especially Darwin. But we do know that human beings are indeed a particular product of complex and ongoing evolutionary processes. These processes do not determine a human essence, but they do shape a statistical cluster of characters that define what it means to be human. That cluster, in turn, constrains — without determining — what sort of behaviors are pro-social and lead to human flourishing, and what sort of behaviors don’t. And ethics is the empirically informed philosophical enterprise that attempts to understand and articulate that distinction.
The information is here.
Determined to be humble? Exploring the relationship between belief in free will and humility
Earp, B. D., Everett, J. A., Nadelhoffer, T., Caruso, G. D., Shariff, A., & Sinnott-Armstrong, W. (2018, April 24).
Abstract
In recent years, diminished belief in free will or increased belief in determinism have been associated with a range of antisocial or otherwise negative outcomes: unjustified aggression, cheating, prejudice, less helping behavior, and so on. Only a few studies have entertained the possibility of prosocial or otherwise positive outcomes, such as greater willingness to forgive and less motivation to punish retributively. Here, five studies explore the relationship between belief in determinism and another positive outcome or attribute, namely, humility. The reported findings suggest that relative disbelief in free will is reliably associated with at least one type of humility—what we call ‘Einsteinian’ humility—but is not associated with, or even negatively associated with, other types of humility described in the literature.
The preprint is here.
Wednesday, May 23, 2018
Double warning on impact of overworking on academic mental health
Sophie Inge
Times Higher Education
Originally published on April 4, 2018
Fresh calls have been made to tackle a crisis of overwork and poor mental health in academia in the wake of two worrying new studies.
US academics who conducted a global survey found that postgraduate students were more than six times more likely to experience depression or anxiety compared with the general population, with female researchers being worst affected.
Meanwhile, a survey of more than 5,500 staff in Norwegian universities found that academics reported higher levels of workaholism than their administrative colleagues and revealed that the group appears to be among the occupations most prone to workaholism in society as a whole. Young and female academics were more likely than their senior colleagues to indicate that this had an impact on their family life.
The information is here.
Growing brains in labs: why it's time for an ethical debate
Ian Sample
The Guardian
Originally published April 24, 2018
Here is an excerpt:
The call for debate has been prompted by a raft of studies in which scientists have made “brain organoids”, or lumps of human brain from stem cells; grown bits of human brain in rodents; and kept slivers of human brain alive for weeks after surgeons have removed the tissue from patients. Though it does not indicate consciousness, in one case, scientists recorded a surge of electrical activity from a ball of brain and retinal cells when they shined a light on it.
The research is driven by a need to understand how the brain works and how it fails in neurological disorders and mental illness. Brain organoids have already been used to study autism spectrum disorders, schizophrenia and the unusually small brain size seen in some babies infected with Zika virus in the womb.
“This research is essential to alleviate human suffering. It would be unethical to halt the work,” said Nita Farahany, professor of law and philosophy at Duke University in North Carolina. “What we want is a discussion about how to enable responsible progress in the field.”
The article is here.
Tuesday, May 22, 2018
Truckers Line Up Under Bridge To Save Man Threatening Suicide
Vanessa Romo
www.npr.org
Originally published April 24, 2018
Here is an excerpt:
"It provides a safety net for the person in case they happen to lose their grip and fall or if they decide to jump," Shaw said. "With the trucks lined up underneath they're only falling about five to six feet as opposed 15 or 16."
After about two hours of engaging with officials the distressed man willingly backed off the edge and is receiving help, Shaw said.
"He was looking to take his own life but we were able to talk to him and find out what his specific trigger was and helped correct it," Shaw said.
In all, the ordeal lasted about three hours.
The article is here.
Institutional Betrayal: Inequity, Discrimination, Bullying, and Retaliation in Academia
Karen Pyke
Sociological Perspectives
Volume 61, Issue 1, Pages 5-13
Article first published online: January 9, 2018
Abstract
Institutions of higher learning dedicated to the pursuit of knowledge and committed to diversity should be exemplars of workplace equity. Sadly, they are not. Their failure to take appropriate action to protect employees from inequity, discrimination, bullying, and retaliation amounts to institutional betrayal. The professional code of ethics for sociology, a discipline committed to the study of inequality, instructs sociologists to “strive to eliminate bias in their professional activities” and not to “tolerate any forms of discrimination.” As such, sociologists should be the leaders on our campuses in recognizing institutional betrayals by academic administrators and in promoting workplace equity. Regrettably, we have not accepted this charge. In this address, I call for sociologists to embrace our professional responsibilities and apply our scholarly knowledge and commitments to the reduction of inequality in our own workplace. If we can’t do it here, can we do it anywhere?
The article is here.
Monday, May 21, 2018
A Mathematical Framework for Superintelligent Machines
Daniel J. Buehrer
IEEE Access
Here is an excerpt:
Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world. With this definition, if the programs, neural networks, and Bayesian networks are put into read-only hardware, the machines will not be conscious since they cannot learn. We would not have to feel guilty of recycling these sims or robots (e.g. driverless cars) by melting them in incinerators or throwing them into acid baths, since they are only machines. However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.
Unsupervised hierarchical adversarially learned inference has already been shown to perform much better than human handcrafted features. The feedback mechanism tries to minimize the Jensen-Shannon information divergence between the many levels of a generative adversarial network and the corresponding inference network, which can correspond to a stack of part-of levels of a fuzzy class calculus IS-A hierarchy.
From the viewpoint of humans, a sim should probably have an objective function for its reinforcement learning that allows it to become an excellent mathematician and scientist in order to “carry forth an ever-advancing civilization”. But such a conscious superintelligence “should” probably also make use of parameters to try to emulate the well-recognized “virtues” such as empathy, friendship, generosity, humility, justice, love, mercy, responsibility, respect, truthfulness, trustworthiness, etc.
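For readers unfamiliar with the divergence named in the excerpt, here is a minimal, self-contained Python illustration of the Jensen-Shannon divergence for two discrete distributions. It is purely explanatory, not code from Buehrer's paper, and the toy distributions are arbitrary.

import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions (base-2 logs)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric measure, bounded between 0 and 1 bit,
    of how far apart two probability distributions are."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two toy distributions over the same four outcomes
p = [0.4, 0.3, 0.2, 0.1]
q = [0.1, 0.2, 0.3, 0.4]
print(js_divergence(p, q))   # > 0; it is 0 only when the two distributions are identical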
The information is here.
A ‘Master Algorithm’ may emerge sooner than you think
Tristan Greene
thenextweb.com
Originally posted April 18, 2018
Here is an excerpt:
It’s a revolutionary idea, even in a field like artificial intelligence where breakthroughs are as regular as the sunrise. The creation of a self-teaching class of calculus that could learn from (and control) any number of connected AI agents – basically a CEO for all artificially intelligent machines – would theoretically grow exponentially more intelligent every time any of the various learning systems it controls were updated.
Perhaps most interesting is the idea that this control and update system will provide a sort of feedback loop. And this feedback loop is, according to Buehrer, how machine consciousness will emerge:
Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.
Buehrer also states it may be necessary to develop these kinds of systems on read-only hardware, thus negating the potential for machines to write new code and become sentient. He goes on to warn, “However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.”
The information is here.
Sunday, May 20, 2018
Robot cognition requires machines that both think and feel
Luiz Pessoa
aeon.co
Originally published April 13, 2018
Here is an excerpt:
Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.
Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
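As a purely illustrative sketch of the idea above (not code from the essay, with all names and numbers invented), an "emotion module" could be modelled as an urgency signal that shrinks a decision routine's deliberation budget rather than changing its goal:

import random

def evaluate(option):
    """Stand-in scoring function for candidate plans (illustrative only)."""
    return random.random()

def choose_action(options, urgency):
    """Pick a plan, searching less exhaustively as urgency rises.
    urgency is a number in [0, 1] playing the role of the emotional signal:
    high urgency trades optimality for a quick, 'good enough' decision."""
    budget = max(1, int(len(options) * (1 - urgency)))  # high urgency -> fewer evaluations
    considered = random.sample(options, budget)
    return max(considered, key=evaluate)

options = ["plan-%d" % i for i in range(1000)]
print(choose_action(options, urgency=0.1))   # calm: near-exhaustive deliberation
print(choose_action(options, urgency=0.95))  # 'stressful': fast, satisficing choice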
The information is here.
Friendly note: I don't agree with everything I post. In this case, I do not believe that AI needs emotions and feelings. Rather, AI will have a different form of consciousness. We don't need to try to reproduce our experiences exactly. AI consciousness will likely have flaws, like we do. We need to be able to manage AI given the limitations we create.
Saturday, May 19, 2018
County Jail or Psychiatric Hospital? Ethical Challenges in Correctional Mental Health Care
Andrea G. Segal, Rosemary Frasso, Dominic A. Sisti
Qualitative Health Research
First published March 21, 2018
Abstract
Approximately 20% of the roughly 2.5 million individuals incarcerated in the United States have a serious mental illness (SMI). As a result of their illnesses, these individuals are often more likely to commit a crime, end up incarcerated, and languish in correctional settings without appropriate treatment. The objective of the present study was to investigate how correctional facility personnel reconcile the ethical challenges that arise when housing and treating individuals with SMI. Four focus groups and one group interview were conducted with employees (n = 24) including nurses, clinicians, correctional officers, administrators, and sergeants at a county jail in Pennsylvania. Results show that jail employees felt there are too many inmates with SMI in jail who would benefit from more comprehensive treatment elsewhere; however, given limited resources, employees felt they were doing the best they can. These findings can inform mental health management and policy in a correctional setting.
The information is here.
Friday, May 18, 2018
You don’t have a right to believe whatever you want to
Daniel DeNicola
aeon.co
Originally published May 14, 2018
Here is the conclusion:
Unfortunately, many people today seem to take great licence with the right to believe, flouting their responsibility. The wilful ignorance and false knowledge that are commonly defended by the assertion ‘I have a right to my belief’ do not meet James’s requirements. Consider those who believe that the lunar landings or the Sandy Hook school shooting were unreal, government-created dramas; that Barack Obama is Muslim; that the Earth is flat; or that climate change is a hoax. In such cases, the right to believe is proclaimed as a negative right; that is, its intent is to foreclose dialogue, to deflect all challenges; to enjoin others from interfering with one’s belief-commitment. The mind is closed, not open for learning. They might be ‘true believers’, but they are not believers in the truth.
Believing, like willing, seems fundamental to autonomy, the ultimate ground of one’s freedom. But, as Clifford also remarked: ‘No one man’s belief is in any case a private matter which concerns himself alone.’ Beliefs shape attitudes and motives, guide choices and actions. Believing and knowing are formed within an epistemic community, which also bears their effects. There is an ethic of believing, of acquiring, sustaining, and relinquishing beliefs – and that ethic both generates and limits our right to believe. If some beliefs are false, or morally repugnant, or irresponsible, some beliefs are also dangerous. And to those, we have no right.
The information is here.
Increasing patient engagement in healthcare decision-making
Jennifer Blumenthal-Barby
Baylor College of Medicine Blogs
Originally posted March 10, 2017
Making decisions is hard. Anyone who has ever struggled to pick a restaurant for dinner knows well – choosing between options is difficult even when the stakes are low and you have full access to information.
But what happens when the information is incomplete or difficult to comprehend? How does navigating a health crisis impact our ability to choose between different treatment options?
The Wall Street Journal published an article about something I have spent considerable time studying: the importance of decision aids in helping patients make difficult medical decisions. They note correctly that simplifying medical jargon and complicated statistics helps patients take more control over their care.
But that is only part of the equation.
The blog post is here.
Thursday, May 17, 2018
Empathy and outcome meta-analysis
Robert Elliott, Arthur C. Bohart, Jeanne C. Watson, and David Murphy
Psychotherapy (2018)
Abstract
Put simply, empathy refers to understanding what another person is experiencing or trying to express. Therapist empathy has a long history as a hypothesized key change process in psychotherapy. We begin by discussing definitional issues and presenting an integrative definition. We then review measures of therapist empathy, including the conceptual problem of separating empathy from other relationship variables. We follow this with clinical examples illustrating different forms of therapist empathy and empathic response modes. The core of our review is a meta-analysis of research on the relation between therapist empathy and client outcome. Results indicated that empathy is a moderately strong predictor of therapy outcome: mean weighted r = .28 (p < .001; 95% confidence interval: .23–.33; equivalent of d = .58) for 82 independent samples and 6,138 clients. In general, the empathy-outcome relation held for different theoretical orientations and client presenting problems; however, there was considerable heterogeneity in the effects. Client, observer, and therapist perception measures predicted client outcome better than empathic accuracy measures. We then consider the limitations of the current data. We conclude with diversity considerations and practice recommendations, including endorsing the different forms that empathy may take in therapy.
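As a quick check on the abstract's effect sizes, the standard conversion between a correlation r and Cohen's d (assuming two groups of equal size) reproduces the d = .58 quoted for r = .28. The converted confidence-interval bounds in the sketch below are my own calculation and are not reported in the abstract.

import math

def r_to_d(r):
    """Standard conversion of a correlation r to Cohen's d (equal group sizes assumed)."""
    return 2 * r / math.sqrt(1 - r ** 2)

print(round(r_to_d(0.28), 2))                          # 0.58, as reported in the abstract
print(round(r_to_d(0.23), 2), round(r_to_d(0.33), 2))  # ~0.47 and ~0.70 for the CI bounds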
You can request a copy of the article here.
Ethics must be at heart of Artificial Intelligence technology
The Irish Times
Originally posted April 16, 2018
Artificial Intelligence (AI) must never be given autonomous power to hurt, destroy or deceive humans, a parliamentary report has said.
Ethics need to be put at the centre of the development of the emerging technology, according to the House of Lords Artificial Intelligence Committee.
With Britain poised to become a world leader in the controversial technological field, international safeguards need to be set in place, the study said.
Peers state that AI needs to be developed for the common good and that the “autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”.
The information is here.
Wednesday, May 16, 2018
Escape the Echo Chamber
C Thi Nguyen
www.medium.com
Originally posted April 12, 2018
Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making — wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.
But there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.
The information is here.
Moral Fatigue: The Effects of Cognitive Fatigue on Moral Reasoning
S. Timmons and R. Byrne
Quarterly Journal of Experimental Psychology (March 2018)
Abstract
We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgments compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgment that focuses on the harmful action, killing one person, but not when they make a judgment that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgments about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.
The research is here.
Tuesday, May 15, 2018
Google code of ethics on military contracts could hinder Pentagon work
Brittany De Lea
FoxBusiness.com
Originally published April 13, 2018
Google is among the frontrunners for a lucrative, multibillion dollar contract with the Pentagon, but ethical concerns among some of its employees may pose a problem.
The Defense Department’s pending cloud storage contract, known as Joint Enterprise Defense Infrastructure (JEDI), could span a decade and will likely be its largest yet – valued in the billions of dollars. The department issued draft requests for proposals to host sensitive and classified information and will likely announce the winner later this year.
While Google, Microsoft, Amazon and Oracle are viewed as the major contenders for the job, Google’s employees have voiced concern about creating products for the U.S. government. More than 3,000 of the tech giant’s employees signed a letter, released this month, addressed to company CEO Sundar Pichai, protesting involvement in a Pentagon pilot program called Project Maven.
“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter, obtained by The New York Times, read.
The article is here.
Mens rea ascription, expertise and outcome effects: Professional judges surveyed
Markus Kneer and Sacha Bourgeois-Gironde
Cognition
Volume 169, December 2017, Pages 139-146
Abstract
A coherent practice of mens rea (‘guilty mind’) ascription in criminal law presupposes a concept of mens rea which is insensitive to the moral valence of an action’s outcome. For instance, an assessment of whether an agent harmed another person intentionally should be unaffected by the severity of harm done. Ascriptions of intentionality made by laypeople, however, are subject to a strong outcome bias. As demonstrated by the Knobe effect, a knowingly incurred negative side effect is standardly judged intentional, whereas a positive side effect is not. We report the first empirical investigation into intentionality ascriptions made by professional judges, which finds (i) that professionals are sensitive to the moral valence of outcome type, and (ii) that the worse the outcome, the higher the propensity to ascribe intentionality. The data shows the intentionality ascriptions of professional judges to be inconsistent with the concept of mens rea supposedly at the foundation of criminal law.
Highlights
• The first paper to present empirical data regarding mens rea ascriptions of professional judges.
• Intentionality ascriptions of professional judges manifest the Knobe effect.
• Intentionality ascriptions of judges are also sensitive to severity of outcome.
The research is here.
Monday, May 14, 2018
Computer Says No: Part 2 - Explainability
Jasmine Leonard
theRSA.org
Originally posted March 23, 2018
Here is an excerpt:
The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable. But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions. And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision. It therefore doesn’t need to be explained itself; it merely needs to be justifiable.
This is a subtle but important distinction. To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug. She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too. In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it. And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases. Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them. Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.
If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system? The best answer, I believe, is that the system is shown to be sufficiently accurate. What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human. It also means that there are no other readily available systems that produce more accurate predictions.
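The accuracy criterion in that last paragraph can be made concrete with a small sketch. The Python snippet below is my illustration, not part of the original post; the data, the sufficiently_accurate function, and the margin parameter are all hypothetical, and the bar it encodes is simply "at least as accurate as the trained human baseline."

from statistics import mean

def accuracy(predictions, outcomes):
    # Fraction of predictions that matched the observed outcome.
    return mean(1.0 if p == o else 0.0 for p, o in zip(predictions, outcomes))

def sufficiently_accurate(system_preds, human_preds, outcomes, margin=0.0):
    # The system clears the bar if it is at least as accurate as the
    # trained human baseline (optionally by some extra margin).
    return accuracy(system_preds, outcomes) >= accuracy(human_preds, outcomes) + margin

# Hypothetical example: 1 = condition resolved, 0 = it did not.
outcomes     = [1, 0, 1, 1, 0, 1, 0, 1]
system_preds = [1, 0, 1, 1, 1, 1, 0, 1]   # 7/8 correct
human_preds  = [1, 0, 0, 1, 1, 1, 0, 1]   # 6/8 correct

print(sufficiently_accurate(system_preds, human_preds, outcomes))  # True

In practice such a comparison would be run on held-out cases and with uncertainty estimates, but the structure of the justification is the same: the prediction is vouched for by its track record, not by an explanation of its inner workings.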
The article is here.
No Luck for Moral Luck
Markus Kneer, University of Zurich; Edouard Machery, University of Pittsburgh
Draft, March 2018
Abstract
Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the puzzle of moral luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, wrongness, blame, and permissibility judgments across different moral luck conditions are the same for the vast majority of people. The philosophical puzzle of moral luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply does not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this is evidence in favor of current dual-process theories of moral judgment, the latter need to be qualified since punishment does not pattern with blame. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.
The manuscript is here.
Sunday, May 13, 2018
Facebook Uses AI To Predict Your Future Actions for Advertisers
Sam Biddle
The Intercept
Originally posted April 13, 2018
Here is an excerpt:
Asked by Fortune’s Stacey Higginbotham where Facebook hoped its machine learning work would take it in five years, Chief Technology Officer Mike Schroepfer said in 2016 his goal was that AI “makes every moment you spend on the content and the people you want to spend it with.” Using this technology for advertising was left unmentioned. A 2017 TechCrunch article declared, “Machine intelligence is the future of monetization for Facebook,” but quoted Facebook executives in only the mushiest ways: “We want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time, and it’s helpful to know how this ebbs and flows,” said Mark Rabkin, Facebook’s vice president of engineering for ads. The company was also vague about the melding of machine learning to ads in a 2017 Wired article about the company’s AI efforts, which alluded to efforts “to show more relevant ads” using machine learning and anticipate what ads consumers are most likely to click on, a well-established use of artificial intelligence. Most recently, during his congressional testimony, Zuckerberg touted artificial intelligence as a tool for curbing hate speech and terrorism.
The article is here.
Saturday, May 12, 2018
Bystander risk, social value, and ethics of human research
S. K. Shah, J. Kimmelman, A. D. Lyerly, H. F. Lynch, and others
Science, 13 Apr 2018: 158-159
Two critical, recurring questions can arise in many areas of research with human subjects but are poorly addressed in much existing research regulation and ethics oversight: How should research risks to “bystanders” be addressed? And how should research be evaluated when risks are substantial but not offset by direct benefit to participants, and the benefit to society (“social value”) is context-dependent? We encountered these issues while serving on a multidisciplinary, independent expert panel charged with addressing whether human challenge trials (HCTs) in which healthy volunteers would be deliberately infected with Zika virus could be ethically justified (1). Based on our experience on that panel, which concluded that there was insufficient value to justify a Zika HCT at the time of our report, we propose a new review mechanism to preemptively address issues of bystander risk and contingent social value.
(cut)
Some may object that generalizing and institutionalizing this approach could slow valuable research by adding an additional layer for review. However, embedding this process within funding agencies could preempt ethical problems that might otherwise stymie research. Concerns that CERCs might suffer from “mission creep” could be countered by establishing clear charters and triggers for deploying CERCs. Unlike IRBs, their opinions should be publicly available to provide precedent for future research programs or for IRBs evaluating particular protocols at a later date.
The information is here.
Friday, May 11, 2018
AI experts want government algorithms to be studied like environmental hazards
Dave Gershgorn
Quartz (www.qz.com)
Originally published April 9, 2018
Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions.
AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.
“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” the report said. “The public will have less insight into how agencies function, and have less power to question or appeal decisions.”
The information is here.
Samantha’s suffering: why sex machines should have rights too
Victoria Brooks
The Conversation
Originally posted April 5, 2018
Here is the conclusion:
Machines are indeed what we make them. This means we have an opportunity to avoid assumptions and prejudices brought about by the way we project human feelings and desires. But does this ethically entail that robots should be able to consent to or refuse sex, as human beings would?
The innovative philosophers and scientists Frank and Nyholm have found many legal reasons for answering both yes and no (a robot’s lack of human consciousness and legal personhood, and the “harm” principle, for example). Again, we find ourselves seeking to apply a very human law. But feelings of suffering outside of relationships, or identities accepted as the “norm”, are often illegitimised by law.
So a “legal” framework which has its origins in heteronormative desire does not necessarily construct the foundation of consent and sexual rights for robots. Rather, as the renowned post-human thinker Rosi Braidotti argues, we need an ethic, as opposed to a law, which helps us find a practical and sensitive way of deciding, taking into account emergences from cross-species relations. The kindness and empathy we feel toward Samantha may be a good place to begin.
The article is here.
Thursday, May 10, 2018
A Two-Factor Model of Ethical Culture
Caterina Bulgarella
ethicalsystems.org
Making Progress in the Field of Business Ethics
Over the past 15 years, behavioral science has provided practitioners with a uniquely insightful perspective on the organizational elements companies need to focus on to build an ethical culture. Pieced together, this research can be used to address the growing challenges business must tackle today.
Faced with unprecedented complexity and rapid change, more and more organizations are feeling the limitations of an old-fashioned approach to ethics. In this new landscape, the importance of a proactive ethical stance has become increasingly clear. Not only is a strong focus on business integrity likely to reduce the costs of misconduct, but it can afford companies a solid corporate reputation, genuine employee compliance, robust governance, and even increased profitability.
The need for a smarter, deeper, and more holistic approach to ethical conduct is also strengthened by the inherent complexity of human behavior. As research continues to shed light on the factors that undermine people’s ability to ‘do the right thing,’ we are reminded of how difficult it is to solve for ethics without addressing the larger challenge of organizational culture.
The components that shape the culture of an organization exercise a constant and unrelenting influence on how employees process information, make decisions, and, ultimately, respond to ethical dilemmas. This is why, in order to help business achieve a deeper and more systematic ethical focus, we must understand the ingredients that make up an ethical culture.
The information is here.
The WEIRD Science of Culture, Values, and Behavior
Kim Armstrong
Psychological Science
Originally posted April 2018
Here is an excerpt:
While the dominant norms of a society may shape our behavior, children first experience the influence of those cultural values through the attitudes and beliefs of their parents, which can significantly impact their psychological development, said Heidi Keller, a professor of psychology at the University of Osnabrueck, Germany.
Until recently, research within the field of psychology focused mainly on WEIRD (Western, educated, industrialized, rich, and democratic) populations, Keller said, limiting the understanding of the influence of culture on childhood development.
“The WEIRD group represents maximally 5% of the world’s population, but probably more than 90% of the researchers and scientists producing the knowledge that is represented in our textbooks work with participants from that particular context,” Keller explained.
Keller and colleagues’ research on the ecocultural model of development, which accounts for the interaction of socioeconomic and cultural factors throughout a child’s upbringing, explores this gap in the research by comparing the caretaking styles of rural and urban families throughout India, Cameroon, and Germany. The experiences of these groups can differ significantly from the WEIRD context, Keller notes, with rural farmers — who make up 30% to 40% of the world’s population — tending to live in extended family households while having more children at a younger age after an average of just 7 years of education.
The information is here.
Wednesday, May 9, 2018
How To Deliver Moral Leadership To Employees
John Baldoni
Forbes.com
Originally posted April 12, 2018
Here is an excerpt:
When it comes to moral authority there is a disconnect between what is expected and what is delivered. So what can managers do to fulfill their employees' expectations?
First, let’s cover what not to do – preach! Employees don’t want words; they want actions. They also do not expect to have to follow a particular religious creed at work. Just as with the separation of church and state, there is an implied separation in the workplace, especially now with employees of many different (or no) faiths. (There are exceptions within privately held, family-run businesses.)
LRN advocates doing two things: first, pause to reflect on the situation as a means of connecting with values; and second, act with humility. The former may be easier than the latter, but it is only with humility that leaders connect more realistically with others. If you act your title, you set up barriers to understanding. If you act as a leader, you open the door to greater understanding.
Dov Seidman, CEO of LRN, advises leaders to instill purpose, elevate and inspire individuals and live your values. Very importantly in this report, Seidman challenges leaders to embrace moral challenges as he says, by “constant wrestling with the questions of right and wrong, fairness and justice, and with ethical dilemmas.”
The information is here.
Getting Ethics Training Right for Leaders and Employees
Deloitte
The Wall Street Journal
Originally posted April 9, 2018
Here is an excerpt:
Ethics training has needed a serious redesign for some time, and we are seeing three changes to make training more effective. First, many organizations recognize that compliance training is not enough. Simply knowing the rules and how to call the ethics helpline does not necessarily mean employees will raise their voice when they see ethical issues in the workplace. Even if employees want to say something they often hesitate, worried that they may not be heard, or even worse, that voicing may lead to formal or informal retaliation. Overcoming this hesitation requires training to help employees learn how to voice their values with in-person, experiential practice in everyday workplace situations. More and more organizations are investing in this training, as a way to simultaneously support employees, reduce risk and proactively reshape their culture.
Another significant change in ethics training is a focus on helping senior leaders consider how their own ethical leadership shapes the culture. This requires leaders to examine the signals they send in their everyday behaviors, and how these signals make employees feel safe to voice ideas and concerns. In my training sessions with senior leaders, we use exercises that help them identify the leadership behaviors that create such trust, and those that may be counter-productive. We then redesign the everyday processes, such as the weekly meeting or decision-making models, that encourage voice and explicitly elevate ethical concerns.
Third, more organizations are seeing the connection between ethics and greater sense of purpose in the workplace. Employee engagement, performance and retention often increases when employees feel they are contributing something beyond profit creation. Ethics training can help employees see this connection and practice the so-called giver strategies that help others, their organizations, and their own careers at the same time.
The article is here.
Tuesday, May 8, 2018
AI Without Borders: How To Create Universally Moral Machines
Abinash Tripathy
Forbes.com
Originally posted April 11, 2018
Here is an excerpt:
Ultimately, developing moral machines will be a learning process. It’s not surprising that early versions of advanced machine learning have adopted undesirable human traits. It is promising, however, that immense thought and care are being put into these issues. Pioneers including DeepMind, researchers at Duke University, the German government, and the Leverhulme Centre for the Future of Intelligence have invested research, experimentation and thought into determining the best way not to model machines after humans as they exist but after an ideal version of human intelligence.
Despite this care, there will always be those who use technological advancements with malicious intent. Organizations will need to prepare for the potential harm that can arise both from competitors and from internal AI developments. From bots to AI assistants, to AI lawyers, to simple automated technologies such as those used in manufacturing, we must decide what is right, what is wrong and what aspects of humanity we are truly willing to hand over to machines.
The information is here.
Many People Taking Antidepressants Discover They Cannot Quit
Benedict Carey & Robert Gebeloff
The New York Times
Originally posted April 7, 2018
Here is an excerpt:
Dr. Peter Kramer, a psychiatrist and author of several books about antidepressants, said that while he generally works to wean patients with mild-to-moderate depression off medication, some report that they do better on it.
“There is a cultural question here, which is how much depression should people have to live with when we have these treatments that give so many a better quality of life,” Dr. Kramer said. “I don’t think that’s a question that should be decided in advance.”
Antidepressants are not harmless; they commonly cause emotional numbing, sexual problems like a lack of desire or erectile dysfunction and weight gain. Long-term users report in interviews a creeping unease that is hard to measure: Daily pill-popping leaves them doubting their own resilience, they say.
“We’ve come to a place, at least in the West, where it seems every other person is depressed and on medication,” said Edward Shorter, a historian of psychiatry at the University of Toronto. “You do have to wonder what that says about our culture.”
Patients who try to stop taking the drugs often say they cannot. In a recent survey of 250 long-term users of psychiatric drugs — most commonly antidepressants — about half who wound down their prescriptions rated the withdrawal as severe. Nearly half who tried to quit could not do so because of these symptoms.
In another study of 180 longtime antidepressant users, withdrawal symptoms were reported by more than 130. Almost half said they felt addicted to antidepressants.
The information is here.
Monday, May 7, 2018
Microsoft is cutting off some sales over AI ethics
Alan Boyle
www.geekwire.com
Originally published April 9, 2018
Concerns over the potential abuse of artificial intelligence technology have led Microsoft to cut off some of its customers, says Eric Horvitz, technical fellow and director at Microsoft Research Labs.
Horvitz laid out Microsoft’s commitment to AI ethics today during the Carnegie Mellon University – K&L Gates Conference on Ethics and AI, presented in Pittsburgh.
One of the key groups focusing on the issue at Microsoft is the Aether Committee, where “Aether” stands for AI and Ethics in Engineering and Research.
“It’s been an intensive effort … and I’m happy to say that this committee has teeth,” Horvitz said during his lecture.
He said the committee reviews how Microsoft’s AI technology could be used by its customers, and makes recommendations that go all the way up to senior leadership.
“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’ ”
Horvitz didn’t go into detail about which customers or specific applications have been ruled out as the result of the Aether Committee’s work, although he referred to Microsoft’s human rights commitments.
The information is here.
A revolution in our sense of self
Nick Chater
The Guardian
Originally posted April 1, 2018
Here is an excerpt:
One crucial clue that the inner oracle is an illusion comes, on closer analysis, from the fact that our explanations are less than watertight. Indeed, they are systematically and spectacularly leaky. Now it is hardly controversial that our thoughts seem fragmentary and contradictory. I can’t quite tell you how a fridge works or how electricity flows around the house. I continually fall into confusion and contradiction when struggling to explain rules of English grammar, how quantitative easing works or the difference between a fruit and a vegetable.
But can’t the gaps be filled in and the contradictions somehow resolved? The only way to find out is to try. And try we have. Two thousand years of philosophy have been devoted to the problem of “clarifying” many of our commonsense ideas: causality, the good, space, time, knowledge, mind and many more; clarity has, needless to say, not been achieved. Moreover, science and mathematics began with our commonsense ideas, but ended up having to distort them so drastically – whether discussing heat, weight, force, energy and many more – that they were refashioned into entirely new, sophisticated concepts, with often counterintuitive consequences. This is one reason why “real” physics took centuries to discover and presents a fresh challenge to each generation of students.
Philosophers and scientists have found that beliefs, desires and similar every-day psychological concepts turn out to be especially puzzling and confused. We project them liberally: we say that ants “know” where the food is and “want” to bring it back to the nest; cows “believe” it is about rain; Tamagotchis “want” to be fed; autocomplete “thinks” I meant to type gristle when I really wanted grist. We project beliefs and desires just as wildly on ourselves and others; since Freud, we even create multiple inner selves (id, ego, superego), each with its own motives and agendas. But such rationalisations are never more than convenient fictions. Indeed, psychoanalysis is projection at its apogee: stories of greatest possible complexity can be spun from the barest fragments of behaviours or snippets of dreams.
The information is here.
Saturday, May 5, 2018
Deep learning: Why it’s time for AI to get philosophical
Catherine Stinson
The Globe and Mail
Originally published March 23, 2018
Here is an excerpt:
Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.
The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.
The information is here.
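To make the excerpt’s predictive-policing point concrete, here is a toy Python simulation (illustrative only: the two areas, their rates, and the patrol rule are invented for this sketch, not drawn from any deployed system). Both areas have the same true incident rate, but the one with more historical records keeps attracting more patrols and therefore keeps generating more records.

import random

random.seed(0)

true_rate = [0.3, 0.3]     # identical underlying incident rates in areas 0 and 1
recorded = [60, 30]        # historical records skewed toward area 0
patrols_per_day = 10

for day in range(365):
    total = sum(recorded)
    # Allocate patrols in proportion to recorded (not true) incidents.
    patrols = [round(patrols_per_day * r / total) for r in recorded]
    for area, n in enumerate(patrols):
        for _ in range(n):
            # An incident only enters the data if a patrol is present to record it.
            if random.random() < true_rate[area]:
                recorded[area] += 1

print(recorded)  # area 0 ends far ahead of area 1 despite equal true rates

Swap in historical arrest data for the made-up numbers and this is the feedback loop the excerpt warns would exacerbate racial profiling.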
Friday, May 4, 2018
Will Tech Companies Ever Take Ethics Seriously?
Evan Selinger
www.medium.com
Originally published April 9, 2018
Here are two excerpts:
And let’s face it, tech companies are in a structural bind, because they simultaneously serve many masters who can have competing priorities: shareholders, regulators, and consumers. Indeed, while “conscientious capitalism” sounds nice, anyone who takes political economy seriously knows we should be wary of civics being conflated with keeping markets going and companies appealing to ethics as an end-run strategy to avoid robust regulation.
But what if there is reason — even if just a sliver of practical optimism — to be more hopeful? What if the responses to the Cambridge Analytica scandal have already set in motion a reckoning throughout the tech world that’s moving history to a tipping point? What would it take for tech companies to do some real soul searching and embrace Spider-Man’s maxim that great responsibility comes with great power?
(cut)
Responsibility has many dimensions. But as far as Hartzog is concerned — and the “values in design” literature supports this contention — the three key ideals that tech companies should be prioritizing are: promoting genuine trust (through greater transparency and less manipulation), respecting obscurity (the ability for people to be more selective when sharing personal information in public and semipublic spaces), and treating dignity as sacrosanct (by fostering genuine autonomy and not treating illusions of user control as the real deal). At the very least, embracing these goals means that companies will have to come up with better answers to two fundamental questions: What signals do their design choices send to users about how their products should be perceived and used? What socially significant consequences follow from their design choices lowering transaction costs and making it easier or harder to do things, such as communicate and be observed?
The information is here.
Psychology will fail if it keeps using ancient words like “attention” and “memory”
Olivia Goldhill
Quartz.com
Originally published April 7, 2018
Here is an excerpt:
Then there are “jangle fallacies,” when two things that are the same are seen as different because they have different names. For example, “working memory” is used to describe the ability to keep information in mind. It’s not clear this is meaningfully different from simply “paying attention” to particular aspects of information.
Scientific concepts should be operationalized, meaning measurable and testable in experiments that produce clear-cut results. “You’d hope that a scientific concept would name something that one can use to then make predictions about how it’s going to work. It’s not clear that ‘attention’ does that for us,” says Poldrack.
It’s no surprise “attention” and “memory” don’t perfectly map onto the brain functions scientists know of today, given that they entered the lexicon centuries ago, when we knew very little about the internal workings of the brain or our own mental processes. Psychology, Poldrack argues, cannot be a precise science as long as it relies on these centuries-old, lay terms, which have broad, fluctuating usage. The field has to create new terminology that accurately describes mental processes. “It hurts us a lot because we can’t really test theories,” says Poldrack. “People can talk past one another. If one person says I’m studying ‘working memory’ and the other person says ‘attention,’ they can be finding things that are potentially highly relevant to one another but they’re talking past one another.”
The information is here.
Thursday, May 3, 2018
Why Pure Reason Won’t End American Tribalism
Robert Wright
www.wired.com
Originally published April 9, 2018
Here is an excerpt:
Pinker also understands that cognitive biases can be activated by tribalism. “We all identify with particular tribes or subcultures,” he notes—and we’re all drawn to opinions that are favored by the tribe.
So far so good: These insights would seem to prepare the ground for a trenchant analysis of what ails the world—certainly including what ails an America now famously beset by political polarization, by ideological warfare that seems less and less metaphorical.
But Pinker’s treatment of the psychology of tribalism falls short, and it does so in a surprising way. He pays almost no attention to one of the first things that springs to mind when you hear the word “tribalism.” Namely: People in opposing tribes don’t like each other. More than Pinker seems to realize, the fact of tribal antagonism challenges his sunny view of the future and calls into question his prescriptions for dispelling some of the clouds he does see on the horizon.
I’m not talking about the obvious downside of tribal antagonism—the way it leads nations to go to war or dissolve in civil strife, the way it fosters conflict along ethnic or religious lines. I do think this form of antagonism is a bigger problem for Pinker’s thesis than he realizes, but that’s a story for another day. For now the point is that tribal antagonism also poses a subtler challenge to his thesis. Namely, it shapes and drives some of the cognitive distortions that muddy our thinking about critical issues; it warps reason.
The article is here.
We can train AI to identify good and evil, and then use it to teach us morality
Ambarish Mitra
Quartz.com
Originally published April 5, 2018
Here is an excerpt:
To be fair, because this AI Hercules will be relying on human inputs, it will also be susceptible to human imperfections. Unsupervised data collection and analysis could have unintended consequences and produce a system of morality that actually represents the worst of humanity. However, this line of thinking tends to treat AI as an end goal. We can’t rely on AI to solve our problems, but we can use it to help us solve them.
If we could use AI to improve morality, we could program that improved moral structure output into all AI systems—a moral AI machine that effectively builds upon itself over and over again and improves and proliferates our morality capabilities. In that sense, we could eventually even have AI that monitors other AI and prevents it from acting immorally.
While a theoretically perfect AI morality machine is just that, theoretical, there is hope for using AI to improve our moral decision-making and our overall approach to important, worldly issues.
The information is here.
Wednesday, May 2, 2018
How Do You Know You Are Reading This?
Jason Pontin
www.wired.com
Originally published April 2, 2018
Here are two excerpts:
Understanding consciousness better would solve some urgent, practical problems. It would be useful, for instance, to know whether patients locked in by stroke are capable of thought. Similarly, one or two patients in a thousand later recall being in pain under general anesthesia, though they seemed to be asleep. Could we reliably measure whether such people are conscious? Some of the heat of the abortion debate might dissipate if we knew when and to what degree fetuses are conscious. We are building artificial intelligences whose capabilities rival or exceed our own. Soon, we will have to decide: Are our machines conscious, to even a small degree, and do they have rights, which we are bound to respect? These are questions of more than academic philosophical interest.
(cut)
IIT doesn’t try to answer the hard problem. Instead, it does something more subtle: It posits that consciousness is a feature of the universe, like gravity, and then tries to solve the pretty hard problem of determining which systems are conscious with a mathematical measurement of consciousness represented by the Greek letter phi (Φ). Until Massimini’s test, which was developed in partnership with Tononi, there was little experimental evidence of IIT, because calculating the phi value of a human brain with its tens of billions of neurons was impractical. PCI is “a poor man’s phi” according to Tononi. “The poor man’s version may be poor, but it works better than anything else. PCI works in dreaming and dreamless sleep. With general anesthesia, PCI is down, and with ketamine it’s up more. Now we can tell, just by looking at the value, whether someone is conscious or not. We can assess consciousness in nonresponsive patients.”
The information is here.
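The scale problem behind Tononi’s “poor man’s phi” remark can be made concrete with a back-of-the-envelope Python sketch (an illustration only, not the IIT algorithm itself): even the most basic ingredient of an exact Φ computation, searching over the ways a system can be cut into two parts, grows exponentially with the number of elements.

import math

def log10_bipartitions(n: int) -> float:
    # log10 of 2**(n - 1) - 1, the number of ways to split n elements into
    # two non-empty groups; for large n this is essentially (n - 1) * log10(2).
    return (n - 1) * math.log10(2)

# 302 is roughly the C. elegans neuron count; 8.6e10 approximates a human brain.
for n in (5, 20, 302, 86_000_000_000):
    print(f"{n:>14,} elements -> about 10^{log10_bipartitions(n):,.0f} candidate cuts")

Even a 302-neuron nematode puts the exact search out of reach, which is the sense in which calculating phi for a brain with tens of billions of neurons “was impractical” and an approximation such as PCI becomes the workable stand-in.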
Institutional Research Misconduct Reports Need More Credibility
Gunsalus CK, Marcus AR, Oransky I.
JAMA. 2018;319(13):1315–1316.
doi:10.1001/jama.2018.0358
Institutions have a central role in protecting the integrity of research. They employ researchers, own the facilities where the work is conducted, receive grant funding, and teach many students about the research process. When questions arise about research misconduct associated with published articles, scientists and journal editors usually first ask the researchers’ institution to investigate the allegations and then report the outcomes, under defined circumstances, to federal oversight agencies and other entities, including journals.
Depending on institutions to investigate their own faculty presents significant challenges. Misconduct reports, the mandated product of institutional investigations for which US federal dollars have been spent, have a wide range of problems. These include lack of standardization, inherent conflicts of interest that must be addressed to directly ensure credibility, little quality control or peer review, and limited oversight. Even when institutions act, the information they release to the public is often limited and unhelpful.
As a result, like most elements of research misconduct, little is known about institutions’ responses to potential misconduct by their own members. The community that relies on the integrity of university research does not have access to information about how often such claims arise, or how they are resolved. Nonetheless, there are some indications that many internal reviews are deficient.
The article is here.
Tuesday, May 1, 2018
'They stole my life away': women forcibly sterilised by Japan speak out
Daniel Hurst
The Guardian
Originally published April 3, 2018
Here is an excerpt:
Between 1948 and 1996, about 25,000 people were sterilised under the law, including 16,500 who did not consent to the procedure. The youngest known patients were just nine or 10 years old. About 70% of the cases involved women or girls.
Yasutaka Ichinokawa, a sociology professor at the University of Tokyo, says psychiatrists identified patients whom they thought needed sterilisation. Carers at nursing homes for people with intellectual disabilities also had sterilisation initiatives. Outside such institutions, the key people were local welfare officers known as Minsei-iin.
“All of them worked with goodwill, and they thought sterilisations were for the interests of the people for whom they cared, but today we must see this as a violation of the reproductive rights of people with disabilities,” Ichinokawa says.
After peaking at 1,362 cases in a single year in the mid-1950s, the figures began to decline in tandem with a shift in public attitudes.
In 1972, the government triggered protests by proposing an amendment to the Eugenic Protection Law to allow pregnant women with disabled foetuses to have induced abortions.
The information is here.
If we want moral AI, we need to teach it right from wrong
Emma Kendrew
Management Today
Originally posted April 3, 2018
Here is an excerpt:
Ethical constructs need to come before, not after, developing other skills. We teach children morality before maths. When they can be part of a social environment, we teach them language skills and reasoning. All of this happens before they enter a formal classroom.
Four out of five executives see AI working next to humans in their organisations as a co-worker within the next two years. It’s imperative that we learn to nurture AI to address many of the same challenges faced in human education: fostering an understanding of right and wrong, and what it means to behave responsibly.
AI Needs to Be Raised to Benefit Business and Society
AI is becoming smarter and more capable than ever before. With neural networks giving AI the ability to learn, the technology is evolving into an independent problem solver.
Consequently, we need to create learning-based AI that fosters ethics and behaves responsibly – imparting knowledge without bias, so that AI will be able to operate more effectively in the context of its situation. It will also be able to adapt to new requirements based on feedback from both its artificial and human peers. This feedback loop is an essential and fundamental part of human learning.
The information is here.