Matthieu Ricard
Medium.com
Originally published January 5, 2018
Here is an excerpt:
Research by Abigail Marsh and other neuroscientists reveals that psychopaths’ brains are marked by a dysfunction in the structure called the amygdala that is responsible for essential social and emotional function. In psychopaths, the amygdala is not only under-responsive to images of people experiencing fear, but is also up to 20% smaller than average.
Marsh also wondered about people who are at the other end of the spectrum, extreme altruists: people filled with compassion, people who volunteer, for example, to donate one of their kidneys to a stranger. The answer is remarkable: extreme altruists surpass everyone in detecting expressions of fear in others and, while they do experience fear themselves, that does not stop them from acting in ways that are considered very courageous.
Since her initial discovery, several studies have confirmed that the ability to label other people’s fear predicts altruism better than gender, mood or how compassionate people claim to be. In addition, Abigail Marsh found that, among extreme altruists, the amygdala is physically larger than the average by about 8%. The significance of this fact held up even after finding something rather unexpected: the altruists’ brains are in general larger than those of the average person.
The information is here.
Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Wednesday, January 31, 2018
I Believe In Intelligent Design....For Robots
Matt Simon
Wired Magazine
Originally published January 3, 2018
Here is an excerpt:
Roboticists are honing their robots by essentially mimicking natural selection. Keep what works, throw out what doesn’t, to optimally adapt a robot to a particular job. “If we want to scrap something totally, we can do that,” says Nick Gravish, who studies the intersection of robotics and biology at UC San Diego. “Or we can take the best pieces from some design and put them in a new design and get rid of the things we don't need.” Think of it, then, like intelligent design—that follows the principles of natural selection.
The caveat being, biology is rather more inflexible than what roboticists are doing. After all, you can give your biped robot two extra limbs and turn it into a quadruped fairly quickly, while animals change their features—cave-dwelling species might lose their eyes, for instance—over thousands of years. “Evolution is as much a trap as a means to advance,” says Gerald Loeb, CEO and co-founder of SynTouch, which is giving robots the power to feel. “Because you get locked into a lot of hardware that worked well in previous iterations and now can't be changed because you've built your whole embryology on it.”
Evolution can still be rather explosive, though. 550 million years ago the Cambrian Explosion kicked off, giving birth to an incredible array of complex organisms. Before that, life was relatively squishier, relatively calmer. But then boom, predators aplenty, scrapping like hell to gain an edge.
The article is here.
Tuesday, January 30, 2018
Utilitarianism’s Missing Dimensions
Erik Parens
Quillette
Originally published on January 3, 2018
Here is an excerpt:
Missing the “Impartial Beneficence” Dimension of Utilitarianism
In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”
According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.
The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.
The article is here.
Your Brain Creates Your Emotions
Lisa Feldman Barrett
TED Talk
Published December 2017
Can you look at someone's face and know what they're feeling? Does everyone experience happiness, sadness and anxiety the same way? What are emotions anyway? For the past 25 years, psychology professor Lisa Feldman Barrett has mapped facial expressions, scanned brains and analyzed hundreds of physiology studies to understand what emotions really are. She shares the results of her exhaustive research -- and explains how we may have more control over our emotions than we think.
Monday, January 29, 2018
Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others
Leonardo Christov-Moore, Paul Conway, and Marco Iacoboni
Front. Integr. Neurosci., 12 December 2017
The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested such decisions actually reflect self-focused concern about causing harm, rather than witnessing others suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm.
The article is here.
Go Fund Yourself
Stephen Marche
Mother Jones
Originally published January/February 2018
Here is an excerpt:
Health care in America is the wedge of inequality: It’s the luxury everyone has to have and millions can’t afford. Sites like YouCaring have stepped in to fill the gap. The total amount in donations generated by crowdfunding sites has increased elevenfold since the appearance of Obamacare. In 2011, sites like GoFundMe and YouCaring were generating a total of $837 million. Three years later, that number had climbed to $9.5 billion. Under the Trump administration, YouCaring expects donations to jump even higher, and the company has already seen an estimated 25 percent spike since the election, which company representatives believe is partly a response to the administration’s threats to Obamacare.
Crowdfunding companies say they’re using technology to help people helping people, the miracle of interconnectedness leading to globalized compassion. But an emerging consensus is starting to suggest a darker, more fraught reality—sites like YouCaring and GoFundMe may in fact be fueling the inequities of the American health care system, not fighting them. And they are potentially exacerbating racial, economic, and educational divides. “Crowdfunding websites have helped a lot of people,” medical researcher Jeremy Snyder wrote in a 2016 article for the Hastings Center Report, a journal focused on medical ethics. But, echoing other scholars, he warned that they’re “ultimately not a solution to injustices in the health system. Indeed, they may themselves be a cause of injustices.” Crowdfunding is yet another example of tech’s best intentions generating unseen and unfortunate outcomes.
Sunday, January 28, 2018
Republicans redefine morality as whatever Trump does
Dana Milbank
The Washington Post
Posted on January 26, 2018
Someday, likely three years from now, perhaps sooner, perhaps — gulp — later, President Trump will depart the stage.
But what will be left of us?
New evidence suggests that the damage he is doing to the culture is bigger than the man. A Quinnipiac University poll released Thursday found that two-thirds of Americans say Trump is not a good role model for children. Every component of society feels that way — men and women, old and young, black and white, highly educated or not — except for one: Republicans. By 72 to 22 percent, they say Trump is a good role model.
In marked contrast to the rest of the country, Republicans also say that Trump shares their values (82 percent) and that — get this — he “provides the United States with moral leadership” (80 percent).
And what moral leadership this role model has been providing!
The article is here.
Saturday, January 27, 2018
Evolving Morality
Joshua Greene
Aspen Ideas Festival
2017
Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.
The one-hour talk on SoundCloud is here.
Friday, January 26, 2018
Should Potential Risk of Chronic Traumatic Encephalopathy Be Discussed with Young Athletes?
Kimberly Hornbeck, Kevin Walter, and Matthew Myrvik
AMA Journal of Ethics. July 2017, Volume 19, Number 7: 686-692.
Abstract
As participation in youth sports has risen over the past two decades, so has the incidence of youth sports injuries. A common topic of concern is concussion, or mild traumatic brain injury, in young athletes and whether concussions sustained at a young age could lead to lifelong impairment such as chronic traumatic encephalopathy (CTE). While the pathway from a concussed young athlete to an adult with CTE remains unknown, current research is attempting to provide more clarity. This article discusses how health care professionals can help foster an informed, balanced decision-making process regarding participation in contact sports that involves the parents as well as the children.
The information is here.
Power Causes Brain Damage
Jerry Useem
The Atlantic
Originally published July 2017
Here is an excerpt:
This is a depressing finding. Knowledge is supposed to be power. But what good is knowing that power deprives you of knowledge?
The sunniest possible spin, it seems, is that these changes are only sometimes harmful. Power, the research says, primes our brain to screen out peripheral information. In most situations, this provides a helpful efficiency boost. In social ones, it has the unfortunate side effect of making us more obtuse. Even that is not necessarily bad for the prospects of the powerful, or the groups they lead. As Susan Fiske, a Princeton psychology professor, has persuasively argued, power lessens the need for a nuanced read of people, since it gives us command of resources we once had to cajole from others. But of course, in a modern organization, the maintenance of that command relies on some level of organizational support. And the sheer number of examples of executive hubris that bristle from the headlines suggests that many leaders cross the line into counterproductive folly.
Less able to make out people’s individuating traits, they rely more heavily on stereotype. And the less they’re able to see, other research suggests, the more they rely on a personal “vision” for navigation. John Stumpf saw a Wells Fargo where every customer had eight separate accounts. (As he’d often noted to employees, eight rhymes with great.) “Cross-selling,” he told Congress, “is shorthand for deepening relationships.”
The article is here.
Thursday, January 25, 2018
Neurotechnology, Elon Musk and the goal of human enhancement
Sarah Marsh
The Guardian
Originally published January 1, 2018
Here is an excerpt:
“I hope more resources will be put into supporting this very promising area of research. Brain Computer Interfaces (BCIs) are not only an invaluable tool for people with disabilities, but they could be a fundamental tool for going beyond human limits, hence improving everyone’s life.”
He notes, however, that one of the biggest challenges with this technology is that first we need to better understand how the human brain works before deciding where and how to apply BCI. “This is why many agencies have been investing in basic neuroscience research – for example, the Brain initiative in the US and the Human Brain Project in the EU.”
Whenever there is talk of enhancing humans, moral questions remain – particularly around where the human ends and the machine begins. “In my opinion, one way to overcome these ethical concerns is to let humans decide whether they want to use a BCI to augment their capabilities,” Valeriani says.
“Neuroethicists are working to give advice to policymakers about what should be regulated. I am quite confident that, in the future, we will be more open to the possibility of using BCIs if such systems provide a clear and tangible advantage to our lives.”
The article is here.
Minding matter
Adam Frank
aeon.com
Originally posted March 13, 2017
Here are two excerpts:
You can see how this throws a monkey wrench into a simple, physics-based view of an objective materialist world. How can there be one mathematical rule for the external objective world before a measurement is made, and another that jumps in after the measurement occurs? For a hundred years now, physicists and philosophers have been beating the crap out of each other (and themselves) trying to figure out how to interpret the wave function and its associated measurement problem. What exactly is quantum mechanics telling us about the world? What does the wave function describe? What really happens when a measurement occurs? Above all, what is matter?
(cut)
Some consciousness researchers see the hard problem as real but inherently unsolvable; others posit a range of options for its account. Those solutions include possibilities that overly project mind into matter. Consciousness might, for example, be an example of the emergence of a new entity in the Universe not contained in the laws of particles. There is also the more radical possibility that some rudimentary form of consciousness must be added to the list of things, such as mass or electric charge, that the world is built of. Regardless of the direction ‘more’ might take, the unresolved democracy of quantum interpretations means that our current understanding of matter alone is unlikely to explain the nature of mind. It seems just as likely that the opposite will be the case.
The article is here.
Wednesday, January 24, 2018
Top 10 lies doctors tell themselves
Pamela Wible
www.idealmedicalcare.org
Originally published December 27, 2017
Here is an excerpt:
Sydney Ashland: “I must overwork and overextend myself.” I hear this all the time. Workaholism, alcoholism, self-medicating. These are the top coping strategies that we, as medical professionals, use to deal with unrealistic work demands. We tell ourselves, “In order to get everything done that I have to get done. In order to meet expectations, meet the deadlines, then I have to overwork.” And this is not true. If you believe in it, you are participating in the lie, you’re enabling it. Start to claim yourself. Start to claim your time. Don’t participate. Don’t believe that there is a magic workaround or gimmick that’s going to enable you to stay in a toxic work environment and reshuffle the deck. What happens is in that shuffling process you continue to overcompensate, overdo, overextend yourself—and you’ve moved from overwork on the face of things to complicating your life. This is common. Liberate yourself. You can be free. It’s not about overwork.
Pamela Wible: And here’s the thing that really is almost humorous. What physicians do when they’re overworked, their solution for overwork—is to overwork. Right? They’re like, “Okay. I’m exhausted. I’m tired. My office isn’t working. I’ll get another phone line. I’ll get two more receptionists. I’ll add three more patients per day.” Your solution to overwork, if it’s overwork, is probably not going to work.
The interview is here.
The Moral Fabric and Social Norms
AEI Political Report
Volume 14, Issue 1
January 2018
A large majority now, as in the past, say moral values in the country are getting worse. Social conservatives, moderates, and liberals agree. At the same time, however, as these pages show, people accept some behaviors once thought wrong. Later in this issue, we look at polls on women’s experiences with sexual harassment, a topic which has drawn public scrutiny following recent allegations of misconduct against high profile individuals.
Q: Right now, do you think . . . ?
Tuesday, January 23, 2018
President Trump’s Mental Health — Is It Morally Permissible for Psychiatrists to Comment?
Claire Pouncey
The New England Journal of Medicine
December 27, 2017
Ralph Northam, a pediatric neurologist who was recently elected governor of Virginia, distinguished himself during the gubernatorial race by calling President Donald Trump a “narcissistic maniac.” Northam drew criticism for using medical diagnostic terminology to denounce a political figure, though he defended the terminology as “medically correct.” The term isn’t medically correct — “maniac” has not been a medical term for well over a century — but Northam’s use of it in either medical or political contexts would not be considered unethical by his professional peers.
For psychiatrists, however, the situation is different, which is why many psychiatrists and other mental health professionals have refrained from speculating about Trump’s mental health. But in October, psychiatrist Bandy Lee published a collection of essays written largely by mental health professionals who believe that their training and expertise compel them to warn the public of the dangers they see in Trump’s psychology. The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President rejects the position of the American Psychiatric Association (APA) that psychiatrists should never offer diagnostic opinions about persons they have not personally examined. Past APA president Jeffrey Lieberman has written in Psychiatric News that the book is “not a serious, scholarly, civic-minded work, but simply tawdry, indulgent, fatuous tabloid psychiatry.” I believe it shouldn’t be dismissed so quickly.
The article is here.
Best Practices for School-Based Moral Education
Peter Meindl, Abigail Quirk, Jesse Graham
Policy Insights from the Behavioral and Brain Sciences
First Published December 21, 2017
Abstract
How can schools help students build moral character? One way is to use prepackaged moral education programs, but as we report here, their effectiveness tends to be limited. What, then, can schools do? We took two steps to answer this question. First, we consulted more than 50 of the world’s leading social scientists. These scholars have spent decades studying morality, character, or behavior change but until now few had used their expertise to inform moral education practices. Second, we searched recent studies for promising behavior change techniques that apply to school-based moral education. These two lines of investigation congealed into two recommendations: Schools should place more emphasis on hidden or “stealthy” moral education practices and on a small set of “master” virtues. Throughout the article, we describe practices flowing from these recommendations that could improve both the effectiveness and efficiency of school-based moral education.
The article is here.
Monday, January 22, 2018
Science and Morality
Jim Kozubek
Scientific American
Originally published December 27, 2017
Here is an excerpt:
The argument that genes embody a sort of sacrosanct character that should not be interfered with is not too compelling, since artifacts of viruses are burrowed in our genomes, and genes undergo mutations with each passing generation. Even so, the principle that all life has inherent dignity is hardly a bad thought and provides a necessary counterbalance to the impulse to use in vitro techniques and CRISPR to alter any gene variant to reduce risk or enhance features, none of which are more or less perfect but variations in human evolution.
Indeed, the question of dignity is thornier than we might imagine, since science tends to challenge the belief in abstract or enduring concepts of value. How to uphold beliefs or a sense of dignity seems ever confusing and appears to throw us up against an age of radical nihilism as scientists today are using the gene editing tool CRISPR to do things such as tinker with the color of butterfly wings, genetically alter pigs, even humans. If science is a method of truth-seeking, technology its mode of power and CRISPR is a means to the commodification of life. It also raises the possibility this power can erode societal trust.
The article is here.
Should US Physicians Support the Decriminalization of Commercial Sex?
Emily F. Rothman
AMA Journal of Ethics. January 2017, Volume 19, Number 1: 110-121.
Abstract
According to the World Health Organization, “commercial sex” is the exchange of money or goods for sexual services, and this term can be applied to both consensual and nonconsensual exchanges. Some nonconsensual exchanges qualify as human trafficking. Whether the form of commercial sex that is also known as prostitution should be decriminalized is being debated contentiously around the world, in part because the percentage of commercial sex exchanges that are consensual as opposed to nonconsensual, or trafficked, is unknown. This paper explores the question of decriminalization of commercial sex with reference to the bioethical principles of beneficence, nonmaleficence, and respect for autonomy. It concludes that though there is no perfect policy solution to the various ethical problems associated with commercial sex that can arise under either criminalized or decriminalized conditions, the Nordic model offers several potential advantages. This model criminalizes the buying of sex and third-party brokering of sex (i.e., pimping) but exempts sex sellers (i.e., prostitutes, sex workers) from criminal penalties. However, ongoing support for this type of policy should be contingent upon positive results over time.
The article is here.
Sunday, January 21, 2018
Cognitive Economics: How Self-Organization and Collective Intelligence Works
Geoff Mulgan
evonomics.com
Originally published December 22, 2017
Here are two excerpts:
But self-organization is not an altogether-coherent concept and has often turned out to be misleading as a guide to collective intelligence. It obscures the work involved in organization and in particular the hard work involved in high-dimensional choices. If you look in detail at any real example—from the family camping trip to the operation of the Internet, open-source software to everyday markets, these are only self-organizing if you look from far away. Look more closely and different patterns emerge. You quickly find some key shapers—like the designers of underlying protocols, or the people setting the rules for trading. There are certainly some patterns of emergence. Many ideas may be tried and tested before only a few successful ones survive and spread. To put it in the terms of network science, the most useful links survive and are reinforced; the less useful ones wither. The community decides collectively which ones are useful. Yet on closer inspection, there turn out to be concentrations of power and influence even in the most decentralized communities, and when there’s a crisis, networks tend to create temporary hierarchies—or at least the successful ones do—to speed up decision making. As I will show, almost all lasting examples of social coordination combine some elements of hierarchy, solidarity, and individualism.
(cut)
Here we see a more common pattern. The more dimensional any choice is, the more work is needed to think it through. If it is cognitively multidimensional, we may need many people and more disciplines to help us toward a viable solution. If it is socially dimensional, then there is no avoiding a good deal of talk, debate, and argument on the way to a solution that will be supported. And if the choice involves long feedback loops, where results come long after actions have been taken, there is the hard labor of observing what actually happens and distilling conclusions. The more dimensional the choice in these senses, the greater the investment of time and cognitive energy needed to make successful decisions.
Again, it is possible to overshoot: to analyze a problem too much or from too many angles, bring too many people into the conversation, or wait too long for perfect data and feedback rather than relying on rough-and-ready quicker proxies. All organizations struggle to find a good enough balance between their allocation of cognitive resources and the pressures of the environment they’re in. But the long-term trend of more complex societies is to require ever more mediation and intellectual labor of this kind.
The article is here.
Saturday, January 20, 2018
Exploiting Risk–Reward Structures in Decision Making under Uncertainty
Christina Leuker, Thorsten Pachur, Ralph Hertwig, and Timothy Pleskac
PsyArXiv Preprints
Posted December 21, 2017
Abstract
People often have to make decisions under uncertainty — that is, in situations where the probabilities of obtaining a reward are unknown or at least difficult to ascertain. Because outside the laboratory payoffs and probabilities are often correlated, one solution to this problem might be to infer the probability from the magnitude of the potential reward. Here, we investigated how the mind may implement such a solution: (1) Do people learn about risk–reward relationships from the environment—and if so, how? (2) How do learned risk–reward relationships impact preferences in decision-making under uncertainty? Across three studies (N = 352), we found that participants learned risk–reward relationships after being exposed to choice environments with a negative, positive, or uncorrelated risk–reward relationship. They learned the associations both from gambles with explicitly stated payoffs and probabilities (Experiments 1 & 2) and from gambles about epistemic events (Experiment 3). In subsequent decisions under uncertainty, participants exploited the learned association by inferring probabilities from the magnitudes of the payoffs. This inference systematically influenced their preferences under uncertainty: Participants who learned a negative risk–reward relationship preferred the uncertain option over a smaller sure option for low payoffs, but not for high payoffs. This pattern reversed in the positive condition and disappeared in the uncorrelated condition. This adaptive change in preferences is consistent with the use of the risk–reward heuristic.
From the Discussion Section:
Risks and rewards are the pillars of preference. This makes decision making under uncertainty a vexing problem as one of those pillars—the risks, or probabilities—is missing (Knight, 1921; Luce & Raiffa, 1957). People are commonly thought to deal with this problem by intuiting subjective probabilities from their knowledge and memory (Fox & Tversky, 1998; Tversky & Fox, 1995) or by estimating statistical probabilities from samples of information (Hertwig & Erev, 2009). Our results support another ecologically grounded solution, namely, that people estimate the missing probabilities from their immediate choice environments via their learned risk–reward relationships.
The research is here.
Friday, January 19, 2018
Why banning autonomous killer robots wouldn’t solve anything
Susanne Burri and Michael Robillard
aeon.com
Originally published December 19, 2017
Here is an excerpt:
For another thing, it is naive to assume that we can enjoy the benefits of the recent advances in artificial intelligence (AI) without being exposed to at least some downsides as well. Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.
To put the point more generally, AI technology is tremendously useful, and it already permeates our lives in ways we don’t always notice, and aren’t always able to comprehend fully. Given its pervasive presence, it is shortsighted to think that the technology’s abuse can be prevented if only the further development of autonomous weapons is halted. In fact, it might well take the sophisticated and discriminate autonomous-weapons systems that armies around the world are currently in the process of developing if we are to effectively counter the much cruder autonomous weapons that are quite easily constructed through the reprogramming of seemingly benign AI technology such as the self-driving car.
The article is here.
AI is Fueling Smarter Prosthetics Than Ever Before
Andrea Powell
www.wired.com
Originally posted December 22, 2017
The distance between prosthetic and real is shrinking. Thanks to advances in batteries, brain-controlled robotics, and AI, today’s mechanical limbs can do everything from twist and point to grab and lift. And this isn’t just good news for amputees. “For something like bomb disposal, why not use a robotic arm?” says Justin Sanchez, manager of Darpa’s Revolutionizing Prosthetics program. Well, that would certainly be handy.
The article and pictures are here.
Thursday, January 18, 2018
Cooperation and the evolution of hunter-gatherer storytelling
Daniel Smith and others
Nature Communications, 8: 1853
doi: 10.1038/s41467-017-02036-8
Storytelling is a human universal. From gathering around the camp-fire telling tales of ancestors to watching the latest television box-set, humans are inveterate producers and consumers of stories. Despite its ubiquity, little attention has been given to understanding the function and evolution of storytelling. Here we explore the impact of storytelling on hunter gatherer cooperative behaviour and the individual-level fitness benefits to being a skilled storyteller. Stories told by the Agta, a Filipino hunter-gatherer population, convey messages relevant to coordinating behaviour in a foraging ecology, such as cooperation, sex equality and egalitarianism. These themes are present in narratives from other foraging societies. We also show that the presence of good storytellers is associated with increased cooperation. In return, skilled storytellers are preferred social partners and have greater reproductive success, providing a pathway by which group-beneficial behaviours, such as storytelling, can evolve via individual-level selection. We conclude that one of the adaptive functions of storytelling among hunter gatherers may be to organise cooperation.
The article is here.
Implications for psychotherapy and couples counseling.
Humans 2.0: meet the entrepreneur who wants to put a chip in your brain
Zofia Niemtus
The Guardian
Originally posted December 14, 2017
Here are two excerpts:
The shape that this technology will take is still unknown. Johnson uses the term “brain chip”, but the developments taking place in neuroprosthesis are working towards less invasive procedures than opening up your skull and cramming a bit of hardware in; injectable sensors are one possibility.
It may sound far-fetched, but Johnson has a track record of getting things done. Within his first semester at university, he’d set up a profitable business selling mobile phones to fellow students. By age 30, he’d founded online payment company Braintree, which he sold six years later to PayPal for $800m. He used $100m of the proceeds to create Kernel in 2016 – it now employs more than 30 people.
(cut)
“And yet, the brain is everything we are, everything we do, and everything we aspire to be. It seemed obvious to me that the brain is both the most consequential variable in the world and also our biggest blind spot as a species. I decided that if the root problems of humanity begin in the human mind, let’s change our minds.”
The article is here.
Wednesday, January 17, 2018
‘I want to help humans genetically modify themselves’
Tom Ireland
The Guardian
Originally posted December 24, 2017
Josiah Zayner, 36, recently made headlines by becoming the first person to use the revolutionary gene-editing tool Crispr to try to change their own genes. Part way through a talk on genetic engineering, Zayner pulled out a syringe apparently containing DNA and other chemicals designed to trigger a genetic change in his cells associated with dramatically increased muscle mass. He injected the DIY gene therapy into his left arm, live-streaming the procedure on the internet.
The former Nasa biochemist, based in California, has become a leading figure in the growing “biohacker” movement, which involves loose collectives of scientists, engineers, artists, designers, and activists experimenting with biotechnology outside of conventional institutions and laboratories.
Despite warnings from the US Food and Drug Administration (FDA) that selling gene therapy products without regulatory approval is illegal, Zayner sells kits that allow anyone to get started with basic genetic engineering techniques, and has published a free guide for others who want to take it further and experiment on themselves.
The article is here.
Do Physicians Have an Ethical Duty to Repair Relationships with So-Called “Difficult” Patients?
Micah Johnson
AMA Journal of Ethics. April 2017, Volume 19, Number 4: 323-331.
doi: 10.1001/journalofethics.2017.19.04.ecas1-1704.
Abstract
This essay argues that physicians hold primary ethical responsibility for repairing damaged patient-physician relationships. The first section establishes that the patient-physician relationship has an important influence on patient health and argues that physicians’ duty to treat should be understood as including a responsibility to repair broken relationships, regardless of which party was “responsible” for the initial tension. The second section argues that the person with more power to repair the relationship also has more responsibility to do so and considers the moral psychology of pain as foundational to conceiving the patient in this case as especially vulnerable and disempowered. The essay concludes with suggestions for clinicians to act on the idea that a healthy patient-physician relationship ought to lie at the center of medicine’s moral mission.
The article is here.
Tuesday, January 16, 2018
3D Printed Biomimetic Blood Brain Barrier Eliminates Need for Animal Testing
Hannah Rose Mendoza
3Dprint.com
Originally published December 21, 2017
The blood-brain barrier (BBB) may sound like a rating system for avoiding horror movies, but in reality it is a semi-permeable membrane responsible for restricting and regulating the entry of neurotoxic compounds, diseases, and circulating blood into the brain. It exists as a defense mechanism to protect the brain from direct contact with damaging entities carried in the body. Normally, this is something that is important to maintain as a strong defense; however, there are times when medical treatments require the ability to trespass beyond this biological barrier without damaging it. This is especially true now in the era of nanomedicine, when therapeutic treatments have been developed to combat brain cancer, neurodegenerative diseases, and even the effects of trauma-based brain damage.
In order to advance medical research in these important areas, it has been important to operate in an environment that accurately represents the BBB. As such, researchers have turned to animal subjects, something which comes with significant ethical and moral questions.
The story is here.
Should Governments Invest More in Nudging?
Shlomo Benartzi, John Beshears, Katherine L. Milkman, and others
Psychological Science
Vol 28, Issue 8, pp. 1041 - 1055
First Published June 5, 2017
Abstract
Governments are increasingly adopting behavioral science techniques for changing individual behavior in pursuit of policy objectives. The types of “nudge” interventions that governments are now adopting alter people’s decisions without coercion or significant changes to economic incentives. We calculated ratios of impact to cost for nudge interventions and for traditional policy tools, such as tax incentives and other financial inducements, and we found that nudge interventions often compare favorably with traditional interventions. We conclude that nudging is a valuable approach that should be used more often in conjunction with traditional policies, but more calculations are needed to determine the relative effectiveness of nudging.
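The abstract's core comparison is an impact-to-cost ratio: outcome gained per dollar spent. A minimal sketch of that arithmetic, using invented illustrative figures (not numbers from the study):

```python
# Hypothetical illustration of an impact-to-cost comparison.
# All figures below are invented for demonstration, not taken from the paper.

def impact_cost_ratio(extra_outcome_units, cost_dollars):
    """Outcome units gained per dollar spent on an intervention."""
    return extra_outcome_units / cost_dollars

interventions = {
    # name: (additional enrollees per 1,000 people, cost in dollars per 1,000 people)
    "nudge: automatic enrollment": (100, 2_000),
    "traditional: tax incentive": (60, 60_000),
}

for name, (impact, cost) in interventions.items():
    ratio = impact_cost_ratio(impact, cost)
    print(f"{name}: {ratio * 1_000:.1f} enrollees per $1,000 spent")
```

With these made-up inputs the nudge yields a far higher ratio, which is the shape of the result the authors report; the actual values would come from the paper's tabulated interventions.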
The article is here.
Monday, January 15, 2018
The media needs to do more to elevate a national conversation about ethics
Arthur Caplan
Poynter.com
Originally published December 21, 2017
Here is an excerpt:
Obviously unethical conduct has been around forever and will be into the foreseeable future. That said, it is important that the leaders of this nation and, more importantly, those leading our key institutions and professions reaffirm their commitment to the view that there are higher values worth pursuing in a just society. The fact that so many fail to live up to basic values does not mean that the values are meaningless, wrong or misplaced. They aren’t. It is rather that the organizations and professions where the epidemic of moral failure is burgeoning have put other values, often power and profits, ahead of morality.
There is no simple fix for hypocrisy. Egoism, the gross abuse of power and self-indulgence, is a very tough moral opponent in an individualistic society like America. Short-term reward is deceptively more attractive than slogging out the virtues in the name of the long haul. If we are to prepare our children to succeed, then attending to their moral development is as important as anything we can do. If our leaders are to truly lead then we have to reward those who do, not those who don’t, won’t or can’t. Are we?
The article is here.
Lesion network localization of criminal behavior
R. Ryan Darby, Andreas Horn, Fiery Cushman, and Michael D. Fox
The Proceedings of the National Academy of Sciences
Abstract
Following brain lesions, previously normal patients sometimes exhibit criminal behavior. Although rare, these cases can lend unique insight into the neurobiological substrate of criminality. Here we present a systematic mapping of lesions with known temporal association to criminal behavior, identifying 17 lesion cases. The lesion sites were spatially heterogeneous, including the medial prefrontal cortex, orbitofrontal cortex, and different locations within the bilateral temporal lobes. No single brain region was damaged in all cases. Because lesion-induced symptoms can come from sites connected to the lesion location and not just the lesion location itself, we also identified brain regions functionally connected to each lesion location. This technique, termed lesion network mapping, has recently identified regions involved in symptom generation across a variety of lesion-induced disorders. All lesions were functionally connected to the same network of brain regions. This criminality-associated connectivity pattern was unique compared with lesions causing four other neuropsychiatric syndromes. This network includes regions involved in morality, value-based decision making, and theory of mind, but not regions involved in cognitive control or empathy. Finally, we replicated our results in a separate cohort of 23 cases in which a temporal relationship between brain lesions and criminal behavior was implied but not definitive. Our results suggest that lesions in criminals occur in different brain locations but localize to a unique resting state network, providing insight into the neurobiology of criminal behavior.
Significance
Cases like that of Charles Whitman, who murdered 16 people after growth of a brain tumor, have sparked debate about why some brain lesions, but not others, might lead to criminal behavior. Here we systematically characterize such lesions and compare them with lesions that cause other symptoms. We find that lesions in multiple different brain areas are associated with criminal behavior. However, these lesions all fall within a unique functionally connected brain network involved in moral decision making. Furthermore, connectivity to competing brain networks predicts the abnormal moral decisions observed in these patients. These results provide insight into why some brain lesions, but not others, might predispose to criminal behavior, with potential neuroscience, medical, and legal implications.
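The lesion network mapping technique the abstract describes can be sketched as a simple overlap computation: each case contributes a binary map of brain regions functionally connected to its lesion site, and regions implicated in every case form the shared network. The sketch below uses random stand-in data (no real connectivity maps), with region and case counts chosen only for illustration:

```python
import numpy as np

# Sketch of the lesion-network-mapping logic from the abstract: each lesion
# yields a binary map of regions functionally connected to it; summing the
# maps across cases reveals regions connected to every lesion.
# Data here are random stand-ins, not neuroimaging results.

rng = np.random.default_rng(0)
n_cases, n_regions = 17, 8  # 17 lesion cases, as in the paper's first cohort

# 1 = region functionally connected to that case's lesion site
connectivity = rng.integers(0, 2, size=(n_cases, n_regions))
connectivity[:, 3] = 1  # force one region to be shared across all cases, for illustration

overlap = connectivity.sum(axis=0)           # number of cases implicating each region
shared = np.flatnonzero(overlap == n_cases)  # regions connected to all 17 lesions
print("regions connected to every lesion:", shared)
```

In the actual study the "maps" are whole-brain resting-state functional connectivity images from a normative connectome, and the shared network is then compared against lesions causing other syndromes; this sketch only captures the overlap step.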
The article is here.
Sunday, January 14, 2018
The Criminalization of Compliance
Todd Haugh
92 Notre Dame L. Rev. 1215 (2017).
Abstract
Corporate compliance is becoming increasingly “criminalized.” What began as a means of industry self-regulation has morphed into a multi-billion-dollar effort to avoid government intervention in business, specifically criminal and quasi-criminal investigations and prosecutions. In order to avoid application of the criminal law, companies have adopted compliance programs that are motivated by and mimic that law, using the precepts of criminal legislation, enforcement, and adjudication to advance their compliance goals. This approach to compliance is inherently flawed, however—it can never be fully effective in abating corporate wrongdoing. Criminalized compliance regimes are inherently ineffective because they impose unintended behavioral consequences on corporate employees. Employees subject to criminalized compliance have greater opportunities to rationalize their future unethical or illegal behavior. Rationalizations are a key component in the psychological process necessary for the commission of corporate crime—they allow offenders to square their self-perception as “good people” with the illegal behavior they are contemplating, thereby allowing the behavior to go forward. Criminalized compliance regimes fuel these rationalizations, and in turn, bad corporate conduct. By importing into the corporation many of the criminal law’s delegitimizing features, criminalized compliance creates space for rationalizations, facilitating the necessary precursors to the commission of white collar and corporate crime. The result is that many compliance programs, by mimicking the criminal law in hopes of reducing employee misconduct, are actually fostering it. This insight, which offers a new way of conceptualizing corporate compliance, explains the ineffectiveness of many compliance programs and also suggests how companies might go about fixing them.
The article is here.
Saturday, January 13, 2018
The costs of being consequentialist: Social perceptions of those who harm and help for the greater good
Everett, J. A. C., Faber, N. S., Savulescu, J., & Crockett, M. (2017, December 15).
The Cost of Being Consequentialist. Retrieved from psyarxiv.com/a2kx6
Abstract
Previous work has demonstrated that people are more likely to trust “deontological” agents who reject instrumentally harming one person to save a greater number than “consequentialist” agents who endorse such harm in pursuit of the greater good. It has been argued that these differential social perceptions of deontological vs. consequentialist agents could explain the higher prevalence of deontological moral intuitions. Yet consequentialism involves much more than decisions to endorse instrumental harm: another critical dimension is impartial beneficence, defined as the impartial maximization of the greater good, treating the well-being of every individual as equally important. In three studies (total N = 1,634), we investigated preferences for deontological vs. consequentialist social partners in both the domains of instrumental harm and impartial beneficence, and consider how such preferences vary across different types of social relationships. Our results demonstrate consistent preferences for deontological over consequentialist agents across both domains of instrumental harm and impartial beneficence: deontological agents were viewed as more moral and trustworthy, and were actually entrusted with more money in a resource distribution task. However, preferences for deontological agents were stronger when those preferences were revealed via aversion to instrumental harm than impartial beneficence. Finally, in the domain of instrumental harm, deontological agents were uniformly preferred across a variety of social roles, but in the domain of impartial beneficence, people prefer deontologists for roles requiring direct interaction (friend, spouse, boss) but not for more distant roles with little-to-no personal interaction (political leader).
The research is here.
Friday, January 12, 2018
The Normalization of Corruption in Organizations
Blake E. Ashforth and Vikas Anand
Research in Organizational Behavior
Volume 25, 2003, Pages 1-52
Abstract
Organizational corruption imposes a steep cost on society, easily dwarfing that of street crime. We examine how corruption becomes normalized, that is, embedded in the organization such that it is more or less taken for granted and perpetuated. We argue that three mutually reinforcing processes underlie normalization: (1) institutionalization, where an initial corrupt decision or act becomes embedded in structures and processes and thereby routinized; (2) rationalization, where self-serving ideologies develop to justify and perhaps even valorize corruption; and (3) socialization, where naïve newcomers are induced to view corruption as permissible if not desirable. The model helps explain how otherwise morally upright individuals can routinely engage in corruption without experiencing conflict, how corruption can persist despite the turnover of its initial practitioners, how seemingly rational organizations can engage in suicidal corruption and how an emphasis on the individual as evildoer misses the point that systems and individuals are mutually reinforcing.
The article is here.
The Age of Outrage
Jonathan Haidt
Essay derived from a speech in City Journal
December 17, 2017
Here is an excerpt:
When we look back at the ways our ancestors lived, there’s no getting around it: we are tribal primates. We are exquisitely designed and adapted by evolution for life in small societies with intense, animistic religion and violent intergroup conflict over territory. We love tribal living so much that we invented sports, fraternities, street gangs, fan clubs, and tattoos. Tribalism is in our hearts and minds. We’ll never stamp it out entirely, but we can minimize its effects because we are a behaviorally flexible species. We can live in many different ways, from egalitarian hunter-gatherer groups of 50 individuals to feudal hierarchies binding together millions. And in the last two centuries, a lot of us have lived in large, multi-ethnic secular liberal democracies. So clearly that is possible. But how much margin of error do we have in such societies?
Here is the fine-tuned liberal democracy hypothesis: as tribal primates, human beings are unsuited for life in large, diverse secular democracies, unless you get certain settings finely adjusted to make possible the development of stable political life. This seems to be what the Founding Fathers believed. Jefferson, Madison, and the rest of those eighteenth-century deists clearly did think that designing a constitution was like designing a giant clock, a clock that might run forever if they chose the right springs and gears.
Thankfully, our Founders were good psychologists. They knew that we are not angels; they knew that we are tribal creatures. As Madison wrote in Federalist 10: “the latent causes of faction are thus sown in the nature of man.” Our Founders were also good historians; they were well aware of Plato’s belief that democracy is the second worst form of government because it inevitably decays into tyranny. Madison wrote in Federalist 10 about pure or direct democracies, which he said are quickly consumed by the passions of the majority: “such democracies have ever been spectacles of turbulence and contention . . . and have in general been as short in their lives as they have been violent in their deaths.”
So what did the Founders do? They built in safeguards against runaway factionalism, such as the division of powers among the three branches, and an elaborate series of checks and balances. But they also knew that they had to train future generations of clock mechanics. They were creating a new kind of republic, which would demand far more maturity from its citizens than was needed in nations ruled by a king or other Leviathan.
The full speech is here.
Thursday, January 11, 2018
Is Blended Intelligence the Next Stage of Human Evolution?
Richard Yonck
TED Talk
Published December 8, 2017
What is the future of intelligence? Humanity is still an extremely young species and yet our prodigious intellects have allowed us to achieve all manner of amazing accomplishments in our relatively short time on this planet, most especially during the past couple of centuries or so. Yet, it would be short-sighted of us to assume our species has reached the end of our journey, having become as intelligent as we will ever be. On the contrary, it seems far more likely that if we should survive our “infancy,” there is probably much more time ahead of us than there is looking back. If that’s the case, then our descendants of only a few thousand years from now will probably be very, very different from you and me.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2.
IEEE, 2017.
Introduction
As the use and impact of autonomous and intelligent systems (A/IS) become pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles. These systems have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between people and technology that is needed for its fruitful, pervasive use in our daily lives.
To be able to contribute in a positive, non-dogmatic way, we, the techno-scientific communities, need to enhance our self-reflection; we need to have an open and honest debate around our imaginary, our sets of explicit or implicit values, our institutions, symbols and representations.
Eudaimonia, as elucidated by Aristotle, is a practice that defines human well-being as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.
Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age. Measuring and honoring the potential of holistic economic prosperity should become more important than pursuing one-dimensional goals like productivity increase or GDP growth.
The guidelines are here.
Wednesday, January 10, 2018
Failing better
Erik Angner
BPP Blog, the companion blog to the new journal Behavioural Public Policy
Originally posted June 2, 2017
Cass R. Sunstein’s ‘Nudges That Fail’ explores why some nudges work, why some fail, and what should be done in the face of failure. It’s a useful contribution in part because it reminds us that nudging – roughly speaking, the effort to improve people’s welfare by helping them make better choices without interfering with their liberty or autonomy – is harder than it might seem. When people differ in beliefs, values, and preferences, or when they differ in their responses to behavioral interventions, for example, it may be difficult to design a nudge that benefits at least some without violating anyone’s liberty or autonomy. But the paper is a useful contribution also because it suggests concrete, positive steps that may be taken to help us get better simultaneously at enhancing welfare and at respecting liberty and autonomy.
(cut)
Moreover, even if a nudge is on the net welfare enhancing and doesn’t violate any other values, it does not follow that it should be implemented. As economists are fond of telling you, everything has an opportunity cost, and so do nudges. If whatever resources would be used in the implementation of the nudge could be put to better use elsewhere, we would have reason not to implement it. If we did anyway, we would be guilty of the Econ 101 fallacy of ignoring opportunity costs, which would be embarrassing.
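Angner's opportunity-cost point can be made concrete with a back-of-the-envelope comparison. The sketch below is illustrative only; the option names and welfare figures are invented assumptions, not drawn from the post. It shows how a nudge that is welfare-enhancing in isolation can still be the wrong choice when the same budget would do more good elsewhere:

```python
# Back-of-the-envelope opportunity-cost check for a policy intervention.
# A nudge with a positive net benefit can still be the wrong choice if
# the same resources would yield a larger net benefit elsewhere.

def net_benefit(welfare_gain, cost):
    """Net welfare contribution of an option, in common units."""
    return welfare_gain - cost

def best_use(options):
    """Return the (name, (gain, cost)) pair with the highest net benefit."""
    return max(options.items(), key=lambda kv: net_benefit(*kv[1]))

# Hypothetical figures: (welfare gain, implementation cost), same units.
options = {
    "pension_nudge":  (120, 40),  # net +80
    "direct_subsidy": (200, 90),  # net +110
    "info_campaign":  (60, 30),   # net +30
}

name, _ = best_use(options)
# The pension nudge is welfare-enhancing on its own terms (+80), but
# implementing it instead of the subsidy (+110) forgoes 30 units of value.
```

The point of the sketch is simply that "net welfare enhancing" is a necessary condition, not a sufficient one: the relevant comparison is against the best alternative use of the same resources.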
The blog post is here.
Our enemies are human: that’s why we want to kill them
Tage Rai, Piercarlo Valdesolo, and Jesse Graham
aeon.co
Originally posted December 13, 2017
Here are two excerpts:
What we found was that dehumanising victims predicts support for instrumental violence, but not for moral violence. For example, Americans who saw Iraqi civilians as less human were more likely to support drone strikes in Iraq. In this case, no one wants to kill innocent civilians, but if they die as collateral damage in the pursuit of killing ISIS terrorists, dehumanising them eases our guilt. In contrast, seeing ISIS terrorists as less human predicted nothing about support for drone strikes against them. This is because people want to hurt and kill terrorists. Without their humanity, how could terrorists be guilty, and how could they feel the pain that they deserve?
(cut)
Many people believe that it is only a breakdown in our moral sensibilities that causes violence. To reduce violence, according to this argument, we need only restore our sense of morality by generating empathy toward victims. If we could just see them as fellow human beings, then we would do them no harm. Yet our research suggests that this is untrue. In cases of moral violence, our experiments suggest that it is the engagement of our moral sense, not its disengagement, that often causes aggression. When Myanmar security forces plant landmines at the Bangladesh border in an attempt to kill the Rohingya minorities who are trying to escape the slaughter, the primary driver of their behaviour is not dehumanisation, but rather moral outrage toward an enemy conceptualised as evil, but also completely human.
The article is here.
Tuesday, January 9, 2018
Drug Companies’ Liability for the Opioid Epidemic
Rebecca L. Haffajee and Michelle M. Mello
N Engl J Med 2017; 377:2301-2305
December 14, 2017
DOI: 10.1056/NEJMp1710756
Here is an excerpt:
Opioid products, they alleged, were defectively designed because companies failed to include safety mechanisms, such as an antagonist agent or tamper-resistant formulation. Manufacturers also purportedly failed to adequately warn about addiction risks on drug packaging and in promotional activities. Some claims alleged that opioid manufacturers deliberately withheld information about their products’ dangers, misrepresenting them as safer than alternatives.
These suits faced formidable barriers that persist today. As with other prescription drugs, persuading a jury that an opioid is defectively designed if the Food and Drug Administration approved it is challenging. Furthermore, in most states, a drug manufacturer’s duty to warn about risks is limited to issuing an adequate warning to prescribers, who are responsible for communicating with patients. Finally, juries may resist laying legal responsibility at the manufacturer’s feet when the prescriber’s decisions and the patient’s behavior contributed to the harm. Some individuals do not take opioids as prescribed or purchase them illegally. Companies may argue that such conduct precludes holding manufacturers liable, or at least should reduce damages awards.
One procedural strategy adopted in opioid litigation that can help overcome defenses based on users’ conduct is the class action suit, brought by a large group of similarly situated individuals. In such suits, the causal relationship between the companies’ business practices and the harm is assessed at the group level, with the focus on statistical associations between product use and injury. The use of class actions was instrumental in overcoming tobacco companies’ defenses based on smokers’ conduct. But early attempts to bring class actions against opioid manufacturers encountered procedural barriers. Because of different factual circumstances surrounding individuals’ opioid use and clinical conditions, judges often deemed proposed class members to lack sufficiently common claims.
The article is here.
Dangers of neglecting non-financial conflicts of interest in health and medicine
Wiersma M, Kerridge I, Lipworth W.
Journal of Medical Ethics
Published Online First: 24 November 2017.
doi: 10.1136/medethics-2017-104530
Abstract
Non-financial interests, and the conflicts of interest that may result from them, are frequently overlooked in biomedicine. This is partly due to the complex and varied nature of these interests, and the limited evidence available regarding their prevalence and impact on biomedical research and clinical practice. We suggest that there are no meaningful conceptual distinctions, and few practical differences, between financial and non-financial conflicts of interest, and accordingly, that both require careful consideration. Further, a better understanding of the complexities of non-financial conflicts of interest, and their entanglement with financial conflicts of interest, may assist in the development of a more sophisticated approach to all forms of conflicts of interest.
The article is here.
Monday, January 8, 2018
Advocacy group raises concerns about psychological evaluations on hundreds of defendants
Keith L. Alexander
The Washington Post
Originally published December 14, 2017
A District employee who has conducted mental evaluations on hundreds of criminal defendants as a forensic psychologist has been removed from that role after concerns surfaced about her educational qualifications, according to city officials.
Officials with the District’s Department of Health said Reston N. Bell was not qualified to conduct the assessments without the help or review of a supervisor. The city said it had mistakenly granted Bell, who was hired in 2016, a license to practice psychology, but this month the license was downgraded to “psychology associate.”
Although Bell has a master’s degree in psychology and a doctorate in education, she does not have a PhD in psychology, which led to the downgrade.
The article is here.
Nudging, informed consent and bullshit
William Simkulet
Journal of Medical Ethics
Published Online First: 18 November 2017. doi: 10.1136/medethics-2017-104480
Abstract
Some philosophers have argued that during the process of obtaining informed consent, physicians should try to nudge their patients towards consenting to the option the physician believes best, where a nudge is any influence that is expected to predictably alter a person’s behaviour without (substantively) restricting her options. Some proponents of nudging even argue that it is a necessary and unavoidable part of securing informed consent. Here I argue that nudging is incompatible with obtaining informed consent. I assume informed consent requires that a physician tells her patient the truth about her options and argue that nudging is incompatible with truth-telling. Instead, nudging satisfies Harry Frankfurt’s account of bullshit.
The article is here.
Sunday, January 7, 2018
Are human rights anything more than legal conventions?
John Tasioulas
aeon.co
Originally published April 11, 2017
We live in an age of human rights. The language of human rights has become ubiquitous, a lingua franca used for expressing the most basic demands of justice. Some are old demands, such as the prohibition of torture and slavery. Others are newer, such as claims to internet access or same-sex marriage. But what are human rights, and where do they come from? This question is made urgent by a disquieting thought. Perhaps people with clashing values and convictions can so easily appeal to ‘human rights’ only because, ultimately, they don’t agree on what they are talking about? Maybe the apparently widespread consensus on the significance of human rights depends on the emptiness of that very notion? If this is true, then talk of human rights is rhetorical window-dressing, masking deeper ethical and political divisions.
Philosophers have debated the nature of human rights since at least the 12th century, often under the name of ‘natural rights’. These natural rights were supposed to be possessed by everyone and discoverable with the aid of our ordinary powers of reason (our ‘natural reason’), as opposed to rights established by law or disclosed through divine revelation. Wherever there are philosophers, however, there is disagreement. Belief in human rights left open how we go about making the case for them – are they, for example, protections of human needs generally or only of freedom of choice? There were also disagreements about the correct list of human rights – should it include socio-economic rights, like the rights to health or work, in addition to civil and political rights, such as the rights to a fair trial and political participation?
The article is here.
Saturday, January 6, 2018
The Myth of Responsibility
Raoul Martinez
RSA.org
Originally posted December 7, 2017
Are we wholly responsible for our actions? We don’t choose our brains, our genetic inheritance, our circumstances, our milieu – so how much control do we really have over our lives? Philosopher Raoul Martinez argues that no one is truly blameworthy. Our most visionary scientists, psychologists and philosophers have agreed that we have far less free will than we think, and yet most of society’s systems are structured around the opposite principle – that we are all on a level playing field, and we all get what we deserve.
The 4-minute video is worth watching.
Friday, January 5, 2018
Changing genetic privacy rules may adversely affect research participation
Hayley Peoples
Baylor College of Medicine Blogs
Originally posted May 26, 2017
Do you know your genetic information? Maybe you’ve taken a “23andMe” test because you were curious about your ancestry or health. Maybe it was part of a medical examination. Maybe, like me, you underwent testing and received results as part of a class in college.
Do you ever worry about what could happen if your information landed in the wrong hands?
If you do, you aren’t alone. We’ve previously written about legislation affecting genetic privacy and public resistance to global data sharing, and the dialog about growing genetic privacy concerns only continues.
Wired.com recently ran an interesting piece on the House Health Plan and its approach to pre-existing conditions. While much about how a final, Senate-approved Affordable Care Act repeal and replace plan will address pre-existing conditions is still speculation, it brings up an interesting question – with respect to genetic information, will changing rules about pre-existing conditions have a chilling effect on research participation?
The information is here.
Implementation of Moral Uncertainty in Intelligent Machines
Kyle Bogosian
Minds and Machines
December 2017, Volume 27, Issue 4, pp 591–608
Abstract
The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.
Introduction
Advances in artificial intelligence have led to research into methods by which sufficiently intelligent systems, generally referred to as artificial moral agents (AMAs), can be guaranteed to follow ethically defensible behavior. Successful implementation of moral reasoning may be critical for managing the proliferation of autonomous vehicles, workers, weapons, and other systems as they increase in intelligence and complexity.
Approaches towards moral decision-making generally fall into two camps, “top-down” and “bottom-up” approaches (Allen et al 2005). Top-down morality is the explicit implementation of decision rules into artificial agents. Schemes for top-down decision-making that have been proposed for intelligent machines include Kantian deontology (Arkoudas et al 2005) and preference utilitarianism (Oesterheld 2016). Bottom-up morality avoids reference to specific moral theories by developing systems that can implicitly learn to distinguish between moral and immoral behaviors, such as cognitive architectures designed to mimic human intuitions (Bello and Bringsjord 2013). There are also hybrid approaches which merge insights from the two frameworks, such as one given by Wiltshire (2015).
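Bogosian's computational framework is only described in outline in this excerpt, but the core idea of acting under moral uncertainty can be sketched. In the toy example below, the theory names, credences, and choiceworthiness scores are illustrative assumptions, not taken from the paper; the agent holds a credence in each competing moral theory and picks the action with the highest credence-weighted score rather than committing to any single theory:

```python
# Toy sketch of decision-making under moral uncertainty: rather than
# hard-coding one moral theory, the agent holds a credence (probability)
# in each theory and maximizes credence-weighted choiceworthiness.

def expected_choiceworthiness(action, credences, scores):
    """Credence-weighted average of an action's score across theories."""
    return sum(credences[t] * scores[t][action] for t in credences)

def choose(actions, credences, scores):
    """Pick the action with the highest expected choiceworthiness."""
    return max(actions, key=lambda a: expected_choiceworthiness(a, credences, scores))

# Illustrative inputs (hypothetical numbers, not from the paper):
credences = {"utilitarian": 0.6, "deontological": 0.4}
scores = {
    "utilitarian":   {"divert_trolley": 0.9, "do_nothing": 0.1},
    "deontological": {"divert_trolley": 0.2, "do_nothing": 0.8},
}
actions = ["divert_trolley", "do_nothing"]

best = choose(actions, credences, scores)
# divert_trolley: 0.6*0.9 + 0.4*0.2 = 0.62
# do_nothing:     0.6*0.1 + 0.4*0.8 = 0.38
```

A real system would face the further problems the paper addresses, such as how to put scores from different theories on a common scale; the sketch only shows why encoding uncertainty between theories sidesteps the need to settle the top-down versus bottom-up dispute in advance.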
The article is here.
Thursday, January 4, 2018
Artificial Intelligence Seeks An Ethical Conscience
Tom Simonite
wired.com
Originally published December 7, 2017
Here is an excerpt:
Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.
Hanna Wallach, co-chair of NIPS, cofounder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals, and make AI technology better. “If you have a diversity of perspectives and background you might be more likely to check for bias against different groups,” she says—meaning code that calls black people gorillas would be less likely to reach the public. Wallach also points to behavioral research showing that diverse teams consider a broader range of ideas when solving problems.
Ultimately, AI researchers alone can’t and shouldn’t decide how society puts their ideas to use. “A lot of decisions about the future of this field cannot be made in the disciplines in which it began,” says Terah Lyons, executive director of Partnership on AI, a nonprofit launched last year by tech companies to mull the societal impacts of AI. (The organization held a board meeting on the sidelines of NIPS this week.) She says companies, civic-society groups, citizens, and governments all need to engage with the issue.
The article is here.
Non-disclosing preimplantation genetic diagnosis: Questions, challenges and needs for guidelines
Robert Klitzman
Fertility and Sterility
Originally published December 6, 2017
Consider This:
Non-disclosing Preimplantation Genetic Diagnosis (ND-PGD) is performed, but controversial, raising many questions. It has been used when prospective parents at-risk for mutations highly associated with serious disease (especially Huntington’s disease [HD](1)), do not want to know their mutation-status, but wish to ensure that no mutation-containing embryos are transferred. Physicians would then transfer only mutation-negative embryos, and not tell the patient whether any mutation-positive embryos were identified. In 2002, Stern et al. described using ND-PGD successfully with 10 couples (1).
Pros and cons of non-disclosing PGD
Several advantages and disadvantages have been articulated. Few individuals at-risk for HD want to learn their mutation-status. Caused by an autosomal dominant mutation, the disease lacks treatment, and leads to debilitating neurological and psychiatric symptoms and death, generally in the 4th-5th decade of life. Many at-risk individuals see a mutation-positive test result as a “death sentence,” and only 3%-21% of at-risk adults get tested (e.g. only 3-5% in Sweden).(2)
Though the patient may not be infertile, ND-PGD requires IVF, which has certain risks. Yet many patients may see the procedure’s benefits as outweighing these dangers. Misdiagnoses can also occur, but prenatal confirmatory tests can be performed.
The article is here.
Wednesday, January 3, 2018
Illegal VA policy allows hiring since 2002 of medical workers with revoked licenses
Donovan Slack
USA Today
Originally published December 21, 2017
The Department of Veterans Affairs has allowed its hospitals across the country to hire health care providers with revoked medical licenses for at least 15 years in violation of federal law, a USA TODAY investigation found.
The VA issued national guidelines in 2002 giving local hospitals discretion to hire clinicians after “prior consideration of all relevant facts surrounding” any revocations and as long as they still had a license in one state.
But a federal law passed in 1999 bars the VA from employing any health care worker whose license has been yanked by any state.
Hospital officials at the VA in Iowa City relied on the illegal guidance earlier this year to hire neurosurgeon John Henry Schneider, who had revealed in his application that he had numerous malpractice claims and settlements and Wyoming had revoked his license after a patient death. He still had a license in Montana.
The article is here.
The neuroscience of morality and social decision-making
Keith Yoder and Jean Decety
Psychology, Crime & Law
doi: 10.1080/1068316X.2017.1414817
Abstract
Across cultures humans care deeply about morality and create institutions, such as criminal courts, to enforce social norms. In such contexts, judges and juries engage in complex social decision-making to ascertain a defendant’s capacity, blameworthiness, and culpability. Cognitive neuroscience investigations have begun to reveal the distributed neural networks which interact to implement moral judgment and social decision-making, including systems for reward learning, valuation, mental state understanding, and salience processing. These processes are fundamental to morality, and their underlying neural mechanisms are influenced by individual differences in empathy, caring and justice sensitivity. This new knowledge has important implications in legal settings for understanding how triers of fact reason. Moreover, recent work demonstrates how disruptions within the social decision-making network facilitate immoral behavior, as in the case of psychopathy. Incorporating neuroscientific methods with psychology and clinical neuroscience has the potential to improve predictions of recidivism, future dangerousness, and responsivity to particular forms of rehabilitation.
The article is here.
From the Conclusion section:
Current neuroscience work demonstrates that social decision-making and moral reasoning rely on multiple partially overlapping neural networks which support domain general processes, such as executive control, saliency processing, perspective-taking, reasoning, and valuation. Neuroscience investigations have contributed to a growing understanding of the role of these processes in moral cognition and judgments of blame and culpability, exactly the sorts of judgments required of judges and juries. Dysfunction of these networks can lead to dysfunctional social behavior and a propensity to immoral behavior, as in the case of psychopathy. Significant progress has been made in clarifying which aspects of social decision-making network functioning are most predictive of future recidivism. Psychopathy, in particular, constitutes a complex type of moral disorder and a challenge to the criminal justice system.
Worth reading.....
Tuesday, January 2, 2018
The Neuroscience of Changing Your Mind
Bret Stetka
Scientific American
Originally published on December 7, 2017
Here are two excerpts:
Scientists have long accepted that our ability to abruptly stop or modify a planned behavior is controlled via a single region within the brain’s prefrontal cortex, an area involved in planning and other higher mental functions. By studying other parts of the brain in both humans and monkeys, however, a team from Johns Hopkins University has now concluded that last-minute decision-making is a lot more complicated than previously known, involving complex neural coordination among multiple brain areas. The revelations may help scientists unravel certain aspects of addictive behaviors and understand why accidents like falls grow increasingly common as we age, according to the Johns Hopkins team.
(cut)
Tracking these eye movements and neural action let the researchers resolve the very confusing question of what brain areas are involved in these split-second decisions, says Vanderbilt University neuroscientist Jeffrey Schall, who was not involved in the research. “By combining human functional brain imaging with nonhuman primate neurophysiology, [the investigators] weave together threads of research that have too long been separate strands,” he says. “If we can understand how the brain stops or prevents an action, we may gain ability to enhance that stopping process to afford individuals more control over their choices.”
The article is here.
Votes for the future
Thomas Wells
Aeon.co
Originally published May 8, 2014
Here is an excerpt:
By contrast, future generations must accept whatever we choose to bequeath them, and they have no way of informing us of their values. In this, they are even more helpless than foreigners, on whom our political decisions about pollution, trade, war and so on are similarly imposed without consent. Disenfranchised as they are, such foreigners can at least petition their own governments to tell ours off, or engage with us directly by writing articles in our newspapers about the justice of their cause. The citizens of the future lack even this recourse.
The asymmetry between past and future is more than unfair. Our ancestors are beyond harm; they cannot know if we disappoint them. Yet the political decisions we make today will do more than just determine the burdens of citizenship for our grandchildren. They also concern existential dangers such as the likelihood of pandemics and environmental collapse. Without a presence in our political system, the plight of future citizens who might suffer or gain from our present political decisions cannot be properly weighed. We need to give them a voice.
How could we do that? After all, they can’t actually speak to us. Yet even if we can’t know what future citizens will actually value and believe in, we can still consider their interests, on the reasonable assumption that they will somewhat resemble our own (everybody needs breathable air, for example). Interests are much easier than wishes, and quite suitable for representation by proxies.
So perhaps we should simply encourage current citizens to take up the Burkean perspective and think of their civic duty in a more extended way when casting votes. Could this work?
The article is here.
Monday, January 1, 2018
Leaders Don't Make Deals About Ethics
John Baldoni
Forbes.com
Originally published December 8, 2017
Here is an excerpt:
Partisanship abides in darker recesses of our human nature; it’s about winning at all costs. Partisans comfort themselves that their side is in the right, and therefore, whatever they do to promote it is correct. To them I quote Abraham Lincoln: “my concern is not whether God is on our side; my greatest concern is to be on God’s side, for God is always right.”
Human values do not need to be sanctioned through religious faith. Human values as they relate to morality, equality and dignity are bedrock principles that when cast aside allow aberrant and abhorrent behaviors to flourish. The least among us become the most preyed-upon among us.
Ethics therefore knows no party. The Me Too movement is apolitical; it gives voice to women who have been abused. The preyed upon are beginning to take back what they never should have lost in the first place – their dignity. To argue about which party – or which industry – has the most sexual harassers is a fool’s errand. Sexual harassers exist within every social stratum as well as every political persuasion.
Living by a moral code is putting into practice what you believe is right. That is, you call out men who abuse women – as well as all those who give the abusers sanctuary. Right now, men in powerful positions in the media, business and politics are tumbling like dominoes.
But make no mistake — there are bosses in organizations of every kind who are guilty of sexual harassment and worse. A moral code demands that such men be exposed for their predatory behaviors. It also demands protection for their accusers.
The article is here.
What I Was Wrong About This Year
David Leonhardt
The New York Times
Originally posted December 24, 2017
Here is an excerpt:
But I’ve come to realize that I was wrong about a major aspect of probabilities.
They are inherently hard to grasp. That’s especially true for an individual event, like a war or election. People understand that if they roll a die 100 times, they will get some 1’s. But when they see a probability for one event, they tend to think: Is this going to happen or not?
They then effectively round to 0 or to 100 percent. That’s what the Israeli official did. It’s also what many Americans did when they heard Hillary Clinton had a 72 percent or 85 percent chance of winning. It’s what football fans did in the Super Bowl when the Atlanta Falcons had a 99 percent chance of victory.
And when the unlikely happens, people scream: The probabilities were wrong!
Usually, they were not wrong. The screamers were wrong.
The article is here.
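The excerpt's point, that a one-off probability is not a yes-or-no verdict, is easy to check numerically. The short sketch below (my illustration, not from the article) simulates many one-off events in which the favorite is given a 72 percent chance, matching the Clinton forecast mentioned above, and counts how often the underdog wins anyway; the forecast is "right" precisely because the upset happens roughly 28 times in 100.

```python
import random

def upset_rate(p_favorite, trials=100_000, seed=42):
    """Fraction of simulated one-off events in which the underdog wins,
    given the favorite's forecast probability p_favorite."""
    rng = random.Random(seed)
    upsets = sum(rng.random() >= p_favorite for _ in range(trials))
    return upsets / trials

# A 72% forecast is not a prediction that the favorite wins:
# the underdog still wins roughly 28 of every 100 such events.
print(round(upset_rate(0.72), 2))  # ≈ 0.28

# Even a 99% favorite, like the Falcons in the Super Bowl,
# loses about 1 time in 100 — rare, but not "the model was wrong."
print(round(upset_rate(0.99), 2))  # ≈ 0.01
```

Mentally rounding 72 percent to "will happen" throws away exactly the information the forecast was offering.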