Hannes Leroy
forbes.com
Originally posted 30 Jan 20
Here is an excerpt:
For our research into morality we reviewed some 300 studies on moral leadership. We discovered that morality is – generally speaking – a good thing for leadership effectiveness but it is also a double-edged sword about which you need to be careful and smart.
To do this, there are three basic approaches.
First, followers can be inspired by a leader who advocates the highest common good for all and is motivated to contribute to that common good from an expectation of reciprocity (servant leadership; consequentialism).
Second, followers can also be inspired by a leader who advocates the adherence to a set of standards or rules and is motivated to contribute to the clarity and safety this structure imposes for an orderly society (ethical leadership; deontology).
Third and finally, followers can also be inspired by a leader who advocates for moral freedom and corresponding responsibility and is motivated to contribute to this system in the knowledge that others will afford them their own moral autonomy (authentic leadership; virtue ethics).
The info is here.
Saturday, February 29, 2020
Friday, February 28, 2020
Lon Fuller and the Moral Value of the Rule of Law
Murphy, Colleen
Law and Philosophy,
Vol. 24, 2005.
Available at SSRN
It is often argued that the rule of law is only instrumentally morally valuable, valuable when and to the extent that a legal system is used to pursue morally valuable ends. In this paper, I defend Lon Fuller’s view that the rule of law has conditional non-instrumental as well as instrumental moral value. I argue, along Fullerian lines, that the rule of law is conditionally non-instrumentally valuable in virtue of the way a legal system structures political relationships. The rule of law specifies a set of requirements which lawmakers must respect if they are to govern legally. As such, the rule of law restricts the illegal or extra-legal use of power. When a society rules by law, there are clear rules articulating the behavior appropriate for citizens and officials. Such rules ideally determine the particular contours political relationships will take. When the requirements of the rule of law are respected, the political relationships structured by the legal system constitutively express the moral values of reciprocity and respect for autonomy. The rule of law is instrumentally valuable, I argue, because in practice the rule of law limits the kind of injustice which governments pursue. There is in practice a deeper connection between ruling by law and the pursuit of moral ends than advocates of the standard view recognize.
The next part of this paper outlines Lon Fuller’s conception of the rule of law and his explanation of its moral value. The third section illustrates how the Fullerian analysis draws attention to the impact that state-sanctioned atrocities can have upon the institutional functioning of the legal system, and so to their impact on the relationships between officials and citizens that are structured by that institution. The fourth section considers two objections to this account. According to the first, Razian objection, while the Fullerian analysis accurately describes the nature of the requirements of the rule of law, it offers a mistaken account of its moral value. Against my assertion that the rule of law has non-instrumental value, this objection argues that the rule of law is only instrumentally valuable. The second objection grants that the rule of law has non-instrumental moral value but claims that the Fullerian account of the requirements of the rule of law is incomplete.
Slow response times undermine trust in algorithmic (but not human) predictions
E Efendic, P van de Calseyde, & A Evans
PsyArXiv PrePrints
Last edited 22 Jan 20
Abstract
Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one component that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and they are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy and effort is therefore uncorrelated to the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.
General discussion
When are people reluctant to trust algorithm-generated advice? Here, we demonstrate that it depends on the algorithm’s response time. People judged slowly (vs. quickly) generated predictions by algorithms as being of lower quality. Further, people were less willing to use slowly generated algorithmic predictions. For human predictions, we found the opposite: people judged slow human-generated predictions as being of higher quality. Similarly, they were more likely to use slowly generated human predictions.
We find that the asymmetric effects of response time can be explained by different expectations of task difficulty for humans vs. algorithms. For humans, slower responses were congruent with expectations; the prediction task was presumably difficult so slower responses, and more effort, led people to conclude that the predictions were high quality. For algorithms, slower responses were incongruent with expectations; the prediction task was presumably easy so slower speeds, and more effort, were unrelated to prediction quality.
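The interaction the authors describe can be pictured with a small simulation. Everything in the sketch below is a hypothetical placeholder (the effect sizes, the sample size, and the 1-7 rating scale are assumptions, not values from the paper); it only illustrates how judged quality could rise with response time for a human advisor while falling for an algorithm.

```python
# Illustrative simulation of the source-by-response-time interaction.
# All effect sizes and the 1-7 rating scale are hypothetical assumptions,
# not parameters reported by Efendic, van de Calseyde, and Evans.
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical number of ratings per condition

def judged_quality(source: str, slow: bool) -> np.ndarray:
    """Simulate perceived prediction quality on a 1-7 scale."""
    base = 4.0
    if source == "human":
        effect = 0.6 if slow else 0.0   # slower human -> seen as more effortful, higher quality
    else:
        effect = -0.6 if slow else 0.0  # slower algorithm -> seen as less accurate
    ratings = base + effect + rng.normal(0, 1.0, n)
    return np.clip(ratings, 1, 7)

for source in ("human", "algorithm"):
    fast_mean = judged_quality(source, slow=False).mean()
    slow_mean = judged_quality(source, slow=True).mean()
    print(f"{source:9s}  fast: {fast_mean:.2f}   slow: {slow_mean:.2f}")
```

Running the sketch reproduces the crossover pattern in toy form: slow responses raise the human's mean rating and lower the algorithm's.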
The research is here.
Thursday, February 27, 2020
Liar, Liar, Liar
S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20
When we think about dishonesty, we mostly think about the big stuff.
We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.
But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.
"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.
These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?
That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.
The info is here.
There is a 30-minute audio file worth listening to.
The cultural evolution of prosocial religions
Norenzayan, A., et al.
(2016). Behavioral and Brain Sciences, 39, E1.
doi:10.1017/S0140525X14001356
Abstract
We develop a cultural evolutionary theory of the origins of prosocial religions and apply it to resolve two puzzles in human psychology and cultural history: (1) the rise of large-scale cooperation among strangers and, simultaneously, (2) the spread of prosocial religions in the last 10–12 millennia. We argue that these two developments were importantly linked and mutually energizing. We explain how a package of culturally evolved religious beliefs and practices characterized by increasingly potent, moralizing, supernatural agents, credible displays of faith, and other psychologically active elements conducive to social solidarity promoted high fertility rates and large-scale cooperation with co-religionists, often contributing to success in intergroup competition and conflict. In turn, prosocial religious beliefs and practices spread and aggregated as these successful groups expanded, or were copied by less successful groups. This synthesis is grounded in the idea that although religious beliefs and practices originally arose as nonadaptive by-products of innate cognitive functions, particular cultural variants were then selected for their prosocial effects in a long-term, cultural evolutionary process. This framework (1) reconciles key aspects of the adaptationist and by-product approaches to the origins of religion, (2) explains a variety of empirical observations that have not received adequate attention, and (3) generates novel predictions. Converging lines of evidence drawn from diverse disciplines provide empirical support while at the same time encouraging new research directions and opening up new questions for exploration and debate.
The paper is here.
Wednesday, February 26, 2020
Zombie Ethics: Don’t Keep These Wrong Ideas About Ethical Leadership Alive
Bruce Weinstein
Forbes.com
Originally posted 18 Feb 20
Here is an excerpt:
Zombie Myth #1: There are no right and wrong answers in ethics
A simple thought experiment should permanently dispel this myth. Think about a time when you were disciplined or punished for something you firmly believed was unfair. Perhaps you were accused at work of doing something you didn’t do. Your supervisor Mike warned you not to do it again, even though you had plenty of evidence that you were innocent. Even though Mike didn’t fire you, your good reputation has been sullied for no good reason.
Suppose you tell your colleague Janice this story, and she responds, “Well, to you Mike’s response was unfair, but from Mike’s point of view, it was absolutely fair.” What would you say to Janice?
A. “You’re right. There are no right or wrong answers in ethics.”
B. “No, Janice. Mike didn’t have a different point of view. He had a mistaken point of view. There are facts at hand, and Mike refused to consider them.”
Perhaps you believed myth #1 before this incident occurred. Now that you’ve been on the receiving end of a true injustice, you see this myth for what it really is: a zombie idea that needs to go to its grave permanently.
Zombie myth #2: Ethics varies from culture to culture and place to place
It’s tempting to treat this myth as true. For example, bribery is a widely accepted way to do business in many countries. At a speech I gave to commercial pilots, an audience member said that the high-level executives on a recent flight weren’t allowed to disembark until someone “took care of” a customs official. Either they could give him some money under the table and gain entry into the country, or they could leave.
But just because a practice is widely accepted doesn’t mean it is acceptable. That’s why smart businesses prohibit engaging in unfair international business practices, even if it means losing clients.
The info is here.
Ethical and Legal Aspects of Ambient Intelligence in Hospitals
Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699
Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.
One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.
As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.
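To make the basic architecture concrete, here is a minimal sketch of what a computer-vision monitoring loop of this kind might look like. The placeholder model, the "hand_hygiene"-style labels, and the logging behavior are assumptions for illustration only; they do not describe the systems cited in the Viewpoint, and any real deployment would still face the privacy, consent, and liability questions the authors raise.

```python
# Minimal, illustrative ambient-intelligence monitoring loop (not a real system).
# classify_activity() is a hypothetical placeholder for a trained vision model.
import time
import cv2  # OpenCV, used only for video capture in this sketch

def classify_activity(frame) -> str:
    """Placeholder for a trained computer-vision model.
    A real system might return labels such as 'hand_hygiene' or 'patient_mobilized'."""
    return "unknown"

def monitor(camera_index: int = 0, interval_s: float = 1.0) -> None:
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            label = classify_activity(frame)
            if label != "unknown":
                # Log only the derived event, not the raw frame, which reduces
                # (but does not eliminate) the reidentification risk discussed above.
                print(time.time(), label)
            time.sleep(interval_s)
    finally:
        cap.release()
```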
The info is here.
Tuesday, February 25, 2020
Autonomy, mastery, respect, and fulfillment are key to avoiding moral injury in physicians
Simon G Talbot and Wendy Dean
BMJ blogs
Originally posted 16 Jan 20
Here is an excerpt:
We believe that distress is a clinician’s response to multiple competing allegiances—when they are forced to make a choice that transgresses a long standing, deeply held commitment to healing. Doctors today are caught in a double bind between making patients’ needs the top priority (thereby upholding our Hippocratic Oath) and giving precedence to the business and financial frameworks of the healthcare system (insurance, hospital, and health system mandates).
Since our initial publication, we have come to believe that burnout is the end stage of moral injury, when clinicians are physically and emotionally exhausted with battling a broken system in their efforts to provide good care; when they feel ineffective because too often they have met with immovable barriers to good care; and when they depersonalize patients because emotional investment is intolerable when patient suffering is inevitable as a result of system dysfunction. Reconfiguring the healthcare system to focus on healing patients, rebuilding a sense of community and respect among doctors, and demonstrating the alignment of doctors’ goals with those of our patients may be the best way to address the crisis of distress and, potentially, find a way to prevent burnout. But how do we focus the restructuring this involves?
“Moral injury” has been widely adopted by doctors as a description for their distress, as evidenced by its use on social media and in non-academic publications. But what is at the heart of it? We believe that moral injury occurs when the basic elements of the medical profession are eroded. These are autonomy, mastery, respect, and fulfillment, which are all focused around the central principle of purpose.
The info is here.
The Morality of Taking From the Rich and Giving to the Poor
Noah Smith
Bloomberg
Originally posted 11 Feb 20
Here is an excerpt:
Instead, economists can help by trying to translate people’s preferences for fairness, equality and other moral goals into actionable policy. This requires getting a handle on what amount and types of redistribution people actually want. Some researchers now are attempting to do this.
For example, in a new paper, economists Alain Cohn, Lasse Jessen, Marko Klasnja and Paul Smeets, reasoning that richer people have an outsized impact on the political process, use an online survey to measure how wealthy individuals think about redistribution. Their findings were not particularly surprising; people in the top 5% of the income and wealth distributions supported lower taxes and tended to vote Republican.
The authors also performed an online experiment in which some people were allowed to choose to redistribute winnings among other experimental subjects who completed an online task. No matter whether the winnings were awarded based on merit or luck, rich subjects chose less redistribution.
But not all rich subjects. Cohn and his co-authors found that people who grew up wealthy favored redistribution about as much as average Americans. But those with self-made fortunes favored more inequality. Apparently, many people who make it big out of poverty or the middle class believe that everyone should do the same.
This suggests that the U.S. has a dilemma. A dynamic economy creates lots of new companies, which bring great fortunes to the founders. But if Cohn and his co-authors are right, those founders are likely to support less redistribution as a result. So if the self-made entrepreneurs wield political power, as the authors believe, there could be a political trade-off between economic dynamism and redistribution.
The info is here.
Monday, February 24, 2020
Physician Burnout Is Widespread, Especially Among Those in Midcareer
Brianna Abbott
The Wall Street Journal
Originally posted 15 Jan 20
Burnout is particularly pervasive among health-care workers, such as physicians or nurses, researchers say. Risk for burnout among physicians is significantly greater than that of general U.S. working adults, and physicians also report being less satisfied with their work-life balance, according to a 2019 study published in Mayo Clinic Proceedings.
Overall, 42% of the physicians in the new survey, across 29 specialties, reported feeling some sense of burnout, down slightly from 46% in 2015.
The report, published on Wednesday by medical-information platform Medscape, breaks down the generational differences in burnout and how doctors cope with the symptoms that are widespread throughout the profession.
“There are a lot more similarities than differences, and what that highlights is that burnout in medicine right now is really an entire-profession problem,” said Colin West, a professor of medicine at the Mayo Clinic who researches physician well-being. “There’s really no age group, career stage, gender or specialty that’s immune from these issues.”
In recent years, hospitals, health systems and advocacy groups have tried to curb the problem by starting wellness programs, hiring chief wellness officers or attempting to reduce administrative tasks for nurses and physicians.
Still, high rates of burnout persist among the medical community, from medical-school students to seasoned professionals, and more than two-thirds of all physicians surveyed in the Medscape report said that burnout had an impact on their personal relationships.
Nearly one in five physicians also reported that they are depressed, with the highest rate, 18%, reported by Gen Xers.
The info is here.
An emotionally intelligent AI could support astronauts on a trip to Mars
Neel Patel
MIT Technology Review
Originally published 14 Jan 20
Here are two excerpts:
Keeping track of a crew’s mental and emotional health isn’t really a problem for NASA today. Astronauts on the ISS regularly talk to psychiatrists on the ground. NASA ensures that doctors are readily available to address any serious signs of distress. But much of this system is possible only because the astronauts are in low Earth orbit, easily accessible to mission control. In deep space, you would have to deal with lags in communication that could stretch for hours. Smaller agencies or private companies might not have mental health experts on call to deal with emergencies. An onboard emotional AI might be better equipped to spot problems and triage them as soon as they come up.
(cut)
Akin’s biggest obstacles are those that plague the entire field of emotional AI. Lisa Feldman Barrett, a psychologist at Northeastern University who specializes in human emotion, has previously pointed out that the way most tech firms train AI to recognize human emotions is deeply flawed. “Systems don’t recognize psychological meaning,” she says. “They recognize physical movements and changes, and they infer psychological meaning.” Those are certainly not the same thing.
But a spacecraft, it turns out, might actually be an ideal environment for training and deploying an emotionally intelligent AI. Since the technology would be interacting with just the small group of people onboard, says Barrett, it would be able to learn each individual’s “vocabulary of facial expressions” and how they manifest in the face, body, and voice.
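As a rough illustration of the per-person learning Barrett describes, the sketch below fits a separate expression classifier for each crew member. The synthetic feature vectors, the "neutral"/"stressed" labels, and the scikit-learn model are all assumptions made for illustration; the article does not describe Akin's system at this level of detail.

```python
# Illustrative per-person expression classifiers (not Akin's actual method).
# Features and labels are synthetic stand-ins for face/body/voice measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
crew = ["astronaut_a", "astronaut_b"]   # hypothetical crew members
labels = ["neutral", "stressed"]        # hypothetical expression labels

models = {}
for person in crew:
    # Hypothetical training data: 200 feature vectors of length 16 per person,
    # labeled according to that person's own expression "vocabulary".
    X = rng.normal(size=(200, 16))
    y = rng.choice(labels, size=200)
    models[person] = LogisticRegression(max_iter=1000).fit(X, y)

# At runtime, each new observation is routed to that crew member's own model.
new_obs = rng.normal(size=(1, 16))
print(models["astronaut_a"].predict(new_obs))
```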
The info is here.
Sunday, February 23, 2020
Burnout as an ethical issue in psychotherapy.
Simionato, G., Simpson, S., & Reid, C.
Psychotherapy, 56(4), 470–482.
Abstract
Recent studies highlight a range of factors that place psychotherapists at risk of burnout. The aim of this study was to investigate the ethics issues linked to burnout among psychotherapists and to describe potentially effective ways of reducing vulnerability and preventing collateral damage. A purposive critical review of the literature was conducted to inform a narrative analysis. Differing burnout presentations elicit a wide range of ethics issues. High rates of burnout in the sector suggest systemic factors and the need for an ethics review of standard workplace practice. Burnout costs employers and taxpayers billions of dollars annually in heightened presenteeism and absenteeism. At a personal level, burnout has been linked to poorer physical and mental health outcomes for psychotherapists. Burnout has also been shown to interfere with clinical effectiveness and even contribute to misconduct. Hence, the ethical impact of burnout extends to our duty of care to clients and responsibilities to employers. A range of occupational and personal variables have been identified as vulnerability factors. A new 5-P model of prevention is proposed, which combines systemic and individually tailored responses as a means of offering the greatest potential for effective prevention, identification, and remediation. In addition to the significant economic impact and the impact on personal well-being, burnout in psychotherapists has the potential to directly and indirectly affect client care and standards of professional practice. Attending to the ethical risks associated with burnout is a priority for the profession, for service managers, and for each individual psychotherapist.
From the Conclusion:
Burnout is a common feature of unintentional misconduct among psychotherapists, often at the expense of client well-being, therapeutic progress, and successful client outcomes. Clinicians working in spite of burnout also incur personal and economic costs that compromise the principles of competence and beneficence outlined in ethical guidelines. This article has focused on a communitarian approach to identifying, understanding, and responding to the signs, symptoms, and risk factors in an attempt to harness ethical practice and foster successful careers in psychotherapy. The 5-P strength-based model illuminates the positive potential of workplaces that support wellbeing and prioritize ethical practice through providing an individualized responsiveness to the training, professional development, and support needs of staff. Further, in contrast to the majority of the literature that explores organizational factors leading to burnout and ethical missteps, the 5-P model also considers the personal characteristics that may contribute to burnout and the personal action that psychotherapists can take to avoid burnout and unintentional misconduct.
The info is here.
Saturday, February 22, 2020
Hospitals Give Tech Giants Access to Detailed Medical Records
Melanie Evans
The Wall Street Journal
Originally published 20 Jan 20
Here is an excerpt:
Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.
The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.
Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.
“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.
Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.
(cut)
Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.
The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.
The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.
Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.
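The distinction between identifiable and de-identified records can be sketched in code. The field names below are hypothetical, and real HIPAA de-identification (for example, the Safe Harbor method's 18 identifier categories) is considerably broader than this illustration.

```python
# Illustrative removal of direct identifiers from a record before sharing.
# Field names are hypothetical; real HIPAA Safe Harbor de-identification
# covers 18 identifier categories and is more involved than this sketch.
from copy import deepcopy

DIRECT_IDENTIFIERS = {"name", "social_security_number", "address", "phone", "email"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    cleaned = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        cleaned.pop(field, None)
    return cleaned

record = {
    "name": "Jane Doe",
    "social_security_number": "000-00-0000",
    "diagnosis_code": "E11.9",
    "admission_year": 2020,
}
print(deidentify(record))  # keeps clinical fields, drops direct identifiers
```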
The info is here.
Friday, February 21, 2020
Friends or foes: Is empathy necessary for moral behavior?
Jean Decety and Jason M. Cowell
Perspect Psychol Sci. 2014 Sep; 9(4): 525–537.
doi: 10.1177/1745691614545130
Abstract
The past decade has witnessed a flurry of empirical and theoretical research on morality and empathy, as well as increased interest and usage in the media and the public arena. At times, in both popular culture and academia, morality and empathy are used interchangeably, and quite often the latter is considered to play a foundational role for the former. In this article, we argue that, while there is a relationship between morality and empathy, it is not as straightforward as apparent at first glance. Moreover, it is critical to distinguish between the different facets of empathy (emotional sharing, empathic concern, and perspective taking), as each uniquely influences moral cognition and predicts differential outcomes in moral behavior. Empirical evidence and theories from evolutionary biology, developmental, behavioral, and affective and social neuroscience are comprehensively integrated in support of this argument. The wealth of findings illustrates a complex and equivocal relationship between morality and empathy. The key to understanding such relations is to be more precise on the concepts being used, and perhaps abandoning the muddy concept of empathy.
From the Conclusion:
To wrap up on a provocative note, it may be advantageous for the science of morality, in the future, to refrain from using the catch-all term of empathy, which applies to a myriad of processes and phenomena, and as a result yields confusion in both understanding and predictive ability. In both academic and applied domains such as medicine, ethics, law and policy, empathy has become an enticing, but muddy notion, potentially leading to misinterpretation. If ancient Greek philosophy has taught us anything, it is that when a concept is attributed with so many meanings, it is at risk for losing function.
The article is here.
Why Google thinks we need to regulate AI
Sundar Pichai
ft.com
Originally posted 19 Jan 20
Here are two excerpts:
Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.
These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.
(cut)
But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.
Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.
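One concrete form that "testing AI decisions for fairness," mentioned above, can take is comparing a model's positive-decision rates across groups. The sketch below computes a demographic-parity difference on hypothetical decisions; it illustrates one common metric and is not a description of Google's internal tooling or review process.

```python
# Illustrative fairness check: demographic parity difference on hypothetical data.
# This is one simple metric among many, not Google's actual review process.
from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """Return the fraction of positive decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

decisions = [1, 0, 1, 1, 0, 1, 0, 0]                  # hypothetical model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]     # hypothetical group labels
rates = positive_rate_by_group(decisions, groups)
parity_gap = max(rates.values()) - min(rates.values())
print(rates, "demographic parity difference:", parity_gap)
```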
Thursday, February 20, 2020
Harvey Weinstein’s ‘false memory’ defense is not backed by science
Anne DePrince & Joan Cook
The Conversation
Originally posted 10 Feb 20
Here is an excerpt:
In 1996, pioneering psychologist Jennifer Freyd introduced the concept of betrayal trauma. She made plain how forgetting, not thinking about and even mis-remembering an assault may be necessary and adaptive for some survivors. She argued that the way in which traumatic events, like sexual violence, are processed and remembered depends on how much betrayal there is. Betrayal happens when the victim depends on the abuser, such as a parent, spouse or boss. The victim has to adapt day-to-day because they are (or feel) stuck in that relationship. One way that victims can survive is by thinking or remembering less about the abuse or telling themselves it wasn’t abuse.
Since 1996, compelling scientific evidence has shown a strong relationship between amnesia and victims’ dependence on abusers. Psychologists and other scientists have also learned much about the nature of memory, including memory for traumas like sexual assault. What gets into memory and later remembered is affected by a host of factors, including characteristics of the person and the situation. For example, some individuals dissociate during or after traumatic events. Dissociation offers a way to escape the inescapable, such that people feel as if they have detached from their bodies or the environment. It is not surprising to us that dissociation is linked with incomplete memories.
Memory can also be affected by what other people do and say. For example, researchers recently looked at what happened when they told participants not to think about some words that they had just studied. Following that instruction, those who had histories of trauma suppressed more memories than their peers did.
The info is here.
Sharing Patient Data Without Exploiting Patients
McCoy MS, Joffe S, Emanuel EJ.
JAMA. Published online January 16, 2020.
doi:10.1001/jama.2019.22354
Here is an excerpt:
The Risks of Data Sharing
When health systems share patient data, the primary risk to patients is the exposure of their personal health information, which can result in a range of harms including embarrassment, stigma, and discrimination. Such exposure is most obvious when health systems fail to remove identifying information before sharing data, as is alleged in the lawsuit against Google and the University of Chicago. But even when shared data are fully deidentified in accordance with the requirements of the Health Insurance Portability and Accountability Act, reidentification is possible, especially when patient data are linked with other data sets. Indeed, even new data privacy laws such as Europe's General Data Protection Regulation and California's Consumer Privacy Act do not eliminate reidentification risk.
Companies that acquire patient data also accept risk by investing in research and development that may not result in marketable products. This risk is less ethically concerning, however, than that borne by patients. While companies usually can abandon unpromising ventures, patients’ lack of control over data-sharing arrangements makes them vulnerable to exploitation. Patients lack control, first, because they may have no option other than to seek care in a health system that plans to share their data. Second, even if patients are able to authorize sharing of their data, they are rarely given the information and opportunity to ask questions needed to give meaningful informed consent to future uses of their data.
Thus, for the foreseeable future, data sharing will entail ethically concerning risks to patients whose data are shared. But whether these exchanges are exploitative depends on how much benefit patients receive from data sharing.
The info is here.
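The excerpt's point that deidentified records can be reidentified once they are linked with other data sets can be made concrete with a small, entirely hypothetical sketch: joining on quasi-identifiers such as ZIP code, date of birth, and sex re-attaches names to "anonymous" health records. All records, names, and field names below are invented for illustration.

# Hypothetical linkage-attack sketch: quasi-identifiers shared across data
# sets (ZIP, date of birth, sex) can re-attach identities to "deidentified"
# health records.
deidentified_health = [
    {"zip": "17801", "dob": "1984-03-02", "sex": "F", "diagnosis": "F41.1"},
    {"zip": "17870", "dob": "1990-11-17", "sex": "M", "diagnosis": "E11.9"},
]
public_records = [  # e.g., a voter roll or marketing list that includes names
    {"name": "Jane Roe", "zip": "17801", "dob": "1984-03-02", "sex": "F"},
    {"name": "John Doe", "zip": "17870", "dob": "1990-11-17", "sex": "M"},
]

def reidentify(health_rows, identified_rows, keys=("zip", "dob", "sex")):
    index = {tuple(r[k] for k in keys): r["name"] for r in identified_rows}
    matches = []
    for row in health_rows:
        name = index.get(tuple(row[k] for k in keys))
        if name is not None:
            matches.append((name, row["diagnosis"]))
    return matches

print(reidentify(deidentified_health, public_records))
# [('Jane Roe', 'F41.1'), ('John Doe', 'E11.9')]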
Wednesday, February 19, 2020
American Psychological Association Calls for Immediate Halt to Sharing Immigrant Youths' Confidential Psychotherapy Notes with ICE
American Psychological Association
Press Release
Released 17 Feb 20
The American Psychological Association expressed shock and outrage that the federal Office of Refugee Resettlement has been sharing confidential psychotherapy notes with U.S. Immigration and Customs Enforcement to deny asylum to some immigrant youths.
“ORR’s sharing of confidential therapy notes of traumatized children destroys the bond of trust between patient and therapist that is vital to helping the patient,” said APA President Sandra L. Shullman, PhD. “We call on ORR to stop this practice immediately and on the Department of Health and Human Services and Congress to investigate its prevalence. We also call on ICE to release any immigrants who have had their asylum requests denied as a result.”
APA was reacting to a report in The Washington Post focused largely on the case of then-17-year-old Kevin Euceda, an asylum-seeker from Honduras whose request for asylum was granted by a judge, only to have it overturned when lawyers from ICE revealed information he had given in confidence to a therapist at a U.S. government shelter. According to the article, other unaccompanied minors have been similarly detained as a result of ICE’s use of confidential psychotherapy notes. These situations have also been confirmed by congressional testimony since 2018.
Unaccompanied minors who are detained in U.S. shelters are required to undergo therapy, ostensibly to help them deal with trauma and other issues arising from leaving their home countries. According to the Post, ORR entered into a formal memorandum of agreement with ICE in April 2018 to share details about children in its care. The then-head of ORR testified before Congress that the agency would be asking its therapists to “develop additional information” about children during “weekly counseling sessions where they may self-disclose previous gang or criminal activity to their assigned clinician,” the newspaper reported. The agency added two requirements to its public handbook: that arriving children be informed that while it was essential to be honest with staff, self-disclosures could affect their release and that if a minor mentioned anything having to do with gangs or drug dealing, therapists would file a report within four hours to be passed to ICE within one day, the Post said.
"For this administration to weaponize these therapy sessions by ordering that the psychotherapy notes be passed to ICE is appalling,” Shullman added. “These children have already experienced some unimaginable traumas. Plus, these are scared minors who may not understand that speaking truthfully to therapists about gangs and drugs – possibly the reasons they left home – would be used against them.”
How to talk someone out of bigotry
Brian Resnick
vox.com
Originally published 29 Jan 20
Here is an excerpt:
Topping and dozens of other canvassers were a part of that 2016 effort. It was an important study: Not only has social science found very few strategies that work, in experiments, to change minds on issues of prejudice, but even fewer tests of those strategies have occurred in the real world.
Typically, the conversations begin with the canvasser asking the voter for their opinion on a topic, like abortion access, immigration, or LGBTQ rights. Canvassers (who may or may not be members of the impacted community) listen nonjudgmentally. They don’t say if they are pleased or hurt by the response. They are supposed “to appear genuinely interested in hearing the subject ruminate on the question,” as Broockman and Kalla’s latest study instructions read.
The canvassers then ask if the voters know anyone in the affected community, and ask if they relate to the person’s story. If they don’t, and even if they do, they’re asked a question like, “When was a time someone showed you compassion when you really needed it?” to get them to reflect on their experience when they might have felt something similar to the people in the marginalized community.
The canvassers also share their own stories: about being an immigrant, about being a member of the LGBTQ community, or about just knowing people who are.
It’s a type of conversation that’s closer to what a psychotherapist might have with a patient than a typical political argument. (One clinical therapist I showed it to said it sounded a bit like “motivational interviewing,” a technique used to help clients work through ambivalent feelings.) It’s not about listing facts or calling people out on their prejudicial views. It’s about sharing and listening, all the while nudging people to be analytical and think about their shared humanity with marginalized groups.
The info is here.
Tuesday, February 18, 2020
Is it okay to sacrifice one person to save many? How you answer depends on where you’re from.
Sigal Samuel
vox.com
Originally posted 24 Jan 20
Here is an excerpt:
It turns out that people across the board, regardless of their cultural context, give the same response when they’re asked to rank the moral acceptability of acting in each case. They say Switch is most acceptable, then Loop, then Footbridge.
That’s probably because in Switch, the death of the worker is an unfortunate side effect of the action that saves the five, whereas in Footbridge, the death of the large man is not a side effect but a means to an end — and it requires the use of personal force against him.
The info is here.
Can an Evidence-Based Approach Improve the Patient-Physician Relationship?
A. S. Cifu, A. Lembo, & A. M. Davis
JAMA. 2020;323(1):31-32.
doi:10.1001/jama.2019.19427
Here is an excerpt:
Through these steps, the research team identified potentially useful clinical approaches that were perceived to contribute to physician “presence,” defined by the authors as a purposeful practice of “awareness, focus, and attention with the intent to understand and connect with patients.”
These practices were rated by patients and clinicians on their likely effects and feasibility in practice. A Delphi process was used to condense 13 preliminary practices into 5 final recommendations, which were (1) prepare with intention, (2) listen intently and completely, (3) agree on what matters most, (4) connect with the patient’s story, and (5) explore emotional cues. Each of these practices is complex, and the authors provide detailed explanations, including narrative examples and links to outcomes, that are summarized in the article and included in more detail in the online supplemental material.
If implemented in practice, these 5 practices suggested by Zulman and colleagues are likely to enhance patient-physician relationships, which ideally could help improve physician satisfaction and well-being, reduce physician frustration, improve clinical outcomes, and reduce health care costs.
Importantly, the authors also call for system-level interventions to create an environment for the implementation of these practices.
Although the patient-physician interaction is at the core of most physicians’ activities and has led to an entire genre of literature and television programs, very little is actually known about what makes for an effective relationship.
The info is here.
Monday, February 17, 2020
Religion’s Impact on Conceptions of the Moral Domain
S. Levine, and others
PsyArXiv Preprints
Last edited 2 Jan 20
Abstract
How does religious affiliation impact conceptions of the moral domain? Putting aside the question of whether people from different religions agree about how to answer moral questions, here we investigate a more fundamental question: How much disagreement is there across religions about which issues count as moral in the first place? That is, do people from different religions conceptualize the scope of morality differently? Using a new methodology to map out how individuals conceive of the moral domain, we find dramatic differences among adherents of different religions. Mormons and Muslims moralize their religious norms, while Jews do not. Hindus do not seem to make a moral/non-moral distinction at all. These results suggest that religious affiliation has a profound effect on conceptions of the scope of morality.
From the General Discussion:
The results of Study 3 and 3a are predicted by neither Social Domain Theory nor Moral Foundations Theory: It is neither true that secular people and religious people share a common conception of the moral domain (as Social Domain Theory argues), nor that religious morality is expanded beyond secular morality in a uniform manner (as Moral Foundations Theory suggests). When participants in a group did make a moral/non-moral distinction, there was broad agreement that norms related to harm, justice, and rights count as moral norms. However, some religious individuals (such as the Mormon and Muslim participants) also moralized norms from their own religion that are not related to these themes. Meanwhile, others (such as the Jewish participants) acknowledged the special status of their own norms but did not moralize them. Yet others (such as the Hindu participants) made no distinction between the moral and the non-moral.
The research is here.
BlackRock’s New Morality Marks the End for Coal
Nathaniel Bullard
Bloomberg News
Originally posted 17 Jan 20
Here is an excerpt:
In the U.S., the move away from coal was well underway before the $7 trillion asset manager announced its restrictions. Companies have been shutting down coal-fired power plants and setting “transformative responsible energy plans” removing coal from the mix completely, even in the absence of robust federal policies.
U.S. coal consumption in power generation fell below 600 million tons last year. This year, the U.S. Energy Information Administration expects it to fall much further still, below 500 million tons. That’s not only down by more than 50% since 2007, but it would also put coal consumption back to 1978 levels.
That decline is thanks to a massive number of plant retirements, now totaling more than 300 since 2010. The U.S. coal fleet has not had any net capacity additions since 2011. 2015 is the most significant year for coal retirements to date, as a suite of Obama-era air quality standards took effect. 2018 wasn’t far behind, however, and 2019 wasn’t far behind 2018.
The base effect of a smaller number of operational coal plants also means that consumption is declining at an accelerating rate. Using the EIA’s projection for 2020 coal burn in the power sector, year-on-year consumption will decline nearly 15%, the most since at least 1950.
Coal’s decline doesn’t exist in isolation. Most coal in the U.S. travels from mine to plant by rail, so there’s a predictable impact on rail cargoes. A decade ago, U.S. rail carriers shipped nearly 7 million carloads of coal. Last year, that figure was barely 4 million.
The info is here.
Sunday, February 16, 2020
Fast optimism, slow realism? Causal evidence for a two-step model of future thinking
Hallgeir Sjåstad and Roy F. Baumeister
PsyArXiv
Originally posted 6 Jan 20
Abstract
Future optimism is a widespread phenomenon, often attributed to the psychology of intuition. However, causal evidence for this explanation is lacking, and sometimes cautious realism is found. One resolution is that thoughts about the future have two steps: A first step imagining the desired outcome, and then a sobering reflection on how to get there. Four pre-registered experiments supported this two-step model, showing that fast predictions are more optimistic than slow predictions. The total sample consisted of 2,116 participants from USA and Norway, providing 9,036 predictions. In Study 1, participants in the fast-response condition thought positive events were more likely to happen and that negative events were less likely, as compared to participants in the slow-response condition. Although the predictions were optimistically biased in both conditions, future optimism was significantly stronger among fast responders. Participants in the fast-response condition also relied more on intuitive heuristics (CRT). Studies 2 and 3 focused on future health problems (e.g., getting a heart attack or diabetes), in which participants in the fast-response condition thought they were at lower risk. Study 4 provided a direct replication, with the additional finding that fast predictions were more optimistic only for the self (vs. the average person). The results suggest that when people think about their personal future, the first response is optimistic, which only later may be followed by a second step of reflective realism. Current health, income, trait optimism, perceived control and happiness were negatively correlated with health-risk predictions, but did not moderate the fast-optimism effect.
From the Discussion section:
Four studies found that people made more optimistic predictions when they relied on fast intuition rather than slow reflection. Apparently, a delay of 15 seconds is sufficient to enable second thoughts and a drop in future optimism. The slower responses were still "unrealistically optimistic" (Weinstein, 1980; Shepperd et al., 2013), but to a much lesser extent than the fast responses. We found this fast-optimism effect on relative comparison to the average person and isolated judgments of one's own likelihood, in two different languages across two different countries, and in one direct replication. All four experiments were pre-registered, and the total sample consisted of about 2,000 participants making more than 9,000 predictions.
Saturday, February 15, 2020
Influencing the physiology and decisions of groups: Physiological linkage during group decision-making
Thorson, K. R., and others.
(2020). Group Processes & Intergroup Relations.
https://doi.org/10.1177/1368430219890909
Abstract
Many of the most important decisions in our society are made within groups, yet we know little about how the physiological responses of group members predict the decisions that groups make. In the current work, we examine whether physiological linkage from “senders” to “receivers”—which occurs when a sender’s physiological response predicts a receiver’s physiological response—is associated with senders’ success at persuading the group to make a decision in their favor. We also examine whether experimentally manipulated status—an important predictor of social behavior—is associated with physiological linkage. In groups of 5, we randomly assigned 1 person to be high status, 1 low status, and 3 middle status. Groups completed a collaborative decision-making task that required them to come to a consensus on a decision to hire 1 of 5 firms. Unbeknownst to the 3 middle-status members, high- and low-status members surreptitiously were told to each argue for different firms. We measured cardiac interbeat intervals of all group members throughout the decision-making process to assess physiological linkage. We found that the more receivers were physiologically linked to senders, the more likely groups were to make a decision in favor of the senders. We did not find that people were physiologically linked to their group members as a function of their fellow group members’ status. This work identifies physiological linkage as a novel correlate of persuasion and highlights the need to understand the relationship between group members’ physiological responses during group decision-making.
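The abstract defines physiological linkage as a sender's physiological response predicting a receiver's, but the excerpt does not spell out the statistical model the authors used. As a hedged sketch only (not the authors' analysis), one simple way to operationalize linkage between two interbeat-interval series is a lagged correlation: the sender's signal at time t against the receiver's at time t + 1.

# Hedged sketch (not the authors' model): quantify "linkage" between a
# sender's and a receiver's interbeat-interval (IBI) series as the
# correlation between the sender's signal and the receiver's signal one
# step later. The simulated data below are purely illustrative.
import numpy as np

def linkage_score(sender_ibi, receiver_ibi, lag=1):
    """Pearson correlation of sender IBIs at t with receiver IBIs at t + lag (lag >= 1)."""
    s = np.asarray(sender_ibi, dtype=float)[:-lag]
    r = np.asarray(receiver_ibi, dtype=float)[lag:]
    return float(np.corrcoef(s, r)[0, 1])

rng = np.random.default_rng(0)
sender = 800 + rng.normal(0, 30, 200)                      # simulated IBIs in ms
receiver = 0.6 * np.roll(sender, 1) + 320 + rng.normal(0, 20, 200)
print(round(linkage_score(sender, receiver), 2))           # higher -> stronger linkage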
Friday, February 14, 2020
The Moral Self and Moral Duties
J. Everett, J. Skorburg, and J. Savulescu
PsyArXiv
Created on 6 Jan 20
Abstract
Recent research has begun treating the perennial philosophical question, “what makes a person the same over time?” as an empirical question. A long tradition in philosophy holds that psychological continuity and connectedness of memories are at the heart of personal identity. More recent experimental work, following Strohminger & Nichols (2014), has suggested that persistence of moral character, more than memories, is perceived as essential for personal identity. While there is a growing body of evidence supporting these findings, a critique by Starmans & Bloom (2018) suggests that this research program conflates personal identity with mere similarity. To address this criticism, we explore how loss of someone’s morality or memories influence perceptions of identity change, and perceptions of moral duties towards the target of the change. We present participants with a classic ‘body switch’ thought experiment and after assessing perceptions of identity persistence, we present a moral dilemma, asking participants to imagine that one of the patients must die (Study 1) or be left alone in a care home for the rest of their life (Study 2). Our results highlight the importance of the continuity of moral character, suggesting lay intuitions are tracking (something like) personal identity, not just mere similarity.
The research is here.
Judgment and Decision Making
Baruch Fischhoff and Stephen B. Broomell
Annual Review of Psychology
2020 71:1, 331-355
Abstract
The science of judgment and decision making involves three interrelated forms of research: analysis of the decisions people face, description of their natural responses, and interventions meant to help them do better. After briefly introducing the field's intellectual foundations, we review recent basic research into the three core elements of decision making: judgment, or how people predict the outcomes that will follow possible choices; preference, or how people weigh those outcomes; and choice, or how people combine judgments and preferences to reach a decision. We then review research into two potential sources of behavioral heterogeneity: individual differences in decision-making competence and developmental changes across the life span. Next, we illustrate applications intended to improve individual and organizational decision making in health, public policy, intelligence analysis, and risk management. We emphasize the potential value of coupling analytical and behavioral research and having basic and applied research inform one another.
The paper can be downloaded here.
Thursday, February 13, 2020
Groundbreaking Court Ruling Against Insurer Offers Hope in 2020
Katherine G. Kennedy
Psychiatric News
Originally posted 9 Jan 20
Here is an excerpt:
In his 106-page opinion, Judge Spero criticized UBH for using flawed, internally developed, and overly restrictive medical necessity guidelines that favored protecting the financial interests of UBH over medical treatment of its members.
“By a preponderance of the evidence,” Judge Spero wrote, “in each version of the Guidelines at issue in this case the defect is pervasive and results in a significantly narrower scope of coverage than is consistent with generally accepted standards of care.” His full decision can be accessed here.
As of this writing, we are still awaiting Judge Spero’s remedies order (a court-ordered directive that requires specific actions, such as reparations) against UBH. Following that determination, we will know what UBH will be required to do to compensate class members who suffered damages (that is, protracted illness or death) or their beneficiaries as a result of UBH’s denial of their coverage claims.
But waiting for the remedies order does not prevent us from looking for answers to critical questions like these:
- Will Wit. v. UBH impact the insurance industry enough to catalyze widespread reforms in how utilization review guidelines are determined and used?
- How will the 50 offices of state insurance commissioners respond? Will these regulators mandate the use of clinical coverage guidelines that reflect the findings in Wit. v. UBH? Will they tighten their oversight with updated regulations and enforcement actions?
The info is here.
FDA and NIH let clinical trial sponsors keep results secret and break the law
Charles Piller
sciencemag.org
Originally posted 13 Jan 20
For 20 years, the U.S. government has urged companies, universities, and other institutions that conduct clinical trials to record their results in a federal database, so doctors and patients can see whether new treatments are safe and effective. Few trial sponsors have consistently done so, even after a 2007 law made posting mandatory for many trials registered in the database. In 2017, the National Institutes of Health (NIH) and the Food and Drug Administration (FDA) tried again, enacting a long-awaited “final rule” to clarify the law’s expectations and penalties for failing to disclose trial results. The rule took full effect 2 years ago, on 18 January 2018, giving trial sponsors ample time to comply. But a Science investigation shows that many still ignore the requirement, while federal officials do little or nothing to enforce the law.
(cut)
Contacted for comment, none of the institutions disputed the findings of this investigation. In all 4768 trials Science checked, sponsors violated the reporting law more than 55% of the time. And in hundreds of cases where the sponsors got credit for reporting trial results, they have yet to be publicly posted because of quality lapses flagged by ClinicalTrials.gov staff.
The info is here.
Wednesday, February 12, 2020
Judge holds Pa. psychologist in contempt, calls her defiance ‘extraordinary’ in trucker’s case
John Beague
PennLive.com
Originally 18 Jan 20
A federal judge has held a Sunbury psychologist in contempt and sanctioned her $8,288 for failing to comply with a subpoena and a court order in a civil case stemming from a 2016 traffic crash.
U.S. Middle District Judge Matthew W. Brann, in an opinion issued Friday, said he has never encountered the “obstinance” displayed by Donna Pinter of Psychological Services Clinic Inc.
He called Pinter’s defiance “extraordinary” and pointed out that she never objected to the validity of the subpoena or court order and did not provide an adequate excuse.
“She forced the parties and this court to waste significant and limited resources litigating these motions and convening two hearings for what should have been a routine document production,” he wrote.
The defendants sought information about Kenneth Kerlin of Middleburg from Pinter because she has treated him for years and in his suit he claims the crash, which involved two tractor-trailers, has caused him mental suffering.
The info is here.
Empirical Work in Moral Psychology
Joshua May
Routledge Encyclopedia of Philosophy
Taylor and Francis
Originally published in 2017
Abstract
How do we form our moral judgments, and how do they influence behaviour? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.
Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will, as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.
Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as an additional tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.
The info is here.
Tuesday, February 11, 2020
How to build ethical AI
Carolyn Herzog
thehill.com
Originally posted 18 Jan 20
Here is an excerpt:
Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.
One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and control for both intentional and inherent bias.
This leads back to transparency.
A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?
Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?
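As a rough illustration of what an "understandable" algorithmic decision could look like, here is a minimal sketch of a fully transparent scoring model that reports each factor's contribution alongside its decision. The factor names, weights, and threshold are hypothetical, chosen only for illustration; they are not taken from the article or from any real system.

```python
# Illustrative only: a toy, fully transparent loan-approval score.
# All factor names, weights, and the threshold below are hypothetical.

FACTOR_WEIGHTS = {
    "payment_history_score": 0.5,   # each weight is chosen by a human designer
    "income_to_debt_ratio": 0.3,
    "years_of_credit_history": 0.2,
}
APPROVAL_THRESHOLD = 0.6


def score_applicant(factors: dict[str, float]) -> tuple[bool, list[str]]:
    """Return an approve/deny decision plus a plain-language explanation.

    Each factor's contribution is reported separately, so a reviewer can
    see which inputs drove the outcome and can question the weights.
    """
    explanation = []
    total = 0.0
    for name, weight in FACTOR_WEIGHTS.items():
        value = factors.get(name, 0.0)          # values assumed scaled to 0..1
        contribution = weight * value
        total += contribution
        explanation.append(f"{name}: {value:.2f} x weight {weight:.2f} = {contribution:.2f}")
    explanation.append(f"total score {total:.2f} vs threshold {APPROVAL_THRESHOLD}")
    return total >= APPROVAL_THRESHOLD, explanation


if __name__ == "__main__":
    approved, reasons = score_applicant({
        "payment_history_score": 0.9,
        "income_to_debt_ratio": 0.5,
        "years_of_credit_history": 0.4,
    })
    print("approved" if approved else "denied")
    print("\n".join(reasons))
```

Real AI systems are far more complex than a weighted checklist, but the same questions apply: which factors go in, how they are weighted, and who chose them.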
The info is here.
The Americans dying because they can't afford medical care
Michael Sainato
theguardian.com
Originally posted 7 Jan 2020
Here is an excerpt:
Finley is one of millions of Americans who avoid medical treatment due to the costs every year.
A December 2019 poll conducted by Gallup found 25% of Americans say they or a family member have delayed medical treatment for a serious illness due to the costs of care, and an additional 8% report delaying medical treatment for less serious illnesses. A study conducted by the American Cancer Society in May 2019 found 56% of adults in America report having at least one medical financial hardship, and researchers warned the problem is likely to worsen unless action is taken.
Dr Robin Yabroff, lead author of the American Cancer Society study, said last month’s Gallup poll finding that 25% of Americans were delaying care was “consistent with numerous other studies documenting that many in the United States have trouble paying medical bills”.
US spends the most on healthcare
Despite millions of Americans delaying medical treatment due to the costs, the US still spends the most on healthcare of any developed nation in the world, while covering fewer people and achieving worse overall health outcomes. A 2017 analysis found the United States ranks 24th globally in achieving health goals set by the United Nations. In 2018, $3.65tn was spent on healthcare in the United States, and these costs are projected to grow at an annual rate of 5.5% over the next decade.
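As a quick sanity check on what a 5.5% annual growth rate implies for the $3.65tn figure, here is a minimal compound-growth sketch (treating "the next decade" as ten years; the outputs are rough approximations, not published projections):

```python
# Compound growth: spending_n = spending_0 * (1 + rate) ** years
base_spending_tn = 3.65   # US healthcare spending in 2018, in trillions of dollars
annual_growth = 0.055     # projected annual growth rate cited in the article

for years in (1, 5, 10):
    projected = base_spending_tn * (1 + annual_growth) ** years
    print(f"after {years:2d} year(s): about ${projected:.2f}tn")

# Approximate output:
# after  1 year(s): about $3.85tn
# after  5 year(s): about $4.77tn
# after 10 year(s): about $6.23tn
```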
The info is here.
Monday, February 10, 2020
Can Robots Reduce Racism And Sexism?
Kim Elsesser
Forbes.com
Originally posted 16 Jan 20
Robots are becoming a regular part of our workplaces, serving as supermarket cashiers and building our cars. More recently they’ve been tackling even more complicated tasks like driving and sensing emotions. Estimates suggest that about half of the work humans currently do will be automated by 2055, but there may be a silver lining to the loss of human jobs to robots. New research indicates that robots at work can help reduce prejudice and discrimination.
Apparently, just thinking about robot workers leads people to think they have more in common with other human groups, according to research published in American Psychologist. When the study participants’ awareness of robot workers increased, they became more accepting of immigrants and people of a different religion, race, and sexual orientation.
Basically, the robots reduced prejudice by highlighting the existence of a group that is not human. The study authors, Joshua Conrad Jackson, Noah Castelo, and Kurt Gray, summarized, “The large differences between humans and robots may make the differences between humans seem smaller than they normally appear. Christians and Muslims have different beliefs, but at least both are made from flesh and blood; Latinos and Asians may eat different foods, but at least they eat.” Instead of categorizing people by race or religion, thinking about robots made participants more likely to think of everyone as belonging to one human category.
The info is here.
The medications that change who we are
Zaria Gorvett
BBC.com
Originally published 8 Jan 20
Here are two excerpts:
According to Golomb, this is typical – in her experience, most patients struggle to recognise their own behavioural changes, let alone connect them to their medication. In some instances, the realisation comes too late: the researcher was contacted by the families of a number of people, including an internationally renowned scientist and a former editor of a legal publication, who took their own lives.
We’re all familiar with the mind-bending properties of psychedelic drugs – but it turns out ordinary medications can be just as potent. From paracetamol (known as acetaminophen in the US) to antihistamines, statins, asthma medications and antidepressants, there’s emerging evidence that they can make us impulsive, angry, or restless, diminish our empathy for strangers, and even manipulate fundamental aspects of our personalities, such as how neurotic we are.
(cut)
Research into these effects couldn’t come at a better time. The world is in the midst of a crisis of over-medication, with the US alone buying up 49,000 tonnes of paracetamol every year – equivalent to about 298 paracetamol tablets per person – and the average American consuming $1,200 worth of prescription medications over the same period. And as the global population ages, our drug-lust is set to spiral even further out of control; in the UK, one in 10 people over the age of 65 already takes eight medications every week.
How are all these medications affecting our brains? And should there be warnings on packets?
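The "about 298 paracetamol tablets per person" figure can be checked with simple unit conversion. The sketch below assumes a standard 500 mg tablet and a US population of roughly 330 million; both assumptions are mine, not the article's:

```python
# Rough check of the per-person paracetamol figure.
# Assumptions (not from the article): 500 mg per tablet, ~330 million people in the US.
tonnes_per_year = 49_000
grams_per_tonne = 1_000_000
mg_per_tablet = 500
us_population = 330_000_000

total_mg = tonnes_per_year * grams_per_tonne * 1_000   # tonnes -> grams -> milligrams
tablets = total_mg / mg_per_tablet
tablets_per_person = tablets / us_population

print(f"{tablets:.2e} tablets per year")                      # ~9.8e10 tablets
print(f"about {tablets_per_person:.0f} tablets per person")   # ~297, close to the article's 298
```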
The info is here.
Sunday, February 9, 2020
The Ethical Practice of Psychotherapy: Clearly Within Our Reach
Jeff Barnett
Psychotherapy, 56(4), 431-440
http://dx.doi.org/10.1037/pst0000272
Abstract
This introductory article to the special section on ethics in psychotherapy highlights the challenges and ethical dilemmas psychotherapists regularly face throughout their careers, and the limits of the American Psychological Association Ethics Code in offering clear guidance for how specifically to respond to each of these situations. Reasons for the Ethics Code’s naturally occurring limitations are shared. The role of ethical decision-making, the use of multiple sources of guidance, and the role of consultation with colleagues to augment and support the psychotherapist’s professional judgment are illustrated. Representative ethics challenges in a range of areas of practice are described, with particular attention given to tele-mental health and social media, interprofessional practice and collaboration with medical professionals, and self-care and the promotion of wellness. Key recommendations are shared to promote ethical conduct and to resolve commonly occurring ethical dilemmas in each of these areas of psychotherapy practice. Each of the six articles that follow in this special section on ethics in psychotherapy is introduced, and its main points are summarized.
Here is an excerpt:
Yet, the ethical practice of psychotherapy is complex and multifaceted. This is true as well for psychotherapy research, the supervision of psychotherapy by trainees, and all other professional roles in which psychotherapists may serve. Psychotherapists engage in complex and challenging work in a wide range of practice settings, with a diverse range of clients/patients with highly individualized treatment needs, histories, and circumstances, using a plethora of possible treatment techniques and strategies. Each possible combination of these factors can yield a range of complexities, often presenting psychotherapists with challenges and situations that may not have been anticipated and that tax the psychotherapist’s ability to choose the correct or most appropriate course of action. In such circumstances, ethical dilemmas (situations in which no right or correct course of action is readily apparent and where multiple factors may influence or impact one’s decision on how to proceed) are common. Knowing how to respond to these challenges and dilemmas is of paramount importance for psychotherapists so that we may fulfill our overarching obligations to our clients and all others we serve in our professional roles.