Tom Feilden
BBC.com
Originally posted 22 Feb 17
Here is an excerpt:
The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.
Sadly nothing, it seems, could be further from the truth.
After meticulous research involving painstaking attention to detail over several years (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.
Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.
"It's worrying because replication is supposed to be a hallmark of scientific integrity," says Dr Errington.
Concern over the reliability of the results published in scientific literature has been growing for some time.
According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist's experiments.
Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.
"I had a crisis of confidence. I thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out to be a scientist."
The problem, it turned out, was not with Marcus Munafo's science, but with the way the scientific literature had been "tidied up" to present a much clearer, more robust outcome.
The info is here.
Friday, January 31, 2020
Strength of conviction won’t help to persuade when people disagree
Press release
ucl.ac.uk
Originally posted 16 Dec 19
The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.
“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).
For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.
The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.
Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.
The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.
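The core statistical claim here is an interaction: the partner's wager predicts behavior (and pMFC activity) only on agreement trials. As a rough illustration only, not the study's analysis code, here is a minimal simulation in Python in which every sample size and effect size is invented:

```python
# Minimal simulated sketch (not the study's code): the partner's wager should
# predict wager changes only on agreement trials, i.e., an interaction effect.
import numpy as np

rng = np.random.default_rng(0)
n = 42 * 40                            # participants x trials: invented sizes
agree = rng.integers(0, 2, n)          # 1 = partner agreed, 0 = disagreed
partner_wager = rng.uniform(1, 10, n)  # partner's confidence (wager size)

# Invented ground truth: the partner's wager matters only under agreement
change = 0.5 * agree * partner_wager + rng.normal(0, 1, n)

# OLS fit of change ~ agree + partner_wager + agree:partner_wager
X = np.column_stack([np.ones(n), agree, partner_wager, agree * partner_wager])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
print(f"partner wager effect under disagreement: {beta[2]:+.2f}")  # ~0
print(f"extra effect under agreement:            {beta[3]:+.2f}")  # ~0.5
```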
The info is here.
Thursday, January 30, 2020
Body Maps of Moral Concerns
Atari, M., Davani, A. M., & Dehghani, M.
(2018, December 4).
https://doi.org/10.31234/osf.io/jkewf
Abstract
The somatosensory reaction to different social circumstances has been proposed to trigger conscious emotional experiences. Here, we present a pre-registered experiment in which we examine the topographical maps associated with violations of different moral concerns. Specifically, participants (N = 596) were randomly assigned to scenarios of moral violations, and then drew their subjective somatosensory experience on two 48,954-pixel silhouettes. We demonstrate that bodily representations of different moral violations are slightly different. Further, we demonstrate that violations of moral concerns are felt in different parts of the body, and arguably result in different somatosensory experiences for liberals and conservatives. We also investigate how individual differences in moral concerns relate to bodily maps of moral violations. Finally, we use natural language processing to predict activation in body parts based on the semantic representation of textual stimuli. The findings shed light on the complex relationships between moral violations and somatosensory experiences.
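The 48,954-pixel silhouettes suggest a simple data structure: each drawing is a long binary pixel vector, and a condition-level body map is a pixel-wise average that can then be compared across violation types. A minimal sketch with made-up data and a cosine-similarity comparison (the paper's actual pipeline may differ):

```python
# Illustrative sketch with fake data (not the authors' pipeline): drawings as
# binary pixel vectors, averaged per condition and compared across conditions.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 48954                              # silhouette size from the paper
care = rng.random((100, n_pixels)) < 0.05     # fake drawings: True = pixel marked
purity = rng.random((100, n_pixels)) < 0.05

map_care = care.mean(axis=0)                  # share of participants marking each pixel
map_purity = purity.mean(axis=0)

cos = map_care @ map_purity / (np.linalg.norm(map_care) * np.linalg.norm(map_purity))
print(f"cosine similarity between condition maps: {cos:.3f}")
# Random fake data yields near-identical maps; the paper reports maps that
# differ slightly across moral concerns.
```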
UK ethical consumer spending hits record high, report shows
Rebecca Smithers
guardian.com
Originally posted 29 Dec 19
Ethical consumer spending has hit record levels in the UK, according to a new report that reveals the total market – including food, drinks, clothing, energy and eco-travel – has swelled to over £41bn.
Total ethical spending has risen almost fourfold in the past 20 years and outgrown all UK household expenditure, which has been broadly flat, according to the new study from Co-op.
The convenience retailer’s latest Ethical Consumerism report, which has tracked ethical expenditure year by year over the past two decades (adjusted for inflation), is a barometer of the extent to which UK consumers’ shopping habits reflect their concerns about the environment, animal welfare, social justice and human rights.
While back in 1999 the total size of the market was just £11.2bn, the report (which adjusts for inflation) says that, on a conservative basis, it has mushroomed to £41.1bn today. The average spend on ethical purchases per household has grown from a paltry £202 a year in 1999 to £1,278 in 2018. Over the same 20-year period, total general household expenditure has edged up by around 2% in real terms, according to the Office for National Statistics.
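The quoted figures are easy to sanity-check. A few lines of Python reproduce the "almost fourfold" market growth and the per-household multiple, plus the implied annual growth rate (a derived figure, not one stated in the report):

```python
# Sanity-checking the quoted figures (inflation-adjusted GBP)
market_1999, market_2018 = 11.2, 41.1         # GBP bn
household_1999, household_2018 = 202, 1278    # GBP per household per year
years = 2018 - 1999

print(f"market growth multiple:  {market_2018 / market_1999:.1f}x")  # ~3.7x, "almost fourfold"
print(f"implied market CAGR:     {(market_2018 / market_1999) ** (1 / years) - 1:.1%}")
print(f"per-household multiple:  {household_2018 / household_1999:.1f}x")  # ~6.3x
```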
The info is here.
Wednesday, January 29, 2020
Why morals matter in foreign policy
Joseph Nye
aspistrategist.org.au
Originally published 10 Jan 20
Here is the conclusion:
Good moral reasoning should be three-dimensional, weighing and balancing intentions, consequences and means. A foreign policy should be judged accordingly. Moreover, a moral foreign policy must consider consequences such as maintaining an institutional order that encourages moral interests, in addition to particular newsworthy actions such as helping a dissident or a persecuted group in another country. And it’s important to include the ethical consequences of ‘nonactions’, such as President Harry S. Truman’s willingness to accept stalemate and domestic political punishment during the Korean War rather than follow General Douglas MacArthur’s recommendation to use nuclear weapons. As Sherlock Holmes famously noted, much can be learned from a dog that doesn’t bark.
It’s pointless to argue that ethics will play no role in the foreign policy debates that await this year. We should acknowledge that we always use moral reasoning to judge foreign policy, and we should learn to do it better.
The info is here.
In 2020, let’s stop AI ethics-washing and actually do something
Karen Hao
technologyreview.com
Originally published 27 Dec 19
Here is an excerpt:
Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers—content moderators, data labelers, transcribers—who toil away in often brutal conditions.
But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities—including San Francisco and Oakland, California, and Somerville, Massachusetts—banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies’ use of AI for tracking migrants and for drone surveillance.
Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field’s runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation.
The info is here.
Tuesday, January 28, 2020
Why Misinformation Is About Who You Trust, Not What You Think
Brian Gallagher and Kevin Berger
Nautil.us
Originally published 14 Feb 19
Here is an excerpt:
When it comes to misinformation, ’twas always thus. What’s changed now?
O’Connor: It’s always been the case that humans have been dependent on social ties to gain knowledge and belief. There’s been misinformation and propaganda for hundreds of years. If you’re a governing body, you have interests you’re trying to protect. You want to control what people believe. What’s changed is social media and the structure of communication between people. Now people have tremendous ability to shape who they interact with. Say you’re an anti-vaxxer. You find people online who are also anti-vaxxers and communicate with them rather than people who challenge your beliefs.
The other important thing is that this new structure means that all sorts of influencers—the Russian government, various industry groups, other government groups—have direct access to people. They can communicate with people in a much more personal way. They can pose on Twitter and Facebook as a normal person who you might want to interact with. If you look at Facebook in the lead up to the 2016 election, the Russian Internet Research Agency created animal-lovers groups, Black Lives Matter groups, gun-rights groups, and anti-immigrant groups. They could build trust with people who would naturally be part of these groups. And once they grounded that trust, they could influence them by getting them not to vote or by driving polarization, causing more extreme rhetoric. They can make other people trust them in ways that would have been very difficult without social media.
Weatherall: People tend to trust their friends, their family, people who they share other affinities with. So if the message can look like it’s coming from those people, it can be very effective. Another thing that’s become widespread is the ability to produce easily shareable visual media. The memes we see on Twitter or on Facebook don’t really say anything, they conjure up an emotion—an emotion associated with an ideology or belief you might have. It’s a type of misinformation that supports your beliefs without ever coming out and saying something false or saying anything.
The interview is here.
Examining clinician burnout in health industries
Cynda Hylton Rushton
Johns Hopkins Magazine
Originally posted 26 Dec 19
Here is an excerpt from the interview with Cynda Hylton Rushton:
How much is burnout really affecting clinicians?
Among nurses, 35-45% experience some form of burnout, with comparable rates among other providers and higher rates among physicians. It's important to note that burnout has been viewed as an occupational hazard rather than a mental health diagnosis. It is not a few days or even weeks of depletion or exhaustion. It is the cumulative, long-term distress and suffering that is slowly eroding the workforce and leading to significant job dissatisfaction and many leaving their professions. In some instances, serious health concerns and suicide can result.
What about the impact on patients?
Patient care can suffer when clinicians withdraw or are not fully engaged in their work. Moral distress, long hours, negative work environments, or organizational inefficiencies can all impact a clinician's ability to provide what they feel is quality, safe patient care. Likewise, patients are impacted when health care organizations are unable to attract and retain competent and compassionate clinicians.
What does this mean for nurses?
As the largest sector of the health care professions, nurses have the most patient interaction and are at the center of the health care team. Nurses are integral to helping patients to holistically respond to their health conditions, illness, or injury. If nurses are suffering from burnout and moral distress, the whole care team and the patient will experience serious consequences when nurses' capacities to adapt to the organizational and external pressures are eventually exceeded.
The info is here.
Monday, January 27, 2020
Nurses Continue to Rate Highest in Honesty, Ethics
RJ Reinhart
news.gallup.com
Originally posted 6 Jan 20
For the 18th year in a row, Americans rate the honesty and ethics of nurses highest among a list of professions that Gallup asks U.S. adults to assess annually. Currently, 85% of Americans say nurses' honesty and ethical standards are "very high" or "high," essentially unchanged from the 84% who said the same in 2018. Alternatively, Americans hold car salespeople in the lowest esteem, with 9% saying individuals in this field have high levels of ethics and honesty, similar to the 8% who said the same in 2018.
Nurses are consistently rated higher in honesty and ethics than all other professions that Gallup asks about, by a wide margin. Medical professions in general rate highly in Americans' assessments of honesty and ethics, with at least six in 10 U.S. adults saying medical doctors, pharmacists and dentists have high levels of these virtues. The only nonmedical profession that Americans now hold in a similar level of esteem is engineers, with 66% saying individuals in this field have high levels of honesty and ethics.
Americans' high regard for healthcare professionals contrasts sharply with their assessments of stockbrokers, advertising professionals, insurance salespeople, senators, members of Congress and car salespeople -- all of which garner less than 20% of U.S. adults saying they have high levels of honesty and ethics.
The public's low levels of belief in the honesty and ethical standards of senators and members of Congress may be a contributing factor in poor job approval ratings for the legislature. No more than 30% of Americans have approved of Congress in the past 10 years.
The info is here.
The Character of Causation: Investigating the Impact of Character, Knowledge, and Desire on Causal Attributions
Justin Sytsma
(2019) Preprint
Abstract
There is a growing consensus that norms matter for ordinary causal attributions. This has important implications for philosophical debates over actual causation. Many hold that theories of actual causation should coincide with ordinary causal attributions, yet those attributions often diverge from the theories when norms are involved. There remains substantive debate about why norms matter for causal attributions, however. In this paper, I consider two competing explanations—Alicke’s bias view, which holds that the impact of norms reflects systematic error (suggesting that ordinary causal attributions should be ignored in the philosophical debates), and our responsibility view, which holds that the impact of norms reflects the appropriate application of the ordinary concept of causation (suggesting that philosophical accounts are not analyzing the ordinary concept). I investigate one key difference between these views: the bias view, but not the responsibility view, predicts that “peripheral features” of the agents in causal scenarios—features that are irrelevant to appropriately assessing responsibility for an outcome, such as general character—will also impact ordinary causal attributions. These competing predictions are tested for two different types of scenarios. I find that information about an agent’s character does not impact causal attributions on its own. Rather, when character shows an effect it works through inferences to relevant features of the agent. In one scenario this involves inferences to the agent’s knowledge of the likely result of her action and her desire to bring about that result, with information about knowledge and desire each showing an independent effect on causal attributions.
From the Conclusion:
Alicke’s bias view holds that not only do features of the agent’s mental states matter, such as her knowledge and desires concerning the norm and the outcome, but also peripheral features of the agent whose impact could only reasonably be explained in terms of bias. In contrast, our responsibility view holds that the impact of norms does not reflect bias, but rather that ordinary causal attributions issue from the appropriate application of a concept with a normative component. As such, we predict that while judgments about the agent’s mental states that are relevant to adjudicating responsibility will matter, peripheral features of the agent will only matter insofar as they warrant an inference to other features of the agent that are relevant.
In line with the responsibility view and against the bias view, the results of the studies presented in this paper suggest that information relevant to assessing an agent’s character matters but only when it warrants an inference to a non-peripheral feature, such as the agent’s negligence in the situation or her knowledge and desire with regard to the outcome. Further, the results indicate that information about an agent’s knowledge and desire both impact ordinary causal attributions in the scenario tested. This raises an important methodological issue for empirical work on ordinary causal attributions: researchers need to carefully consider and control for the inferences that participants might draw concerning the agents’ mental states and motivations.
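One way to read the proposed control for those inferences, sketched here with simulated data rather than the paper's own: if character influences causal ratings only through inferred knowledge and desire, its apparent effect should vanish once those inferences are included in the model.

```python
# Simulated illustration only (not the paper's data): if character affects
# causal ratings solely via inferred knowledge and desire, its direct effect
# should drop to ~0 once those inferences are controlled for.
import numpy as np

rng = np.random.default_rng(2)
n = 600
bad_character = rng.integers(0, 2, n).astype(float)

# Invented ground truth: character shifts the inferences, and ratings depend
# only on the inferences themselves
knowledge = 0.8 * bad_character + rng.normal(0, 1, n)
desire = 0.6 * bad_character + rng.normal(0, 1, n)
rating = knowledge + desire + rng.normal(0, 1, n)

def ols(predictors, y):
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("character alone:           ", round(ols([bad_character], rating)[1], 2))  # ~1.4
print("controlling for inferences:",
      round(ols([bad_character, knowledge, desire], rating)[1], 2))              # ~0
```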
The research is here.
Sunday, January 26, 2020
Why Boards Should Worry about Executives’ Off-the-Job Behavior
Harvard Business Review
January–February Issue 2020
Here is an excerpt:
In their most recent paper, the researchers looked at whether executives’ personal legal records—everything from traffic tickets to driving under the influence and assault—had any relation to their tendency to execute trades on the basis of confidential inside information. Using U.S. federal and state crime databases, criminal background checks, and private investigators, they identified firms that had simultaneously employed at least one executive with a record and at least one without a record during the period from 1986 to 2017. This yielded a sample of nearly 1,500 executives, including 503 CEOs. Examining executive trades of company stock, they found that those were more profitable for executives with a record than for others, suggesting that the former had made use of privileged information. The effect was greatest among executives with multiple offenses and those with serious violations (anything worse than a traffic ticket).
Could governance measures curb such activity? Many firms have “blackout” policies to deter improper trading. Because the existence of those policies is hard to determine (few companies publish data on them), the researchers used a common proxy: whether the bulk of trades by a firm’s officers occurred within 21 days after an earnings announcement (generally considered an allowable window). They compared the trades of executives with a record at companies with and without blackout policies, with sobering results: Although the policies mitigated abnormally profitable trades among traffic violators, they had no effect on the trades of serious offenders. The latter were likelier than others to trade during blackouts and to miss SEC reporting deadlines. They were also likelier to buy or sell before major announcements, such as of earnings or M&A, and in the three years before their companies went bankrupt—evidence similarly suggesting they had profited from inside information. “While strong governance can discipline minor offenders, it appears to be largely ineffective for executives with more-serious criminal infractions,” the researchers write.
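The proxy itself is mechanical enough to sketch. Assuming hypothetical trade and earnings tables (the column names and data below are invented, not the researchers'), flagging trades inside the 21-day post-announcement window might look like this in pandas:

```python
# Hypothetical sketch of the 21-day proxy (invented tables and column names)
import pandas as pd

trades = pd.DataFrame({
    "firm": ["A", "A", "B"],
    "trade_date": pd.to_datetime(["2017-02-10", "2017-04-01", "2017-02-20"]),
}).sort_values("trade_date")

earnings = pd.DataFrame({
    "firm": ["A", "B"],
    "announce_date": pd.to_datetime(["2017-02-01", "2017-01-15"]),
}).sort_values("announce_date")

# Attach the most recent announcement at or before each trade, per firm
merged = pd.merge_asof(trades, earnings, left_on="trade_date",
                       right_on="announce_date", by="firm", direction="backward")
merged["in_window"] = (
    merged["trade_date"] - merged["announce_date"] <= pd.Timedelta(days=21)
)
print(merged[["firm", "trade_date", "in_window"]])
```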
The info is here.
Saturday, January 25, 2020
Psychologists Who Waterboarded for C.I.A. to Testify at Guantánamo
Carol Rosenberg
The New York Times
Originally posted 20 Jan 20
Here is an excerpt:
Mr. Mohammed’s co-defendants were subjected to violence, sleep deprivation, dietary manipulation and rectal abuse in the prison network from 2002, when the first of them, Ramzi bin al-Shibh, was captured, to 2006, when all five were transferred to the prison at Guantánamo Bay. They will also be present in the courtroom.
In the black sites, the defendants were kept in solitary confinement, often nude, at times confined to a cramped box in the fetal position, hung by their wrists in painful positions and slammed head first into walls. Those techniques, approved by George W. Bush administration lawyers, were part of a desperate effort to force them to divulge Al Qaeda’s secrets — like the location of Osama bin Laden and whether there were terrorist sleeper cells deployed to carry out more attacks.
A subsequent internal study by the C.I.A. found proponents inflated the intelligence value of those interrogations.
The psychologists were called by lawyers to testify for one of the defendants, Mr. Mohammed’s nephew, Ammar al-Baluchi. All five defense teams are expected to question them about policy and for graphic details of conditions in the clandestine overseas prisons, including one in Thailand that for a time was run by Gina Haspel, now the C.I.A. director.
Mr. al-Baluchi’s lawyer, James G. Connell III, is spearheading an effort to persuade the judge to exclude from the trial the testimony of F.B.I. agents who questioned the defendants at Guantánamo in 2007. It was just months after their transfer there from years in C.I.A. prisons, and the defense lawyers argue that, although there was no overt violence during the F.B.I. interrogations, the defendants were so thoroughly broken in the black sites that they were powerless to do anything but tell the F.B.I. agents what they wanted to hear.
By law, prosecutors can use voluntary confessions only at the military commissions at Guantánamo.
The info is here.
Friday, January 24, 2020
Psychology accused of ‘collective self-deception’ over results
Jack Grove
The Times Higher Education
Originally published 10 Dec 19
Here is an excerpt:
If psychologists are serious about doing research that could make “useful real-world predictions”, rather than conducting highly contextualised studies, they should use “much larger and more complex datasets, experimental designs and statistical models”, Dr Yarkoni advises.
He also suggests that the “sweeping claims” made by many papers bear little relation to their results, maintaining that a “huge proportion of the quantitative inferences drawn in the published psychology literature are so inductively weak as to be at best questionable and at worst utterly insensible”.
Many psychologists were indulging in a “collective self-deception” and should start “acknowledging the fundamentally qualitative nature of their work”, he says, stating that “a good deal of what currently passes for empirical psychology is already best understood as insightful qualitative analysis dressed up as shoddy quantitative science”.
That would mean no longer including “scientific-looking inferential statistics” within papers, whose appearance could be considered an “elaborate rhetorical ruse used to mathematicise people into believing claims they would otherwise find logically unsound”.
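One concrete version of the statistical complaint, reconstructed here as a guess at the kind of argument involved rather than anything taken from the paper: when stimuli are a sample but get modeled as fixed, trial-level tests can wildly overstate the evidence. A small simulation shows a true null being rejected far more often than the nominal 5%:

```python
# A guess at one version of the argument (not the paper's code): when stimuli
# are a sample but are modeled as fixed, a true null gets rejected far more
# often than the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, k_stimuli, n_trials = 1000, 8, 50
false_pos = 0

for _ in range(n_sims):
    # Random stimulus effects; no true difference between conditions
    stim_a = rng.normal(0, 1, k_stimuli)
    stim_b = rng.normal(0, 1, k_stimuli)
    a = (stim_a[:, None] + rng.normal(0, 1, (k_stimuli, n_trials))).ravel()
    b = (stim_b[:, None] + rng.normal(0, 1, (k_stimuli, n_trials))).ravel()
    # Naive trial-level test ignores the clustering by stimulus
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_pos += 1

print(f"false-positive rate: {false_pos / n_sims:.2f} (nominal 0.05)")
```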
The info is here.
How One Person Can Change the Conscience of an Organization
Nicholas W. Eyrich, Robert E. Quinn, and David P. Fessell
Harvard Business Review
Originally published 27 Dec 19
Here is an excerpt:
A single person with a clarity of conscience and a willingness to speak up can make a difference. Contributing to the greater good is a deep and fundamental human need. When a leader, even a mid-level or lower level leader, skillfully brings a voice and a vision, others will follow and surprising things can happen—even culture change on a large scale. While Yamada did not set out to change a culture, his actions were catalytic and galvanized the organization. As news of the new “not for profit” focus of Tres Cantos spread, many of GSK’s top scientists volunteered to work there. Yamada’s voice spoke for many others, offering a clear path and a vision for a more positive future for all.
The info is here.
Thursday, January 23, 2020
Colleges want freshmen to use mental health apps. But are they risking students’ privacy?
Deanna Paul
The Washington Post
Originally posted 2 Jan 20
Here are two excerpts:
TAO Connect is just one of dozens of mental health apps permeating college campuses in recent years. In addition to increasing the bandwidth of college counseling centers, the apps offer information and resources on mental health issues and wellness. But as student demand for mental health services grows, and more colleges turn to digital platforms, experts say universities must begin to consider their role as stewards of sensitive student information and the consequences of encouraging or mandating these technologies.
The rise in student wellness applications arrives as mental health problems among college students have dramatically increased. Three out of 5 U.S. college students experience overwhelming anxiety, and 2 in 5 students reported debilitating depression, according to a 2018 survey from the American College Health Association.
Even so, only about 15 percent of undergraduates seek help at a university counseling center. These apps have begun to fill students’ needs by providing ongoing access to traditional mental health services without barriers such as counselor availability or stigma.
(cut)
“If someone wants help, they don’t care how they get that help,” said Lynn E. Linde, chief knowledge and learning officer for the American Counseling Association. “They aren’t looking at whether this person is adequately credentialed and are they protecting my rights. They just want help immediately.”
Yet she worried that students may be giving up more information than they realize and about the level of coercion a school can exert by requiring students to accept terms of service they otherwise wouldn’t agree to.
“Millennials understand that with the use of their apps they’re giving up privacy rights. They don’t think to question it,” Linde said.
The info is here.
You Are Already Having Sex With Robots
Emma Grey Ellis
wired.com
Originally published 23 Aug 19
Here are two excerpts:
Carnegie Mellon roboticist Hans Moravec has written about emotions as devices for channeling behavior in helpful ways—for example, sexuality prompting procreation. He concluded that artificial intelligences, in seeking to please humanity, are likely to be highly emotional. By this definition, if you encoded an artificial intelligence with the need to please humanity sexually, their urgency to follow their programming constitutes sexual feelings. Feelings as real and valid as our own. Feelings that lead to the thing that feelings, probably, evolved to lead to: sex. One gets the sense that, for some digisexual people, removing the squishiness of the in-between stuff—the jealousy and hurt and betrayal and exploitation—improves their sexual enjoyment. No complications. The robot as ultimate partner. An outcome of evolution.
So the sexbotcalypse will come. It's not scary, it's just weird, and it's being motivated by millennia-old bad habits. Laziness, yes, but also something else. “I don’t see anything that suggests we’re going to buck stereotypes,” says Charles Ess, who studies virtue ethics and social robots at the University of Oslo. “People aren’t doing this out of the goodness of their hearts. They’re doing this to make money.”
(cut)
Technologizing sexual relationships will also fill one of the last blank spots in tech’s knowledge of (ad-targetable) human habits. Brianna Rader—founder of Juicebox, progenitor of Slutbot—has spoken about how difficult it is to do market research on sex. If having sex with robots or other forms of sex tech becomes commonplace, it wouldn’t be difficult anymore. “We have an interesting relationship with privacy in the US,” Kaufman says. “We’re willing to trade a lot of our privacy and information away for pleasures less complicated than an intimate relationship.”
The info is here.
Wednesday, January 22, 2020
Association Between Physician Depressive Symptoms and Medical Errors
Pereira-Lima, K., Mata, D. A., et al.
JAMA Netw Open. 2019; 2(11):e1916097
Abstract
Importance: Depression is highly prevalent among physicians and has been associated with increased risk of medical errors. However, questions regarding the magnitude and temporal direction of these associations remain open in recent literature.
Objective: To provide summary relative risk (RR) estimates for the associations between physician depressive symptoms and medical errors.
Conclusions and Relevance: Results of this study suggest that physicians with a positive screening for depressive symptoms are at higher risk for medical errors. Further research is needed to evaluate whether interventions to reduce physician depressive symptoms could play a role in mitigating medical errors and thus improving physician well-being and patient care.
From the Discussion
Studies have recommended the addition of physician well-being to the Triple Aim of enhancing the patient experience of care, improving the health of populations, and reducing the per capita cost of health care. Results of the present study endorse the Quadruple Aim movement by demonstrating not only that medical errors are associated with physician health but also that physician depressive symptoms are associated with subsequent errors. Given that few physicians with depression seek treatment and that recent evidence has pointed to the lack of organizational interventions aimed at reducing physician depressive symptoms, our findings underscore the need for institutional policies to remove barriers to the delivery of evidence-based treatment to physicians with depression.
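For readers unfamiliar with how summary relative risks are pooled, a standard random-effects approach (DerSimonian–Laird, shown here with invented study values; the paper's exact method may differ) works on the log-RR scale:

```python
# Generic DerSimonian-Laird random-effects pooling on the log-RR scale,
# with invented study values (not the paper's data)
import numpy as np

rr = np.array([1.8, 1.4, 2.1, 1.2])       # per-study relative risks
se = np.array([0.20, 0.15, 0.30, 0.25])   # standard errors of log(RR)

y, v = np.log(rr), se**2
w = 1 / v                                  # fixed-effect (inverse-variance) weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)   # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance estimate

w_re = 1 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
half = 1.96 * np.sqrt(1 / np.sum(w_re))
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - half):.2f}-{np.exp(pooled + half):.2f})")
```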
https://doi.org/10.1001/jamanetworkopen.2019.16097
‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground
Joe McKendrick
Forbes.com
Originally published 22 Dec 19
Here is an excerpt:
Inevitably, “there will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination,” warns Mike Walsh, CEO of Tomorrow, and author of The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, in a recent Harvard Business Review article. “At the very least, trust the algorithmic processes at the heart of your business. Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as ‘the algorithm made me do it.’”
More than legal considerations should drive new thinking about AI ethics. It’s about “maintaining trust between organizations and the people they serve, whether clients, partners, employees, or the general public,” a recent report out of Accenture maintains. The report’s authors, Ronald Sandler and John Basl, both with Northeastern University’s philosophy department, and Steven Tiell of Accenture, state that a well-organized data ethics capacity can help organizations manage the risks and liabilities associated with data misuse and negligence.
“It can also help organizations clarify and make actionable mission and organizational values, such as responsibilities to and respect for the people and communities they serve,” Sandler and his co-authors advocate. A data ethics capability also offers organizations “a path to address the transformational power of data-driven AI and machine learning decision-making in an anticipatory way, allowing for proactive responsible development and use that can help organizations shape good governance, rather than inviting strict oversight.”
The info is here.
Tuesday, January 21, 2020
How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?
AMA J Ethics. 2018;20(9):E864-872.
doi: 10.1001/amajethics.2018.864.
Abstract
Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.
The info is here.
10 Years Ago, DNA Tests Were The Future Of Medicine. Now They’re A Social Network — And A Data Privacy Mess
Peter Aldhous
buzzfeednews.com
Originally posted 11 Dec 19
Here is an excerpt:
But DNA testing can reveal uncomfortable truths, too. Families have been torn apart by the discovery that the man they call “Dad” is not the biological father of his children. Home DNA tests can also be used to show that a relative is a rapist or a killer.
That possibility burst into the public consciousness in April 2018, with the arrest of Joseph James DeAngelo, alleged to be the Golden State Killer responsible for at least 13 killings and more than 50 rapes in the 1970s and 1980s. DeAngelo was finally tracked down after DNA left at the scene of a 1980 double murder was matched to people in GEDmatch who were the killer's third or fourth cousins. Through months of painstaking work, investigators working with the genealogist Barbara Rae-Venter built family trees that converged on DeAngelo.
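For readers curious about the arithmetic behind that kind of match: expected autosomal DNA sharing falls off steeply with relationship distance, which is why a hit at the third-or-fourth-cousin level only narrows the search to a large pool of candidates. Below is a minimal sketch using rough textbook averages for shared centimorgans; these are illustrative reference values, not GEDmatch's actual matching algorithm.

```python
import math

# Rough reference averages for expected autosomal sharing (in centimorgans);
# approximate textbook values, NOT GEDmatch's matching algorithm.
EXPECTED_SHARED_CM = {
    "first cousins": 850.0,
    "second cousins": 212.0,
    "third cousins": 53.0,
    "fourth cousins": 13.0,
}

def likely_relationship(shared_cm: float) -> str:
    """Pick the relationship whose expected sharing is closest on a log scale."""
    return min(
        EXPECTED_SHARED_CM,
        key=lambda rel: abs(math.log(shared_cm) - math.log(EXPECTED_SHARED_CM[rel])),
    )

# A crime-scene sample sharing ~40 cM with a database member suggests roughly
# a third cousin: hundreds of candidate relatives, which is why investigators
# then spend months building family trees that converge on one person.
print(likely_relationship(40.0))  # -> third cousins
```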
Genealogists had long realized that databases like GEDmatch could be used in this way, but had been wary of working with law enforcement — fearing that DNA test customers would object to the idea of cops searching their DNA profiles and rummaging around in their family trees.
But the Golden State Killer’s crimes were so heinous that the anticipated backlash initially failed to materialize. Indeed, a May 2018 survey of more than 1,500 US adults found that 80% backed police using public genealogy databases to solve violent crimes.
“I was very surprised with the Golden State Killer case how positive the reaction was across the board,” CeCe Moore, a genealogist known for her appearances on TV, told BuzzFeed News a couple of months after DeAngelo’s arrest.
The info is here.
Monday, January 20, 2020
Chinese court sentences 'gene-editing' scientist to three years in prison
Huizhong Wu and Lusha Zhan
kfgo.com
Originally posted 29 Dec 19
A Chinese court sentenced the scientist who created the world's first "gene-edited" babies to three years in prison on Monday for illegally practising medicine and violating research regulations, the official Xinhua news agency said.
In November 2018, He Jiankui, then an associate professor at Southern University of Science and Technology in Shenzhen, said he had used gene-editing technology known as CRISPR-Cas9 to change the genes of twin girls to protect them from getting infected with the AIDS virus in the future.
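As background on how CRISPR-Cas9 targeting works at the sequence level: the enzyme is directed to any 20-nucleotide "protospacer" that sits immediately upstream of an NGG PAM motif. The toy sketch below scans a made-up DNA fragment for such sites; the fragment is not the real CCR5 locus, and this illustrates only the general targeting rule, not He's actual procedure.

```python
import re

# The targeting rule: Cas9 cuts where a 20-nt guide-matching "protospacer"
# is immediately followed by an NGG PAM. The fragment below is made up and
# is NOT the CCR5 locus; this sketches the rule, not He's procedure.
def find_cas9_sites(dna: str):
    """Yield (position, protospacer) for each 20-nt site followed by N-G-G."""
    # Lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna):
        yield m.start(), m.group(1)

fragment = "TTGACATCGATTACGCTAGGCTAACGGTTACGATCGATCGAAGGTCCTA"
for pos, protospacer in find_cas9_sites(fragment):
    print(f"PAM at {pos + 20}: guide would match {protospacer}")
```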
The backlash in China and globally about the ethics of his research and work was fast and widespread.
Xinhua said He and his collaborators forged ethical review materials and recruited men with AIDS who were part of a couple to carry out the gene-editing. His experiments, it said, resulted in two women giving birth to three gene-edited babies.
The court also handed lesser sentences to Zhang Renli and Qin Jinzhou, who worked at two unnamed medical institutions, for having conspired with He in his work.
The info is here.
What Is Prudent Governance of Human Genome Editing?
Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.
Abstract
CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which are unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.
Here is an excerpt:
Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is at its beginning stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse.
Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genome (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regard to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.
The info is here.
Sunday, January 19, 2020
A Right to a Human Decision
Aziz Z. Huq
Virginia Law Review, Vol. 105
U of Chicago, Public Law Working Paper No. 713
Abstract
Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.
This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.
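The closing phrase, a "right to a well-calibrated machine decision," has a concrete statistical reading: among cases a model scores at probability p, roughly a fraction p should turn out positive. Below is a minimal sketch of one common check, expected calibration error; the binning scheme and the synthetic scores are illustrative assumptions, not anything from the Article.

```python
import numpy as np

# One concrete reading of "well-calibrated": among cases scored at probability
# p, about a fraction p should actually be positive. Expected calibration
# error (ECE) measures the average gap; scores and outcomes here are synthetic.
def expected_calibration_error(scores, outcomes, n_bins=10):
    scores = np.asarray(scores)
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(scores[mask].mean() - outcomes[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of cases
    return ece

rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = rng.uniform(size=10_000) < p  # outcomes drawn to match the scores exactly
print(f"ECE of a well-calibrated scorer: {expected_calibration_error(p, y):.3f}")
```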
The paper can be downloaded here.
Saturday, January 18, 2020
Could a Rising Robot Workforce Make Humans Less Prejudiced?
Jackson, J., Castelo, N. & Gray, K. (2019).
American Psychologist. (2019)
Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (ΣN = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5-6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and at other times reduce them.
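The mediation claim (robot salience increases panhumanism, which in turn reduces prejudice) rests on the standard product-of-coefficients logic. Here is a sketch on simulated data; it illustrates the generic method with invented effect sizes, not the authors' analysis code.

```python
import numpy as np

# Product-of-coefficients mediation: regress mediator on treatment (a),
# outcome on treatment + mediator (b), then bootstrap the indirect effect a*b.
# All data below are simulated; this is the generic method, not the study's.
rng = np.random.default_rng(1)
n = 1_000
robot_salience = rng.integers(0, 2, n).astype(float)     # 0/1 priming condition
panhumanism = 0.5 * robot_salience + rng.normal(size=n)  # mediator
prejudice = -0.4 * panhumanism + rng.normal(size=n)      # outcome

def last_ols_coef(predictors, y):
    """Coefficient on the last predictor from an OLS fit (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][-1]

def indirect_effect(idx):
    a = last_ols_coef([robot_salience[idx]], panhumanism[idx])
    b = last_ols_coef([robot_salience[idx], panhumanism[idx]], prejudice[idx])
    return a * b

boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(2_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```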
From the General Discussion
An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also increased the most in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.
The research is here.
Friday, January 17, 2020
'DNA is not your destiny': Genetics a poor indicator of health
Nicole Bergot
Edmonton Journal
Originally posted 18 Dec 19
The vast majority of diseases, including many cancers, diabetes, and Alzheimer’s, have a genetic contribution of just five to 10 per cent, according to the meta-analysis of data from studies that examine relationships between common gene mutations, or single nucleotide polymorphisms (SNPs), and different conditions.
“Simply put, DNA is not your destiny, and SNPs are duds for disease prediction,” said study co-author David Wishart, professor in the department of biological sciences and the department of computing science.
But there are exceptions, including Crohn’s disease, celiac disease, and macular degeneration, which have a genetic contribution of approximately 40 to 50 per cent.
“Despite these rare exceptions, it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses,” said Wishart.
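A back-of-the-envelope calculation shows why single SNPs predict so little. Under a simple additive model, a variant with allele frequency p and per-allele effect β on a standardized trait explains about 2p(1−p)β² of its variance; with typical genome-wide association effect sizes, even a hundred such hits sum to only a few per cent. The numbers below are illustrative, not the study's.

```python
# Why single SNPs predict so little: under a simple additive model, a variant
# with allele frequency p and per-allele effect beta (on a standardized trait)
# explains about 2*p*(1-p)*beta**2 of the variance. Numbers are illustrative.
def variance_explained(p: float, beta: float) -> float:
    return 2 * p * (1 - p) * beta**2

# A fairly common variant with a typical small per-allele effect:
one_snp = variance_explained(p=0.3, beta=0.05)
print(f"one SNP: {one_snp:.4%} of trait variance")      # ~0.1%

# Even 100 independent SNPs like this explain only about ten per cent:
print(f"100 such SNPs: {100 * one_snp:.1%}")
```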
The info is here.
Consciousness is real
Massimo Pigliucci
aeon.com
Originally published 16 Dec 19
Here is an excerpt:
Here is where the fundamental divide in philosophy of mind occurs, between ‘dualists’ and ‘illusionists’. Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science, as David Chalmers has been arguing for most of his career, for instance in his book The Conscious Mind (1996).
By embracing the antiscientific position, Chalmers & co are forced to go dualist. Dualism is the notion that physical and mental phenomena are somehow irreconcilable, two different kinds of beasts, so to speak. Classically, dualism concerns substances: according to René Descartes, the body is made of physical stuff (in Latin, res extensa), while the mind is made of mental stuff (in Latin, res cogitans). Nowadays, thanks to our advances in both physics and biology, nobody takes substance dualism seriously anymore. The alternative is something called property dualism, which acknowledges that everything – body and mind – is made of the same basic stuff (quarks and so forth), but that this stuff somehow (notice the vagueness here) changes when things get organised into brains and special properties appear that are nowhere else to be found in the material world. (For more on the difference between property and substance dualism, see Scott Calef’s definition.)
The ‘illusionists’, by contrast, take the scientific route, accepting physicalism (or materialism, or some other similar ‘ism’), meaning that they think – with modern science – not only that everything is made of the same basic kind of stuff, but that there are no special barriers separating physical from mental phenomena. However, since these people agree with the dualists that phenomenal consciousness seems to be spooky, the only option open to them seems to be that of denying the existence of whatever appears not to be physical. Hence the notion that phenomenal consciousness is a kind of illusion.
The essay is here.
Thursday, January 16, 2020
Ethics In AI: Why Values For Data Matter
Marc Teerlink
forbes.com
Originally posted 18 Dec 19
Here is an excerpt:
Data Is an Asset, and It Must Have Values
Already, 22% of U.S. companies have attributed part of their profits to AI and advanced cases of (AI-infused) predictive analytics.
According to a recent study SAP conducted in conjunction with the Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren’t using AI and ML at all — or not using AI well.
One of their secrets: they treat data as an asset, the same way organizations treat inventory, fleet, and manufacturing assets.
They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).
So, do treat data as an asset, because, no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.
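Operationally, "treating data as an asset" can mean keeping a registry in which every dataset has an accountable owner, known lineage, and explicit quality gates, just as physical assets have custodians and inspections. The sketch below is one hypothetical shape for such a record; the fields and the fitness rule are illustrative assumptions, not SAP's actual governance model.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical data-asset registry entry: a named, accountable owner,
# explicit lineage, and quality gates. Fields are illustrative only and
# are NOT SAP's governance model.
@dataclass
class DataAsset:
    name: str
    owner: str                 # accountable executive, not just a team alias
    source_systems: list[str]  # lineage: where the data comes from
    last_quality_check: date
    quality_checks: list[str] = field(default_factory=list)

    def is_fit_for_training(self, max_staleness_days: int = 30) -> bool:
        """Allow model training only if quality checks exist and are current."""
        stale = (date.today() - self.last_quality_check).days > max_staleness_days
        return bool(self.quality_checks) and not stale

churn_features = DataAsset(
    name="customer_churn_features",
    owner="VP Customer Analytics",
    source_systems=["crm", "billing"],
    last_quality_check=date(2019, 12, 1),
    quality_checks=["no_null_ids", "label_balance", "pii_scrubbed"],
)
print(churn_features.is_fit_for_training())
```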
The info is here.
Inaccurate group meta-perceptions drive negative out-group attributions in competitive contexts
Lees, J., Cikara, M.
Nat Hum Behav (2019)
Abstract
Across seven experiments and one survey (N=4282) people consistently overestimated out-group negativity towards the collective behavior of their in-group. This negativity bias in group meta-perception was present across multiple competitive (but not cooperative) intergroup contexts, and appears to be yoked to group psychology more generally; we observed negativity bias for estimation of out-group, anonymized-group, and even fellow in-group members’ perceptions. Importantly, in the context of American politics greater inaccuracy was associated with increased belief that the out-group is motivated by purposeful obstructionism. However, an intervention that informed participants of the inaccuracy of their beliefs reduced negative out-group attributions, and was more effective for those whose group meta-perceptions were more inaccurate. In sum, we highlight a pernicious bias in social judgments of how we believe ‘they’ see ‘our’ behavior, demonstrate how such inaccurate beliefs can exacerbate intergroup conflict, and provide an avenue for reducing the negative effects of inaccuracy.
From the Discussion
Our findings highlight a consistent, pernicious inaccuracy in social perception, along with how these inaccurate perceptions relate to negative attributions towards out-groups. More broadly, inaccurate and overly negative GMPs exist across multiple competitive intergroup contexts, and we find no evidence they differ across the political spectrum. This suggests that there may be many domains of intergroup interaction where inaccurate GMPs could potentially diminish the likelihood of cooperation and instead exacerbate the possibility of conflict. However, our findings also highlight a straightforward manner in which simply informing individuals of their inaccurate beliefs can reduce these negative attributions.
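A negativity bias of this kind is typically scored by comparing what in-group members predict the out-group thinks of them with what out-group members actually report. Here is a minimal sketch on simulated ratings; the 1-7 scale and all numbers are placeholders, not the authors' materials.

```python
import numpy as np

# Scoring a group meta-perception "negativity bias": compare what in-group
# members PREDICT the out-group thinks of them with what out-group members
# ACTUALLY report. Ratings are simulated on a 1-7 scale; this is the generic
# measure, not the authors' materials.
rng = np.random.default_rng(2)

actual_outgroup_view = rng.normal(4.5, 1.0, 500).clip(1, 7)     # how "they" really rate "us"
predicted_outgroup_view = rng.normal(3.2, 1.0, 500).clip(1, 7)  # how "we" think they rate us

bias = predicted_outgroup_view.mean() - actual_outgroup_view.mean()
print(f"meta-perception bias: {bias:+.2f} scale points")  # negative = overestimated negativity
```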
A version of the research can be downloaded here.
Wednesday, January 15, 2020
French Executives Found Responsible for 35 Employees' Deaths by Suicide
Katie Way
vice.com
Originally posted 20 Dec 19
Today, in a landmark case for workers’ rights and workplace accountability, three former executives of the telecommunications company Orange (formerly known as France Télécom) were found guilty of “collective moral harassment” after creating a work environment that was found to have directly contributed to the deaths by suicide of 35 employees. This included, according to NPR, 19 employees who died by suicide between 2008 and 2009, many of whom “left notes blaming the company or who killed themselves at work.”
Why would a company lead a terror campaign against its own workers? Money, of course: The plan was enacted as part of a push to get rid of 22,000 employees in order to counterbalance $50 million in debt incurred after the company privatized—it was formerly a piece of the French government’s Ministry of Posts and Telecommunications, meaning its employees were granted special protection as civil servants that prevented their higher-ups from firing them. According to the New York Times, the executives attempted to solve this dilemma by creating an “atmosphere of fear” and purposefully stoked “severe anxiety” in order to drive workers to quit. Former CEO Didier Lombard, sentenced to four months in jail and a $16,000 fine, reportedly called the strategies part of a plan to get rid of unwanted employees “either through the window or through the door.” Way to say the quiet part loud, Monsieur!
How should we balance morality and the law?
Peter Koch
BCM Blogs
Originally posted 20 Dec 19
I was recently discussing a clinical case with medical students and physicians that involved balancing murky ethical issues and relevant laws. One participant leaned back and said: “Well, if we know the laws, then that’s the end of the story!”
The laws were clear about what ought to (legally) be done, but following the laws in this case would likely produce a bad outcome. We ended up divided about how to proceed with the case, but this discussion raised a bigger question: Exactly how much should we weigh the law in moral deliberations?
The basic distinction between the legal and moral is easy enough to identify. Most people agree that what is legal is not necessarily moral and what is immoral should not necessarily be illegal.
Slavery in the U.S. is commonly used as an example. “Of course,” a good modern citizen will say, “slavery was wrong even when it was legal.” The passing of the 13th Amendment did not make slavery morally wrong; it was wrong already, and the legal structures finally caught up to the moral structures.
There are plenty of acts that are immoral but that should not be illegal. For example, perhaps it is immoral to gossip about your friend’s personal life, but most would agree that this sort of gossip should not be outlawed. The basic distinction between the legal and the moral appears to be simple enough.
Things get trickier, though, when we press more deeply into the matter.
The blog post is here.