Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, October 21, 2019

An ethicist weighs in on our moral failure to act on climate change

Monique Deveaux
The Conversation
Originally published September 26, 2019

Here is an excerpt:

This call to collective moral and political responsibility is exactly right. As individuals, we can all be held accountable for helping to stop the undeniable environmental harms around us and the catastrophic threat posed by rising levels of CO2 and other greenhouse gases. Those of us with a degree of privilege and influence have an even greater responsibility to assist and advocate on behalf of those most vulnerable to the effects of global warming.

This group includes children everywhere whose futures are uncertain at best, terrifying at worst. It also includes those who are already suffering from severe weather events and rising water levels caused by global warming, and communities dispossessed by fossil fuel extraction. Indigenous peoples around the globe whose lands and water systems are being confiscated and polluted in the search for ever more sources of oil, gas and coal are owed our support and assistance. So are marginalized communities displaced by mountaintop removal and destructive dam energy projects, climate refugees and many others.

The message of climate activists is that we can't fulfill our responsibilities simply by making green choices as consumers or expressing support for their cause. The late American political philosopher Iris Young thought that we could only discharge our "political responsibility for injustice," as she put it, through collective political action.

The interests of the powerful, she warned, conflict with the political responsibility to take actions that challenge the status quo—but which are necessary to reverse injustices.

As the striking school children and older climate activists everywhere have repeatedly pointed out, political leaders have so far failed to enact the carbon emissions reduction policies that are so desperately needed. Despite UN Secretary General António Guterres' sombre words of warning at the Climate Action Summit, the UN is largely powerless in the face of governments, such as China and the U.S., that refuse to enact meaningful carbon-reducing policies.

The info is here.

Moral Judgment as Categorization

Cillian McHugh and others
PsyArXiv
Originally posted September 17, 2019

Abstract

We propose that the making of moral judgments is an act of categorization; people categorize events, behaviors, or people as ‘right’ or ‘wrong’. This approach builds on the currently dominant dual-processing approach to moral judgment in the literature, providing important links to developmental mechanisms in category formation, while avoiding recently developed critiques of dual-systems views. Stable categories are the result of skill in making context-relevant categorizations. People learn that various objects (events, behaviors, people, etc.) can be categorized as ‘right’ or ‘wrong’. Repetition and rehearsal then result in these categorizations becoming habitualized. According to this skill-formation account of moral categorization, the learning, and subsequent habitualization, of moral categories occur as part of goal-directed activity and are sensitive to various contextual influences. Reviewing the literature, we highlight the essential similarity of categorization principles and the processes of moral judgment. Using a categorization framework, we provide an overview of moral category formation as a basis for moral judgments. The implications for our understanding of the making of moral judgments are discussed.

Conclusion

We propose a revisiting of the categorization approach to the understanding of moral judgment proposed by Stich (1993). This approach, in providing a coherent account of the emergence of stability in the formation of moral categories, provides an account of the emergence of moral intuitions. This account predicts that emergent stable moral intuitions will mirror real-world social norms or collectively agreed moral principles. It is also possible that the emergence of moral intuitions can be informed by prior reasoning, allowing for the so-called “intelligence” of moral intuitions (e.g., Pizarro & Bloom, 2003; Royzman, Kim, & Leeman, 2015). This may even allow the traditionally opposing rationalist and intuitionist positions (e.g., Fine, 2006; Haidt, 2001; Hume, 2000/1748; Kant, 1959/1785; Kennett & Fine, 2009; Kohlberg, 1971; Nussbaum & Kahan, 1996; Cameron et al., 2013; Prinz, 2005; Pizarro & Bloom, 2003; Royzman et al., 2015; see also Mallon & Nichols, 2010, p. 299) to be integrated. In addition, the account of the emergence of moral intuitions described here is also consistent with discussions of the emergence of moral heuristics (e.g., Gigerenzer, 2008; Sinnott-Armstrong, Young, & Cushman, 2010).

The research is here.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., and others (2015)
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.

Conclusion

Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals the diversity of approaches that (indirectly) reveals the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure.  One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle.  Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Saturday, October 19, 2019

Forensic Clinicians’ Understanding of Bias

Tess Neal, Nina MacLean, Robert D. Morgan,
and Daniel C. Murrie
Psychology, Public Policy, and Law, 
Sep 16, 2019, No Pagination Specified

Abstract:

Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly-selected licensed psychologists with forensic interests to examine a) their familiarity with and understanding of cognitive biases, b) their self-reported strategies to mitigate bias, and c) the relation of a and b to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.

Here is the conclusion:

These findings (along with Neal & Brodsky’s, 2016) suggest that forensic clinicians are in need of additional training, not only to recognize biases but to begin to effectively mitigate harm from them. For example, in predoctoral (e.g., internship) and postdoctoral (fellowship) settings, didactic training could address what bias is, how to recognize it, and strategies for minimizing it. Additionally, supervisors could address identifying and reducing bias as a regular part of supervision (e.g., by including this as part of case conceptualization). However, further research is needed to determine the types of training and workflow strategies that best reduce bias. Future studies should focus on experimentally examining the presence of biases and ways to mitigate their effects in forensic evaluations.

The research is here.

Friday, October 18, 2019

Code of Ethics Can Guide Responsible Data Use

Katherine Noyes
The Wall Street Journal
Originally posted September 26, 2019

Here is an excerpt:

Associated with these exploding data volumes are plenty of practical challenges to overcome—storage, networking, and security, to name just a few—but far less straightforward are the serious ethical concerns. Data may promise untold opportunity to solve some of the largest problems facing humanity today, but it also has the potential to cause great harm due to human negligence, naivety, and deliberate malfeasance, Patil pointed out. From data breaches to accidents caused by self-driving vehicles to algorithms that incorporate racial biases, “we must expect to see the harm from data increase.”

Health care data may be particularly fraught with challenges. “MRIs and countless other data elements are all digitized, and that data is fragmented across thousands of databases with no easy way to bring it together,” Patil said. “That prevents patients’ access to data and research.” Meanwhile, even as clinical trials and numerous other sources continually supplement data volumes, women and minorities remain chronically underrepresented in many such studies. “We have to reboot and rebuild this whole system,” Patil said.

What the world of technology and data science needs is a code of ethics—a set of principles, akin to the Hippocratic Oath, that guides practitioners’ uses of data going forward, Patil suggested. “Data is a force multiplier that offers tremendous opportunity for change and transformation,” he explained, “but if we don’t do it right, the implications will be far worse than we can appreciate, in all sorts of ways.”

The info is here.

The Koch-backed right-to-try law has been a bust, but still threatens our health

Michael Hiltzik
The Los Angeles Times
Originally posted September 17, 2019

The federal right-to-try law, signed by President Trump in May 2018 as a sop to right-wing interests, including the Koch brothers network, always was a cruel sham perpetrated on sufferers of intractably fatal diseases.

As we’ve reported, the law was promoted as a compassionate path to experimental treatments for those patients — but in fact was a cynical ploy aimed at emasculating the Food and Drug Administration in a way that would undermine public health and harm all patients.

Now that a year has passed since the law’s enactment, the assessments of how it has functioned are beginning to flow in. As NYU bioethicist Arthur Caplan observed to Ed Silverman’s Pharmalot blog, “the right to try remains a bust.”

His judgment is seconded by the veteran pseudoscience debunker David Gorski, who writes: “Right-to-try has been a spectacular failure thus far at getting terminally ill patients access to experimental drugs.”

That should come as no surprise, Gorski adds, because “right-to-try was never about helping terminally ill patients. ... It was always about ideology more than anything else. It was always about weakening the FDA’s ability to regulate drug approval.”

The info is here.

Thursday, October 17, 2019

AI ethics and the limits of code(s)

Geoff Mulgan
nesta.org.uk
Originally published September 16, 2019

Here is an excerpt:

1. Ethics involve context and interpretation - not just deduction from codes.

Too much writing about AI ethics uses a misleading model of what ethics means in practice. It assumes that ethics can be distilled into principles from which conclusions can then be deduced, like a code. The last few years have brought a glut of lists of principles (including some produced by colleagues at Nesta). Various overviews have been attempted in recent years. A recent AI Ethics Guidelines Global Inventory collects over 80 different ethical frameworks. There’s nothing wrong with any of them and all are perfectly sensible and reasonable. But this isn’t how most ethical reasoning happens. The lists assume that ethics is largely deductive, when in fact it is interpretive and context specific, as is wisdom. One basic reason is that the principles often point in opposite directions - for example, autonomy, justice and transparency. Indeed, this is also the lesson of medical ethics over many decades. Intense conversation about specific examples, working through difficult ambiguities and contradictions, counts for a lot more than generic principles.

The info is here.

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and the use of data in the past has consistently been “grab it” and figure out a way to use it and monetize it later. Today’s consumers are becoming more educated and savvy about how companies are collecting, using and monetizing their data; they are starting to make buying decisions based on privacy considerations and complaining to regulators and lawmakers about how the tech industry is using their data without their control or authorization.

Although consumer education is slowly deepening, data privacy laws, both internationally and in the U.S., are starting to address consumers’ concerns about the vast amount of individually identifiable data about them that is collected, used and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators and lawmakers are requiring them to do so. But tech companies should consider treating data ethics as a fundamental core value of the company’s mission, and should determine how it will be addressed in their corporate governance structure.

The info is here.

Wednesday, October 16, 2019

Birmingham psychologist defrauded state Medicaid of more than $1.5 million, authorities say

Carol Robinson
al.com
Originally published August 15, 2019

A Birmingham psychologist has been charged with defrauding the Alabama Medicaid Agency of more than $1 million by filing false claims for counseling services that were not provided.

Sharon D. Waltz, 50, has agreed to plead guilty to the charge and pay restitution in the amount of $1.5 million, according to a joint announcement Thursday by Northern District of Alabama U.S. Attorney Jay Town, Department of Health and Human Services -Office of Inspector General Special Agent Derrick L. Jackson and Alabama Attorney General Steve Marshall.

“The greed of this defendant deprived mental health care to many at-risk young people in Alabama, with the focus on profit rather than the efficacy of care,” Town said. “The costs are not just monetary but have social and health impacts on the entire Northern District. This prosecution, and this investigation, demonstrates what is possible when federal and state law enforcement agencies work together.”

The info is here.

Tribalism is Human Nature

Clark, C., Liu, B., Winegard, B., & Ditto, P. (2019).
Current Directions in Psychological Science.
https://doi.org/10.1177/0963721419862289

Abstract

Humans evolved in the context of intense intergroup competition, and groups composed of loyal members more often succeeded than those that were not. Therefore, selective pressures have consistently sculpted human minds to be "tribal," and group loyalty and concomitant cognitive biases likely exist in all groups. Modern politics is one of the most salient forms of modern coalitional conflict and elicits substantial cognitive biases. Given the common evolutionary history of liberals and conservatives, there is little reason to expect pro-tribe biases to be higher on one side of the political spectrum than the other. We call this the evolutionarily plausible null hypothesis, and recent research has supported it. In a recent meta-analysis, liberals and conservatives showed similar levels of partisan bias, and a number of pro-tribe cognitive tendencies often ascribed to conservatives (e.g., intolerance toward dissimilar others) have been found in similar degrees in liberals. We conclude that tribal bias is a natural and nearly ineradicable feature of human cognition, and that no group—not even one’s own—is immune.

Conclusion 

Humans are tribal creatures. They were not designed to reason dispassionately about the world; rather, they were designed to reason in ways that promote the interests of their coalition (and hence, themselves). It would therefore be surprising if a particular group of individuals did not display such tendencies, and recent work suggests, at least in the U.S. political sphere, that both liberals and conservatives are substantially biased—and to similar degrees. Historically, and perhaps even in modern society, these tribal biases are quite useful for group cohesion but perhaps also for other moral purposes (e.g., liberal bias in favor of disadvantaged groups might help increase equality). Also, it is worth noting that a bias toward viewing one’s own tribe in a favorable light is not necessarily irrational. If one’s goal is to be admired among one’s own tribe, fervidly supporting their agenda and promoting their goals, even if that means having or promoting erroneous beliefs, is often a reasonable strategy (Kahan et al., 2017). The incentives for holding an accurate opinion about global climate change, for example, may not be worth the social rejection and loss of status that could accompany challenging the views of one’s political ingroup.

The info is here.

Tuesday, October 15, 2019

Want To Reduce Suicides? Follow The Data — To Medical Offices, Motels And Even Animal Shelters

Maureen O’Hagan
Kaiser Health News
Originally published September 23, 2019

Here is an excerpt:

Experts have long believed that suicide is preventable, and there are evidence-based programs to train people how to identify and respond to folks in crisis and direct them to help. That’s where Debra Darmata, Washington County’s suicide prevention coordinator, comes in. Part of Darmata’s job involves running these training programs, which she described as like CPR but for mental health.

The training is typically offered to people like counselors, educators or pastors. But with the new data, the county realized they were missing people who may have been the last to see the decedents alive. They began offering the training to motel clerks and housekeepers, animal shelter workers, pain clinic staffers and more.

It is a relatively straightforward process: Participants are taught to recognize signs of distress. Then they learn how to ask a person if he or she is in crisis. If so, the participants’ role is not to make the person feel better or to provide counseling or anything of the sort. It is to call a crisis line, and the experts will take over from there.

Since 2014, Darmata said, more than 4,000 county residents have received training in suicide prevention.

“I’ve worked in suicide prevention for 11 years,” Darmata said, “and I’ve never seen anything like it.”

The sheriff’s office has begun sending a deputy from its mental health crisis team when doing evictions. On the eviction paperwork, they added the crisis line number and information on a county walk-in mental health clinic. Local health care organizations have new procedures to review cases involving patient suicides, too.

The info is here.

Why not common morality?

Rhodes R 
Journal of Medical Ethics 
Published Online First: 11 September 2019. 
doi: 10.1136/medethics-2019-105621

Abstract

This paper challenges the leading common morality accounts of medical ethics which hold that medical ethics is nothing but the ethics of everyday life applied to today’s high-tech medicine. Using illustrative examples, the paper shows that neither the Beauchamp and Childress four-principle account of medical ethics nor the Gert et al 10-rule version is an adequate and appropriate guide for physicians’ actions. By demonstrating that medical ethics is distinctly different from the ethics of everyday life and cannot be derived from it, the paper argues that medical professionals need a touchstone other than common morality for guiding their professional decisions. That conclusion implies that a new theory of medical ethics is needed to replace common morality as the standard for understanding how medical professionals should behave and what medical professionalism entails. En route to making this argument, the paper addresses fundamental issues that require clarification: What is a profession? How is a profession different from a role? How is medical ethics related to medical professionalism? The paper concludes with a preliminary sketch for a theory of medical ethics.

Monday, October 14, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Samuel Johnson and Jaye Ahn
PsyArXiv
Originally posted September 10, 2019

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

General Discussion

These studies begin to map out the principles governing how the mind combines rights and wrongs to form summary judgments of blameworthiness. Moreover, these principles are explained by inferences about character, which also explain differences across scenarios and participants. These results overall buttress person-based accounts of morality (Uhlmann et al., 2014), according to which morality serves primarily to identify and track individuals likely to be cooperative and trustworthy social partners in the future.

These results also have implications for moral psychology beyond third-party judgments. Moral behavior is motivated largely by its expected reputational consequences, thus studying the psychology of third-party reputational judgments is key for understanding people’s behavior when they have opportunities to perform licensing or offsetting acts. For example, theories of moral self-licensing (Merritt et al., 2010) disagree over whether licensing occurs due to moral credits (i.e., having done good, one can now “spend” the moral credit on a harm) versus moral credentials (i.e., having done good, later bad acts are reframed as less blameworthy).

The research is here.

Why we don’t always punish: Preferences for non-punitive responses to moral violations

Joseph Heffner & Oriel FeldmanHall
Scientific Reports, volume 9, 
Article number: 13219 (2019) 

Abstract

While decades of research demonstrate that people punish unfair treatment, recent work illustrates that alternative, non-punitive responses may also be preferred. Across five studies (N = 1,010) we examine non-punitive methods for restoring justice. We find that in the wake of a fairness violation, compensation is preferred to punishment, and once maximal compensation is available, punishment is no longer the favored response. Furthermore, compensating the victim—as a method for restoring justice—also generalizes to judgments of more severe crimes: participants allocate more compensation to the victim as perceived severity of the crime increases. Why might someone refrain from punishing a perpetrator? We investigate one possible explanation, finding that punishment acts as a conduit for different moral signals depending on the social context in which it arises. When choosing partners for social exchange, there are stronger preferences for those who previously punished as third-party observers but not those who punished as victims. This is in part because third parties are perceived as relatively more moral when they punish, while victims are not. Together, these findings demonstrate that non-punitive alternatives can act as effective avenues for restoring justice, while also highlighting that moral reputation hinges on whether punishment is enacted by victims or third parties.

The research is here.

Readers may want to think about patients in psychotherapy and licensing board actions.

Sunday, October 13, 2019

A Successful Artificial Memory Has Been Created

Robert Martone
Scientific American
Originally posted August 27, 2019

Here is the conclusion:

There are legitimate motives underlying these efforts. Memory has been called “the scribe of the soul,” and it is the source of one’s personal history. Some people may seek to recover lost or partially lost memories. Others, such as those afflicted with post-traumatic stress disorder or chronic pain, might seek relief from traumatic memories by trying to erase them.

The methods used here to create artificial memories will not be employed in humans anytime soon: none of us are transgenic like the animals used in the experiment, nor are we likely to accept multiple implanted fiber-optic cables and viral injections. Nevertheless, as technologies and strategies evolve, the possibility of manipulating human memories becomes all the more real. And the involvement of military agencies such as DARPA invariably renders the motivations behind these efforts suspect. Are there things we all need to be afraid of or that we must or must not do? The dystopian possibilities are obvious.

Creating artificial memories brings us closer to learning how memories form and could ultimately help us understand and treat dreadful diseases such as Alzheimer’s. Memories, however, cut to the core of our humanity, and we need to be vigilant that any manipulations are approached ethically.

The info is here.

Saturday, October 12, 2019

Lolita understood that some sex is transactional. So did I

Tamara MacLeod
aeon.co
Originally published September 11, 2019

Here is an excerpt:

However, I think that it is the middle-class consciousness of liberal feminism that excluded sex work from its platform. After all, wealthier women didn’t need to do sex work as such; they operated within the state-sanctioned transactional boundaries of marriage. The dissatisfaction of the 20th-century housewife was codified as a struggle for liberty and independence as an addition to subsidised material existence, making a feminist discourse on work less about what one has to do, and more about what one wants to do. A distinction within women’s work emerged: if you don’t enjoy having sex with your husband, it’s just a problem with the marriage. If you don’t enjoy sex with a client, it’s because you can’t consent to your own exploitation. It is a binary view of sex and consent, work and not-work, when the reality is somewhat murkier. It is a stubborn blindness to the complexity of human relations, and maybe of human psychology itself, descending from the viscera-obsessed, radical absolutisms of Andrea Dworkin.

The housewife who married for money and then fakes orgasms, the single mother who has sex with a man she doesn’t really like because he’s offering her some respite: where are the delineations between consent and exploitation, sex and duty? The first time I traded sex for material gain, I had some choices, but they were limited. I chose to be exploited by the man with the resources I needed, choosing his house over homelessness. Lolita was a child, and she was exploited, but she was also conscious of the function of her body in a patriarchal economy. Philosophically speaking, most of us do indeed consent to our own exploitation.

The info is here.

Friday, October 11, 2019

Dying is a Moral Event. NJ Law Caught Up With Morality

T. Patrick Hill
Star-Ledger Guest Column
Originally posted September 9, 2019

New Jersey’s Medical-Aid-in-Dying legislation authorizes physicians to issue a prescription to end the lives of their patients who have been diagnosed with a terminal illness, are expected to die within six months, and have requested their physicians to help them do so. While the legislation does not require physicians to issue the prescription, it does require them to transfer a patient’s medical records to another physician who has agreed to prescribe the lethal medication.

(cut)

The Medical Aid in Dying Act goes even further, concluding that its passage serves the public’s interests, even as it endorses the “right of a qualified terminally ill patient …to obtain medication that the patient may choose to self-administer in order to bring about the patient’s humane and dignified death.”

The info is here.

Is there a right to die?

Eric Mathison
Baylor College of Medicine Blog
Originally posted May 31, 2019

How people think about death is undergoing a major transformation in the United States. In the past decade, there has been a significant rise in assisted dying legalization, and more states are likely to legalize it soon.

People are adapting to a healthcare system that is adept at keeping people alive, but struggles when aggressive treatment is no longer best for the patient. Many people have concluded, after witnessing a loved one suffer through a prolonged dying process, that they don’t want that kind of death for themselves.

Public support for assisted dying is high. Gallup has tracked Americans’ support for it since 1951. The most recent survey, from 2017, found that 73% of Americans support legalization. Eighty-one percent of Democrats and 67% of Republicans support it, making this a popular policy regardless of political affiliation.

The effect has been a recent surge of states passing assisted dying legislation. New Jersey passed legislation in April, meaning seven states (plus the District of Columbia) now allow it. In addition to New Jersey, California, Colorado, Hawaii, and D.C. all passed legislation in the past three years, and seventeen states are considering legislation this year. Currently, around 20% of Americans live in states where assisted dying is legal.

The info is here.

Thursday, October 10, 2019

Moral Distress and Moral Strength Among Clinicians in Health Care Systems: A Call for Research

Connie M. Ulrich and Christine Grady
NAM Perspectives. 
https://doi.org/10.31478/201909c


Here is an excerpt:

Evidence shows that dissatisfaction and wanting to leave one’s job—and the profession altogether—often follow morally distressing encounters. Ethics education that builds cognitive and communication skills, teaches clinicians ethical concepts, and helps them gain confidence may be essential in building moral strength. One study found, for example, that among practicing nurses and social workers, those with the least ethics education were also the least confident, the least likely to use ethics resources (if available), and the least likely to act on their ethical concerns. In this national study, as many as 23 percent of nurses reported having had no ethics education at all. But the question remains—is ethics education enough?

Many factors likely support or hinder a clinician’s capacity and willingness to act with moral strength. More research is needed to investigate how interdisciplinary ethics education and institutional resources can help nurses, physicians, and others voice their ethical concerns, help them agree on morally acceptable actions, and support their capacity and propensity to act with moral strength and confidence. Research on moral distress and ethical concerns in everyday clinical practice can begin to build a knowledge base that will inform clinical training—in both educational and health care institutions—and that will help create organizational structures and processes to prepare and support clinicians to encounter potentially distressing situations with moral strength. Research can help tease out what is important and predictive for taking (or not taking) ethical action in morally distressing circumstances. This knowledge would be useful for designing strategies to support clinician well-being. Indeed, studies should focus on the influences that affect clinicians’ ability and willingness to become involved or take ownership of ethically-laden patient care issues, and their level of confidence in doing so.

Our illusory sense of agency has a deeply important social purpose

Chris Frith
aeon.co
Originally published September 22, 2019

Here are two excerpts:

We humans like to think of ourselves as mindful creatures. We have a vivid awareness of our subjective experience and a sense that we can choose how to act – in other words, that our conscious states are what cause our behaviour. Afterwards, if we want to, we might explain what we’ve done and why. But the way we justify our actions is fundamentally different from deciding what to do in the first place.

Or is it? Most of the time our perception of conscious control is an illusion. Many neuroscientific and psychological studies confirm that the brain’s ‘automatic pilot’ is usually in the driving seat, with little or no need for ‘us’ to be aware of what’s going on. Strangely, though, in these situations we retain an intense feeling that we’re in control of what we’re doing, what can be called a sense of agency. So where does this feeling come from?

It certainly doesn’t come from having access to the brain processes that underlie our actions. After all, I have no insight into the electrochemical particulars of how my nerves are firing or how neurotransmitters are coursing through my brain and bloodstream. Instead, our experience of agency seems to come from inferences we make about the causes of our actions, based on crude sensory data. And, as with any kind of perception based on inference, our experience can be tricked.

(cut)

These observations point to a fundamental paradox about consciousness. We have the strong impression that we choose when we do and don’t act and, as a consequence, we hold people responsible for their actions. Yet many of the ways we encounter the world don’t require any real conscious processing, and our feeling of agency can be deeply misleading.

If our experience of action doesn’t really affect what we do in the moment, then what is it for? Why have it? Contrary to what many people believe, I think agency is only relevant to what happens after we act – when we try to justify and explain ourselves to each other.

The info is here.

Wednesday, October 9, 2019

Whistle-blowers act out of a sense of morality

Alice Walton
review.chicagobooth.edu
Originally posted September 16, 2019

Here is an excerpt:

To understand the factors that predict the likelihood of whistle-blowing, the researchers analyzed data from more than 42,000 participants in the ongoing Merit Principles Survey, which has polled US government employees since 1979, and which covers whistle-blowing. Respondents answer questions about their past experiences with unethical behavior, the approaches they’d take in dealing with future unethical behavior, and their personal characteristics, including their concern for others and their feelings about their organizations.

Concern for others was the strongest predictor of whistle-blowing, the researchers find. This was true both of people who had already blown the whistle on bad behavior and of people who expected they might in the future.

Loyalty to an immediate community—or ingroup, in psychological terms—was also linked to whistle-blowing, but in an inverse way. “The greater people’s concern for loyalty, the less likely they were to blow the whistle,” write the researchers. 

Organizational factors—such as people’s perceptions about their employer, their concern for their job, and their level of motivation or engagement—were largely unconnected to whether people spoke up. The only ones that appeared to matter were how fair people perceived their organization to be, as well as the extent to which the organization educated its employees about ways to expose bad behavior and the rights of whistle-blowers. The data suggest these two factors were linked to whether whistle-blowers opted to address the unethical behavior through internal or external avenues. 

The info is here.

Moral and religious convictions: Are they the same or different things?

Skitka LJ, Hanson BE, Washburn AN, Mueller AB (2018)
PLoS ONE 13(6): e0199311.
https://doi.org/10.1371/journal.pone.0199311

Abstract

People often assume that moral and religious convictions are functionally the same thing. But are they? We report on 19 studies (N = 12,284) that tested whether people’s perceptions that their attitudes are reflections of their moral and religious convictions across 30 different issues were functionally the same (the equivalence hypothesis) or different constructs (the distinct constructs hypothesis), and whether the relationship between these constructs was conditional on political orientation (the political asymmetry hypothesis). Seven of these studies (N = 5,561, and 22 issues) also had data that allowed us to test whether moral and religious conviction are only closely related for those who are more rather than less religious (the secularization hypothesis), and a narrower form of the political asymmetry and secularization hypotheses, that is, that people’s moral and religious convictions may be tightly connected constructs only for religious conservatives. Meta-analytic tests of each of these hypotheses yielded weak support for the secularization hypothesis, no support for the equivalence or political asymmetry hypotheses, and the strongest support for the distinct constructs hypothesis.

From the Discussion

People’s lay theories often confound these constructs: If something is perceived as religious, it will also be perceived as moral (and vice versa). Contrary to both people’s lay theories and various scholarly theories of religion, however, we found that the degree to which people perceive a given attitude as a moral or religious conviction is largely orthogonal, sharing only on average 14% common variance.
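
To put that 14% figure in perspective (a back-of-the-envelope reading, on the usual convention that "common variance" means the squared correlation between the two conviction ratings):

r = \sqrt{r^2} = \sqrt{0.14} \approx 0.37

In other words, moral and religious conviction ratings correlate only modestly on average, which is why the authors describe the constructs as largely, though not perfectly, orthogonal.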

(cut)

Religious and moral conviction were more strongly related to each other among the religious than the non-religious for 59% of the issues we examined, a finding consistent with the secularization hypothesis. That said, the effect size in support of the secularization hypothesis was very small; the interaction of religiosity and religious conviction only explained a little more than 1% of the variance in moral conviction overall. Taken together, the overwhelming evidence therefore seems most consistent with the distinct constructs hypothesis: Moral and religious convictions are largely independent constructs.

Tuesday, October 8, 2019

A Social Identity Approach to Engaging Christians in the Issue of Climate Change

Goldberg, M. H., Gustafson, A., Ballew, M. T.,
Rosenthal, S. A., & Leiserowitz, A.
(2019). Science Communication, 
41(4), 442–463.
https://doi.org/10.1177/1075547019860847

Abstract

Using two nationally representative surveys (total N = 2,544) and two experiments (total N = 1,620), we investigate a social identity approach to engaging Christians in the issue of climate change. Results show Christian Americans say “protecting God’s creation” is a top reason for wanting to reduce global warming. An exploratory experiment and a preregistered replication tested a “stewardship frame” message with Christian Americans and found significant increases in pro-environmental and climate change beliefs, which were explained by increases in viewing environmental protection as a moral and religious issue, and perceptions that other Christians care about environmental protection.

From the Discussion:

Two studies using large diverse samples demonstrate that a social identity approach to engaging Christians in the issue of climate change is a promising strategy. In Study 1, in a combined sample of two nationally representative waves of survey data, we found that “protect God’s creation” is one of the most important motivations Christians report for wanting to mitigate global warming. This is important because it indicates that many Americans, and especially Christians, are willing to view climate change through a religious lens, and that messages that frame climate change as a religious issue could encourage greater engagement in the issue among this population.

Greta Thunberg To U.S.: 'You Have A Moral Responsibility' On Climate Change

Bill Chappell and Ailsa Chang
NPR.org
Originally published September 13, 2019

Greta Thunberg led a protest at the White House on Friday. But she wasn't looking to go inside — "I don't want to meet with people who don't accept the science," she says.

The young Swedish activist joined a large crowd of protesters who had gathered outside, calling for immediate action to help the environment and reverse an alarming warming trend in average global temperatures.

She says her message for President Trump is the same thing she tells other politicians: Listen to science, and take responsibility.

Thunberg, 16, arrived in the U.S. last week after sailing across the Atlantic to avoid the carbon emissions from jet travel. She plans to spend nearly a week in Washington, D.C. — but she doesn't plan to meet with anyone from the Trump administration during that time.

"I haven't been invited to do that yet. And honestly I don't want to do that," Thunberg tells NPR's Ailsa Chang. If people in the White House who reject climate change want to change their minds, she says, they should rely on scientists and professionals to do that.

But Thunberg also believes the U.S. has an "incredibly important" role to play in fighting climate change.

"You are such a big country," she says. "In Sweden, when we demand politicians to do something, they say, 'It doesn't matter what we do — because just look at the U.S.'

The info is here.

Monday, October 7, 2019

Ethics a distant second to profits in Silicon Valley

Gabriel Fairman
www.sdtimes.com
Originally published September 9, 2019

Here is an excerpt:

For ethics to become a part of the value system that drives behavior in Silicon Valley, it would have to be incentivized as such. I have a hard time envisioning a world where ethics can offer shareholders huge returns. Ethics is about doing the right thing, and the right thing and the lucrative thing don’t necessarily go hand in hand.

Everyone can understand ethics. Basic questions such as “Will this be good for the world in a year, 10 years or 20 years?”, “Would I want this for my kids?” are easy litmus tests to differentiate between ethical and unethical conduct. The challenge is that considerations on ethics slow down development by raising challenges and concerns early on.  Ethics are about amplifying potential problems that can be foreseen down the road.

On the other hand, venture-funded start-ups are about minimizing the ramifications of these problems as they move on quickly. How can ethics compete with billion-dollar exits? It can’t. Ethics are just this thing that we read about in articles or hear about in lectures. It is not driving day-to-day decision-making. You listen to people in boardrooms asking, “How will this impact our valuation?,” or “What is the ROI of this initiative?” but you don’t hear top-level execs brainstorming about how their product or company could be more ethical because there is no compensation tied to that. The way we have built our world, ethics are just fluff.

We are also extraordinarily good at separating private and public lives. Many people working at tech companies don’t allow their kids to use electronic devices ubiquitously, or would not want their kids bossed around by an algorithm as they let go of full-time employee benefits. But they promote these things and further them because these things are highly profitable, not because they are fundamentally good. This key distinction between private and public behavior allows people to behave in wildly hypocritical ways, by helping advance the very things they do not want in their own homes.

The info is here.

A Theranos Whistleblower’s Mission to Make Tech Ethical

Brian Gallagher
ethicalsystems.org
Originally published September 12, 2019

Here is an excerpt from the interview:

Is Theranos emblematic of a cultural trend or an anomaly of unethical behavior?

My initial impression was that Theranos was some very bizarre one-off scandal. But as I started to review thousands of startups, I realized that there is quite a lot of unethical behavior in tech. The stories may not be quite as grandiose or large-scale as Theranos’, but it was really common to see companies lie to investors, mislead customers, and create abusive work environments. Many founders lacked an understanding of how their products could have negative impacts on society. The frustration of seeing the same mistakes happen over and over again made it clear that something needed to be done about this.

How has your experience at Theranos helped shape your understanding of the link between ethics and culture?

If the company had effective and ethically mature leadership, the company may not have used underdeveloped technology on patients without their consent. If the board had been constructed in a way to properly challenge the product, perhaps it would have been properly developed. If employees weren’t scared and disillusioned, perhaps constructive conversations about novel solutions could have arisen. Only rarely are these scandals a random surprise or the result of an unexpected disaster. They are often an accumulation of poor ethical decisions. Having a culture where, at every stakeholder level, people can speak up and have their concerns properly considered when they see something wrong is crucial. It makes the difference in building ethical organizations and preventing large disastrous events from happening.

The info is here.

Sunday, October 6, 2019

Thinking Fast and Furious: Emotional Intensity and Opinion Polarization in Online Media

David Asker & Elias Dinas
Public Opinion Quarterly
Published: 09 September 2019
https://doi.org/10.1093/poq/nfz042

Abstract

How do online media increase opinion polarization? The “echo chamber” thesis points to the role of selective exposure to homogeneous views and information. Critics of this view emphasize the potential of online media to expand the ideological spectrum that news consumers encounter. Embedded in this discussion is the assumption that online media affects public opinion via the range of information that it offers to users. We show that online media can induce opinion polarization even among users exposed to ideologically heterogeneous views, by heightening the emotional intensity of the content. Higher affective intensity provokes motivated reasoning, which in turn leads to opinion polarization. The results of an online experiment focusing on the comments section, a user-driven tool of communication whose effects on opinion formation remain poorly understood, show that participants randomly assigned to read an online news article with a user comments section subsequently express more extreme views on the topic of the article than a control group reading the same article without any comments. Consistent with expectations, this effect is driven by the emotional intensity of the comments, lending support to the idea that motivated reasoning is the mechanism behind this effect.

From the Discussion:

These results should not be taken as a challenge to the echo chamber argument, but rather as a complement to it. Selective exposure to desirable information and motivated rejection of undesirable information constitute separate mechanisms whereby online news audiences may develop more extreme views. Whereas there is already ample empirical evidence about the first mechanism, previous research on the second has been scant. Our contribution should thus be seen as an attempt to fill this gap.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
vox.com
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs equipped with a “write” function can enable new forms of brainwashing, theoretically enabling all sorts of people to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Friday, October 4, 2019

When Patients Request Unproven Treatments

Casey Humbyrd and Matthew Wynia
medscape.com
Originally posted March 25, 2019

Here is an excerpt:

Ethicists have made a variety of arguments about these injections. The primary arguments against them have focused on the perils of physicians becoming sellers of "snake oil," promising outlandish benefits and charging huge sums for treatments that might not work. The conflict of interest inherent in making money by providing an unproven therapy is a legitimate ethical concern. These treatments are very expensive and, as they are unproven, are rarely covered by insurance. As a result, some patients have turned to crowdfunding sites to pay for these questionable treatments.

But the profit motive may not be the most important ethical issue at stake. If it were removed, hypothetically, and physicians provided the injections at cost, would that make this practice more acceptable?

No. We believe that physicians who offer these injections are skipping the most important step in the ethical adoption of any new treatment modality: research that clarifies the benefits and risks. The costs of omitting that important step are much more than just monetary.

For the sake of argument, let's assume that stem cells are tremendously successful and that they heal arthritic joints, making them as good as new. By selling these injections to those who can pay before the treatment is backed by research, physicians are ensuring unavailability to patients who can't pay, because insurance won't cover unproven treatments.

The info is here.

Google bans ads for unproven medical treatments

Megan Graham
www.cnbc.com
Originally posted September 6, 2019

Google on Friday announced a new health care and medicines policy that bans advertising for “unproven or experimental medical techniques,” which it says includes most stem cell, cellular and gene therapies.

A blog post from Google policy advisor Adrienne Biddings said the company will prohibit ads selling treatments “that have no established biomedical or scientific basis.” It will also extend the policy to treatments that are rooted in scientific findings and preliminary clinical experience “but currently have insufficient formal clinical testing to justify widespread clinical use.” The change was first reported by The Washington Post.

The new Google ads policy may put the heat on for the stem cell clinic industry, which has until recently been largely unregulated and has some players who have been accused of taking advantage of seriously ill patients, The Washington Post reported.

“We know that important medical discoveries often start as unproven ideas — and we believe that monitored, regulated clinical trials are the most reliable way to test and prove important medical advances,” Biddings said. “At the same time, we have seen a rise in bad actors attempting to take advantage of individuals by offering untested, deceptive treatments. Often times, these treatments can lead to dangerous health outcomes and we feel they have no place on our platforms.”

The Google post included a quote from the president of the International Society for Stem Cell Research, Deepak Srivastava, who said the new policy is a “much-needed and welcome step to curb the marketing of unscrupulous medical products such as unproven stem cell therapies.”

The info is here.

Thursday, October 3, 2019

Empathy in the Age of the EMR

Danielle Ofri
The Lancet

Here is an excerpt:

Keeping the doctor-patient connection from eroding in the age of the EMR is an uphill battle. We all know that the eye contact that Fildes depicts is a critical ingredient for communication and connection, but when the computer screen is so demanding of focus that the patient becomes a distraction, even an impediment, that connection becomes hopelessly elusive.

Recently, I was battling the EMR during a visit with a patient who had particularly complicated medical conditions. We hadn’t seen each other in more than a year, so there was much to catch up on. Each time she raised an issue, I turned to the computer to complete the requisite documentation for that concern. In that pause, however, my patient intuited a natural turn of conversation. Thinking that it was now her turn to talk, she would bring up the next thing on her mind. But of course I wasn’t finished with the last thing, so I would say, “Would you mind holding that thought for a second? I just need to finish this one thing…”

I’d turn back to the computer and fall silent to finish documenting. After a polite minute, she would apparently sense that it was again her turn in the conversation and thus begin her next thought. I was torn because I didn’t want to stop her in her tracks, but we’ve been so admonished about the risks inherent in distracted multitasking that I wanted to focus fully on the thought I was entering into the computer. I know it’s rude to cut someone off, but preserving a clinical train of thought is crucial for avoiding medical error.

The info is here.

Deception and self-deception

Peter Schwardmann and Joel van der Weele
Nature Human Behaviour (2019)

Abstract

There is ample evidence that the average person thinks he or she is more skillful, more beautiful and kinder than others and that such overconfidence may result in substantial personal and social costs. To explain the prevalence of overconfidence, social scientists usually point to its affective benefits, such as those stemming from a good self-image or reduced anxiety about an uncertain future. An alternative theory, first advanced by evolutionary biologist Robert Trivers, posits that people self-deceive into higher confidence to more effectively persuade or deceive others. Here we conduct two experiments (combined n = 688) to test this strategic self-deception hypothesis. After performing a cognitively challenging task, half of our subjects are informed that they can earn money if, during a short face-to-face interaction, they convince others of their superior performance. We find that the privately elicited beliefs of the group that was informed of the profitable deception opportunity exhibit significantly more overconfidence than the beliefs of the control group. To test whether higher confidence ultimately pays off, we experimentally manipulate the confidence of the subjects by means of a noisy feedback signal. We find that this exogenous shift in confidence makes subjects more persuasive in subsequent face-to-face interactions. Overconfidence emerges from these results as the product of an adaptive cognitive technology with important social benefits, rather than some deficiency or bias.

From the Discussion section

The results of our experiment demonstrate that the strategic environment matters for cognition about the self. We observe that deception opportunities increase average overconfidence relative to others, and that, under the right circumstances, increased confidence can pay off. Our data thus support the idea that overconfidence is strategically employed for social gain.

Our results do not allow for decisive statements about the exact cognitive channels underlying such self-deception. While we find some indications that an aversion to lying increases overconfidence, the evidence is underwhelming. When it comes to the ability to deceive others, we find that even when we control for the message, confidence leads to higher evaluations in some conditions. This is consistent with the idea that self-deception improves the deception technology of contestants, possibly by eliminating non-verbal giveaway cues.

The research is here. 

Wednesday, October 2, 2019

Evolutionary Thinking Can Help Companies Foster More Ethical Culture

Brian Gallagher
ethicalsystems.org
Originally published August 20, 2019


Here are two excerpts:

How might human beings be mismatched to the modern business environment?

Many problems of the modern workplace have not been viewed through a mismatch lens, so at this point these are still hypotheses. But let's take the role of managers, for example. Humans have a strong aversion to dominance, a result of the egalitarian nature that served us well in the small-scale societies in which we evolved. One of the biggest causes of job dissatisfaction, people report, is the interaction with their line manager. Many people find this relationship extremely stressful: being dominated by someone who controls them and gives them orders infringes on their sense of autonomy. Or take the physical work environment, which looks nothing like our ancestral environment—our ancestors were always outside, working as they socialized and getting plenty of physical exercise while they hunted and gathered in tight social groups. Now we are forced to spend much of our daytime in tall buildings with small offices, surrounded by genetic strangers and with no natural scenes to speak of.

(cut)

What can business leaders learn from evolutionary psychology about how to structure relationships between bosses and employees?

One of the most important lessons from our research is that leaders are effective to the extent that they enable their teams to be effective. This sounds obvious, but leadership is really about the team and the followers. Individuals gladly follow leaders whom they respect for their skills and competence, and they have a hard time, by contrast, following a leader who is dominant and threatening. Yet human nature is also such that if you give someone power, they will use it—there is a fundamental leader-follower conflict. To keep managers from following the easy route of threat and dominance, every healthy organization should have mechanisms in place to curtail their power. In small-scale societies, as the anthropological literature makes clear, leaders are kept in check because they can only exercise influence in their domain of expertise, nothing else. What's more, there should be room to gossip about and ridicule leaders, and leaders should be regularly replaced in order to prevent them from building up a power base. Why not have feedback sessions where employees provide regular input into the assessment of their bosses? Why not include workers in the hiring of board members? Many public and private organizations in Europe are currently experimenting with these power-leveling mechanisms.

The info is here.

Seven Key Misconceptions about Evolutionary Psychology

Laith Al-Shawaf
www.areomagazine.com
Originally published August 20, 2019

Evolutionary approaches to psychology hold the promise of revolutionizing the field and unifying it with the biological sciences. But among both academics and the general public, a few key misconceptions impede its application to psychology and behavior. This essay tackles the most pervasive of these.

Misconception 1: Evolution and Learning Are Conflicting Explanations for Behavior

People often assume that if something is learned, it’s not evolved, and vice versa. This is a misleading way of conceptualizing the issue, for three key reasons.

First, many evolutionary hypotheses are about learning. For example, the claim that humans have an evolved fear of snakes and spiders does not mean that people are born with this fear. Instead, it means that humans are endowed with an evolved learning mechanism that acquires a fear of snakes more easily and readily than other fears. Classic studies in psychology show that monkeys can acquire a fear of snakes through observational learning, and they tend to acquire it more quickly than a similar fear of other objects, such as rabbits or flowers. It is also harder for monkeys to unlearn a fear of snakes than it is to unlearn other fears. As with monkeys, the hypothesis that humans have an evolved fear of snakes does not mean that we are born with this fear. Instead, it means that we learn this fear via an evolved learning mechanism that is biologically prepared to acquire some fears more easily than others.

Second, learning is made possible by evolved mechanisms instantiated in the brain. We are able to learn because we are equipped with neurocognitive mechanisms that enable learning to occur—and these neurocognitive mechanisms were built by evolution. Consider the fact that both children and puppies can learn, but if you try to teach them the same thing—French, say, or game theory—they end up learning different things. Why? Because the dog’s evolved learning mechanisms are different from those of the child. What organisms learn, and how they learn it, depends on the nature of the evolved learning mechanisms housed in their brains.

The info is here.


Tuesday, October 1, 2019

NACAC Agrees to Change Its Code of Ethics

Scott Jaschik
insidehighered.com
Originally published September 30, 2019

When the Assembly of the National Association for College Admission Counseling has in years past debated measures to regulate the recruiting of international students or the proper rules for waiting lists and many other issues, debate has been heated. It was anything but heated this year, although the issue before the delegates was arguably more important than any of those.

Delegates voted Saturday, 211 to 3, to strip provisions from the Code of Ethics and Professional Practice that may violate antitrust laws. The provisions are:

  • Colleges must not offer incentives exclusive to students applying or admitted under an early decision application plan. Examples of incentives include the promise of special housing, enhanced financial aid packages, and special scholarships for early decision admits. Colleges may, however, disclose how admission rates for early decision differ from those for other admission plans.
  • College choices should be informed, well-considered, and free from coercion. Students require a reasonable amount of time to identify their college choices; complete applications for admission, financial aid, and scholarships; and decide which offer of admission to accept. Once students have committed themselves to a college, other colleges must respect that choice and cease recruiting them.
  • Colleges will not knowingly recruit or offer enrollment incentives to students who are already enrolled, registered, have declared their intent, or submitted contractual deposits to other institutions. May 1 is the point at which commitments to enroll become final, and colleges must respect that. The recognized exceptions are when students are admitted from a wait list, students initiate inquiries themselves, or cooperation is sought by institutions that provide transfer programs.
  • Colleges must not solicit transfer applications from a previous year’s applicant or prospect pool unless the students have themselves initiated a transfer inquiry or the college has verified prior to contacting the students that they are either enrolled at a college that allows transfer recruitment from other colleges or are not currently enrolled in a college.

Before they approved the measure to strip the provisions, the delegates approved (unanimously) rules that would limit discussion, but they didn't need the rules. There was no discussion on stripping the provisions, which most NACAC members learned of only at the beginning of the month. The Justice Department has been investigating NACAC for possible violations of antitrust laws for nearly two years, but the details of that investigation have not been generally known for most of that time. The Justice Department believes that with these rules, colleges are colluding to take away student choices.

The info is here.

The Moral Rot of the MIT Media Lab

Justin Peters
www.slate.com
Originally published September 8, 2019

Here is an excerpt:

I made my final emotional break with the Media Lab in 2016, when its now-disgraced former director Joi Ito announced the launch of its inaugural “Disobedience Award,” which sought to celebrate “responsible, ethical disobedience aimed at challenging the norms, rules, or laws that sustain society’s injustices” and which was “made possible through the generosity of Reid Hoffman, Internet entrepreneur, co-founder and executive chairman of LinkedIn, and most importantly an individual who cares deeply about righting society’s wrongs.” I realized that the things I had once found so exciting about the Media Lab—the architecturally distinct building, the quirky research teams, the robots and the canisters and the exhibits—amounted to a shrewd act of merchandising intended to lure potential donors into cutting ever-larger checks. The lab’s leaders weren’t averse to making the world a better place, just as long as the sponsors got what they wanted in the process.

It is this moral vacuity that has now thrown the Media Lab and MIT into an existential crisis. After the financier Jeffrey Epstein was arrested in July on federal sex-trafficking charges, journalists soon learned that Epstein enjoyed giving money to scientists almost as much as he enjoyed coercing girls into sex. The Media Lab was one beneficiary of Epstein’s largesse. Over the past several years, Ito accepted approximately $1.725 million from Epstein, who was already a convicted felon at the time Ito took charge of the place in 2011; $525,000 was earmarked for the lab, while the rest of the money went to Ito’s private startup investment funds. The New Yorker’s Ronan Farrow further reported on Friday that Epstein helped secure an additional $7.5 million for the Media Lab from other wealthy donors, and that the lab sought to hide the extent of its relationship with Epstein. Ito was Epstein’s contact at the Media Lab. The director even visited Epstein’s private Caribbean island as part of the courtship process.

The info is here.