Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Cognition.

Thursday, July 16, 2020

Cognitive Bias and Public Health Policy During the COVID-19 Pandemic

Halpern SD, Truog RD, and Miller FG.
JAMA. 
Published online June 29, 2020.
doi:10.1001/jama.2020.11623

Here is an excerpt:

These cognitive errors, which distract leaders from optimal policy making and citizens from taking steps to promote their own and others’ interests, cannot merely be ascribed to repudiations of science. Rather, these biases are pervasive and may have been evolutionarily selected. Even at academic medical centers, where a premium is placed on having science guide policy, COVID-19 action plans prioritized expanding critical care capacity at the outset, and many clinicians treated seriously ill patients with drugs with little evidence of effectiveness, often before these institutions and clinicians enacted strategies to prevent spread of disease.

Identifiable Lives and Optimism Bias

The first error that thwarts effective policy making during crises stems from what economists have called the “identifiable victim effect.” Humans respond more aggressively to threats to identifiable lives, ie, those that an individual can easily imagine being their own or belonging to people they care about (such as family members) or care for (such as a clinician’s patients), than to the hidden, “statistical” deaths reported in accounts of the population-level tolls of the crisis. Similarly, psychologists have described efforts to rescue endangered lives as an inviolable goal, such that immediate efforts to save visible lives cannot be abandoned even if more lives would be saved through alternative responses.

Some may view the focus on saving immediately threatened lives as rational because doing so entails less uncertainty than policies designed to save invisible lives that are not yet imminently threatened. Individuals who harbor such instincts may feel vindicated knowing that during the present pandemic, few if any patients in the US who could have benefited from a ventilator were denied one.

Yet such views represent a second reason for the broad endorsement of policies that prioritize saving visible, immediately jeopardized lives: that humans are imbued with a strong and neurally mediated tendency to predict outcomes that are systematically more optimistic than observed outcomes. Early pandemic prediction models provided best-case, worst-case, and most-likely estimates, fully depicting the intrinsic uncertainty. Sound policy would have attempted to minimize mortality by doing everything possible to prevent the worst case, but human optimism bias led many to act as if the best case was in fact the most likely.
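
To make the decision-theoretic point concrete, here is a minimal sketch in Python. All numbers are invented for illustration; they come from neither the article nor any real pandemic model.

```python
# Hypothetical mortality estimates for an unmitigated outbreak (invented).
scenarios = {"best": 10_000, "likely": 100_000, "worst": 1_000_000}
probabilities = {"best": 0.2, "likely": 0.6, "worst": 0.2}

# Suppose aggressive mitigation cuts mortality by 90% in every scenario.
mitigation_factor = 0.1

expected_unmitigated = sum(probabilities[s] * scenarios[s] for s in scenarios)
expected_mitigated = mitigation_factor * expected_unmitigated

print(f"Planning to the best case alone: {scenarios['best']:,} deaths")
print(f"Expected deaths, unmitigated: {expected_unmitigated:,.0f}")
print(f"Expected deaths, mitigated:   {expected_mitigated:,.0f}")
```

A planner anchored on the best-case figure prepares for 10,000 deaths, while the probability-weighted expectation is more than an order of magnitude higher; that gap is the optimism bias the authors describe.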

The info is here.

Saturday, July 11, 2020

Why Do People Avoid Facts That Could Help Them?

Francesca Gino
Scientific American
Originally posted 16 June 20

In our information age, an unprecedented amount of data is right at our fingertips. We run genetic tests on our unborn children to prepare for the worst. We get regular cancer screenings and monitor our health on our wrist and our phone. And we can learn about our ancestral ties and genetic predispositions with a simple swab of saliva.

Yet there’s some information that many of us do not want to know. A study of more than 2,000 people in Germany and Spain by Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin and Rocio Garcia-Retamero of the University of Granada in Spain found that 90 percent of them would not want to find out, if they could, when their partner would die or what the cause would be. And 87 percent also reported not wanting to be aware of the date of their own death. When asked if they’d want to know if, and when, they’d get divorced, more than 86 percent said no.

Related research points to a similar conclusion: We often prefer to avoid learning information that could cause us pain. Investors are less likely to log on to their stock portfolios on days when the market is down. And one laboratory experiment found that subjects who were informed that they were rated less attractive than other participants were willing to pay money not to find out their exact rank.

More consequentially, people avoid learning certain information related to their health even if having such knowledge would allow them to identify therapies to manage their symptoms or treatment. As one study found, only 7 percent of people at high risk for Huntington’s disease elect to find out whether they have the condition, despite the availability of a genetic test that is generally paid for by health insurance plans and the clear usefulness of the information for alleviating the chronic disease’s symptoms. Similarly, participants in a laboratory experiment chose to forgo part of their earnings to avoid learning the outcome of a test for a treatable sexually transmitted disease. Such avoidance was even greater when the disease symptoms were more severe.

The info is here.

Friday, July 10, 2020

Aging in an Era of Fake News

Brashier, N. M., & Schacter, D. L. (2020).
Current Directions in Psychological Science, 29(3), 316–323.

Abstract

Misinformation causes serious harm, from sowing doubt in modern medicine to inciting violence. Older adults are especially susceptible—they shared the most fake news during the 2016 U.S. election. The most intuitive explanation for this pattern lays the blame on cognitive deficits. Although older adults forget where they learned information, fluency remains intact, and knowledge accumulated across decades helps them evaluate claims. Thus, cognitive declines cannot fully explain older adults’ engagement with fake news. Late adulthood also involves social changes, including greater trust, difficulty detecting lies, and less emphasis on accuracy when communicating. In addition, older adults are relative newcomers to social media and may struggle to spot sponsored content or manipulated images. In a post-truth world, interventions should account for older adults’ shifting social goals and gaps in their digital literacy.

(cut)

The focus on “facts” at the expense of long-term trust is one reason why I see news organizations being ineffective in preventing, and in some cases facilitating, the establishment of “alternative narratives”. News reporting, as with any other type of declaration, can be ideologically, politically, and emotionally contested. The key differences in the current environment involve speed and transparency: First, people need to be exposed to the facts before the narrative can be strategically distorted through social media, distracting “leaks”, troll operations, and meme warfare. Second, while technological solutions for “fake news” are a valid effort, platforms policing content through opaque technologies adds yet another disruption in the layer of trust that should be reestablished directly between news organizations and their audiences.

A pdf can be found here.

Sunday, May 10, 2020

Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly

Pim Haselager & Giulio Mecacci (2020)
AJOB Neuroscience, 11:2, 113-119
DOI: 10.1080/21507740.2020.1740353

Abstract

The human species is combining an increased understanding of our cognitive machinery with the development of a technology that can profoundly influence our lives and our ways of living together. Our sciences enable us to see our strengths and weaknesses, and build technology accordingly. What would future historians think of our current attempts to build increasingly smart systems, the purposes for which we employ them, the almost unstoppable goldrush toward ever more commercially relevant implementations, and the risk of superintelligence? We need a more profound reflection on what our science shows us about ourselves, what our technology allows us to do with that, and what, apparently, we aim to do with those insights and applications. As the smartest species on the planet, we don’t need more intelligence. Since we appear to possess an underdeveloped capacity to act ethically and empathically, we rather require the kind of technology that enables us to act more consistently upon ethical principles. The problem is not to formulate ethical rules, it’s to put them into practice. Cognitive neuroscience and AI provide the knowledge and the tools to develop the moral crutches we so clearly require. Why aren’t we building them? We don’t need superintelligence, we need superethics.

The article is here.

Friday, May 8, 2020

Repetition Increases Perceived Truth Even for Known Falsehoods

Lisa Fazio
PsyArXiv
Originally posted 23 March 20
 
Abstract

Repetition increases belief in false statements. This illusory truth effect occurs with many different types of statements (e.g., trivia facts, news headlines, advertisements), and even occurs when the false statement contradicts participants’ prior knowledge. However, existing studies of the effect of prior knowledge on the illusory truth effect share a common flaw: they measure participants’ knowledge after the experimental manipulation and thus conditionalize responses on posttreatment variables. In the current study, we measure prior knowledge prior to the experimental manipulation and thus provide a cleaner measurement of the causal effect of repetition on belief. We again find that prior knowledge does not protect against the illusory truth effect. Repeated false statements were given higher truth ratings than novel statements, even when they contradicted participants’ prior knowledge.

From the Discussion

As in previous research (Brashier et al., 2017; Fazio et al., 2015), prior knowledge did not protect participants from the illusory truth effect. Repeated falsehoods were rated as being more true than novel falsehoods, even when they both contradicted participants’ prior knowledge. By measuring prior knowledge before the experimental session, this study avoids conditioning on posttreatment variables and provides cleaner evidence for the effect (Montgomery et al., 2018). Whether prior knowledge is measured before or after the manipulation, it is clear that repetition increases belief in falsehoods that contradict existing knowledge.
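
To see why measuring knowledge first matters, consider a toy simulation of the design. The effect sizes below are invented, not the study’s data; the point is only that the repetition boost shows up in both knowledge groups when knowledge is recorded before the manipulation.

```python
import random

random.seed(1)

# Toy model: truth ratings on a 1-6 scale. Knowledge is measured BEFORE the
# repetition manipulation, so splitting by prior knowledge does not condition
# on a posttreatment variable. All parameters are invented for illustration.
def truth_rating(knows_fact, repeated):
    base = 2.0 if knows_fact else 3.0           # known falsehoods start less believable
    boost = 0.5 if repeated else 0.0            # illusory truth effect of repetition
    return base + boost + random.gauss(0, 0.5)  # individual rating noise

for knows in (True, False):
    for rep in (True, False):
        ratings = [truth_rating(knows, rep) for _ in range(1000)]
        print(f"prior knowledge={knows!s:>5}, repeated={rep!s:>5}: "
              f"mean rating {sum(ratings) / len(ratings):.2f}")
```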

The research is here.

Monday, May 4, 2020

Suggestions for a New Integration in the Psychology of Morality

Diane Sunar
Social and Personality Psychology Compass (2009): 447–474.

Abstract

To prepare a basis for a new model of morality, theories in the psychology of morality are reviewed, comparing those put forward before and after the emergence of evolutionary psychology in the last quarter of the 20th century. Concepts of embodied sociality and reciprocal moral emotions are introduced. Three ‘morality clusters’ consisting of relational models (Fiske, 1991), moral domains (Shweder, Much, Mahapatra, & Park, 1997) and reciprocal sets of other-blaming and self-conscious emotions are linked to three evolutionary bases for morality (kin selection, social hierarchy, and reciprocal altruism). Evidence regarding these concepts is marshaled to support the model. The ‘morality clusters’ are compared with classifications based on Haidt’s moral foundations (Haidt & Graham, 2007). Further evidence regarding hierarchy based on sexual selection, exchange and reciprocity, moral development, cultural differences and universals, and neurological discoveries, especially mirror neurons, is also discussed.

An Alternative Model

Alternative combinations of these elements have been suggested, most notably by Haidt and his colleagues (Graham, Haidt, & Nosek, forthcoming; Haidt & Joseph, 2008), mapping Shweder’s three ethics or moral domains, and Fiske’s relational models, onto Haidt’s moral foundations. As described above, these authors match community with ingroup/loyalty and authority; autonomy with harm/care and fairness/reciprocity; and divinity with purity/sanctity. In addition, they suggest that three of the foundations can be matched with three of Fiske’s relational models (leaving out MP). In this scheme, fairness/reciprocity is linked with EM, care and ingroup morality with CS, and authority/respect with AR. Harm and purity as moral foundations are not linked with relational models, as they argue that these two foundations ‘are not primarily modes of interpersonal relationship’ (Haidt & Joseph, 2008, p. 386). Similar to my proposed clusters, they also link the morality of harm and care to kin selection and that of fairness to evolved mechanisms of reciprocal altruism, but in contrast see purity as a derivative of disgust mechanisms without a specific social basis.

The paper is here.

Tuesday, April 21, 2020

Piercing the Smoke Screen: Dualism, Free Will, and Christianity

S. Murray, E. Murray, & T. Nadelhoffer
PsyArXiv Preprints
Originally created on 18 Feb 20

Abstract

Research on the folk psychology of free will suggests that people believe free will is incompatible with determinism and that human decision-making cannot be exhaustively characterized by physical processes. Some suggest that certain elements of Western cultural history, especially Christianity, have helped to entrench these beliefs in the folk conceptual economy. Thus, on the basis of this explanation, one should expect to find three things: (1) a significant correlation between belief in dualism and belief in free will, (2) that people with predominantly incompatibilist commitments are likely to exhibit stronger dualist beliefs than people with predominantly compatibilist commitments, and (3) people who self-identify as Christians are more likely to be dualists and incompatibilists than people who do not self-identify as Christians. We present the results of two studies (n = 378) that challenge two of these expectations. While we do find a significant correlation between belief in dualism and belief in free will, we found no significant difference in dualist tendencies between compatibilists and incompatibilists. Moreover, we found that self-identifying as Christian did not significantly predict preference for a particular metaphysical conception of free will. This calls into question assumptions about the relationship between beliefs about free will, dualism, and Christianity.

The research is here.

Wednesday, March 4, 2020

How Common Mental Shortcuts Can Cause Major Physician Errors

Anupam B. Jena and Andrew R. Olenski
The New York Times
Originally posted 20 Feb 20

Here is an excerpt:

In health care, such unconscious biases can lead to disparate treatment of patients and can affect whether similar patients live or die.

Sometimes these cognitive biases are simple overreactions to recent events, what psychologists term availability bias. One study found that when patients experienced an unlikely adverse side effect of a drug, their doctor was less likely to order that same drug for the next patient whose condition might call for it, even though the efficacy and appropriateness of the drug had not changed.

A similar study found that when mothers giving birth experienced an adverse event, their obstetrician was more likely to switch delivery modes for the next patient (C-section vs. vaginal delivery), regardless of the appropriateness for that next patient. This cognitive bias resulted in both higher spending and worse outcomes.

Doctor biases don’t affect treatment decisions alone; they can shape the profession as a whole. A recent study analyzed gender bias in surgeon referrals and found that when the patient of a female surgeon dies, the physician who made the referral to that surgeon sends fewer patients to all female surgeons in the future. The study found no such decline in referrals for male surgeons after a patient death.

This list of biases is far from exhaustive, and though they may be disconcerting, uncovering new systematic mistakes is critical for improving clinical practice.

The info is here.

Monday, March 2, 2020

The Dunning-Kruger effect, or why the ignorant think they’re experts

Alexandru Micu
zmescience.com
Originally posted 13 Feb 20

Here is an excerpt:

It isn’t specific to technical skills; it plagues all walks of human existence equally. One study found that 80% of drivers rate themselves as above average, which is literally impossible because that’s not how averages work. We tend to gauge our own relative popularity the same way.

It isn’t limited to people with low or nonexistent skills in a certain matter, either — it works on pretty much all of us. In their first study, Dunning and Kruger also found that students who scored in the top quartile (25%) routinely underestimated their own competence.

A fuller definition of the Dunning-Kruger effect would be that it represents a bias in estimating our own ability that stems from our limited perspective. When we have a poor or nonexistent grasp on a topic, we literally know too little of it to understand how little we know. Those who do possess the knowledge or skills, however, have a much better idea of where they sit. But they also think that if a task is clear and simple to them, it must be so for everyone else as well.
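
One way to see how a limited perspective alone can generate this pattern is a toy simulation in which everyone estimates their own percentile with noise. The parameters are assumed for illustration and are not Dunning and Kruger’s data.

```python
import random
import statistics

random.seed(0)

# Toy model: true skill percentile is uniform on [0, 100]; each person's
# self-estimate is the truth plus noise, clipped to the percentile scale.
people = []
for _ in range(10_000):
    true_pct = random.uniform(0, 100)
    estimate = min(100.0, max(0.0, true_pct + random.gauss(0, 20)))
    people.append((true_pct, estimate))

# Compare actual vs. self-estimated percentile within each true-skill quartile.
for lo, hi in [(0, 25), (25, 50), (50, 75), (75, 100)]:
    group = [(t, e) for t, e in people if lo <= t < hi]
    actual = statistics.mean(t for t, _ in group)
    guessed = statistics.mean(e for _, e in group)
    print(f"true quartile {lo:>2}-{hi:<3}: actual {actual:5.1f}, self-estimate {guessed:5.1f}")
```

Run it and the bottom quartile overestimates itself while the top quartile underestimates, purely because noisy estimates get squeezed toward the middle of the scale: a statistical echo of the “illusion of confidence” described in the next paragraph.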

A person in the first group and one in the second group are equally liable to use their own experience and background as the baseline and kinda just take it for granted that everyone is near that baseline. They both partake in the “illusion of confidence” — for one, that confidence is in themselves, for the other, in everyone else.

The info is here.

Thursday, February 20, 2020

Harvey Weinstein’s ‘false memory’ defense is not backed by science

Anne DePrince & Joan Cook
The Conversation
Originally posted 10 Feb 20

Here is an excerpt:

In 1996, pioneering psychologist Jennifer Freyd introduced the concept of betrayal trauma. She made plain how forgetting, not thinking about and even mis-remembering an assault may be necessary and adaptive for some survivors. She argued that the way in which traumatic events, like sexual violence, are processed and remembered depends on how much betrayal there is. Betrayal happens when the victim depends on the abuser, such as a parent, spouse or boss. The victim has to adapt day-to-day because they are (or feel) stuck in that relationship. One way that victims can survive is by thinking or remembering less about the abuse or telling themselves it wasn’t abuse.

Since 1996, compelling scientific evidence has shown a strong relationship between amnesia and victims’ dependence on abusers. Psychologists and other scientists have also learned much about the nature of memory, including memory for traumas like sexual assault. What gets into memory and later remembered is affected by a host of factors, including characteristics of the person and the situation. For example, some individuals dissociate during or after traumatic events. Dissociation offers a way to escape the inescapable, such that people feel as if they have detached from their bodies or the environment. It is not surprising to us that dissociation is linked with incomplete memories.

Memory can also be affected by what other people do and say. For example, researchers recently looked at what happened when they told participants not to think about some words that they had just studied. Following that instruction, those who had histories of trauma suppressed more memories than their peers did.

The info is here.

Tuesday, February 18, 2020

Is it okay to sacrifice one person to save many? How you answer depends on where you’re from.

Sigal Samuel
vox.com
Originally posted 24 Jan 20

Here is an excerpt:

It turns out that people across the board, regardless of their cultural context, give the same response when they’re asked to rank the moral acceptability of acting in each case. They say Switch is most acceptable, then Loop, then Footbridge.

That’s probably because in Switch, the death of the worker is an unfortunate side effect of the action that saves the five, whereas in Footbridge, the death of the large man is not a side effect but a means to an end — and it requires the use of personal force against him.

The info is here.

Tuesday, February 4, 2020

Bounded awareness: Implications for ethical decision making

Max H. Bazerman and Ovul Sezer
Organizational Behavior and Human Decision Processes
Volume 136, September 2016, Pages 95-105

Abstract

In many of the business scandals of the new millennium, the perpetrators were surrounded by people who could have recognized the misbehavior, yet failed to notice it. To explain such inaction, management scholars have been developing the area of behavioral ethics and the more specific topic of bounded ethicality—the systematic and predictable ways in which even good people engage in unethical conduct without their own awareness. In this paper, we review research on both bounded ethicality and bounded awareness, and connect the two areas to highlight the challenges of encouraging managers and leaders to notice and act to stop unethical conduct. We close with directions for future research and suggest that noticing unethical behavior should be considered a critical leadership skill.

Bounded Ethicality

Within the broad topic of behavioral ethics is the much more specific topic of bounded ethicality (Chugh, Banaji, & Bazerman, 2005). Chugh et al. (2005) define bounded ethicality as the psychological processes that lead people to engage in ethically questionable behaviors that are inconsistent with their own preferred ethics. That is, if they were more reflective about their choices, they would make a different decision. This definition runs parallel to the concepts of bounded rationality (March & Simon, 1958) and bounded awareness (Chugh & Bazerman, 2007). In all three cases, a cognitive shortcoming keeps the actor from taking the action that she would choose with greater awareness. Importantly, if people overcame these boundaries, they would make decisions that are more in line with their ethical standards. Note that behavioral ethicists do not ask decision makers to follow particular values or rules, but rather try to help decision makers adhere more closely to their own personal values with greater reflection.

The paper can be downloaded here.

Friday, January 31, 2020

Strength of conviction won’t help to persuade when people disagree

Press release
ucl.ac.uk
Originally posted 16 Dec 19

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.

“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.
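
The asymmetry can be captured in a one-line updating rule. The weights below are invented for illustration; this is a sketch of the qualitative finding, not the study’s model.

```python
def final_wager(own_wager, partner_wager, agree):
    """Toy model: a partner's confidence moves the final wager substantially
    under agreement but is nearly ignored under disagreement (invented weights)."""
    influence = 0.5 if agree else 0.05
    return own_wager + influence * partner_wager

print(final_wager(10, 20, agree=True))   # 20.0: a confident, agreeing partner raises the bet
print(final_wager(10, 20, agree=False))  # 11.0: the same confidence, disagreeing, barely registers
```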

The info is here.

Thursday, January 30, 2020

Body Maps of Moral Concerns

Atari, M., Davani, A. M., & Dehghani, M.
(2018, December 4).
https://doi.org/10.31234/osf.io/jkewf

Abstract

The somatosensory reaction to different social circumstances has been proposed to trigger conscious emotional experiences. Here, we present a pre-registered experiment in which we examine the topographical maps associated with violations of different moral concerns. Specifically, participants (N = 596) were randomly assigned to scenarios of moral violations, and then drew their subjective somatosensory experience on two 48,954-pixel silhouettes. We demonstrate that bodily representations of different moral violations are slightly different. Further, we demonstrate that violations of moral concerns are felt in different parts of the body, and arguably result in different somatosensory experiences for liberals and conservatives. We also investigate how individual differences in moral concerns relate to bodily maps of moral violations. Finally, we use natural language processing to predict activation in body parts based on the semantic representation of textual stimuli. The findings shed light on the complex relationships between moral violations and somatosensory experiences.

Wednesday, January 15, 2020

How should we balance morality and the law?

Peter Koch
BCM Blogs
Originally posted 20 Dec 19

I was recently discussing a clinical case with medical students and physicians that involved balancing murky ethical issues and relevant laws. One participant leaned back and said: “Well, if we know the laws, then that’s the end of the story!”

The laws were clear about what ought to (legally) be done, but following the laws in this case would likely produce a bad outcome. We ended up divided about how to proceed with the case, but this discussion raised a bigger question: Exactly how much should we weigh the law in moral deliberations?

The basic distinction between the legal and moral is easy enough to identify. Most people agree that what is legal is not necessarily moral and what is immoral should not necessarily be illegal.

Slavery in the U.S. is commonly used as an example. “Of course,” a good modern citizen will say, “slavery was wrong even when it was legal.” The passing of the 13th Amendment did not make slavery morally wrong; it was wrong already, and the legal structures finally caught up to the moral structures.

There are plenty of acts that are immoral but that should not be illegal. For example, perhaps it is immoral to gossip about your friend’s personal life, but most would agree that this sort of gossip should not be outlawed. The basic distinction between the legal and the moral appears to be simple enough.

Things get trickier, though, when we press more deeply into the matter.

The blog post is here.

Friday, January 10, 2020

Ethically Adrift: How Others Pull Our Moral Compass from True North, and How we Can Fix It

Moore, C., and F. Gino.
Research in Organizational Behavior 33 (2013): 53–77.

Abstract

This chapter is about the social nature of morality. Using the metaphor of the moral compass to describe individuals' inner sense of right and wrong, we offer a framework to help us understand social reasons why our moral compass can come under others' control, leading even good people to cross ethical boundaries. Departing from prior work focusing on the role of individuals' cognitive limitations in explaining unethical behavior, we focus on the socio-psychological processes that function as triggers of moral neglect, moral justification and immoral action, and their impact on moral behavior. In addition, our framework discusses organizational factors that exacerbate the detrimental effects of each trigger. We conclude by discussing implications and recommendations for organizational scholars to take a more integrative approach to developing and evaluating theory about unethical behavior.

From the Summary

Even when individuals are aware of the ethical dimensions of the choices they are making, they may still engage in unethical behavior as long as they recruit justifications for it. In this section, we discussed the role of two social–psychological processes – social comparison and self-verification – that facilitate moral justification, which will lead to immoral behavior. We also discussed three characteristics of organizational life that amplify these social–psychological processes. Specifically, we discussed how organizational identification, group loyalty, and framing or euphemistic language can all affect the likelihood and extent to which individuals justify their actions, by judging them as ethical when in fact they are morally contentious. Finally, we discussed moral disengagement, moral hypocrisy, and moral licensing as intrapersonal consequences of these social facilitators of moral justification.

The paper can be downloaded here.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”
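
For readers curious how such fooling works mechanically, here is a minimal sketch of one standard technique, the fast gradient sign method (FGSM). It runs on a tiny untrained network with random data, so it illustrates the general idea rather than the specific attacks Watson discusses.

```python
import torch
import torch.nn as nn

# A toy classifier and a random stand-in "image"; in a real attack these
# would be a trained DNN and a genuine photo (e.g., of a banana).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([0])  # stand-in for the true class

# Compute the loss and its gradient with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss.
# The change is imperceptible to a human but can flip the model's prediction.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```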

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

The info is here.

Tuesday, December 31, 2019

Our Brains Are No Match for Our Technology

Tristan Harris
The New York Times
Originally posted 5 Dec 19

Here is an excerpt:

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

The info is here.

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and D. M. Søndergaard (Eds.), Designing Robots.

Abstract

This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. More in particular, in the first movement, we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of, these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road-related accidents, but it will be harder to find anyone to take responsibility for what happened—i.e., do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.
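
The first disruption is easy to make vivid in code. The cost function below is invented, and deliberately so: choosing its weights is exactly the controversial step the authors argue cannot be treated as settled once it must be shipped in software.

```python
# A deliberately uncomfortable sketch: a deployed driving policy must return
# SOME answer to dilemmas that ethics leaves open. The weights are invented.
def choose_trajectory(options):
    """Pick the trajectory with the lowest hard-coded 'moral cost'."""
    def cost(outcome):
        return 10 * outcome["expected_fatalities"] + outcome["expected_injuries"]
    return min(options, key=cost)

options = [
    {"name": "swerve", "expected_fatalities": 0.1, "expected_injuries": 2.0},
    {"name": "brake",  "expected_fatalities": 0.2, "expected_injuries": 0.5},
]
print(choose_trajectory(options)["name"])  # the code must answer; ethics need not
```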

The book chapter is here.

Thursday, December 12, 2019

Donald Hoffman: The Case Against Reality

The Institute of Arts and Ideas
Originally published September 8, 2019


Many scientists believe that natural selection brought our perception of reality into clearer and deeper focus, reasoning that growing more attuned to the outside world gave our ancestors an evolutionary edge. Donald Hoffman, a cognitive scientist at the University of California, Irvine, thinks that just the opposite is true. Because evolution selects for survival, not accuracy, he proposes that our conscious experience masks reality behind millennia of adaptions for ‘fitness payoffs’ – an argument supported by his work running evolutionary game-theory simulations. In this interview recorded at the HowTheLightGetsIn Festival from the Institute of Arts and Ideas in 2019, Hoffman explains why he believes that perception must necessarily hide reality for conscious agents to survive and reproduce. With that view serving as a springboard, the wide-ranging discussion also touches on Hoffman’s consciousness-centric framework for reality, and its potential implications for our everyday lives.

Editor’s note: If you work as a mental health professional, this video may be helpful in understanding perception, the self, and consciousness.