Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Language. Show all posts

Thursday, January 11, 2024

The paucity of morality in everyday talk

Atari, M., Mehl, M.R., Graham, J. et al. 
Sci Rep 13, 5967 (2023).


Given its centrality in scholarly and popular discourse, morality should be expected to figure prominently in everyday talk. We test this expectation by examining the frequency of moral content in three contexts, using three methods: (a) Participants’ subjective frequency estimates (N = 581); (b) Human content analysis of unobtrusively recorded in-person interactions (N = 542 participants; n = 50,961 observations); and (c) Computational content analysis of Facebook posts (N = 3822 participants; n = 111,886 observations). In their self-reports, participants estimated that 21.5% of their interactions touched on morality (Study 1), but objectively, only 4.7% of recorded conversational samples (Study 2) and 2.2% of Facebook posts (Study 3) contained moral content. Collectively, these findings suggest that morality may be far less prominent in everyday life than scholarly and popular discourse, and laypeople, presume.
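The computational content analysis in Study 3 boils down to estimating what fraction of texts contain moral language. A minimal, dictionary-based sketch of that idea — the seed list here is hypothetical and far smaller than the lexicons and classifiers the authors actually used:

```python
from typing import Iterable

# Hypothetical, abbreviated seed lexicon -- the study used far richer
# dictionaries and trained classifiers, not this illustrative list.
MORAL_SEEDS = {"fair", "unfair", "honest", "dishonest", "harm",
               "care", "loyal", "betray", "pure", "wrong"}

def has_moral_content(text: str) -> bool:
    """Flag a text if any token matches the moral seed lexicon."""
    tokens = {t.strip(".,!?;:").lower() for t in text.split()}
    return not tokens.isdisjoint(MORAL_SEEDS)

def moral_fraction(posts: Iterable[str]) -> float:
    """Fraction of posts containing at least one moral seed word."""
    posts = list(posts)
    flagged = sum(has_moral_content(p) for p in posts)
    return flagged / len(posts) if posts else 0.0
```

Real pipelines lemmatize, handle negation, and validate against human coders; this toy version only illustrates why the headline numbers (4.7%, 2.2%) are simple proportions over large samples of texts.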


Overall, the findings of this research suggest that morality is far less prevalent in everyday talk than previously assumed. While participants overestimated the frequency of moral content in their self-reports, objective measures revealed that moral topics are relatively rare in everyday conversations and online interactions.

The study's authors propose several explanations for this discrepancy between subjective and objective findings. One possibility is that people tend to remember instances of moral talk more vividly than other types of conversation. Additionally, people may be more likely to report that they engage in moral talk when they are explicitly asked about it, as this may make them more aware of their own moral values.

Regardless of the underlying reasons, the findings of this research suggest that morality is not as prominent in everyday life as is often assumed. This may have implications for how we understand and promote moral behavior in society.

Tuesday, November 7, 2023

Psychologist attitudes towards disclosure and believability of childhood sexual abuse: Can biases affect perception, judgement, and action?

Singh, A., Morrison, B. W., & Morrison, N. M.
(2023). Child Abuse & Neglect, 146, 106506.


The perception of CSA disclosure belief is critical to long-term outcomes for CSA survivors. Despite disclosures often occurring in clinical settings, CSA survivors do not always report a sense of clinician belief in response to their disclosure. Ascertaining the factors that influence clinician belief is essential to improving outcomes.

This study examined whether language (i.e., word choice to describe abuse) and ongoing relationship status with a perpetrator impact perceptions of CSA belief amongst psychologists.

This 2 × 2 within-subjects study examined relationship effects (ongoing versus estranged) and language effects (consensual versus abusive), embedded in fictitious vignettes, on believability. Seventy-five participants completed demographic surveys, rated and discussed belief in four vignettes, and completed validated scales capturing clinician trauma history and CSA myth endorsement.

A significant main effect for relationship was found, with ongoing victim-perpetrator relationships being less believed than depictions of estranged relationships (F(1,3) = 15.57, p = .001, η² = 0.174). While no main effect for language was found (F(1,3) = 0.06, p = .801, η² = 0.001), content analysis of the open-ended items revealed 80% of psychologists reported being influenced by the language manipulations. Correlations revealed male psychologists were less likely to believe disclosures and more likely to endorse CSA myths than females, and psychologists who had engaged in trauma training appeared to have heightened disclosure belief and lower myth endorsement.

While psychologists generally report belief in CSA disclosures, they appear to evaluate specific disclosure aspects to inform this level of belief. Issues around social desirability, measure sensitivity, and learning effects are discussed alongside the importance of trauma training for psychologists.

Here are the important points for mental health professionals:

Enhanced Sensitivity to Biases: Psychologists should be aware of their own biases and how they might affect their perceptions of CSA disclosures. This self-reflection can help mitigate the impact of biases on evaluations and decisions.

Trauma-Informed Training: Providing psychologists with comprehensive trauma-informed training can improve their understanding of CSA, its effects, and appropriate responses to disclosures. This training can foster empathy, reduce skepticism, and enhance the believability of disclosures.

Standardized Assessment Procedures: Implementing standardized assessment protocols for CSA allegations can help ensure consistency and reduce the influence of biases on individual psychologists' judgments.

Support for Survivors: Fostering a supportive and validating environment for CSA survivors is crucial to encourage disclosures and facilitate healing. This involves believing survivors, avoiding judgmental attitudes, and providing appropriate resources and support.

Tuesday, August 22, 2023

The (moral) language of hate

Brendan Kennedy et al.
PNAS Nexus, Volume 2,
Issue 7, July 2023, 210


Humans use language toward hateful ends, inciting violence and genocide, intimidating and denigrating others based on their identity. Despite efforts to better address the language of hate in the public sphere, the psychological processes involved in hateful language remain unclear. In this work, we hypothesize that morality and hate are concomitant in language. In a series of studies, we find evidence in support of this hypothesis using language from a diverse array of contexts, including the use of hateful language in propaganda to inspire genocide (Study 1), hateful slurs as they occur in large text corpora across a multitude of languages (Study 2), and hate speech on social-media platforms (Study 3). In post hoc analyses focusing on particular moral concerns, we found that the type of moral content invoked through hate speech varied by context, with Purity language prominent in hateful propaganda and online hate speech and Loyalty language invoked in hateful slurs across languages. Our findings provide a new psychological lens for understanding hateful language and point to further research into the intersection of morality and hate, with practical implications for mitigating hateful rhetoric online.

Significance Statement

Only recently have researchers begun to propose that violence and prejudice may have roots in moral intuitions. Can it be the case, we ask, that the act of verbalizing hatred involves a moral component, and that hateful and moral language are inseparable constructs? Across three studies focusing on the language of morality and hate, including historical text analysis of Nazi propaganda, implicit associations across 25 languages, and extremist right-wing communications on social media, we demonstrate that moral language, and specifically, Purity-related language (i.e. language about physical purity, avoidance of disgusting things, and resisting our carnal desires in favor of a higher, divine nature) and Loyalty-related language are concomitant with hateful and exclusionary language.


Here are some of the key findings of the study:
  • Hateful language is often associated with moral foundations such as purity, loyalty, and authority.
  • The type of moral content invoked through hate speech varies by context.
  • Purity language is prominent in hateful propaganda and online hate speech.
  • Loyalty language is invoked in hateful slurs across languages.
  • Authority language is invoked in hateful rhetoric that targets political figures or institutions.
The study's findings have important implications for understanding and mitigating hate speech.  By understanding the moral foundations that underlie hateful language, we can develop more effective strategies for countering it. For example, we can challenge the moral claims made by hate speech and offer alternative moral frameworks that promote tolerance and understanding.

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 MAR 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can produce. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But thanks to tools like ChatGPT, which implements GPT-4, being publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place. 

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Monday, April 17, 2023

Generalized Morality Culturally Evolves as an Adaptive Heuristic in Large Social Networks

Jackson, J. C., Halberstadt, J., et al.
(2023, March 22).


Why do people assume that a generous person should also be honest? Why can a single criminal conviction destroy someone’s moral reputation? And why do we even use words like “moral” and “immoral”? We explore these questions with a new model of how people perceive moral character. According to this model, people can vary in the extent that they perceive moral character as “localized” (varying across many contextually embedded dimensions) vs. “generalized” (varying along a single dimension from morally bad to morally good). This variation might be at least partly the product of cultural evolutionary adaptations to predicting cooperation in different kinds of social networks. As networks grow larger and more complex, perceptions of generalized morality are increasingly valuable for predicting cooperation during partner selection, especially in novel contexts. Our studies show that social network size correlates with perceptions of generalized morality in US and international samples (Study 1), and that East African hunter-gatherers with greater exposure outside their local region perceive morality as more generalized compared to those who have remained in their local region (Study 2). We support the adaptive value of generalized morality in large and unfamiliar social networks with an agent-based model (Study 3), and experimentally show that generalized morality outperforms localized morality when people predict cooperation in contexts where they have incomplete information about previous partner behavior (Study 4). Our final study shows that perceptions of morality have become more generalized over the last 200 years of English-language history, which suggests that it may be co-evolving with rising social complexity and anonymity in the English-speaking world (Study 5). We also present several supplemental studies which extend our findings. 
We close by discussing the implications of this theory for the cultural evolution of political systems, religion, and taxonomical theories of morality.

General Discussion

The word “moral” has taken a strange journey over the last several centuries. The word did not yet exist when Plato and Aristotle composed their theories of virtue. It was only when Cicero translated Aristotle’s Nicomachean Ethics that he coined the term “moralis” as the Latin translation of Aristotle’s “ēthikós” (Online Etymology Dictionary, n.d.). It is an ironic slight to Aristotle—who favored concrete particulars in lieu of abstract forms—that the word has become increasingly abstract and all-encompassing throughout its lexical evolution, with a meaning that now approaches Plato’s “form of the good.” We doubt that this semantic drift is a coincidence.

Instead, it may signify a cultural evolutionary shift in people’s perceptions of moral character as increasingly generalized as people inhabit increasingly larger and more unfamiliar social networks. Here we support this perspective with five studies. Studies 1-2 find that social network size correlates with the prevalence of generalized morality. Studies 1a-b explicitly tie beliefs in generalized morality to social network size with large surveys.  Study 2 conceptually replicates this finding in a Hadza hunter-gatherer camp, showing that Hadza hunter-gatherers with more external exposure perceive their campmates using more generalized morality. Studies 3-4 show that generalized morality can be adaptive for predicting cooperation in large and unfamiliar networks. Study 3 is an agent-based model which shows that, given plausible assumptions, generalized morality becomes increasingly valuable as social networks grow larger and less familiar. Study 4 is an experiment which shows that generalized morality is particularly valuable when people interact with unfamiliar partners in novel situations. Finally, Study 5 shows that generalized morality has risen over English-language history, such that words for moral attributes (e.g., fair, loyal, caring) have become more semantically generalizable over the last two hundred years of human history.

Friday, October 28, 2022

Gender and ethnicity bias in medicine: a text analysis of 1.8 million critical care records

David M Markowitz
PNAS Nexus, Volume 1, Issue 4,
September 2022, pg157


Gender and ethnicity biases are pervasive across many societal domains including politics, employment, and medicine. Such biases will facilitate inequalities until they are revealed and mitigated at scale. To this end, over 1.8 million caregiver notes (502 million words) from a large US hospital were evaluated with natural language processing techniques in search of gender and ethnicity bias indicators. Consistent with nonlinguistic evidence of bias in medicine, physicians focused more on the emotions of women compared to men and focused more on the scientific and bodily diagnoses of men compared to women. Content patterns were relatively consistent across genders. Physicians also attended to fewer emotions for Black/African and Asian patients compared to White patients, and physicians demonstrated the greatest need to work through diagnoses for Black/African women compared to other patients. Content disparities were clearer across ethnicities, as physicians focused less on the pain of Black/African and Asian patients compared to White patients in their critical care notes. This research provides evidence of gender and ethnicity biases in medicine as communicated by physicians in the field and requires the critical examination of institutions that perpetuate bias in social systems.
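The kind of language measurement behind these findings can be illustrated with a simple per-note rate. The emotion word list below is hypothetical — the published analysis applied established psycholinguistic dictionaries to 502 million words — but the arithmetic (hits per 1,000 words, averaged within demographic groups) follows the same logic:

```python
import re
from collections import defaultdict

# Hypothetical emotion lexicon; the published analysis used established
# psycholinguistic dictionaries, not this illustrative list.
EMOTION_WORDS = {"anxious", "afraid", "sad", "upset",
                 "happy", "worried", "distressed"}

def emotion_rate(note: str) -> float:
    """Emotion words per 1,000 words in a single caregiver note."""
    tokens = re.findall(r"[a-z']+", note.lower())
    if not tokens:
        return 0.0
    hits = sum(t in EMOTION_WORDS for t in tokens)
    return 1000.0 * hits / len(tokens)

def mean_rate_by_group(notes):
    """Average emotion rate per group from (group, note) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for group, note in notes:
        totals[group] += emotion_rate(note)
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}
```

Comparing these averages across patient groups — with appropriate controls for note length, specialty, and diagnosis, which this sketch omits — is what reveals the disparities reported above.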

Significance Statement

Bias manifests in many social systems, including education, policing, and politics. Gender and ethnicity biases are also common in medicine, though empirical investigations are often limited to small-scale, qualitative work that fails to leverage data from actual patient–physician records. The current research evaluated over 1.8 million caregiver notes and observed patterns of gender and ethnicity bias in language. In these notes, physicians focused more on the emotions of women compared to men, and physicians focused less on the emotions of Black/African patients compared to White patients. These patterns are consistent with other work investigating bias in medicine, though this study is among the first to document such disparities at the language level and at a massive scale.

From the Discussion Section

This evidence is important because it establishes a link between communication patterns and bias that is often unobserved or underexamined in medicine. Bias in medicine has been predominantly revealed through procedural differences among ethnic groups, how patients of different ethnicities perceive their medical treatment, and structures that are barriers-to-entry for women and ethnic minorities. The current work revealed that the language found in everyday caregiver notes reflects disparities and indications of bias—new pathways that can complement other approaches to signal physicians who treat patients inequitably. Caregiver notes, based on their private nature, are akin to medical diaries for physicians as they attend to patients, logging the thoughts, feelings, and diagnoses of medical professionals. Caregivers have the herculean task of tending to those in need, though the current evidence suggests bias and language-based disparities are a part of this system. 

Friday, July 8, 2022

AI bias can arise from annotation instructions

K. Wiggers & D. Coldeway
Originally posted 8 MAY 22

Here is an excerpt:

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all birds in these photos”) along with several examples.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate and otherwise analyze or manipulate text. In studying the task instructions provided to annotators that worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
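The Quoref observation above is essentially a measurement of how often annotations begin with a phrase lifted from the instructions. A minimal sketch of that check (function names are illustrative, not from the paper's code):

```python
from collections import Counter

def prefix_prevalence(annotations, prefix):
    """Fraction of annotations beginning with a phrase (case-insensitive)."""
    if not annotations:
        return 0.0
    p = prefix.lower()
    hits = sum(a.lower().startswith(p) for a in annotations)
    return hits / len(annotations)

def common_openings(annotations, n=3, top=3):
    """Most frequent opening n-word phrases across annotations."""
    openings = Counter(
        " ".join(a.lower().split()[:n]) for a in annotations if a.split()
    )
    return openings.most_common(top)
```

Running a check like this against both the annotations and the instruction examples is one cheap way for dataset creators to detect instruction bias before release.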

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of systems and that these systems often fail to generalize beyond instruction patterns.

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating the downstream impact.

Monday, May 23, 2022

Recognizing and Dismantling Raciolinguistic Hierarchies in Latinx Health

Ortega, P., et al.
AMA J Ethics. 2022;24(4):E296-304.
doi: 10.1001/amajethics.2022.296.


Latinx individuals represent a linguistically and racially diverse, growing US patient population. Raciolinguistics considers intersections of language and race, prioritizes lived experiences of non-English speakers, and can help clinicians more deftly conceptualize heterogeneity and complexity in Latinx health experiences. This article discusses how raciolinguistic hierarchies (ie, practices of attaching social value to some languages but not others) can undermine the quality of Latinx patients’ health experiences. This article also offers language-appropriate clinical and educational strategies for promoting health equity.


Hispanic/Latinx (hereafter, Latinx) individuals in the United States represent a culturally, racially, and linguistically diverse and rapidly growing population. Attempting to categorize all Latinx individuals in a single homogeneous group may result in inappropriate stereotyping,1 inaccurate counting,2, 3 ineffective health interventions that insufficiently target at-risk subgroups,4 and suboptimal health communication.5 A more helpful approach is to use raciolinguistics to conceptualize the heterogeneous, complex Latinx experience as it relates to health. Raciolinguistics is the study of the historical and contemporary co-naturalization of race and language and their intertwining in the identities of individuals and communities. As an emerging field that grapples with the intersectionality of language and race, raciolinguistics provides a unique perspective on the lived experiences of people who speak non-English languages and people of color.6 As such, understanding raciolinguistics is relevant to providing language-concordant care7 to patients with limited English proficiency (LEP), who have been historically marginalized by structural barriers, racism, and other forms of discrimination in health care.

In this manuscript, we explore how raciolinguistics can help clinicians to appropriately conceptualize the heterogeneous, complex Latinx experience as it relates to health care. We then use the raciolinguistic perspective to inform strategies to dismantle structural barriers to health equity for Latinx patients pertaining to (1) Latinx patients’ health care experiences and (2) medical education.



A raciolinguistic perspective can inform how health care practices and medical education should be critically examined to support Latinx populations comprising heterogeneous communities and complex individuals with varying and intersecting cultural, social, linguistic, racial, ancestral, spiritual, and other characteristics. Future studies should explore the outcomes of raciolinguistic reforms of health services and educational interventions across the health professions to ensure effectiveness in improving health care for Latinx patients.

Thursday, October 14, 2021

A Minimal Turing Test

McCoy, J. P., and Ullman, T.D.
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 1-8


We introduce the Minimal Turing Test, an experimental paradigm for studying perceptions and meta-perceptions of different social groups or kinds of agents, in which participants must use a single word to convince a judge of their identity. We illustrate the paradigm by having participants act as contestants or judges in a Minimal Turing Test in which contestants must convince a judge they are a human, rather than an artificial intelligence. We embed the production data from such a large-scale Minimal Turing Test in a semantic vector space, and construct an ordering over pairwise evaluations from judges. This allows us to identify the semantic structure in the words that people give, and to obtain quantitative measures of the importance that people place on different attributes. Ratings from independent coders of the production data provide additional evidence for the agency and experience dimensions discovered in previous work on mind perception. We use the theory of Rational Speech Acts as a framework for interpreting the behavior of contestants and judges in the Minimal Turing Test.
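Embedding contestants' words in a semantic vector space lets researchers compare words by cosine similarity. As an illustration only — with made-up 3-dimensional vectors rather than the corpus-trained embeddings the study used — the core computation looks like:

```python
import math

# Toy 3-d "semantic" vectors for illustration only -- the study embedded
# words in a high-dimensional space learned from large text corpora.
TOY_VECTORS = {
    "love":    (0.9, 0.1, 0.2),
    "empathy": (0.8, 0.2, 0.3),
    "robot":   (0.1, 0.9, 0.7),
}

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

Clustering single-word answers by similarity in such a space is what exposes the semantic structure — e.g., emotion words grouping apart from cognition words — that the paper analyzes.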

Sunday, May 23, 2021

Moral concerns are differentially observable in language

Kennedy, B., et al. (2020, May 7). 


Language is a psychologically rich medium for human expression and communication. While language usage has been shown to be a window into various aspects of people's social worlds, including their personality traits and everyday environment, its correspondence to people's moral concerns has yet to be considered. Here, we examine the relationship between language usage and the moral concerns of Care, Fairness, Loyalty, Authority, and Purity as conceptualized by Moral Foundations Theory. We collected Facebook status updates (N = 107,798) from English-speaking participants (n = 2,691) along with their responses on the Moral Foundations Questionnaire. Overall, results suggested that self-reported moral concerns may be traced in language usage, though the magnitude of this effect varied considerably among moral concerns. Across a diverse selection of Natural Language Processing methods, Fairness concerns were consistently least correlated with language usage whereas Purity concerns were found to be the most traceable. In exploratory follow-up analyses, each moral concern was found to be differentially related to distinct patterns of relational, emotional, and social language. Our results are the first to relate individual differences in moral concerns to language usage, and to uncover the signatures of moral concerns in language.
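At its core, "tracing" a moral concern in language means correlating a language-derived score with a questionnaire score across participants. The study used several NLP methods; the Pearson correlation sketched below is only the simplest ingredient of such an analysis:

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Plain Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Here `x` would be, say, each participant's rate of Purity-related words on Facebook and `y` their Purity score on the Moral Foundations Questionnaire; a larger r for Purity than for Fairness is what "more traceable" means operationally.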


Among the five moral foundations (Care, Fairness, Loyalty, Authority, and Purity), Purity concerns are most traceable in social media language. Fairness concerns, on the other hand, are least traceable. Individuals who highly endorsed Purity shared religious and spiritual content on Facebook, whereas people who scored higher on Fairness were slightly more likely to share content related to social justice and equality. High levels of Care, Loyalty, and Authority were found to motivate a mixed collection of socially-oriented language categories. The link between moral concerns and language was found to extend beyond exclusively moral language. Overall, this research establishes a missing link in moral psychology by providing evidence that individual-level moral concerns are differentially associated with language data collected from individuals’ Facebook accounts.

1. Moral concerns are observable in language.
2. This signal is differential: each moral domain maps onto a distinct linguistic signature.
3. Exclusive moral language is not a great predictor of individuals' moral concerns.

Tuesday, December 29, 2020

Effects of Language on Visual Perception

Lupyan, G., et al. (2020, April 28). 


Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition, and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, we discuss how effects of language on perception naturally arise from the interactive and predictive nature of perception.

  • Our ability to detect, discriminate, and recognize perceptual stimuli is influenced both by their physical features and our prior experiences.
  • One potent prior experience is language. How might learning a language affect perception?
  • We review evidence of linguistic effects on perception, focusing on the effects of language on visual recognition, discrimination, and detection.
  • Language exerts both off-line and on-line effects on visual processing; these effects naturally emerge from taking a predictive processing approach to perception.
In sum, language shapes perception in terms of higher-level processes (recognition) and lower-level processes (discrimination and detection).

Very important research for psychotherapy and the language we use.

Thursday, September 17, 2020

In this election, ‘costly signal deployment’

Christina Pazzanese
Harvard Gazette
Originally posted 15 Sept 20

Here is an excerpt:


Trump isn’t merely saying things that his base likes to hear. All politicians do that, and to the extent that they can do so honestly, that’s exactly what they are supposed to do. But Trump does more than this in his use of “costly signals.” A tattoo is a costly signal. You can tell your romantic partner that you love them, but there’s nothing stopping you from changing your mind the next day. But if you get a tattoo of your partner’s name, you’ve sent a much stronger signal about how committed you are. Likewise, a gang tattoo binds you to the gang, especially if it’s in a highly visible place such as the neck or the face. It makes you scary and unappealing to most people, limiting your social options, and thus, binding you to the gang. Trump’s blatant bigotry, misogyny, and incitements to violence make him completely unacceptable to liberals and moderates. And, thus, his comments function like gang tattoos. He’s not merely saying things that his supporters want to hear. By making himself permanently and unequivocally unacceptable to the opposition, he’s “proving” his loyalty to their side. This is why, I think, the Republican base trusts Trump like no other.

There is costly signaling on the left, but it’s not coming from Biden, who is trying to appeal to as many voters as possible. Bernie Sanders is a better example. Why does Bernie Sanders call himself a socialist? What he advocates does not meet the traditional dictionary definition of socialism. And politicians in Europe who hold similar views typically refer to themselves as “social democrats” rather than “democratic socialists.” “Socialism” has traditionally been a scare word in American politics. Conservatives use it as an epithet to describe policies such as the Affordable Care Act, which, ironically, is very much a market-oriented approach to achieving universal health insurance. It’s puzzling, then, that a politician would choose to describe himself with a scare word when he could accurately describe his views with less-scary words. But it makes sense if one thinks of this as a costly signal. By calling himself a socialist, Sanders makes it very clear where his loyalty lies, as vanishingly few Republicans would support someone who calls himself a socialist.

Wednesday, April 22, 2020

Your Code of Conduct May Be Sending the Wrong Message

F. Gino, M, Kouchaki, & Y. Feldman
Harvard Business Review
Originally posted 13 March 20

Here is an excerpt:

We examined the relationship between the language used (personal or impersonal) in these codes and corporate illegality. Research assistants blind to our research questions and hypotheses coded each document based on the degree to which it used “we” or “member/employee” language. Next, we searched media sources for any type of illegal acts these firms may have been involved in, such as environmental violations, anticompetitive actions, false claims, and fraudulent actions. Our analysis showed that firms that used personal language in their codes of conduct were more likely to be found guilty of illegal behaviors.
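The personal-versus-impersonal distinction the coders applied can be roughly approximated by counting word classes. A crude sketch, with hypothetical word lists standing in for the human coding scheme:

```python
import re

# Hypothetical word lists; the study's research assistants coded
# documents by hand rather than with fixed lexicons.
PERSONAL = {"we", "us", "our", "ours"}
IMPERSONAL = {"employee", "employees", "member", "members", "staff"}

def personal_language_index(code_text: str) -> float:
    """Share of personal references among personal + impersonal ones:
    1.0 = purely 'we' language, 0.0 = purely 'employee/member' language."""
    tokens = re.findall(r"[a-z]+", code_text.lower())
    p = sum(t in PERSONAL for t in tokens)
    i = sum(t in IMPERSONAL for t in tokens)
    return p / (p + i) if (p + i) else 0.5
```

An index like this, computed over each firm's code of conduct, is the sort of predictor the authors then related to records of corporate illegality.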

We found this initial evidence to be compelling enough to dig further into the link between personal “we” language and unethical behavior. What would explain such a link? We reasoned that when language communicating ethical standards is personal, employees tend to assume they are part of a community where members are easygoing, helpful, cooperative, and forgiving. By contrast, when the language is impersonal — for example, “organizational members are expected to put customers first” — employees feel they are part of a transactional relationship in which members are more formal and distant.

Here’s the problem: When we view our organization as tolerant and forgiving, we believe we’re less likely to be punished for misconduct. Across nine different studies, using data from lab- and field-based experiments as well as a large dataset of S&P firms, we find that personal language (“we,” “us”) leads to less ethical behavior than impersonal language (“employees,” “members”) does, apparently because people encountering more personal language believe their organization is less serious about punishing wrongdoing.
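The coding task the authors describe, classifying a code of conduct by how much it relies on personal ("we," "us") versus impersonal ("employees," "members") language, can be sketched as a simple word-frequency score. This is only an illustrative toy, not the authors' actual coding scheme; the word lists and the scoring function are assumptions for demonstration.

```python
import re

# Hypothetical word lists, loosely inspired by the article's distinction;
# the study's real human-coding protocol is not described in this excerpt.
PERSONAL = {"we", "us", "our", "ours"}
IMPERSONAL = {"employee", "employees", "member", "members", "staff"}

def language_score(text: str) -> float:
    """Share of personal terms among all personal + impersonal terms
    found in `text`; values near 1.0 indicate 'we'-style language."""
    tokens = re.findall(r"[a-z']+", text.lower())
    personal = sum(t in PERSONAL for t in tokens)
    impersonal = sum(t in IMPERSONAL for t in tokens)
    total = personal + impersonal
    return personal / total if total else 0.0

personal_code = "We put our customers first, and we support each other."
impersonal_code = "Employees are expected to put customers first."
```

On these two example sentences, the first scores 1.0 (all personal terms) and the second 0.0, mirroring the "we" versus "organizational members" contrast in the excerpt.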

The info is here.

Tuesday, March 10, 2020

Three Unresolved Issues in Human Morality

Jerome Kagan
Perspectives on Psychological Science
First Published March 28, 2018


This article discusses three major, but related, controversies surrounding the idea of morality. Is the complete pattern of features defining human morality unique to this species? How context dependent are moral beliefs and the emotions that often follow a violation of a moral standard? What developmental sequence establishes a moral code? This essay suggests that human morality rests on a combination of cognitive and emotional processes that are missing from the repertoires of other species. Second, the moral evaluation of every behavior, whether by self or others, depends on the agent, the action, the target of the behavior, and the context. The ontogeny of morality, which begins with processes that apes possess but adds language, inference, shame, and guilt, implies that humans are capable of experiencing blends of thoughts and feelings for which no semantic term exists. As a result, conclusions about a person’s moral emotions based only on questionnaires or interviews are limited to this evidence.

From the Summary

The human moral sense appears to contain some features not found in any other animal. The judgment of a behavior as moral or immoral, by self or community, depends on the agent, the action, and the setting. The development of a moral code involves changes in both cognitive and affective processes that are the result of maturation and experience. The ideas in this essay have pragmatic implications for psychological research. If most humans want others to regard them as moral agents, and, therefore, good persons, their answers to questionnaires or to interviewers as well as behaviors in laboratories will tend to conform to their understanding of what the examiner regards as the society’s values. That is why investigators should try to gather evidence on the behaviors that their participants exhibit in their usual settings.

The article is here.

Tuesday, January 14, 2020

Emotion semantics show both cultural variation and universal structure

Jackson, J. C., Watts, J., et al.
Science  20 Dec 2019:
Vol. 366, Issue 6472, pp. 1517-1522
DOI: 10.1126/science.aaw8160


Many human languages have words for emotions such as “anger” and “fear,” yet it is not clear whether these emotions have similar meanings across languages, or why their meanings might vary. We estimate emotion semantics across a sample of 2474 spoken languages using “colexification”—a phenomenon in which languages name semantically related concepts with the same word. Analyses show significant variation in networks of emotion concept colexification, which is predicted by the geographic proximity of language families. We also find evidence of universal structure in emotion colexification networks, with all families differentiating emotions primarily on the basis of hedonic valence and physiological activation. Our findings contribute to debates about universality and diversity in how humans understand and experience emotion.
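The abstract's key construct, a colexification network, can be illustrated with a minimal sketch: each language's lexicon maps word forms to the concepts they name, and an edge between two concepts is counted whenever some word names both. The toy lexicons below are invented for illustration and are not the study's data.

```python
from collections import defaultdict
from itertools import combinations

# Toy lexicons (illustrative only): word form -> set of emotion
# concepts that word names in that language.
lexicons = {
    "lang_A": {"w1": {"anger", "envy"}, "w2": {"fear"}},
    "lang_B": {"v1": {"anger", "hate"}, "v2": {"fear", "anxiety"}},
}

def colexification_edges(lexicons):
    """Count, across languages, how often each pair of concepts is
    named by the same word (i.e., colexified)."""
    edges = defaultdict(int)
    for words in lexicons.values():
        for concepts in words.values():
            for a, b in combinations(sorted(concepts), 2):
                edges[(a, b)] += 1
    return dict(edges)
```

Aggregating such edge counts over many languages, and comparing the resulting networks across language families, is the basic shape of the analysis the abstract describes.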

Monday, December 9, 2019

The rise of the greedy-brained ape: Book Review

Tim Radford
Originally published 30 Oct 19

Here is an excerpt:

For her hugely enjoyable sprint through human evolutionary history, Vince (erstwhile news editor of this journal) intertwines many threads: language and writing; the command of tools, pursuit of beauty and appetite for trinkets; and the urge to build things, awareness of time and pursuit of reason. She tracks the cultural explosion, triggered by technological discovery, that gathered pace with the first trade in obsidian blades in East Africa at least 320,000 years ago. That has climaxed this century with the capacity to exploit 40% of the planet’s total primary production.

How did we do it? Vince examines, for instance, our access to and use of energy. Other primates must chew for five hours a day to survive. Humans do so for no more than an hour. We are active 16 hours a day, a tranche during which other mammals sleep. We learn by blind variation and selective retention. Vince proposes that our ancestors enhanced that process of learning from each other with the command of fire: it is 10 times more efficient to eat cooked meat than raw, and heat releases 50% of all the carbohydrates in cereals and tubers.

Thus Homo sapiens secured survival and achieved dominance by exploiting extra energy. The roughly 2,000 calories ideally consumed by one human each day generate about 90 watts: enough energy for one incandescent light bulb. At the flick of a switch or turn of a key, the average human now has access to roughly 2,300 watts of energy from the hardware that powers our lives — and the richest have much more.
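The review's wattage figure is a simple unit conversion worth checking: a food calorie (kcal) is 4,184 joules, and average power is daily energy intake divided by the seconds in a day. A quick sketch:

```python
# Back-of-the-envelope check of the review's figure.
KCAL_IN_JOULES = 4184        # one food Calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 60 * 60

def daily_kcal_to_watts(kcal: float) -> float:
    """Average power output implied by a daily food-energy intake."""
    return kcal * KCAL_IN_JOULES / SECONDS_PER_DAY
```

Running this for 2,000 kcal/day gives roughly 97 W, consistent with the article's round figure of "about 90 watts" for a human's metabolic power.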

The book review is here.

Sunday, October 27, 2019

Language Is the Scaffold of the Mind

Anna Ivanova
Originally posted September 26, 2019

Can you imagine a mind without language? More specifically, can you imagine your mind without language? Can you think, plan, or relate to other people if you lack words to help structure your experiences?

Many great thinkers have drawn a strong connection between language and the mind. Oscar Wilde called language “the parent, and not the child, of thought”; Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world”; and Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.”

After all, language is what makes us human, what lies at the root of our awareness, our intellect, our sense of self. Without it, we cannot plan, cannot communicate, cannot think. Or can we?

Imagine growing up without words. You live in a typical industrialized household, but you are somehow unable to learn the language of your parents. That means that you do not have access to education; you cannot properly communicate with your family other than through a set of idiosyncratic gestures; you never get properly exposed to abstract ideas such as “justice” or “global warming.” All you know comes from direct experience with the world.

It might seem that this scenario is purely hypothetical. There aren’t any cases of language deprivation in modern industrialized societies, right? It turns out there are. Many deaf children born into hearing families face exactly this issue. They cannot hear and, as a result, do not have access to their linguistic environment. Unless the parents learn sign language, the child’s language access will be delayed and, in some cases, missing completely.

The info is here.

Wednesday, October 2, 2019

Seven Key Misconceptions about Evolutionary Psychology

Laith Al-Shawaf
Originally published August 20, 2019

Evolutionary approaches to psychology hold the promise of revolutionizing the field and unifying it with the biological sciences. But among both academics and the general public, a few key misconceptions impede its application to psychology and behavior. This essay tackles the most pervasive of these.

Misconception 1: Evolution and Learning Are Conflicting Explanations for Behavior

People often assume that if something is learned, it’s not evolved, and vice versa. This is a misleading way of conceptualizing the issue, for three key reasons.

First, many evolutionary hypotheses are about learning. For example, the claim that humans have an evolved fear of snakes and spiders does not mean that people are born with this fear. Instead, it means that humans are endowed with an evolved learning mechanism that acquires a fear of snakes more easily and readily than other fears. Classic studies in psychology show that monkeys can acquire a fear of snakes through observational learning, and they tend to acquire it more quickly than a similar fear of other objects, such as rabbits or flowers. It is also harder for monkeys to unlearn a fear of snakes than it is to unlearn other fears. As with monkeys, the hypothesis that humans have an evolved fear of snakes does not mean that we are born with this fear. Instead, it means that we learn this fear via an evolved learning mechanism that is biologically prepared to acquire some fears more easily than others.

Second, learning is made possible by evolved mechanisms instantiated in the brain. We are able to learn because we are equipped with neurocognitive mechanisms that enable learning to occur—and these neurocognitive mechanisms were built by evolution. Consider the fact that both children and puppies can learn, but if you try to teach them the same thing—French, say, or game theory—they end up learning different things. Why? Because the dog’s evolved learning mechanisms are different from those of the child. What organisms learn, and how they learn it, depends on the nature of the evolved learning mechanisms housed in their brains.

The info is here.

Thursday, August 1, 2019

Google Contractors Listen to Recordings of People Using Virtual Assistant

Sarah E. Needleman and Parmy Olson
The Wall Street Journal
Originally posted July 11, 2019

Here are two excerpts:

In a blog post Thursday, Google confirmed it employs people world-wide to listen to a small sample of recordings.

The public broadcaster’s report said the recordings potentially expose sensitive information about users such as names and addresses.

It also said Google, in some cases, is recording voices of customers even when they aren’t using Google Assistant [emphasis added].

In its blog post, Google said language experts listen to 0.2% of “audio snippets” taken from the Google Assistant to better understand different languages, accents and dialects.


It is common practice for makers of virtual assistants to record and listen to some of what their users say so they can improve on the technology, said Bret Kinsella, chief executive of Voicebot.ai, a research firm focused on voice technology and artificial intelligence.

“Anything with speech recognition, you generally have humans at one point listening and annotating to sort out what types of errors are occurring,” he said.

In May, however, a coalition of privacy and child-advocacy groups filed a complaint with federal regulators about Amazon potentially preserving conversations of young users through its Echo Dot Kids devices.

The info is here.

Saturday, June 1, 2019

Does It Matter Whether You or Your Brain Did It?

Uri Maoz, K. R. Sita, J. J. A. van Boxtel, and L. Mudrik
Front. Psychol., 30 April 2019


Despite progress in cognitive neuroscience, we are still far from understanding the relations between the brain and the conscious self. We previously suggested that some neuroscientific texts that attempt to clarify these relations may in fact make them more difficult to understand. Such texts—ranging from popular science to high-impact scientific publications—position the brain and the conscious self as two independent, interacting subjects, capable of possessing opposite psychological states. We termed such writing ‘Double Subject Fallacy’ (DSF). We further suggested that such DSF language, besides being conceptually confusing and reflecting dualistic intuitions, might affect people’s conceptions of moral responsibility, lessening the perception of guilt over actions. Here, we empirically investigated this proposition with a series of three experiments (pilot and two preregistered replications). Subjects were presented with moral scenarios where the defendant was either (1) clearly guilty, (2) ambiguous, or (3) clearly innocent while the accompanying neuroscientific evidence about the defendant was presented using DSF or non-DSF language. Subjects were instructed to rate the defendant’s guilt in all experiments. Subjects rated the defendant in the clearly guilty scenario as guiltier than in the two other scenarios and the defendant in the ambiguously described scenario as guiltier than in the innocent scenario, as expected. In Experiment 1 (N = 609), an effect was further found for DSF language in the expected direction: subjects rated the defendant less guilty when the neuroscientific evidence was described using DSF language, across all levels of culpability. However, this effect did not replicate in Experiment 2 (N = 1794), which focused on a different moral scenario, nor in Experiment 3 (N = 1810), which was an exact replication of Experiment 1. Bayesian analyses yielded strong evidence against the existence of an effect of DSF language on the perception of guilt. Our results thus challenge the claim that DSF language affects subjects’ moral judgments. They further demonstrate the importance of good scientific practice, including preregistration and—most critically—replication, to avoid reaching erroneous conclusions based on false-positive results.