Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, February 18, 2024

Amazon AGI Team Say Their AI is Showing "Emergent Properties"

Noor Al-Sibai
Futurism.com
Originally posted 15 Feb 24

A new Amazon AI model, according to the researchers who built it, is exhibiting language abilities that it wasn't trained on.

In a not-yet-peer-reviewed academic paper, the team at Amazon AGI — which stands for "artificial general intelligence," or human-level AI — say their large language model (LLM) is exhibiting "state-of-the-art naturalness" at conversational text. Per the examples shared in the paper, the model does seem sophisticated.

As the paper indicates, the model was able to come up with all sorts of sentences that, according to criteria crafted with the help of an "expert linguist," showed it was making the types of language leaps that are natural in human language learners but have been difficult to obtain in AI.

Named "Big Adaptive Streamable TTS with Emergent abilities" or BASE TTS, the initial model was trained on 100,000 hours of "public domain speech data," 90 percent in English, to teach it how Americans talk. To test out how large models would need to be to show "emergent abilities," or abilities they were not trained on, the Amazon AGI team trained two smaller models, one on 1,000 hours of speech data and another on 10,000, to see which of the three — if any — exhibited the type of language naturalness they were looking for.


My overall conclusion from the paper linked in the article:

BASE TTS (Text To Speech) represents a significant leap forward in TTS technology, offering superior naturalness, efficiency, and potential for real-world applications like voicing LLM outputs. While limitations exist, the research paves the way for future advancements in multilingual, data-efficient, and context-aware TTS models.

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J.
(2023, July 8). PsyArXiv
https://doi.org/10.31234/osf.io/vzwrn

Abstract

To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.


The computational structure of the self is a key component of human intelligence. The authors propose a framework for reverse-engineering the self, drawing on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a POMDP (partially observable Markov decision process), a mathematical model used to represent sequential decision-making in which the decision-maker does not have complete information about the environment; a minimal sketch of this structure appears after the list below. They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
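To make the POMDP framing concrete, here is a minimal Python sketch of a discrete POMDP and its Bayesian belief update. The structure and names are illustrative assumptions for exposition, not code from the paper.

```python
# A minimal discrete POMDP and belief update (illustrative sketch only).
from dataclasses import dataclass

@dataclass
class POMDP:
    states: list        # hidden states the agent may occupy
    actions: list
    observations: list
    T: dict             # T[(s, a)][s2]: P(next state s2 | state s, action a)
    O: dict             # O[(a, s2)][o]: P(observation o | action a, state s2)
    R: dict             # R[(s, a)]: immediate reward

def belief_update(m, belief, action, obs):
    """Bayesian filter: revise P(state) after acting and observing."""
    new = {}
    for s2 in m.states:
        # Predict: sum over prior states, then weight by observation likelihood.
        pred = sum(m.T.get((s, action), {}).get(s2, 0.0) * belief.get(s, 0.0)
                   for s in m.states)
        new[s2] = m.O.get((action, s2), {}).get(obs, 0.0) * pred
    z = sum(new.values())
    return {s: p / z for s, p in new.items()} if z > 0 else dict(belief)
```

On this framing, "re-centering" would correspond to the agent relocating itself within the belief state it maintains over possible environments.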
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.

Tuesday, August 8, 2023

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E. A., et al.
Nat Rev Psychol 1, 524–536 (2022).

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

(cut)

Individual benefits

Intellectual humility might also have direct consequences for individuals’ wellbeing. People who reason about social conflicts in an intellectually humbler manner and consider others’ perspectives (components of wise reasoning) are more likely to report higher levels of life satisfaction and less negative affect compared to people who do not. Leaders who are higher in intellectual humility are also higher in emotional intelligence and receive higher satisfaction ratings from their followers, which suggests that intellectual humility could benefit professional life. Nonetheless, intellectual humility is not associated with personal wellbeing in all contexts: religious leaders who see their religious beliefs as fallible have lower wellbeing relative to leaders who are less intellectually humble in their beliefs.

Intellectual humility might also help people to make well informed decisions. Intellectually humbler people are better able to differentiate between strong and weak arguments, even if those arguments go against their initial beliefs. Intellectual humility might also protect against memory distortions. Intellectually humbler people are less likely to claim falsely that they have seen certain statements before. Likewise, intellectually humbler people are more likely to scrutinize misinformation and are more likely to intend to receive the COVID-19 vaccine.

Lastly, intellectual humility is positively associated with knowledge acquisition, learning and educational achievement. Intellectually humbler people are more motivated to learn and more knowledgeable about general facts. Likewise, intellectually humbler high school and university students expend greater effort when learning difficult material, are more receptive to assignment feedback and earn higher grades.

Despite evidence of individual benefits associated with intellectual humility, much of this work is correlational. Thus, associations could be the product of confounding factors such as agreeableness, intelligence or general virtuousness. Longitudinal or experimental studies are needed to address the question of whether and under what circumstances intellectual humility promotes individual benefits. Notably, philosophical theorizing about the situation-specific virtuousness of the construct suggests that high levels of intellectual humility are unlikely to benefit all people in all situations.


What is intellectual humility? Intellectual humility is the ability to recognize the limits of one's knowledge and to be open to new information and perspectives.

Predictors of intellectual humility: There are a number of factors that can predict intellectual humility, including:
  • Personality traits: People who are high in openness to experience and agreeableness are more likely to be intellectually humble.
  • Cognitive abilities: People who are better at thinking critically and evaluating evidence are also more likely to be intellectually humble.
  • Cultural factors: People who live in cultures that value open-mindedness and tolerance are more likely to be intellectually humble.
Consequences of intellectual humility: Intellectual humility has a number of positive consequences, including:
  • Better decision-making: Intellectually humble people are more likely to make better decisions because they are more open to new information and perspectives.
  • Enhanced learning: Intellectually humble people are more likely to learn from their mistakes and to grow as individuals.
  • Stronger relationships: Intellectually humble people are more likely to have strong relationships because they are more willing to listen to others and to consider their perspectives.

Overall, intellectual humility is a valuable trait that can lead to a number of positive outcomes.

Monday, May 22, 2023

New evaluation guidelines for dementia

The Monitor on Psychology
Vol. 54, No. 3
Print Version: Page 40

Updated APA guidelines are now available to help psychologists evaluate patients with dementia and their caregivers with accuracy and sensitivity and learn about the latest developments in dementia science and practice.

APA Guidelines for the Evaluation of Dementia and Age-Related Cognitive Change (PDF, 992KB) was released in 2021 and reflects updates in the field since the last set of guidelines, released in 2011, said geropsychologist and University of Louisville professor Benjamin T. Mast, PhD, ABPP, who chaired the task force that produced the guidelines.

“These guidelines aspire to help psychologists gain not only a high level of technical expertise in understanding the latest science and procedures for evaluating dementia,” he said, “but also have a high level of sensitivity and empathy for those undergoing a life change that can be quite challenging.”

Major updates since 2011 include:

Discussion of new DSM terminology. The new guidelines discuss changes in dementia diagnosis and diagnostic criteria reflected in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition). In particular, the DSM-5 changed the term “dementia” to “major neurocognitive disorder,” and “mild cognitive impairment” to “mild neurocognitive disorder.” As was true with earlier nomenclature, providers and others amend these terms depending on the cause or causes of the disorder, for example, “major neurocognitive disorder due to traumatic brain injury.” That said, the terms “dementia” and “mild cognitive impairment” are still widely used in medicine and mental health care.

Discussion of new research guidelines. The new guidelines also discuss research advances in the field, in particular the use of biomarkers to detect various forms of dementia. Examples are the use of amyloid imaging—PET scans with a radio tracer that selectively binds to amyloid plaques—and analysis of amyloid and tau in cerebrospinal fluid. While these techniques are still mainly used in major academic medical centers, it is important for clinicians to know about them because they may eventually be used in clinical practice, said Bonnie Sachs, PhD, ABPP, an associate professor and neuropsychologist at Wake Forest University School of Medicine. “These developments change the way we think about things like Alzheimer’s disease, because they show there is a long preclinical asymptomatic phase before people start to show memory problems,” she said.

Sunday, January 1, 2023

The Central Role of Lifelong Learning & Humility in Clinical Psychology

Washburn, J. J., Teachman, B. A., et al. 
(2022). Clinical Psychological Science, 0(0).
https://doi.org/10.1177/21677026221101063

Abstract

Lifelong learning plays a central role in the lives of clinical psychologists. As psychological science advances and evidence-based practices develop, it is critical for clinical psychologists to not only maintain their competencies but to also evolve them. In this article, we discuss lifelong learning as a clinical, ethical, and scientific imperative in the myriad dimensions of the clinical psychologist’s professional life, arguing that experience alone is not sufficient. Attitude is also important in lifelong learning, and we call for clinical psychologists to adopt an intellectually humble stance and embrace “a beginner’s mind” when approaching new knowledge and skills. We further argue that clinical psychologists must maintain and refresh their critical-thinking skills and seek to minimize their biases, especially as they approach the challenges and opportunities of lifelong learning. We intend for this article to encourage psychologists to think differently about how they approach lifelong learning.

Here is an excerpt:

Schwartz (2008) was specifically referencing the importance of teaching graduate students to embrace what they do not know, viewing it as an opportunity instead of a threat. The same is true, perhaps even more so, for psychologists engaging in lifelong learning.

As psychologists progress in their careers, they are told repeatedly that they are experts in their field and sometimes THE expert in their own tiny subfield. Psychologists spend their days teaching others what they know and advising students how to make their own discoveries. But expertise is a double-edged sword. Of course, it serves psychologists well in that they are less likely to repeat past mistakes, but it is a disadvantage if they become too comfortable in their expert role. The Greco-Egyptian astronomer Ptolemy devised a system based on the notion that the sun revolved around the earth that guided astronomers for centuries until Copernicus proved him wrong. Although Newton devised the laws of physics, Einstein showed that the principles of Newtonian physics were wholly bound by context and only “right” within certain constraints. Science is inherently self-correcting, and the only thing that one can count on is that most of what people believe today will be shown to be wrong in the not-too-distant future. One of the authors (S. D. Hollon) recalls that the two things that he knew for sure coming out of graduate school were that neural tissues do not regenerate and that you cannot inherit acquired characteristics. It turns out that both are wrong. Lifelong learning and the science it is based on require psychologists to continuously challenge their expertise. Before becoming experts, psychologists often experience impostor phenomenon during education and training (Rokach & Boulazreg, 2020). Embracing the self-doubt that comes with feeling like an impostor can motivate lifelong learning, even for areas in which one feels like an expert. This means not only constantly learning about new topics but also recognizing that as psychologists tackle tough problems and their associated research questions, complex and often interdisciplinary approaches are required to develop meaningful answers. It is neither feasible nor desirable to become an expert in all domains. This means that psychologists need to routinely surround themselves with people who make them question or expand their expertise.

Here is the conclusion:

Lifelong learning should, like doctoral programs in clinical psychology, concentrate much more on thinking than training. Lifelong learning must encourage critical and independent thinking in the process of mastering relevant bodies of knowledge and the development of specific skills. Specifically, lifelong learning must reinforce the need for clinical psychologists to reflect carefully and critically on what they read, hear, and say and to think abstractly. Such abstract thinking is as relevant after one’s graduate career as before.

Saturday, November 12, 2022

Loss aversion, the endowment effect, and gain-loss framing shape preferences for noninstrumental information

Litovsky, Y., Loewenstein, G., et al.
PNAS, Vol. 119 | No. 34
August 23, 2022

Abstract

We often talk about interacting with information as we would with a physical good (e.g., “consuming content”) and describe our attachment to personal beliefs in the same way as our attachment to personal belongings (e.g., “holding on to” or “letting go of” our beliefs). But do we in fact value information the way we do objects? The valuation of money and material goods has been extensively researched, but surprisingly few insights from this literature have been applied to the study of information valuation. This paper demonstrates that two fundamental features of how we value money and material goods embodied in Prospect Theory—loss aversion and different risk preferences for gains versus losses—also hold true for information, even when it has no material value. Study 1 establishes loss aversion for noninstrumental information by showing that people are less likely to choose a gamble when the same outcome is framed as a loss (rather than gain) of information. Study 2 shows that people exhibit the endowment effect for noninstrumental information, and so value information more, simply by virtue of “owning” it. Study 3 provides a conceptual replication of the classic “Asian Disease” gain-loss pattern of risk preferences, but with facts instead of human lives, thereby also documenting a gain-loss framing effect for noninstrumental information. These findings represent a critical step in building a theoretical analogy between information and objects, and provide a useful perspective on why we often resist changing (or losing) our beliefs.

Significance

We build on Abelson and Prentice’s conjecture that beliefs are not merely valued as guides to interacting with the world, but as cherished possessions. Extending this idea to information, we show that three key phenomena which characterize the valuation of money and material goods—loss aversion, the endowment effect, and the gain-loss framing effect—also apply to noninstrumental information. We discuss, more generally, how the analogy between noninstrumental information and material goods can help make sense of the complex ways in which people deal with the huge expansion of available information in the digital age.

From the Discussion

Economists have traditionally treated the value of information as derivative of its consequences for decision-making. While prior research on noninstrumental information has shown that this narrow view of information may be incomplete, only a few accounts have attempted to explain intrinsic preferences for information. One such account argues that people seek (or avoid) information inasmuch as doing so helps them maintain their cherished beliefs. Another proposes that people choose which information to seek or avoid by considering how it will impact their actions, affect, and cognition. Yet, outside of the curiosity literature, no existing account of information valuation considers preferences for information that has neither instrumental nor (concrete) hedonic value. By showing that key features of Prospect Theory’s value function also apply to individuals’ valuation of (even noninstrumental) information, the current paper suggests that we may also value information in some of the same fundamental ways that we value physical goods.
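For reference, Prospect Theory's value function has a standard closed form: concave for gains, convex and steeper for losses. The sketch below uses the commonly cited Tversky and Kahneman (1992) median parameter estimates; the parameters and code are illustrative, not taken from this paper.

```python
# Prospect Theory value function (Tversky & Kahneman, 1992 median estimates).
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper (loss-averse) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: losing 10 units "hurts" more than gaining 10 feels good.
print(prospect_value(10))    # ~7.59
print(prospect_value(-10))   # ~-17.07
```

The paper's claim, in these terms, is that x can denote units of noninstrumental information, not just money or goods, and the same asymmetry holds.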

Tuesday, August 2, 2022

How to end cancel culture

Jennifer Stefano
Philadelphia Inquirer
Originally posted 25 JUL 22

Here is an excerpt:

Radical politics requires radical generosity toward those with whom we disagree — if we are to remain a free and civil society that does not descend into violence. Are we not a people defined by the willingness to spend our lives fighting against what another has said, but give our lives to defend her right to say it? Instead of being hypersensitive fragilistas, perhaps we could give that good old-fashioned American paradox a try again.

But how? Start by engaging in the democratic process by first defending people’s right to be awful. Then use that right to point out just how awful someone’s words or deeds are. Accept that you have freedom of speech, not freedom from offense. A free society best holds people accountable in the arena of ideas. When we trade debate for the dehumanizing act of cancellation, we head down a dangerous path — even if the person who would be canceled has behaved in a dehumanizing way toward others.

Canceling those with opinions most people deem morally wrong and socially unacceptable (racism, misogyny) leads to a permissiveness in simply labeling speech we do not like as those very things without any reason or recourse. Worse, cancel culture is creating a society where dissenting or unpopular opinions become a risk. Canceling isn’t about debate but dehumanizing.

Speech is free. The consequences are not. Actress Constance Wu attempted suicide after she was canceled in 2019 for publicly tweeting she didn’t love her job on a hit TV show. Her words harmed no one, but she was publicly excoriated for them. Private DMs from her fellow Asian actresses telling her she was a “blight” on the Asian American community made her believe she didn’t deserve to live. Wu didn’t lose her job for her words, but she nearly lost her life.

Cancel culture does more than make the sinner pay a penance. It offers none of the healing redemption necessary for a free and civil society. In America, we have always believed in second chances. It is the basis for the bipartisan work on issues like criminal justice reform. Our achievements here have been a bright spot.

We as a civil society want to give the formerly incarcerated a second chance. How about doing the same for each other?

Friday, July 8, 2022

AI bias can arise from annotation instructions

K. Wiggers & D. Coldewey
TechCrunch
Originally posted 8 MAY 22

Here is an excerpt:

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all birds in these photos”) along with several examples.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate and otherwise analyze or manipulate text. In studying the task instructions provided to annotators that worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
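That Quoref statistic suggests a simple diagnostic one could run on any annotated dataset: measure how often annotations begin with a phrase copied from the instructions. The sketch below is hypothetical; the function and sample items are invented for illustration.

```python
# Hypothetical check for instruction bias: how often do annotations start
# with a phrase lifted verbatim from the annotator instructions?
from collections import Counter

def prefix_rate(annotations, phrase):
    """Fraction of annotations starting with the given instruction phrase."""
    hits = sum(1 for a in annotations if a.lower().startswith(phrase.lower()))
    return hits / len(annotations) if annotations else 0.0

questions = [
    "What is the name of the character who returns home?",
    "What is the name of the city where the treaty was signed?",
    "Who does the pronoun 'she' refer to in the second sentence?",
]
print(prefix_rate(questions, "What is the name"))  # ~0.67 in this toy sample
```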

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of systems and that these systems often fail to generalize beyond instruction patterns.

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating the downstream impact.



Tuesday, June 28, 2022

You Think Failure Is Hard? So Is Learning From It

Eskreis-Winkler, L., & Fishbach, A. (2022).
Perspectives on Psychological Science. 
https://doi.org/10.1177/17456916211059817

Abstract

Society celebrates failure as a teachable moment. But do people actually learn from failure? Although lay wisdom suggests people should, a review of the research suggests that this is hard. We present a unifying framework that points to emotional and cognitive barriers that make learning from failure difficult. Emotions undermine learning because people find failure ego-threatening. People tend to look away from failure and not pay attention to it to protect their egos. Cognitively, people also struggle because the information in failure is less direct than the information in success and thus harder to extract. Beyond identifying barriers, this framework suggests inroads by which barriers might be addressed. Finally, we explore implications. We outline what, exactly, people miss out on when they overlook the information in failure. We find that the information in failure is often high-quality information that can be used to predict success.

Conclusion

From a young age, we are told that there is information in failure, and we ought to learn from it. Yet, people struggle to see the information in failure. As a result, they struggle to learn.  We present a unifying framework that identifies the emotional and cognitive barriers that make it difficult for people to learn from failure.

Understanding these barriers is especially important when one considers the information in failure. The information in failure is both rich and unique—indeed it is often richer, more informative, and more useful than the information in success.

What to do in a world where the information in failure is rich, yet people struggle to see it? One recommendation is to explore the solutions that we propose here. Remove the ego from failure, shore up the ego so it can tolerate failure, and ease the cognitive burdens of learning from failure to promote it in practice and through culture. We believe such techniques are well worth understanding and investing in, since there is so much to learn from the information in failure when we see it.

Sunday, April 17, 2022

Leveraging artificial intelligence to improve people’s planning strategies

F. Callaway, et al.
PNAS, 2022, 119 (12) e2117432119 

Abstract

Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

Significance

Many bad decisions and their devastating consequences could be avoided if people used optimal decision strategies. Here, we introduce a principled computational approach to improving human decision making. The basic idea is to give people feedback on how they reach their decisions. We develop a method that leverages artificial intelligence to generate this feedback in such a way that people quickly discover the best possible decision strategies. Our empirical findings suggest that a principled computational approach leads to improvements in decision-making competence that transfer to more difficult decisions in more complex environments. In the long run, this line of work might lead to apps that teach people clever strategies for decision making, reasoning, goal setting, planning, and goal achievement.

From the Discussion

We developed an intelligent system that automatically discovers optimal decision strategies and teaches them to people by giving them metacognitive feedback while they are deciding what to do. The general approach starts from modeling the kinds of decision problems people face in the real world along with the constraints under which those decisions have to be made. The resulting formal model makes it possible to leverage artificial intelligence to derive an optimal decision strategy. To teach people this strategy, we then create a simulated decision environment in which people can safely and rapidly practice making those choices while an intelligent tutor provides immediate, precise, and accurate feedback on how they are making their decision. As described above, this feedback is designed to promote metacognitive reinforcement learning.
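As a rough stand-in for the strategy-derivation step, here is ordinary tabular value iteration for a fully observed MDP in Python. The paper's method operates on a richer metalevel model of planning, so treat this as an assumption-laden sketch of the general dynamic-programming idea, not the authors' algorithm.

```python
# Generic value iteration for a small, fully observed MDP (illustrative).
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P[a][s, s2] = transition probabilities; R[s, a] = expected reward.

    Returns the optimal state values and a greedy policy over actions.
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = immediate reward + discounted expected future value.
        Q = np.array([R[:, a] + gamma * P[a] @ V for a in range(n_actions)]).T
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```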

Monday, September 6, 2021

Paranoia and belief updating during the COVID-19 crisis

Suthaharan, P., Reed, E. J., Leptourgos, P., et al.
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-021-01176-8

Abstract

The COVID-19 pandemic has made the world seem less predictable. Such crises can lead people to feel that others are a threat. Here, we show that the initial phase of the pandemic in 2020 increased individuals’ paranoia and made their belief updating more erratic. A proactive lockdown made people’s belief updating less capricious. However, state-mandated mask-wearing increased paranoia and induced more erratic behaviour. This was most evident in states where adherence to mask-wearing rules was poor but where rule following is typically more common. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable. People who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines and the QAnon conspiracy theories. These beliefs were associated with erratic task behaviour and changed priors. Taken together, we found that real-world uncertainty increases paranoia and influences laboratory task behaviour.

Discussion

The COVID-19 pandemic has been associated with increased paranoia. The increase was less pronounced in states that enforced a more proactive lockdown and more pronounced at reopening in states that mandated mask-wearing. Win-switch behaviour and volatility priors tracked these changes in paranoia with policy. We explored cultural variations in rule following (CTL) as a possible contributor to the increased paranoia that we observed. State tightness may originate in response to threats such as natural disasters, disease, territorial and ideological conflict. Tighter states typically evince more coordinated threat responses. They have also experienced greater mortality from pneumonia and influenza throughout their history. However, paranoia was highest in tight states with a mandate, with lower mask adherence during reopening. It may be that societies that adhere rigidly to rules are less able to adapt to unpredictable change. Alternatively, these societies may prioritize protection from ideological and economic threats over a public health crisis or perhaps view the disease burden as less threatening.
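For readers unfamiliar with the win-switch measure, here is a hedged sketch of how such a statistic might be computed from task data. The encoding of choices and outcomes is my assumption, not the authors' analysis code.

```python
# Hypothetical win-switch rate: how often does a participant abandon an
# option immediately after it just paid off?
def win_switch_rate(choices, outcomes):
    """choices: sequence of option ids; outcomes: 1 = win, 0 = loss."""
    wins, switches = 0, 0
    for t in range(len(choices) - 1):
        if outcomes[t] == 1:
            wins += 1
            if choices[t + 1] != choices[t]:
                switches += 1
    return switches / wins if wins else 0.0

print(win_switch_rate(["A", "A", "B", "B"], [1, 1, 1, 0]))  # 1 switch / 3 wins
```

Higher values indicate more erratic, less reward-consistent behaviour, which is how the construct tracks paranoia in the task described above.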

Wednesday, July 15, 2020

Empathy is both a trait and a skill. Here's how to strengthen it.

Kristen Rogers
CNN.com
Originally posted 24 June 20

Here is an excerpt:

Types of empathy

Empathy is more about looking for a common humanity, while sympathy entails feeling pity for someone's pain or suffering, Konrath said.

"Whereas empathy is the ability to perceive accurately what another person is feeling, sympathy is compassion or concern stimulated by the distress of another," Lerner said. "A common example of empathy is accurately detecting when your child is afraid and needs encouragement. A common example of sympathy is feeling sorry for someone who has lost a loved one."

(cut)

A "common mistake is to leap into sympathy before empathically understanding what another person is feeling," Lerner said. Two types of empathy can prevent that relationship blunder.

Emotional empathy, sometimes called compassion, is more intuitive and involves care and concern for others.

Cognitive empathy requires effort and more systematic thinking, so it may lead to more empathic accuracy, Lerner said. It entails considering others and their perspectives and imagining what it's like to be them, Konrath added.

Some work managers and colleagues, for example, have had to practice empathy for parents juggling remote work with child care and virtual learning duties, said David Anderson, senior director of national programs and outreach at the Child Mind Institute….   But since the outset of the pandemic in March, that empathy has faded — reflecting the notion that cognitive empathy does take effort.

It takes work to interpret what someone is feeling by all of his cues: facial expressions, tones of voice, posture, words and more. Then you have to connect those cues with what you know about him and the situation in order to accurately infer his feelings.

"This kind of inference is a highly complex social-cognitive task" that might involve a variation of mental processes, Lerner said.

The info is here.

Thursday, June 18, 2020

Measuring Information Preferences

E. H. Ho, D. Hagmann, & G. Loewenstein
Management Science
Published Online: 13 Mar 2020

Abstract

Advances in medical testing and widespread access to the internet have made it easier than ever to obtain information. Yet, when it comes to some of the most important decisions in life, people often choose to remain ignorant for a variety of psychological and economic reasons. We design and validate an information preferences scale to measure an individual’s desire to obtain or avoid information that may be unpleasant but could improve future decisions. The scale measures information preferences in three domains that are psychologically and materially consequential: consumer finance, personal characteristics, and health. In three studies incorporating responses from over 2,300 individuals, we present tests of the scale’s reliability and validity. We show that the scale predicts a real decision to obtain (or avoid) information in each of the domains as well as decisions from out-of-sample, unrelated domains. Across settings, many respondents prefer to remain in a state of active ignorance even when information is freely available. Moreover, we find that information preferences are a stable trait but that an individual’s preference for information can differ across domains.

General Discussion

Making good decisions is often contingent on obtaining information, even when that information is uncertain and has the potential to produce unhappiness. Substantial empirical evidence suggests that people are often ready to make worse decisions in the service of avoiding potentially painful information. We propose that this tendency to avoid information is a trait that is separate from those measured previously, and developed a scale to measure it. The scale asks respondents to imagine how they would respond to a variety of hypothetical decisions involving information acquisition/avoidance. The predictive validity of the IPS appears to be largely driven by its domain items, and although it incorporates domain-specific subscales, it appears to be sufficiently universal to capture preferences for information in a broad range of domains.
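As an illustration of the kind of reliability analysis that scale validation involves, below is Cronbach's alpha, the standard internal-consistency statistic, in Python. That this particular statistic was used for the IPS is my assumption; consult the paper for the actual analyses.

```python
# Cronbach's alpha for a respondents-by-items matrix of scale responses.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array; higher alpha = more consistent."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```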

The research is here.

We already knew, to some extent, that people sometimes avoid information. This matters in psychotherapy, where avoidance promotes confirmatory hypothesis testing, which in turn enhances overconfidence. We need to help people engage with information that is inconsistent or incongruent with their worldview.

Monday, March 16, 2020

Video Games Need More Complex Morality Systems

Hayes Madsen
screenrant.com
Originally published 26 Feb 20

Here is an excerpt:

Perhaps a bigger issue is the simple fact that games separate decisions into these two opposed ideas. There's a growing idea that games need to represent morality as shades of grey, rather than black and white. Titles like The Witcher 3 further this effort by trying to make each conflict not have a right or wrong answer, as well as consequences, but all too often the neutral path is ignored. Even with multiple moral options, games generally reward players for being good or evil. Take inFamous for example, as making moral choices rewards you with good or bad karma, which in turn unlocks new abilities and powers. The problem here is that great powers are locked away for players on either end, cordoning off gameplay based on your moral choices.

Video games need to make more of an effort to make any choice matter for players, and if they decide to go back and forth between good and evil, that should be represented, not discouraged. Things are seldom black and white, and for games to represent that properly there needs to be incentive across the board, whether the player wants to be good, evil, or anything in between.

Moral choices can shape the landscape of game worlds, even killing characters or entire races. Yet, choices don't always need to be so dramatic or earth-shattering. Characterization is important for making huge decisions, but the smaller day-to-day decisions often have a bigger impact on fleshing out characters.

The info is here.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
Forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”
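The kind of fooling described above can be sketched with the fast gradient sign method (FGSM), a standard adversarial-example technique. The toy logistic-regression attack below is a hedged illustration under my own assumptions; published demonstrations like the banana-to-toaster example attack deep networks with related but more elaborate methods.

```python
# FGSM on a toy logistic-regression "classifier": nudge the input in the
# direction that most increases the loss, using only the gradient's sign.
import numpy as np

def fgsm(x, w, b, y, eps=0.1):
    """Perturb input x to increase logistic loss for true label y (0 or 1)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # model's predicted P(y=1)
    grad_x = (p - y) * w                     # d(cross-entropy)/d(input)
    return x + eps * np.sign(grad_x)         # small step, large effect

w, b = np.array([2.0, -1.0]), 0.0
x = np.array([0.5, 0.2])                     # classified as class 1 (p ~ 0.69)
x_adv = fgsm(x, w, b, y=1, eps=0.5)          # now scores p ~ 0.33 for class 1
```

A human looking at x and x_adv would see nearly the same input; the model's verdict flips, which is the disconnect Watson highlights.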

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

The info is here.

Wednesday, October 2, 2019

Seven Key Misconceptions about Evolutionary Psychology

Laith Al-Shawaf
www.areomagazine.com
Originally published August 20, 2019

Evolutionary approaches to psychology hold the promise of revolutionizing the field and unifying it with the biological sciences. But among both academics and the general public, a few key misconceptions impede its application to psychology and behavior. This essay tackles the most pervasive of these.

Misconception 1: Evolution and Learning Are Conflicting Explanations for Behavior

People often assume that if something is learned, it’s not evolved, and vice versa. This is a misleading way of conceptualizing the issue, for three key reasons.

First, many evolutionary hypotheses are about learning. For example, the claim that humans have an evolved fear of snakes and spiders does not mean that people are born with this fear. Instead, it means that humans are endowed with an evolved learning mechanism that acquires a fear of snakes more easily and readily than other fears. Classic studies in psychology show that monkeys can acquire a fear of snakes through observational learning, and they tend to acquire it more quickly than a similar fear of other objects, such as rabbits or flowers. It is also harder for monkeys to unlearn a fear of snakes than it is to unlearn other fears. As with monkeys, the hypothesis that humans have an evolved fear of snakes does not mean that we are born with this fear. Instead, it means that we learn this fear via an evolved learning mechanism that is biologically prepared to acquire some fears more easily than others.

Second, learning is made possible by evolved mechanisms instantiated in the brain. We are able to learn because we are equipped with neurocognitive mechanisms that enable learning to occur—and these neurocognitive mechanisms were built by evolution. Consider the fact that both children and puppies can learn, but if you try to teach them the same thing—French, say, or game theory—they end up learning different things. Why? Because the dog’s evolved learning mechanisms are different from those of the child. What organisms learn, and how they learn it, depends on the nature of the evolved learning mechanisms housed in their brains.

The info is here.


Tuesday, May 28, 2019

Should Students Take Smart Drugs?

Darian Meacham
www.philosophersmag.com
Originally posted December 8, 2017

If this were a straightforward question, you would not be reading about it in a philosophy magazine. But you are, so it makes sense that we try to clarify the terms of the discussion before wading in too far. Unfortunately (or fortunately depending on how you look at it), when philosophers set out to de-obfuscate what look to be relatively forthright questions, things usually get more complicated rather than less: each of the operative terms at stake in the question, ‘should students take smart drugs?’ opens us up onto larger debates about the nature of medicine, health, education, learning, and creativity as well as economic, political and social structures and norms. So, in a sense, a seemingly rather narrow question about a relatively peripheral issue in the education sector morphs into a much larger question about how we think about and value learning; what constitutes psychiatric illness and in what ways should we deal with it; and what sort of productivity should educational institutions like universities, but also secondary and even primary schools value and be oriented towards?

The first question that needs to be addressed is what is a ‘smart drug’? I have in mind two things when I use the term here:

(1) On the one hand, existing psychostimulants normally prescribed for children and adults with a variety of conditions, most prominently ADHD (Attention Deficit Hyperactivity Disorder), but also various others like narcolepsy, sleep-work disorder and schizophrenia. Commonly known by brand and generic names like Adderall, Ritalin, and Modafinil, these drugs are often sold off-label or on the grey market for what could be called non-medical or ‘enhancement’ purposes. The off-label use of psychostimulants for cognitive enhancement purposes is reported to be quite widespread in the USA. So the debate over the use of smart drugs is very much tied up with debates about how the behavioural and cognitive disorders for which these drugs are prescribed are diagnosed and what the causes of such conditions are.

(2) On the other hand, the philosophical-ethical debate around smart drugs need not be restricted to currently existing technologies. Broader issues at stake in the debate allow us to reflect on questions surrounding possible future cognitive enhancement technologies, and even much older ones. In this sense, the question about the use of smart drugs situates itself in a broader discussion about cognitive enhancement and enhancement in general.

The info is here.

Sunday, March 24, 2019

An Ethical Obligation for Bioethicists to Utilize Social Media

Herron, PD
Hastings Cent Rep. 2019 Jan;49(1):39-40.
doi: 10.1002/hast.978.

Here is an excerpt:

Unfortunately, it appears that bioethicists are no better informed than other health professionals, policy experts, or (even) elected officials, and they are sometimes resistant to becoming informed. But bioethicists have a duty to develop our knowledge and usefulness with respect to social media; many of our skills can and should be adapted to this area. There is growing evidence of the power of social media to foster dissemination of misinformation. The harms associated with misinformation or “fake news” are not new threats. Historically, there have always been individuals or organized efforts to propagate false information or to deceive others. Social media and other technologies have provided the ability to rapidly and expansively share both information and misinformation. Bioethics serves society by offering guidance about ethical issues associated with advances in medicine, science, and technology. Much of the public’s conversation about and exposure to these emerging issues occurs online. If we bioethicists are not part of the mix, we risk yielding to alternative and less authoritative sources of information. Social media’s transformative impact has led some to view it as not just a personal tool but the equivalent to a public utility, which, as such, should be publicly regulated. Bioethicists can also play a significant part in this dialogue. But to do so, we need to engage with social media. We need to ensure that our understanding of social media is based on experiential use, not just abstract theory.

Bioethics has expanded over the past few decades, extending beyond the academy to include, for example, clinical ethics consultants and leadership positions in public affairs and public health policy. These varied roles bring weighty responsibilities and impose a need for critical reflection on how bioethicists can best serve the public interest in a way that reflects and is accountable to the public’s needs.

Monday, February 25, 2019

Information Processing Biases in the Brain: Implications for Decision-Making and Self-Governance

Sali, A.W., Anderson, B.A. & Courtney, S.M.
Neuroethics (2018) 11: 259.
https://doi.org/10.1007/s12152-016-9251-1

Abstract

To make behavioral choices that are in line with our goals and our moral beliefs, we need to gather and consider information about our current situation. Most information present in our environment is not relevant to the choices we need or would want to make and thus could interfere with our ability to behave in ways that reflect our underlying values. Certain sources of information could even lead us to make choices we later regret, and thus it would be beneficial to be able to ignore that information. Our ability to exert successful self-governance depends on our ability to attend to sources of information that we deem important to our decision-making processes. We generally assume that, at any moment, we have the ability to choose what we pay attention to. However, recent research indicates that what we pay attention to is influenced by our prior experiences, including reward history and past successes and failures, even when we are not aware of this history. Even momentary distractions can cause us to miss or discount information that should have a greater influence on our decisions given our values. Such biases in attention thus raise questions about the degree to which the choices that we make may be poorly informed and not truly reflect our ability to otherwise exert self-governance.

Here is part of the Conclusion:

In order to consistently make decisions that reflect our goals and values, we need to gather the information necessary to guide these decisions, and ignore information that is irrelevant. Although the momentary acquisition of irrelevant information will not likely change our goals, biases in attentional selection may still profoundly influence behavioral outcomes, tipping the balance between competing options when faced with a single goal (e.g., save the least competent swimmer) or between simultaneously competing goals (e.g., relieve drug craving and withdrawal symptoms vs. maintain abstinence). An important component of self-governance might, therefore, be the ability to exert control over how we represent our world as we consider different potential courses of action.

Wednesday, October 31, 2018

Learning Others’ Political Views Reduces the Ability to Assess and Use Their Expertise in Nonpolitical Domains

Marks, Joseph and Copland, Eloise and Loh, Eleanor and Sunstein, Cass R. and Sharot, Tali.
Harvard Public Law Working Paper No. 18-22. (April 13, 2018).

Abstract

On political questions, many people are especially likely to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. Here, we test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that person is less skilled in that domain than someone with dissimilar political views. Participants had multiple opportunities to learn about others’ (1) political opinions and (2) ability to categorize geometric shapes. They then decided to whom to turn for advice when solving an incentivized shape categorization task. We find that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. The results demonstrate that knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement. Our findings have implications for political polarization and social learning in the midst of political divisions.

You can download the paper here.

Probably a good resource to contemplate before discussing politics in psychotherapy.