Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Preferences. Show all posts

Thursday, December 30, 2021

When Helping Is Risky: The Behavioral and Neurobiological Trade-off of Social and Risk Preferences

Gross, J., Faber, N. S., et al.  (2021).
Psychological Science, 32(11), 1842–1855.
https://doi.org/10.1177/09567976211015942

Abstract

Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.
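The abstract's central claim, that social and risk preferences are independent drivers of risky helping, can be made concrete with a minimal decision-model sketch. This is an illustration, not the authors' actual model: the function names, the power-form utility, and all parameter values are hypothetical. Social preferences enter as a weight on the other person's gain, and risk preferences enter as curvature of the helper's own utility, so the two can be manipulated independently.

```python
# Illustrative sketch only (not the paper's model): a helper decides
# whether to help when helping carries a probability of personal harm.
# `social_weight` stands in for social preferences; `risk_aversion`
# stands in for risk preferences via concavity of own utility.
# All names and parameter values are hypothetical.

def own_utility(x, risk_aversion=0.5):
    """Concave utility over own payoff; higher risk_aversion -> more concave."""
    return x ** (1 - risk_aversion)

def expected_utility_of_helping(p_harm, own_payoff, harm_cost,
                                other_gain, social_weight, risk_aversion):
    """Expected utility of helping: own risky outcome plus weighted other's gain."""
    u_safe = own_utility(own_payoff, risk_aversion)
    u_harmed = own_utility(own_payoff - harm_cost, risk_aversion)
    expected_own = (1 - p_harm) * u_safe + p_harm * u_harmed
    return expected_own + social_weight * other_gain

def decides_to_help(p_harm, own_payoff, harm_cost, other_gain,
                    social_weight, risk_aversion):
    """Help iff expected utility of helping exceeds utility of staying safe."""
    help_u = expected_utility_of_helping(p_harm, own_payoff, harm_cost,
                                         other_gain, social_weight, risk_aversion)
    return help_u > own_utility(own_payoff, risk_aversion)
```

In this toy setup, raising `social_weight` or lowering `risk_aversion` each independently tips the decision toward helping, which mirrors the paper's finding that a drug could alter risky helping through risk preferences alone while leaving social preferences untouched.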

From the Discussion

From a practical perspective, both methylphenidate (sold under the trade name Ritalin) and atomoxetine (sold under the trade name Strattera) are prescription drugs used to treat attention-deficit/hyperactivity disorder and are regularly used off-label by people who aim to enhance their cognitive performance (Maier et al., 2018). Thus, our results have implications for the ethics of and policy for the use of psychostimulants. Indeed, the Global Drug Survey taken in 2015 and 2017 revealed that 3.2% and 6.6% of respondents, respectively, reported using psychostimulants such as methylphenidate for cognitive enhancement (Maier et al., 2018). Both in the professional ethical debate as well as in the general public, concerns about the medical safety and the fairness of such cognitive enhancements are discussed (Faber et al., 2016). However, our finding that methylphenidate alters helping behavior through increased risk seeking demonstrates that substances aimed at changing cognitive functioning can also influence social behavior. Such “social” side effects of cognitive enhancement (whether deemed positive or negative) are currently unknown to both users and administrators and thus do not receive much attention in the societal debate about psychostimulant use (Faulmüller et al., 2013).

Saturday, October 9, 2021

Nudgeability: Mapping Conditions of Susceptibility to Nudge Influence

de Ridder, D., Kroese, F., & van Gestel, L. (2021). 
Perspectives on Psychological Science
Advance online publication. 
https://doi.org/10.1177/1745691621995183

Abstract

Nudges are behavioral interventions to subtly steer citizens' choices toward "desirable" options. An important topic of debate concerns the legitimacy of nudging as a policy instrument, and there is a focus on issues relating to nudge transparency, the role of preexisting preferences people may have, and the premise that nudges primarily affect people when they are in "irrational" modes of thinking. Empirical insights into how these factors affect the extent to which people are susceptible to nudge influence (i.e., "nudgeable") are lacking in the debate. This article introduces the new concept of nudgeability and makes a first attempt to synthesize the evidence on when people are responsive to nudges. We find that nudge effects do not hinge on transparency or modes of thinking but that personal preferences moderate effects such that people cannot be nudged into something they do not want. We conclude that, in view of these findings, concerns about nudging legitimacy should be softened and that future research should attend to these and other conditions of nudgeability.

From the General Discussion

Finally, returning to the debates on nudging legitimacy that we addressed at the beginning of this article, it seems that concerns should be softened insofar as nudges do not impose choice without respecting basic ethical requirements for good public policy. More than a decade ago, philosopher Luc Bovens (2009) formulated the following four principles for nudging to be legitimate: A nudge should allow people to act in line with their overall preferences; a nudge should not induce a change in preferences that would not hold under nonnudge conditions; a nudge should not lead to “infantilization,” such that people are no longer capable of making autonomous decisions; and a nudge should be transparent so that people have control over being in a nudge situation. With the findings from our review in mind, it seems that these legitimacy requirements are fulfilled. Nudges do allow people to act in line with their overall preferences, nudges allow for making autonomous decisions insofar as nudge effects do not depend on being in a System 1 mode of thinking, and making the nudge transparent does not compromise nudge effects.

Wednesday, June 24, 2020

Shifting prosocial intuitions: neurocognitive evidence for a value-based account of group-based cooperation

Leor M. Hackel, Julian A. Wills, & Jay J. Van Bavel
Social Cognitive and Affective Neuroscience
nsaa055, https://doi.org/10.1093/scan/nsaa055

Abstract

Cooperation is necessary for solving numerous social issues, including climate change, effective governance and economic stability. Value-based decision models contend that prosocial tendencies and social context shape people’s preferences for cooperative or selfish behavior. Using functional neuroimaging and computational modeling, we tested these predictions by comparing activity in brain regions previously linked to valuation and executive function during decision-making—the ventromedial prefrontal cortex (vmPFC) and dorsolateral prefrontal cortex (dlPFC), respectively. Participants played Public Goods Games with students from fictitious universities, where social norms were selfish or cooperative. Prosocial participants showed greater vmPFC activity when cooperating and dlPFC-vmPFC connectivity when acting selfishly, whereas selfish participants displayed the opposite pattern. Norm-sensitive participants showed greater dlPFC-vmPFC connectivity when defying group norms. Modeling expectations of cooperation was associated with activity near the right temporoparietal junction. Consistent with value-based models, this suggests that prosocial tendencies and contextual norms flexibly determine whether people prefer cooperation or defection.
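The Public Goods Game mentioned in the abstract has a standard linear payoff structure that makes the cooperate-or-defect tension explicit. The sketch below is a generic textbook version, not the specific parameterization used in the study; the multiplier value is an assumption for illustration.

```python
# Linear Public Goods Game (generic textbook form, not the study's
# exact parameters). Each player keeps whatever they do not contribute;
# contributions are multiplied and split equally among all players.

def public_goods_payoff(contributions, endowment, multiplier=1.6):
    """Return each player's payoff: kept endowment plus equal share of the pot."""
    n = len(contributions)
    pot = sum(contributions) * multiplier
    share = pot / n
    return [endowment - c + share for c in contributions]
```

With a multiplier between 1 and the group size, a lone defector always out-earns a lone cooperator, yet everyone is better off under full cooperation than under full defection. For example, `public_goods_payoff([10, 0], endowment=10)` gives the defector 18 and the cooperator 8, while `public_goods_payoff([10, 10], endowment=10)` gives both 16. That tension between selfish and cooperative norms is precisely what the fictitious-university manipulation varies.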

From the Discussion section

The current research further indicates that norms shape cooperation. Participants who were most attentive to norms aligned their behavior with norms and showed greater right dlPFC-vmPFC connectivity when deviating from norms, whereas the least attentive participants showed the reverse pattern. Curiously, we found no clear evidence that decisions to conform were more valued than decisions to deviate. This conflicts with work suggesting social norms boost the value of norm compliance (Nook and Zaki, 2015). Instead, our findings suggest that norm compliance can also stem from increased functional connectivity between vmPFC and dlPFC.


Thursday, June 4, 2020

A Value-Based Framework for Understanding Cooperation

Pärnamets, P., Shuster, A., Reinero, D. A.,
& Van Bavel, J. J. (2020)
Current Directions in Psychological Science. 
https://doi.org/10.1177/0963721420906200

Abstract

Understanding the roots of human cooperation, a social phenomenon embedded in pressing issues including climate change and social conflict, requires an interdisciplinary perspective. We propose a unifying value-based framework for understanding cooperation that integrates neuroeconomic models of decision-making with psychological variables involved in cooperation. We propose that the ventromedial prefrontal cortex serves as a neural integration hub for value computation during cooperative decisions, receiving inputs from various neurocognitive processes such as attention, memory, and learning. Next, we describe findings from social and personality psychology highlighting factors that shape the value of cooperation, including research on contexts and norms, personal and social identity, and intergroup relations. Our approach advances theoretical debates about cooperation by highlighting how previous findings are accommodated within a general value-based framework and offers novel predictions.



Thursday, September 12, 2019

Morals Ex Machina: Should We Listen To Machines For Moral Guidance?

Michael Klenk
3QuarksDaily.com
Originally posted August 12, 2019

Here are two excerpts:

The prospects of artificial moral advisors depend on two core questions: Should we take ethical advice from anyone anyway? And, if so, are machines any good at morality (or, at least, better than us, so that it makes sense that we listen to them)? I will only briefly be concerned with the first question and then turn to the second question at length. We will see that we have to overcome several technical and practical barriers before we can reasonably take artificial moral advice.

(cut)

The limitation of ethically aligned artificial advisors raises an urgent practical problem, too. From a practical perspective, decisions about values and their operationalisation are taken by the machine’s designers. Taking their advice means buying into preconfigured ethical settings. These settings might not agree with you, and they might be opaque, so that you have no way of finding out how specific values have been operationalised. This would require accepting the preconfigured values on blind trust. The problem already exists in machines that give non-moral advice, such as mapping services. For example, when you ask your phone for the way to the closest train station, the device will have to rely on various assumptions about what path you can permissibly take, and it may also consider the commercial interests of the service provider. However, we should want the correct moral answer, not what the designers of such technologies take that to be.

We might overcome these practical limitations by letting users input their own values and decide about their operationalisation themselves. For example, the device might ask users a series of questions to determine their ethical views and also require them to operationalise each ethical preference precisely. A vegetarian might, for instance, have to decide whether she understands ‘vegetarianism’ to encompass ‘meat-free meals’ or ‘meat-free restaurants.’ Doing so would give us personalised moral advisors that could help us live more consistently by our own ethical rules.

However, it would then be unclear how specifying our individual values and their operationalisation improves our moral decision making, rather than merely helping individuals satisfy their preferences more consistently.


Thursday, February 14, 2019

Sex talks

Rebecca Kukla
aeon.co
Originally posted February 4, 2019

Communication is essential to ethical sex. Typically, our public discussions focus on only one narrow kind of communication: requests for sex followed by consent or refusal. But notice that we use language and communication in a wide variety of ways in negotiating sex. We flirt and rebuff, express curiosity and repulsion, and articulate fantasies. Ideally, we talk about what kind of sex we want to have, involving which activities, and what we like and don’t like. We settle whether or not we are going to have sex at all, and when we want to stop. We check in with one another and talk dirty to one another during sex. 

In this essay I explore the language of sexual negotiation. My specific interest is in what philosophers call the ‘pragmatics’ of speech. That is, I am less interested in what words mean than I am in how speaking can be understood as a kind of action that has a pragmatic effect on the world. Philosophers who specialise in what is known as ‘speech act theory’ focus on what an act of speaking accomplishes, as opposed to what its words mean. J L Austin developed this way of thinking about the different things that speech can do in his classic book, How To Do Things With Words (1962), and many philosophers of language have developed the idea since.


Happy Valentine's Day

Saturday, February 17, 2018

Fantasy and Dread: The Demand for Information and the Consumption Utility of the Future

Ananda R. Ganguly and Joshua Tasoff
Management Science
Last revised: 1 Jun 2016

Abstract

We present evidence that intrinsic demand for information about the future is increasing in expected future consumption utility. In the first experiment, subjects may resolve a lottery now or later. The information is useless for decision making, but the larger the reward, the more likely subjects are to pay to resolve the lottery early. In the second experiment, subjects may pay to avoid being tested for HSV-1 and the more highly feared HSV-2. Subjects are three times more likely to avoid testing for HSV-2, suggesting that more aversive outcomes lead to more information avoidance. In a third experiment, subjects make choices about when to get tested for a fictional disease. Some subjects behave in a way consistent with expected utility theory and others exhibit greater delay of information for more severe diseases. We also find that information choice is correlated with positive affect, ambiguity aversion, and time preference, as some theories predict.
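The first experiment's finding, that willingness to pay for early resolution rises with the reward, is what an anticipatory-utility account predicts: the anticipation period itself carries utility, so good news is worth savoring early. The sketch below is a hedged toy model of that idea, not the authors' formal model; the `anticipation_weight` parameter and both function names are hypothetical.

```python
# Toy anticipatory-utility sketch (not the authors' model): resolving a
# favorable lottery early lets the subject savor the expected outcome,
# and that anticipatory gain scales with the expected reward.
# `anticipation_weight` is a hypothetical parameter.

def value_of_early_resolution(p_win, reward, anticipation_weight=0.1):
    """Anticipatory utility gained by resolving the lottery now rather than later."""
    expected_reward = p_win * reward
    return anticipation_weight * expected_reward

def willing_to_pay(p_win, reward, cost, anticipation_weight=0.1):
    """Pay for early resolution iff the anticipatory gain exceeds the cost."""
    return value_of_early_resolution(p_win, reward, anticipation_weight) > cost
```

Holding the price of early resolution fixed, a larger reward makes `willing_to_pay` flip from false to true, matching the pattern the paper reports; a negative "reward" (dread) would push the same mechanism toward information avoidance, as in the HSV-2 result.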


Thursday, June 22, 2017

Is it dangerous for humans to depend on computers?

Rory Cellan-Jones
BBC News
Originally published June 5, 2017

Here is an excerpt:

In Britain, doctors whose computers froze during the recent ransomware attack had to turn patients away. In Ukraine, there were power cuts when hackers attacked the electricity system, and five years ago, millions of Royal Bank of Scotland customers were unable to get at their money for days after problems with a software upgrade.

Already some people have had enough. This week a letter to the Guardian newspaper warned that the modern world was "dangerously exposed by this reliance on the internet and new technology".
The correspondent, quite possibly a retired government employee, continued: "There are just enough old-time civil servants left alive to turn back the clock and take away our dangerous dependence on modern technology."

Somehow, though, I don't see this happening. Airlines are not going to scrap the computers and tick passengers off on a paper list before they climb aboard, and bank clerks will not be entering transactions in giant ledgers in copperplate writing.

In fact, computers will take over more and more functions once restricted to humans, most of them far more useful than a game of Go. And that means that at home, at work and at play we will have to get used to seeing our lives disrupted when those clever machines suffer the occasional nervous breakdown.
