Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, November 1, 2021

Social Media and Mental Health

Luca Braghieri, Ro’ee Levy, and Alexey Makarin
Independent Research
August 2021

Abstract 

The diffusion of social media coincided with a worsening of mental health conditions among adolescents and young adults in the United States, giving rise to speculation that social media might be detrimental to mental health. In this paper, we provide the first quasi-experimental estimates of the impact of social media on mental health by leveraging a unique natural experiment: the staggered introduction of Facebook across U.S. colleges. Our analysis couples data on student mental health around the years of Facebook’s expansion with a generalized difference-in-differences empirical strategy. We find that the roll-out of Facebook at a college increased symptoms of poor mental health, especially depression, and led to increased utilization of mental healthcare services. We also find that, according to the students’ reports, the decline in mental health translated into worse academic performance. Additional evidence on mechanisms suggests the results are due to Facebook fostering unfavorable social comparisons. 
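
To make the empirical strategy concrete, a generalized difference-in-differences specification of the kind the abstract describes could be sketched as follows (a minimal illustration with hypothetical variable names, not the authors' exact model):

\[
Y_{ict} = \beta\,\mathrm{FacebookIntroduced}_{ct} + \gamma_c + \delta_t + \varepsilon_{ict}
\]

Here \(Y_{ict}\) is a mental-health outcome for student \(i\) at college \(c\) in survey wave \(t\), \(\mathrm{FacebookIntroduced}_{ct}\) is an indicator equal to one once Facebook has been rolled out at college \(c\), \(\gamma_c\) and \(\delta_t\) are college and survey-wave fixed effects, and \(\beta\) captures the average effect of Facebook access on the outcome. In designs of this kind, standard errors would typically be clustered at the college level.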

Discussion 

Implications for social media today 

Our estimates of the effects of social media on mental health rely on quasi-experimental variation in Facebook access among college students around the years 2004 to 2006. This population and time window are directly relevant to the discussion of the severe worsening of mental health conditions among adolescents and young adults over the last two decades. In this section, we elaborate on the extent to which our findings have the potential to inform our understanding of the effects of social media on mental health today. 

Over the last two decades, Facebook underwent a host of important changes. Such changes include: i) the introduction of a personalized feed where posts are ranked by an algorithm; ii) the growth of Facebook’s user base from U.S. college students to almost three billion active users around the globe (Facebook, 2021); iii) video often replacing images and text; iv) increased usage of Facebook on mobile phones instead of computers; and v) the introduction of Facebook pages for brands, businesses, and organizations. 

The nature of the variation we are exploiting in this paper does not allow us to identify the impact of these features of social media. For example, the introduction of pages, along with other changes, made news consumption on Facebook more common over the last decade than it was at inception. Our estimates cannot shed light on whether the increased reliance on Facebook for news consumption has exacerbated or mitigated the effects of Facebook on mental health. 

Despite these caveats, we believe the estimates presented in this paper are still highly relevant today for two main reasons: first, the mechanisms whereby social media use might affect mental health arguably relate to core features of social media platforms that have been present since inception and that remain integral parts of those platforms today; second, the technological changes undergone by Facebook and related platforms might have amplified rather than mitigated the effect of those mechanisms. 

Sunday, October 31, 2021

Silenced by Fear: The Nature, Sources, and Consequences of Fear at Work

Kish-Gephart, J. J. et al. (2009)
Research in Organizational Behavior, 29, 163-193. 
https://doi.org/10.1016/j.riob.2009.07.002

Abstract

In every organization, individual members have the potential to speak up about important issues, but a growing body of research suggests that they often remain silent instead, out of fear of negative personal and professional consequences. In this chapter, we draw on research from disciplines ranging from evolutionary psychology to neuroscience, sociology, and anthropology to unpack fear as a discrete emotion and to elucidate its effects on workplace silence. In doing so, we move beyond prior descriptions and categorizations of what employees fear to present a deeper understanding of the nature of fear experiences, where such fears originate, and the different types of employee silence they motivate. Our aim is to introduce new directions for future research on silence as well as to encourage further attention to the powerful and pervasive role of fear across numerous areas of theory and research on organizational behavior.

Discussion 

Fear, a powerful and pervasive emotion, influences human perception, cognition, and behavior in ways and to an extent that we find underappreciated in much of the organizational literature. This chapter draws from a broad range of literatures, including evolutionary psychology, neuroscience, sociology, and anthropology, to provide a fuller understanding of how fear influences silence in organizations. Our intention is to provide a foundation to inform future theorizing and research on fear’s effects in the workplace, to elucidate why people at work fear challenging authority, and to show how that fear inhibits speaking up even about routine problems or suggestions for improvement.

Our review of the literature on fear generated insights with the potential to extend theory on silence in several ways. First, we proposed that silence should be differentiated based on the intensity of fear experienced and the time available for choosing a response. Both non-deliberative, low-road silence and conscious but schema-driven silence differ from descriptions in the extant literature of defensive silence as intentional, reasoned, and involving an expectancy-like mental calculus. Thus, our proposed typology (in Fig. 2) suggests the need for content-specific future theory and research. For example, the description of silence as the result of extended, conscious deliberation may fit choices about whistleblowing and major issue selling well, while not explaining how individuals decide to speak up or remain silent in more routine, high-fear-intensity, or high-immediacy situations. We also theorized that, as a natural outcome of humans’ innate tendency to avoid the unpleasant characteristics of fear, employees may develop a type of habituated silence behavior that largely goes unrecognized by them.

We expanded understanding of the antecedents of workplace silence by explaining in detail how prior (individual and societal) experiences affect the perceptions, appraisals, and outcomes of fear-based silence. Noting that the fear of challenging authority has roots in the biological mechanisms developed to aid survival in early humans, we argued that this prepared fear is continually developed and reinforced through a lifetime of experiences across most social institutions (e.g., family, school, religion) that implicitly and explicitly convey messages about authority relationships. Over time, these direct and indirect learning experiences, coupled with the characteristics of an evolutionary-based fear module, become the memories and beliefs against which current stimuli in moments of possible voice are compared.

Finally, we proposed two factors to help explain why and how certain individuals speak up to authority despite experiencing some fear of doing so. Though the deck is clearly stacked in favor of fear and silence, anger (a biologically based emotion) and voice efficacy (a learned belief in one’s ability to speak up successfully in difficult voice situations) may help employees prevail over fear, in part through their influence on the control appraisals that are central to emotional experience.

Saturday, October 30, 2021

Psychological barriers to effective altruism: An evolutionary perspective

Jaeger, B. & van Vugt, M.
Current Opinion in Psychology
Available online 17 September 2021

Abstract

People usually engage in (or at least profess to engage in) altruistic acts to benefit others. Yet, they routinely fail to maximize how much good is achieved with their donated money and time. An accumulating body of research has uncovered various psychological factors that can explain why people’s altruism tends to be ineffective. These prior studies have mostly focused on proximate explanations (e.g., emotions, preferences, lay beliefs). Here, we adopt an evolutionary perspective and highlight how three fundamental motives—parochialism, status, and conformity—can explain many seemingly disparate failures to do good effectively. Our approach outlines ultimate explanations for ineffective altruism and we illustrate how fundamental motives can be leveraged to promote more effective giving.

Summary and Implications

Even though donors and charities often highlight their desire to make a difference in the lives of others, an accumulating body of research demonstrates that altruistic acts are surprisingly ineffective in maximizing others’ welfare. To explain ineffective altruism, previous investigations have largely focused on the role of emotions, beliefs, preferences, and other proximate causes. Here, we adopted an evolutionary perspective to understand why these proximate mechanisms evolved in the first place. We outlined how three fundamental motives that likely evolved because they helped solve key challenges in humans’ ancestral past—parochialism, status, and conformity—can create psychological barriers to effective giving. Our framework not only provides a parsimonious explanation for many proximate causes of ineffective giving but also offers an ultimate explanation for why these mechanisms exist.

Although parochialism, status concerns, and conformity can explain many forms of ineffective giving, there are additional causes that we did not address here. For example, many people focus too much on overhead costs when deciding where to donate. Everyday altruism is multi-faceted: People donate to charity, volunteer, give in church, and engage in various random acts of kindness. These diverse acts of altruism likely require diverse explanations, and more research is needed to understand the relative importance of different psychological factors for explaining different forms of altruism. Moreover, of the three fundamental motives reviewed here, conformity to social norms has probably received the least attention when it comes to explaining ineffective altruism. While there is ample evidence showing that social norms affect the decision of whether and how much to donate, more research is needed to understand how social norms influence the decision of where to donate and how they can lead to ineffective giving.

Friday, October 29, 2021

Harms of AI

Daron Acemoglu
NBER Working Paper No. 29247
September 2021

Abstract

This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy's most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI's promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment - to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient.

Conclusion

In this essay, I explored several potential economic, political, and social costs of the current path of AI technologies. I suggested that if AI continues to be deployed along its current trajectory and remains unregulated, it may harm competition, consumer privacy, and consumer choice; excessively automate work; fuel inequality; inefficiently push down wages; and fail to improve productivity. It may also make political discourse increasingly distorted, cutting one of the lifelines of democracy. I also mentioned several other potential social costs from the current path of AI research.

I should emphasize again that all of these potential harms are theoretical. Although there is much evidence indicating that not all is well with the deployment of AI technologies, and that the problems of increasing market power, disappearance of work, inequality, low wages, and meaningful challenges to democratic discourse and practice are all real, we do not have sufficient evidence to be sure that AI has been a serious contributor to these troubling trends. Nevertheless, precisely because AI is a promising technological platform, aiming to transform every sector of the economy and every aspect of our social lives, it is imperative for us to study what its downsides are, especially on its current trajectory. It is in this spirit that I discussed the potential costs of AI in this paper.

Thursday, October 28, 2021

Vicarious Justifications for Prejudice in the Application of Democratic Values

White, M. H., Crandall, C. S., & Davis, N. T.
Social Psychological and Personality Science
September 2021
doi:10.1177/19485506211040700

Abstract

Democratic values are widely-endorsed principles including commitments to protect individual freedoms. Paradoxically, the widespread normativity of these ideas can be used to justify prejudice. With two nationally representative U.S. samples, we find that prejudiced respondents defend another’s prejudiced speech, using democratic values as justification. This vicarious defense occurs primarily among those who share the prejudice, and only when the relevant prejudice is expressed. Several different democratic values (e.g., due process, double jeopardy) can serve as justifications—the issue is more about when something can be used as a justification for prejudice and less about what can be used as one. Endorsing democratic values can be a common rhetorical device to expand what is acceptable, and protect what is otherwise unacceptable to express in public.

Conclusion

Theories of values conceptualize them as principles that guide behavior (Maio, 2010; Schwartz, 2012). But they can be bent to the will of strongly-held attitudes like prejudice. Values are not always guiding principles that transcend situations; they can be normatively-loaded rhetorical devices used to contest what is and is not acceptable to express. Although democratic values are held in high regard in the abstract, citizens may accept norm violations when political outcomes suit them (Graham & Svolik, 2020). We find that individuals strategically apply these values to situations when doing so justifies shared prejudices.

Wednesday, October 27, 2021

Reflective Reasoning & Philosophy

Nick Byrd
Philosophy Compass
First published: 29 September 2021

Abstract

Philosophy is a reflective activity. So perhaps it is unsurprising that many philosophers have claimed that reflection plays an important role in shaping and even improving our philosophical thinking. This hypothesis seems plausible given that training in philosophy has correlated with better performance on tests of reflection and reflective test performance has correlated with demonstrably better judgments in a variety of domains. This article reviews the hypothesized roles of reflection in philosophical thinking as well as the empirical evidence for these roles. This reveals that although there are reliable links between reflection and philosophical judgment among both laypeople and philosophers, the role of reflection in philosophical thinking may nonetheless depend in part on other factors, some of which have yet to be determined. So progress in research on reflection in philosophy may require further innovation in experimental methods and psychometric validation of philosophical measures.

From the Conclusion

Reflective reasoning is central to both philosophy and the cognitive science thereof. The theoretical and empirical research about reflection and its relation to philosophical thinking is voluminous. The existing findings provide preliminary evidence that reflective reasoning may be related to tendencies for certain philosophical judgments and beliefs over others. However, there are some signs that there is more to the story about reflection’s role in philosophical thinking than our current evidence can reveal. Scholars will need to continue developing new hypotheses, methods, and interpretations to reveal these hitherto latent details.

The recommendations in this article are by no means exhaustive. For instance, in addition to better experimental manipulations and measures of reflection (Byrd, 2021b), philosophers and cognitive scientists will also need to validate their measures of philosophical thinking to ensure that subtle differences in wording of thought experiments do not influence people’s judgments in unexpected ways (Cullen, 2010). After all, philosophical judgments can vary significantly depending on slight differences in wording even when reflection is not manipulated (e.g., Nahmias, Coates, & Kvaran, 2007). Scholars may also need to develop ways to empirically dissociate previously conflated philosophical judgments (Conway & Gawronski, 2013) in order to prevent and clarify misleading results (Byrd & Conway, 2019; Conway, Goldstein-Greenwood, Polacek, & Greene, 2018).

Tuesday, October 26, 2021

The Fragility of Moral Traits to Technological Interventions

J. Fabiano
Neuroethics 14, 269–281 (2021). 
https://doi.org/10.1007/s12152-020-09452-6

Abstract

I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.

Conclusion: Fragility Leads to Increased Risks 

Any substantial technological modification of moral traits would be more likely to cause harm than benefit. Moral traits are particularly prone to unexpected disturbances, as the co-operation case exemplifies, as their self-reinforcing and irreversible nature amplifies, and as their complex aetiology would lead one to suspect. Even the most seemingly simple improvement, if only slightly mistaken, is likely to lead to significant negative outcomes. Unless we produce an almost perfectly calibrated deep moral enhancement, its implementation will carry large risks. Deep moral enhancement is likely to be hard to develop safely, but not necessarily impossible or undesirable. Given that deep moral enhancement could prevent extreme risks for humanity, in particular by decreasing the risk of human extinction, it may well be the case that we should still attempt to develop it. I am not claiming that our current traits are well suited to dealing with global problems. On the contrary, there are certainly reasons to expect that there are better traits that could be brought about by enhancement technologies. However, I believe my arguments indicate there are also much worse, more socially disruptive, traits accessible through technological intervention.

Monday, October 25, 2021

Federal Reserve tightens ethics rules to ban active trading by senior officials

Brian Cheung
Yahoo Business News
Originally posted 21 OCT 21

The Federal Reserve on Thursday said it will tighten its ethics rules concerning personal finances among its most senior officials, the latest development in a trading scandal that has led to the resignation of two policymakers.

The central bank said it has introduced a “broad set of new rules” that restricts any active trading and prohibits the purchase of any individual securities (i.e. stocks, bonds, or derivatives). The new restrictions effectively only allow purchases of diversified investment vehicles like mutual funds.

If policymakers want to make any purchases or sales, they will be required to provide 45 days’ advance notice and obtain prior approval. Those officials will also be required to hold onto those investments for at least one year, with no purchases or sales allowed during periods of “heightened financial market stress.”

Fed officials are still working on the details of what would define that level of stress, but said the market conditions of spring 2020 would have qualified.

The new rules will also increase the frequency of public disclosures from the reserve bank presidents, requiring monthly filings instead of the status quo of annual filings. Those at the Federal Reserve Board in Washington already were required to make monthly disclosures.

The restrictions apply to policymakers and senior staff at the Fed’s headquarters in Washington, as well as its 12 Federal Reserve Bank regional outposts. The new rules will be implemented “over the coming months.”

Fed officials said changes will likely require divestments from any existing holdings that do not meet the updated standards.

Sunday, October 24, 2021

Evaluating Tradeoffs between Autonomy and Wellbeing in Supported Decision Making

Veit, W., Earp, B.D., Browning, H., Savulescu, J.
American Journal of Bioethics 
https://www.researchgate.net/publication/354327526 

A core challenge for contemporary bioethics is how to address the tension between respecting an individual’s autonomy and promoting their wellbeing when these ideals seem to come into conflict (Notini et al. 2020). This tension is often reflected in discussions of the ethical status of guardianship and other surrogate decision-making regimes for individuals with different kinds or degrees of cognitive ability and (hence) decision-making capacity (Earp and Grunt-Mejer 2021), specifically when these capacities are regarded as diminished or impaired along certain dimensions (or with respect to certain domains). The notion or practice of guardianship, wherein a guardian is legally appointed to make decisions on behalf of someone with different/diminished capacities, has been particularly controversial. For example, many people see guardianship as unjust, taking too much decisional authority away from the person under the guardian’s care (often due to prejudiced attitudes, as when people with certain disabilities are wrongly assumed to lack decision-making capacity); and as too rigid, for example, in making a blanket judgment about someone’s (lack of) capacity, thereby preventing them from making decisions even in areas where they have the requisite abilities (Glen 2015).

It is  against  this  backdrop that  Peterson,  Karlawish, and  Largent (2021) offer  a  useful philosophical framework for the notion of ‘supported decision-making’ as a compelling alternative for  individuals  with  ‘dynamic  impairments’  (i.e.,  non-static  or  domain-variant  perceived mpairments  in  decision-making  capacity).  In  a  similar spirit,  we  have  previously  argued  that bioethics would benefit from a more case-sensitive rather than a ‘one-size-fits-all’ approach when it comes to issues of cognitive diversity (Veit et al. 2020; Chapman and Veit 2020). We therefore agree with most of the authors’ defence of supported decision-making, as this approach allows for case- and context-sensitivity. We also agree with the authors that the categorical condemnation of guardianships  or  similar  arrangements  is  not  justified,  as  this  precludes  such  sensitivity.  For instance, as the authors note, if a patient is in a permanent unaware/unresponsive state – i.e., with no  current  or  foreseeable  decision-making  capacity  or  ability  to  exercise  autonomy  –  then  a guardianship-like regime may be the most appropriate means of promoting this person’s interests. A similar point can be made in relation to debates about intended human enhancement of embryos and children.  Although some critics  claim that  such interventions  violate the autonomy  of the enhanced person, proponents may argue that respect for autonomy and consent do not apply in certain cases, for example, when dealing with embryos (see Veit 2018); alternatively, they may argue that interventions to enhance the (future) autonomy of a currently pre-autonomous (or partially autonomous) being can be justified on an enhancement framework without falling prey to such objections (see Earp 2019, Maslen et al. 2014).