Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, April 16, 2022

Morality, punishment, and revealing other people’s secrets.

Salerno, J. M., & Slepian, M. L. (2022).
Journal of Personality and Social Psychology, 
122(4), 606–633. 
https://doi.org/10.1037/pspa0000284

Abstract

Nine studies represent the first investigation into when and why people reveal other people’s secrets. Although people keep their own immoral secrets to avoid being punished, we propose that people will be motivated to reveal others’ secrets to punish them for immoral acts. Experimental and correlational methods converge on the finding that people are more likely to reveal secrets that violate their own moral values. Participants were more willing to reveal immoral secrets as a form of punishment, and this was explained by feelings of moral outrage. Using hypothetical scenarios (Studies 1, 3–6), two controversial events in the news (hackers leaking citizens’ private information; Studies 2a–2b), and participants’ behavioral choices to keep or reveal thousands of diverse secrets that they learned in their everyday lives (Studies 7–8), we present the first glimpse into when and how often people reveal others’ secrets, and one explanation for why they do. We found that theories of self-disclosure do not generalize to others’ secrets: Across diverse methodologies, including real decisions to reveal others’ secrets in everyday life, people reveal others’ secrets as punishment, in response to the moral outrage those secrets elicit.

From the Discussion

Our data serve as a warning flag: one should be aware of a potential confidant’s views on the morality of the behavior. Across 14 studies (Studies 1–8; Supplemental Studies S1–S5), we found that people are more likely to reveal other people’s secrets to the degree that they, personally, view the secret act as immoral. Emotional reactions to the immoral secrets (moral outrage, as well as anger and disgust) explained this effect and were associated, both correlationally and experimentally, with revealing the secret as a form of punishment. People were significantly more likely to reveal the same secret if the behavior was done intentionally (vs. unintentionally), if it had gone unpunished (vs. already been punished by someone else), and in the context of a moral framing (vs. no moral framing). These experiments suggest a causal role for both the perceived immorality of the secret behavior and the participants’ desire to see the behavior punished. Additionally, we found that this psychological process did not generalize to non-secret information. Although people were more likely to reveal both secret and non-secret information when they perceived it to be more immoral, they did so for different reasons: as an appropriate punishment for the immoral secrets, and as interesting fodder for gossip for the immoral non-secrets.

Wednesday, January 2, 2019

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.
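To make that distinction concrete, here is a minimal sketch (mine, not the authors'; it assumes scikit-learn is installed, and the dataset and feature names are hypothetical) of one common technique aimed at inscrutability: approximating an opaque model with a shallow "global surrogate" tree whose rules can be read off.

# Illustrative sketch of a global surrogate explanation, not the Article's method.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real decision-making task.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]  # hypothetical names

# The opaque model whose internal rules are hard to state sensibly.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# A shallow tree trained to mimic the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree gives a readable (approximate) description of the rules;
# it says nothing about *why* those rules are what they are.
print(export_text(surrogate, feature_names=feature_names))
print("fidelity:", surrogate.score(X, black_box.predict(X)))

Note what such a sketch does and does not buy: the printed tree is a sensible, approximate description of the rules, addressing inscrutability, but it cannot tell a reviewer whether those rules are normatively defensible, which is the separate nonintuitiveness problem.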


In most cases, intuition serves as the unacknowledged bridge from a descriptive account to a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
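As a loose illustration of what explaining the development process might look like in practice (my gloss, not the Article's proposal), a team can record the choices behind a model in the spirit of model cards and datasheets, so a reviewer can ask why the rules are what they are. Everything below, including every field value, is hypothetical.

# A sketch of process documentation, in the spirit of model cards / datasheets.
from dataclasses import dataclass, field

@dataclass
class ModelProvenance:
    task: str                # what decision the model informs
    training_data: str       # where the data came from, and its known limits
    target_definition: str   # how the predicted outcome was operationalized
    features_used: list = field(default_factory=list)
    features_excluded: dict = field(default_factory=dict)  # feature -> reason

card = ModelProvenance(
    task="screen credit applications",
    training_data="2015-2020 applications; approved loans only, so labels are censored",
    target_definition="default within 24 months of origination",
    features_used=["income", "debt_ratio"],
    features_excluded={"zip_code": "likely proxy for protected class"},
)
print(card)

A record like this does not make the model itself more interpretable; it exposes the human choices (data, target, feature selection) that a normative evaluation would need to interrogate.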
