Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Intentions.

Thursday, October 1, 2020

Intentional Action Without Knowledge

Vekony, R., Mele, A. & Rose, D.
Synthese (2020).

Abstract

In order to be doing something intentionally, must one know that one is doing it? Some philosophers have answered yes. Our aim is to test a version of this knowledge thesis, what we call the Knowledge/Awareness Thesis, or KAT. KAT states that an agent is doing something intentionally only if he knows that he is doing it or is aware that he is doing it. Here, using vignettes featuring skilled action and vignettes featuring habitual action, we provide evidence that, in various scenarios, a majority of non-specialists regard agents as intentionally doing things that the agents do not know they are doing and are not aware of doing. This puts pressure on proponents of KAT and leaves it to them to find a way these results can coexist with KAT.

Conclusion

Our aim was to evaluate KAT empirically. We found that majority responses to our vignettes are at odds with KAT. Our results show that, on an ordinary view of matters, neither knowledge nor awareness of doing something is necessary for doing it intentionally. We tested cases of skilled action and habitual action, and we found that, for both, people ascribed intentionality to an action at an appreciably higher rate than knowledge and awareness.

The research is here.

Tuesday, March 3, 2020

It Pays to Be Yourself

Francesca Gino
hbr.org
Originally posted February 13, 2020

Whether it’s trying to land a new job or a new deal or client, we often focus on making a good initial impression on people, especially when they don’t know us well or the stakes are high. One strategy people often use is to cater to the interests, preferences, and expectations of the person they want to impress. Most people, it seems, believe this is a more promising strategy than being themselves and use it in high-stakes interpersonal first meetings. But research I conducted with Ovul Sezer of the University of North Carolina at Chapel Hill and Laura Huang of Harvard Business School found that those beliefs are wrong.

Our research confirmed that catering to others’ interests and expectations is quite common. When we asked over 450 employed adults to imagine they were about to have an important professional interaction — such as interviewing for their dream job, conducting a valuable negotiation for their company, pitching an entrepreneurial idea to potential investors, or making a presentation to a client — 66% of them indicated they would use catering techniques, rather than simply being themselves; 71% reported believing that catering would be the most effective approach in the situation.

But another study we conducted found that catering was much less effective than being yourself. We asked 166 entrepreneurs to participate in a “fast-pitch” competition held at a private university in the northeastern United States. Each entrepreneur presented his or her venture idea to a panel of three judges: experienced, active members of angel investment groups. The ideas pitched were all in the early stages; none had received any external financing. At the end of the event, the judges collectively deliberated to choose 10 semifinalists who would be invited to participate in the final round. After entrepreneurs made their pitches, we had them answer a few questions about their presentations. We found that when they were genuine in their pitches, they were more than three times as likely to be chosen as semifinalists as when they tried to cater to the judges.

The info is here.

Monday, February 3, 2020

Buddhist Ethics

Maria Heim
Elements in Ethics
DOI: 10.1017/9781108588270
First published online: January 2020

Abstract

“Ethics” was not developed as a separate branch of philosophy in Buddhist traditions until the modern period, though Buddhist philosophers have always been concerned with the moral significance of thoughts, emotions, intentions, actions, virtues, and precepts. Their most penetrating forms of moral reflection have been developed within disciplines of practice aimed at achieving freedom and peace. This Element first offers a brief overview of Buddhist thought and modern scholarly approaches to its diverse forms of moral reflection. It then explores two of the most prominent philosophers from the main strands of the Indian Buddhist tradition – Buddhaghosa and Śāntideva – in a comparative fashion.

The info is here.

Tuesday, November 21, 2017

What The Good Place Can Teach You About Morality

Patrick Allan
Lifehacker.com
Originally posted November 6, 2017

Here is an excerpt:

Doing “Good” Things Doesn’t Necessarily Make You a Good Person

In The Good Place, the version of the afterlife you get sent to is based on a complicated point system. Doing “good” deeds earns you a certain number of positive points, and doing “bad” things will subtract them. Your point total when you die is what decides where you’ll go. Seems fair, right?

Despite the fact that The Good Place makes life feel like a point-based video game, we quickly learn morality isn’t as black and white as positive points and negative points. At one point, Eleanor tries to rack up points by holding doors for people, an action worth 3 points a pop. To put that in perspective, her score is -4,008 and she needs to meet the average of 1,222,821. It would take her a long time to get there, but it’s one way to do it. At least, it would be if it worked. She quickly learns after a while that she didn’t earn any points because she’s not actually trying to be nice to people. Her only goal is to rack up points so she can stay in The Good Place, which is an inherently selfish reason. The situation brings up a valid question: are “good” things done for selfish reasons still “good” things?
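Just for fun, the show's arithmetic can be checked with a few lines of Python. This is only a back-of-the-envelope sketch using the figures quoted above from the episode (score of -4,008, an average of 1,222,821, 3 points per door); none of it comes from any official source.

```python
import math

current_score = -4_008      # Eleanor's point total
target_average = 1_222_821  # average score of Good Place residents
points_per_door = 3         # points earned per door held

# Doors Eleanor would need to hold to close the gap (rounding up)
doors_needed = math.ceil((target_average - current_score) / points_per_door)
print(doors_needed)  # 408943
```

Over 400,000 doors, which helps explain why she gives up on the plan even before learning the points don't count.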

I don’t want to spoil too much, but as the series goes on, we see this question asked time and time again with each of its characters. Chidi may have spent his life studying moral philosophy, but does knowing everything about pursuing “good” mean you are “good”? Tahani spent her entire life as a charitable philanthropist, but she did it all for the questionable pursuit of finally outshining her near-perfect sister. She did a lot of good, but is she “good”? It’s something to consider yourself as you go about your day. Try to do “good” things, but ask yourself every once in a while who those “good” things are really for.

The article is here.

Note: I really enjoy watching The Good Place.  Very clever. 

My spoiler: I think Michael is supposed to be in The Good Place too, not really the architect.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Friday, June 16, 2017

On What Basis Do Terrorists Make Moral Judgments?

Kendra Pierre-Louis
Popular Science
Originally published May 26, 2017

Here is an excerpt:

“Multiple studies across the world have systematically shown that in judging the morality of an action, civilized individuals typically attach greater importance to intentions than outcomes,” Ibáñez told PopSci. “If an action is aimed to induce harm, it does not matter whether it was successful or not: most people consider it as less morally admissible than other actions in which harm was neither intended nor inflicted, or even actions in which harm was caused by accident.”

For most of us, intent matters. If I mean to slam you to the ground and I fail, that’s far worse than if I don’t mean to slam you to the ground and I do. If that sounds like a no-brainer, you should know that for the terrorists in the study, the morality was flipped. They rated accidental harm as worse than the failed intentional harm, because in one situation someone doesn’t get hurt, while in the second situation someone does. Write the study’s authors, “surprisingly, this moral judgement resembles that observed at early development stages.”

Perhaps more chilling, this tendency to focus on the outcomes rather than the underlying intention means that the terrorists are focused more on outcomes than your average person, and that terror behavior is "goal directed." Write the study's authors, "... our sample is characterized by a general tendency to focus more on the outcomes of actions than on the actions' underlying intentions." In essence, terrorism is the world's worst productivity system, because when coupled with rational choice theory—which says that we tend to act in ways that maximize getting our way with the least amount of personal sacrifice—murdering a lot of people to achieve your goal, absent moral stigma, starts to make sense.

The article is here.

Saturday, January 28, 2017

Judgments of Moral Responsibility and Wrongness for Intentional and Accidental Harm and Purity Violations

Mary Parkinson and Ruth M.J. Byrne
The Quarterly Journal of Experimental Psychology

Abstract

Two experiments examine whether people reason differently about intentional and accidental violations in the moral domains of harm and purity, by examining moral responsibility and wrongness judgments for violations that affect others or the self. The first experiment shows that intentional violations are judged to be worse than accidental ones, regardless of whether they are harm or purity violations, e.g., Sam poisons his colleague versus Sam eats his dog, when participants judge how morally responsible was Sam for what he did, or how morally wrong was what Sam did. The second experiment shows that violations of others are judged to be worse than violations of the self, regardless of whether they are harm or purity violations, when their content and context is matched, e.g., on a tropical holiday Sam orders poisonous starfruit for dinner for his friend, or for himself, versus on a tropical holiday Sam orders dog meat for dinner for his friend, or for himself. Moral reasoning is influenced by whether the violation was intentional or accidental, and whether its target was the self or another person, rather than by the moral domain, such as harm or purity.

The article is here.

Thursday, September 29, 2016

Priming Children’s Use of Intentions in Moral Judgement with Metacognitive Training

Gvozdic, K., et al.
Frontiers in Psychology
18 March 2016
http://dx.doi.org/10.3389/fpsyg.2016.00190

Abstract

Typically, adults give a primary role to the agent's intention to harm when performing a moral judgment of accidental harm. By contrast, children often focus on outcomes, underestimating the actor's mental states when judging someone for his action, and rely on what we suppose to be intuitive and emotional processes. The present study explored the processes involved in the development of the capacity to integrate agents' intentions into their moral judgment of accidental harm in 5- to 8-year-old children. This was done by the use of different metacognitive trainings reinforcing different abilities involved in moral judgments (mentalising abilities, executive abilities, or no reinforcement), similar to a paradigm previously used in the field of deductive logic. Children's moral judgments were gathered before and after the training with non-verbal cartoons depicting agents whose actions differed only based on their causal role or their intention to harm. We demonstrated that a metacognitive training could induce an important shift in children's moral abilities, showing that only children who were explicitly instructed to "not focus too much" on the consequences of accidental harm preferentially weighted the agents' intentions in their moral judgments. Our findings confirm that children between the ages of 5 and 8 are sensitive to the intention of agents; however, at that age, this ability is insufficient to give a "mature" moral judgment. Our experiment is the first that suggests the critical role of inhibitory resources in processing accidental harm.

The article is here.

Saturday, October 4, 2014

A Simplified Account of Kant's Ethics

By Onora O'Neill

From Matters of Life and Death, ed. Tom Regan
Copyright 1986, McGraw-Hill Publishing Company.
Excerpted in Contemporary Moral Problems, ed. James E. White
Copyright 1994, West Publishing Company

Kant's moral theory has acquired the reputation of being forbiddingly difficult to understand and, once understood, excessively demanding in its requirements. I don't believe that this reputation has been wholly earned, and I am going to try to undermine it.... I shall try to reduce some of the difficulties.... Finally, I shall compare Kantian and utilitarian approaches and assess their strengths and weaknesses.

The main method by which I propose to avoid some of the difficulties of Kant's moral theory is by explaining only one part of the theory. This does not seem to me to be an irresponsible approach in this case. One of the things that makes Kant's moral theory hard to understand is that he gives a number of different versions of the principle that he calls the Supreme Principle of Morality, and these different versions don't look at all like one another. They also don't look at all like the utilitarians' Greatest Happiness Principle. But the Kantian principle is supposed to play a similar role in arguments about what to do.

To learn the short version of Kant, read on here.