Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Human Behavior. Show all posts

Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., Tepper, S.J. 
Nat Rev Psychol (2023). 

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  
For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Monday, December 16, 2019

Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation

Ishowo-Oloko, F., Bonnefon, J., Soroye, Z. et al.
Nat Mach Intell 1, 517–521 (2019)
doi:10.1038/s42256-019-0113-5

Abstract

Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
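The repeated prisoner's dilemma at the heart of this experiment is easy to sketch in code. The toy simulation below is an illustration only, not the authors' protocol: the payoff values, the bot's tit-for-tat strategy, and the "biased partner" who defects for the first few rounds are all assumptions, chosen to show how a prior bias against a cooperative partner depresses both players' totals.

```python
# Illustrative repeated prisoner's dilemma. Payoffs and strategies are
# assumptions, not the values used by Ishowo-Oloko et al. (2019).
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not history else history[-1][1]

def biased_partner(history, distrust_rounds):
    """Defect for the first `distrust_rounds` rounds, then reciprocate."""
    if len(history) < distrust_rounds:
        return "D"
    return "C" if not history else history[-1][1]

def play(rounds, distrust_rounds):
    bot_hist, partner_hist = [], []
    bot_total = partner_total = 0
    for _ in range(rounds):
        b = tit_for_tat(bot_hist)
        p = biased_partner(partner_hist, distrust_rounds)
        bot_total += PAYOFFS[(b, p)]
        partner_total += PAYOFFS[(p, b)]
        bot_hist.append((b, p))        # each side records (own move, partner move)
        partner_hist.append((p, b))
    return bot_total, partner_total

print(play(20, 0))  # → (60, 60): mutual cooperation throughout
print(play(20, 5))  # → (19, 24): early distrust locks both into defection
```

In this toy version the partner's early defections are never forgiven by a strict tit-for-tat bot, echoing the paper's finding that participants "do not recover" from their prior bias against bots.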

Monday, December 9, 2019

The rise of the greedy-brained ape: Book Review

Tim Radford
Nature.com
Originally published October 30, 2019

Here is an excerpt:

For her hugely enjoyable sprint through human evolutionary history, Vince (erstwhile news editor of this journal) intertwines many threads: language and writing; the command of tools, pursuit of beauty and appetite for trinkets; and the urge to build things, awareness of time and pursuit of reason. She tracks the cultural explosion, triggered by technological discovery, that gathered pace with the first trade in obsidian blades in East Africa at least 320,000 years ago. That has climaxed this century with the capacity to exploit 40% of the planet’s total primary production.

How did we do it? Vince examines, for instance, our access to and use of energy. Other primates must chew for five hours a day to survive. Humans do so for no more than an hour. We are active 16 hours a day, a tranche during which other mammals sleep. We learn by blind variation and selective retention. Vince proposes that our ancestors enhanced that process of learning from each other with the command of fire: it is 10 times more efficient to eat cooked meat than raw, and heat releases 50% of all the carbohydrates in cereals and tubers.

Thus Homo sapiens secured survival and achieved dominance by exploiting extra energy. The roughly 2,000 calories ideally consumed by one human each day generates about 90 watts: enough energy for one incandescent light bulb. At the flick of a switch or turn of a key, the average human now has access to roughly 2,300 watts of energy from the hardware that powers our lives — and the richest have much more.
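The light-bulb figure is easy to verify. This back-of-envelope check (mine, not the book's) converts a daily intake of roughly 2,000 kilocalories into continuous power:

```python
# Back-of-envelope check of the "one light bulb" figure.
KCAL_TO_JOULES = 4184           # 1 kilocalorie = 4,184 joules
SECONDS_PER_DAY = 24 * 60 * 60

daily_intake_kcal = 2000
watts = daily_intake_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
print(round(watts))  # → 97, close to the "about 90 watts" quoted above
```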

The book review is here.

Sunday, May 26, 2019

Brain science should be making prisons better, not trying to prove innocence

Arielle Baskin-Sommers
theconversation.com
Originally posted November 1, 2017

Here is an excerpt:

Unfortunately, when neuroscientific assessments are presented to the court, they can sway juries, regardless of their relevance. Using these techniques to produce expert evidence doesn’t bring the court any closer to truth or justice. And with a single brain scan costing thousands of dollars, plus expert interpretation and testimony, it’s an expensive tool out of reach for many defendants. Rather than helping untangle legal responsibility, neuroscience here causes an even deeper divide between the rich and the poor, based on pseudoscience.

While I remain skeptical about the use of neuroscience in the judicial process, there are a number of places where its findings could help corrections systems develop policies and practices based on evidence.

Solitary confinement harms more than helps

Take, for instance, the use within prisons of solitary confinement as a punishment for disciplinary infractions. In 2015, the Bureau of Justice Statistics reported that nearly 20 percent of federal and state prisoners and 18 percent of local jail inmates spent time in solitary.

Research consistently demonstrates that time spent in solitary increases the chances of persistent emotional trauma and distress. Solitary can lead to hallucinations, fantasies and paranoia; it can increase anxiety, depression and apathy as well as difficulties in thinking, concentrating, remembering, paying attention and controlling impulses. People placed in solitary are more likely to engage in self-mutilation as well as exhibit chronic rage, anger and irritability. The term “isolation syndrome” has even been coined to capture the severe and long-lasting effects of solitary.

The info is here.

Wednesday, February 27, 2019

Business Ethics And Integrity: It Starts With The Tone At The Top

Betsy Atkins
Forbes.com
Originally posted 7, 2019

Here is the conclusion:

Transparency leads to empowerment:

Share your successes and your failures and look to everyone to help build a better company. By including everyone, you create the elusive “we” that is the essence of company culture. Transparency leads to a company culture that creates an outcome because the CEO creates a bigger purpose for the organization than just making money or reaching quarterly numbers. Company culture guru Kenneth Kurtzman, author of Common Purpose, said it best: “CEOs need to know how to read their organizations’ emotional tone and need to engage behaviors that build trust including leading-by-listening, building bridges, showing compassion and caring, demonstrating their own commitment to the organization, and giving employees the authority to do their job while inspiring them to do their best work.”

There is no substitute for CEO leadership in creating a company culture of integrity.  A board that supports the CEO in building a company culture of integrity, transparency, and collaboration will be supporting a successful company.

The info is here.

Friday, December 21, 2018

You can’t characterize human nature if studies overlook 85 percent of people on Earth

Daniel Hruschka
The Conversation
Originally posted November 16, 2018

Here is an excerpt:

To illustrate the extent of this bias, consider that more than 90 percent of studies recently published in psychological science’s flagship journal come from countries representing less than 15 percent of the world’s population.

If people thought and behaved in basically the same ways worldwide, selective attention to these typical participants would not be a problem. Unfortunately, in those rare cases where researchers have reached out to a broader range of humanity, they frequently find that the “usual suspects” most often included as participants in psychology studies are actually outliers. They stand apart from the vast majority of humanity in things like how they divvy up windfalls with strangers, how they reason about moral dilemmas and how they perceive optical illusions.

Given that these typical participants are often outliers, many scholars now describe them and the findings associated with them using the acronym WEIRD, for Western, educated, industrialized, rich and democratic.

The info is here.

Monday, October 8, 2018

Evolutionary Psychology

Downes, Stephen M.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms. What distinguishes evolutionary psychologists from many cognitive psychologists is the proposal that the relevant internal mechanisms are adaptations—products of natural selection—that helped our ancestors get around the world, survive and reproduce. To understand the central claims of evolutionary psychology we require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind. Philosophers are interested in evolutionary psychology for a number of reasons. For philosophers of science —mostly philosophers of biology—evolutionary psychology provides a critical target. There is a broad consensus among philosophers of science that evolutionary psychology is a deeply flawed enterprise. For philosophers of mind and cognitive science evolutionary psychology has been a source of empirical hypotheses about cognitive architecture and specific components of that architecture. Philosophers of mind are also critical of evolutionary psychology but their criticisms are not as all-encompassing as those presented by philosophers of biology. Evolutionary psychology is also invoked by philosophers interested in moral psychology both as a source of empirical hypotheses and as a critical target.

The entry is here.

Wednesday, April 4, 2018

Simple moral code supports cooperation

Charles Efferson & Ernst Fehr
Nature
Originally posted March 7, 2018

The evolution of cooperation hinges on the benefits of cooperation being shared among those who cooperate. In a paper in Nature, Santos et al. investigate the evolution of cooperation using computer-based modelling analyses, and they identify a rule for moral judgements that provides an especially powerful system to drive cooperation.

Cooperation can be defined as a behaviour that is costly to the individual providing help, but which provides a greater overall societal benefit. For example, if Angela has a sandwich that is of greater value to Emmanuel than to her, Angela can increase total societal welfare by giving her sandwich to Emmanuel. This requires sacrifice on her part if she likes sandwiches. Reciprocity offers a way for benefactors to avoid helping uncooperative individuals in such situations. If Angela knows Emmanuel is cooperative because she and Emmanuel have interacted before, her reciprocity is direct. If she has heard from others that Emmanuel is a cooperative person, her reciprocity is indirect — a mechanism of particular relevance to human societies.

A strategy is a rule that a donor uses to decide whether or not to cooperate, and the evolution of reciprocal strategies that support cooperation depends crucially on the amount of information that individuals process. Santos and colleagues develop a model to assess the evolution of cooperation through indirect reciprocity. The individuals in their model can consider a relatively large amount of information compared with that used in previous studies.
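Indirect reciprocity of this kind is often modelled as a "donation game" with public reputations. The sketch below is a toy image-scoring model, far simpler than the norm space Santos and colleagues explore, and every parameter in it (population size, payoffs, the "stern judging"-style assessment rule) is an assumption. It shows the basic machinery: donors consult a recipient's reputation before helping, and an assessment rule updates the donor's reputation after each interaction.

```python
import random

random.seed(1)

N, ROUNDS = 50, 2000
BENEFIT, COST = 2.0, 1.0

# Strategies: discriminators help only good-reputation partners;
# defectors never help. (A toy model; Santos et al. study a much
# richer space of social norms.)
strategy = ["discriminator"] * 40 + ["defector"] * 10
reputation = [True] * N          # True = good standing, publicly known
payoff = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    helped = strategy[donor] == "discriminator" and reputation[recipient]
    if helped:
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
    # Stern-judging-style assessment: helping a good partner or refusing
    # a bad one keeps the donor in good standing; anything else is bad.
    reputation[donor] = (helped == reputation[recipient])

def avg(indices):
    return sum(payoff[i] for i in indices) / len(indices)

discs = [i for i in range(N) if strategy[i] == "discriminator"]
defs_ = [i for i in range(N) if strategy[i] == "defector"]
print(avg(discs) > avg(defs_))  # discriminators out-earn defectors
```

Even in this stripped-down version, reputation information lets cooperators channel help toward one another and starve defectors, which is the core intuition behind reciprocity-based accounts of cooperation.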

The review is here.

Tuesday, March 27, 2018

Neuroblame?

Stephen Rainey
Practical Ethics
Originally posted February 15, 2018

Here is an excerpt:

Rather than bio-mimetic prostheses, replacement limbs and so on, we can predict that technologies superior to the human body will be developed. Controlled by the brains of users, these enhancements will amount to extensions of the human body, and allow greater projection of human will and intentions in the world. We might imagine a cohort of brain controlled robots carrying out mundane tasks around the home, or buying groceries and so forth, all while the user gets on with something altogether more edifying (or does nothing at all but trigger and control their bots). Maybe a highly skilled, and well-practised, user could control legions of such bots, each carrying out separate tasks.

Before getting too carried away with this line of thought, it’s probably worth getting to the point. The issue worth looking at concerns what happens when things go wrong. It’s one thing to imagine someone sending out a neuro-controlled assassin-bot to kill a rival. Regardless of the unusual route taken, this would be a pretty simple case of causing harm. It would be akin to someone simply assassinating their rival with their own hands. However, it’s another thing to consider how sloppily framing the goal for a bot, such that it ends up causing harm, ought to be parsed.

The blog post is here.

Tuesday, March 20, 2018

The Psychology of Clinical Decision Making: Implications for Medication Use

Jerry Avorn
February 22, 2018
N Engl J Med 2018; 378:689-691

Here is an excerpt:

The key problem is medicine’s ongoing assumption that clinicians and patients are, in general, rational decision makers. In reality, we are all influenced by seemingly irrational preferences in making choices about reward, risk, time, and trade-offs that are quite different from what would be predicted by bloodless, if precise, quantitative calculations. Although we physicians sometimes resist the syllogism, if all humans are prone to irrational decision making, and all clinicians are human, then these insights must have important implications for patient care and health policy. There have been some isolated innovative applications of that understanding in medicine, but despite a growing number of publications about the psychology of decision making, most medical care — at the bedside and the systems level — is still based on a “rational actor” understanding of how we make decisions.

The choices we make about prescription drugs provide one example of how much further medicine could go in taking advantage of a more nuanced understanding of decision making under conditions of uncertainty — a description that could define the profession itself. We persist in assuming that clinicians can obtain comprehensive information about the comparative worth (clinical as well as economic) of alternative drug choices for a given condition, assimilate and evaluate all the findings, and synthesize them to make the best drug choices for our patients. Leaving aside the access problem — the necessary comparative effectiveness research often doesn’t exist — actual drug-utilization data make it clear that real-world prescribing choices are in fact based heavily on various “irrational” biases, many of which have been described by behavioral economists and other decision theorists.

The article is here.

Thursday, February 22, 2018

NIH adopts new rules on human research, worrying behavioral scientists

William Wan
The Washington Post
Originally posted January 24, 2018

Last year, the National Institutes of Health announced plans to tighten its rules for all research involving humans — including new requirements for scientists studying human behavior — and touched off a panic.

Some of the country’s biggest scientific associations, including the American Psychological Association and the Federation of Associations in Behavioral and Brain Sciences, penned impassioned letters over the summer warning that the new policies could slow scientific progress, increase red tape and present obstacles for researchers working in smaller labs with fewer financial and administrative resources to deal with the added requirements. More than 3,500 scientists signed an open letter to NIH director Francis Collins.

The new rules are scheduled to take effect Thursday. They will have a big impact on how research is conducted, especially in fields like psychology and neuroscience. NIH distributes more than $32 billion each year, making it the largest public funder of biomedical and health research in the world, and the rules apply to any NIH-supported work that studies human subjects and is evaluating the effects of interventions on health or behavior.

The article is here.

Thursday, February 1, 2018

Ethics for healthcare data is obsessed with risk – not public benefits

Tim Spector and Barbara Prainsack
The Conversation
Originally published January 5, 2018

Here is an excerpt:

Health researchers working with human participants – or their identifiable information – need to jump through lots of ethical and bureaucratic hoops. The underlying rationale is that health research poses particularly high risks to people, and that these risks need to be minimised. But does the same rationale apply to non-invasive research using digital health data? Setting aside physically invasive research, which absolutely should maintain the most stringent of safeguards, is data-based health research really riskier than other research that analyses people's information?

Many corporations can use data from their customers for a wide range of purposes without needing research ethics approval, because their users have already "agreed" to this (by ticking a box), or because the activity itself doesn't qualify as health research. But is the assumption that it is less risky justified?

Facebook and Google hold voluminous and fine-grained datasets on people. They analyse pictures and text posted by users. But they also study behavioural information, such as whether or not users "like" something or support political causes. They do this to profile users and discern new patterns connecting previously unconnected traits and behaviours. These findings are used for marketing; but they also contribute to knowledge about human behaviour.

The information is here.

Friday, December 29, 2017

Freud in the scanner

M. M. Owen
aeon.co
Originally published December 7, 2017

Here is an excerpt:

This is why Freud is less important to the field than what Freud represents. Researching this piece, I kept wondering: why hang on to Freud? He is an intensely polarising figure, so polarising that through the 1980s and ’90s there raged the so-called Freud Wars, fighting on one side of which were a whole team of authors driven (as the historian of science John Forrester put it in 1997) by the ‘heartfelt wish that Freud might never have been born or, failing to achieve that end, that all his works and influence be made as nothing’. Indeed, a basic inability to track down anyone with a dispassionate take on psychoanalysis was a frustration of researching this essay. The certitude that whatever I write here will enrage some readers hovers at the back of my mind as I think ahead to skimming the comments section. Preserve subjectivity, I thought, fine, I’m onboard. But why not eschew the heavily contested Freudianism for the psychotherapy of Irvin D Yalom, which takes an existentialist view of the basic challenges of life? Why not embrace Viktor Frankl’s logotherapy, which prioritises our fundamental desire to give life meaning, or the philosophical tradition of phenomenology, whose first principle is that subjectivity precedes all else?

Within neuropsychoanalysis, though, Freud symbolises the fact that, to quote the neuroscientist Ramachandran’s Phantoms in the Brain (1998), you can ‘look for laws of mental life in much the same way that a cardiologist might study the heart or an astronomer study planetary motion’. And on the clinical side, it is simply a fact that before Freud there was really no such thing as therapy, as we understand that word today. In Yalom’s novel When Nietzsche Wept (1992), Josef Breuer, Freud’s mentor, is at a loss for how to counsel the titular German philosopher out of his despair: ‘There is no medicine for despair, no doctor for the soul,’ he says. All Breuer can recommend are therapeutic spas, ‘or perhaps a talk with a priest’.

The article is here.

Thursday, June 29, 2017

Can a computer administer a Wechsler Intelligence Test?

Vrana, Scott R.; Vrana, Dylan T.
Professional Psychology: Research and Practice, Vol 48(3), Jun 2017, 191-198.

Abstract

Prompted by the rapid development of Pearson’s iPad-based Q-interactive platform for administering individual tests of cognitive ability (Pearson, 2016c), this article speculates about what it would take for a computer to administer the current versions of the Wechsler individual intelligence tests without the involvement of a psychologist or psychometrist. We consider the mechanics of administering and scoring each subtest and the more general clinical skills of motivating the client to perform, making observations of verbal and nonverbal behavior, and responding to the client’s off-task comments, questions, and nonverbal cues. It is concluded that we are very close to the point, given current hardware and artificial intelligence capabilities, at which administration of all subtests of the Wechsler Adult Intelligence Scale-Fourth Edition (PsychCorp, 2008) and Wechsler Intelligence Scale for Children-Fifth Edition (PsychCorp, 2014), and all assessment functions of the human examiner, could be performed by a computer. Potential acceptability of computer administration by clients and the psychological community is considered.

The article is here.

Friday, March 17, 2017

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

BEC CREW
Science Alert
Originally published February 13, 2017

Here is an excerpt:

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

"This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning," one of the team, Joel Z Leibo, told Matt Burgess at Wired.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents - two of them played as wolves, and one as the prey.

The article is here.

Wednesday, July 13, 2016

Does moral identity effectively predict moral behavior?: A meta-analysis

Steven G. Hertz and Tobias Krettenauer
Review of General Psychology, Vol 20(2), Jun 2016, 129-140.
http://dx.doi.org/10.1037/gpr0000062

Abstract

This meta-analysis examined the relationship between moral identity and moral behavior. It was based on 111 studies from a broad range of academic fields including business, developmental psychology and education, marketing, sociology, and sport sciences. Moral identity was found to be significantly associated with moral behavior (random effects model, r = .22, p < .01, 95% CI [.19, .25]). Effect sizes did not differ for behavioral outcomes (prosocial behavior, avoidance of antisocial behavior, ethical behavior). Studies that were entirely based on self-reports yielded larger effect sizes. In contrast, the smallest effect was found for studies that were based on implicit measures or used priming techniques to elicit moral identity. Moreover, a marginally significant effect of culture indicated that studies conducted in collectivistic cultures yielded lower effect sizes than studies from individualistic cultures. Overall, the meta-analysis provides support for the notion that moral identity strengthens individuals’ readiness to engage in prosocial and ethical behavior as well as to abstain from antisocial behavior. However, moral identity fares no better as a predictor of moral action than other psychological constructs.
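Intervals like the reported r = .22, 95% CI [.19, .25] are conventionally computed on the Fisher-z scale and back-transformed. The check below assumes that approach (the abstract does not state the exact computation): it derives the implied standard error from the interval's half-width in z units and confirms that a symmetric z-scale interval around z(.22) reproduces the reported bounds.

```python
import math

# Consistency check of r = .22, 95% CI [.19, .25] under the standard
# Fisher-z approach (an assumption; the paper's exact method is not stated).
def fisher_z(r):
    return math.atanh(r)        # z = 0.5 * ln((1 + r) / (1 - r))

def inv_z(z):
    return math.tanh(z)

z_mean = fisher_z(0.22)
# Implied standard error from the interval half-width on the z scale:
se = (fisher_z(0.25) - fisher_z(0.19)) / (2 * 1.96)
lo, hi = inv_z(z_mean - 1.96 * se), inv_z(z_mean + 1.96 * se)
print(round(lo, 2), round(hi, 2))  # → 0.19 0.25
```

The round-trip recovers the published interval, so the reported numbers are internally consistent with a symmetric confidence interval on the transformed scale.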

And the conclusion...

Overall, three major conclusions can be drawn from this meta-analysis. First, considering all the empirical evidence available, it seems impossible to deny that moral identity positively predicts moral behavior in individuals from Western cultures. Although this finding does not refute research on moral hypocrisy, it puts the claim that people want to appear moral, rather than be moral, into perspective (Batson, 2011; Frimer et al., 2014). If this were always true, why would people who feel that morality matters to them engage more readily in moral action? Second, explicit self-report measures represent a valid and valuable approach to the moral identity construct. This is an important conclusion because many scholars feel that more effort should be invested into developing moral identity measures (e.g., Hardy & Carlo, 2011b; Jennings et al., 2015). Third, although moral identity positively predicts moral behavior, the effect is not much stronger than the effects of other constructs, notably moral judgment or moral emotions. Thus, there is no reason to prioritize the moral identity construct as a predictor of moral action at the expense of other factors. Instead, it seems more appropriate to consider moral identity in a broader conceptual framework where it interacts with other personological and situational factors to bring about moral action. This approach is well underway in studies that investigate the moderating and mediating role of moral identity as a predictor of moral action (e.g., Aquino et al., 2007; Hardy et al., 2015). As part of this endeavor, it might become necessary to give up an overly homogenous notion of the moral identity construct in order to acknowledge that moral identities may consist of different motivations and goal orientations.
Recently, Krettenauer and Casey (2015) provided evidence for two different types of moral identities, one that is primarily concerned with demonstrating morality to others, and one that is more inwardly defined by being consistent with one's values and beliefs. This differentiation has important ramifications for moral emotions and moral action and helps to explain why moral identities sometimes strengthen individuals' motivation to act morally and sometimes undermine it.

Wednesday, March 2, 2016

Beyond the paleo

Our morality may be a product of natural selection, but that doesn’t mean it’s set in stone

by Russell Powell & Allen Buchanan
Aeon Magazine
Originally published December 12, 2013

For centuries now, conservative thinkers have argued that significant social reform is impossible, because human nature is inherently limited. The argument goes something like this: sure, it would be great to change the world, but it will never work, because people are too flawed, lacking the ability to see beyond their own interests and those of the groups to which they belong. They have permanent cognitive, motivational and emotional deficits that make any deliberate, systematic attempt to improve human society futile at best. Efforts to bring about social or moral progress are naive about the natural limits of the human animal and tend to have unintended consequences. They are likely to make things worse rather than better.

It’s tempting to nod along at this, and think humans are irredeemable, or at best, permanently flawed. But it’s not clear that such a view stands up to empirical scrutiny. For the conservative argument to prevail, it is not enough that humans exhibit tendencies toward selfishness, group-mindedness, partiality toward kin and kith, apathy toward strangers, and the like. It must also be the case that these tendencies are unalterable, either due to the inherent constraints of human psychology or to our inability to figure out how to modify these constraints without causing greater harms. The trouble is, these assumptions about human nature are largely based on anecdote or selective and controversial readings of history. A more thorough look at the historical record suggests they are due for revision.

The article is here.

Thursday, August 20, 2015

life after faith

Richard Marshall interviews Philip Kitcher
3:AM Magazine
Originally published on August 2, 2015

Here is an excerpt:

Thought experiments work when, and only when, they call into action cognitive capacities that might reliably deliver the conclusions drawn. When the question posed is imprecise, your thought experiment is typically useless. But even more crucial is the fact that the stripped-down scenarios many philosophers love simply don’t mesh with our intellectual skills. The story rules out by fiat the kinds of reactions we naturally have in the situation described. Think of the trolley problem in which you are asked to decide whether to push the fat man off the bridge. If you imagine yourself – seriously imagine yourself – in the situation, you’d look around for alternatives, you’d consider talking to the fat man, volunteering to jump with him, etc. etc. None of that is allowed. So you’re offered a forced choice about which most people I know are profoundly uneasy. The “data” delivered are just the poor quality evidence any reputable investigator would worry about using. (I like Joshua Greene’s fundamental idea of investigating people’s reactions; but I do wish he’d present them with better questions.)

Philosophers love to appeal to their “intuitions” about these puzzle cases. They seem to think they have access to little nuggets of wisdom. We’d all be much better off if the phrase “My intuition is …” were replaced by “Given my evolved psychological adaptations and my distinctive enculturation, when faced by this perplexing scenario, I find myself, more or less tentatively, inclined to say …” Maybe there are occasions in which the cases bring out some previously unnoticed facet of the meaning of a word. But, for a pragmatist like me, the important issues concern the words we might deploy to achieve our purposes, rather than the language we actually use.

If the intuition-mongering were abandoned, would that be the end of philosophy? It would be the end of a certain style of philosophy – a style that has cut philosophy off, not only from the humanities but from every other branch of inquiry and culture. (In my view, most of current Anglophone philosophy is quite reasonably seen as an ingrown conversation pursued by very intelligent people with very strange interests.) But it would hardly stop the kinds of investigation that the giants of the past engaged in. In my view, we ought to replace the notion of analytic philosophy by that of synthetic philosophy. Philosophers ought to aspire to know lots of different things and to forge useful synthetic perspectives.

The entire interview is here.

Monday, June 8, 2015

Is Morality Innate?

By Jesse J. Prinz
Forthcoming in W. Sinnott-Armstrong (ed.), Moral Psychology. Oxford University Press

Here is an excerpt:

The link between morality and human nature has been a common theme since ancient times, and, with the rise of modern empirical moral psychology, it remains equally popular today. Evolutionary ethicists, ethologists, developmental psychologists, social neuroscientists, and even some cultural anthropologists tend to agree that morality is part of the bioprogram (e.g., Cosmides & Tooby, 1992; de Waal, 1996; Haidt & Joseph, 2004; Hauser, 2006; Ruse, 1991; Sober & Wilson, 1998; Turiel, 2002). Recently, researchers have begun to look for moral modules in the brain, and they have been increasingly tempted to speculate about the moral acquisition device, an innate faculty for norm acquisition akin to the celebrated language acquisition device promulgated by Chomsky (Dwyer, 1999; Mikhail, 2000; Hauser, this volume). All this talk of modules and mechanism may make some shudder, especially if they recall that eugenics emerged out of an effort to find the biological sources of evil. Yet the tendency to postulate an innate moral faculty is almost irresistible. For one thing, it makes us appear nobler as a species, and for another, it offers an explanation of the fact that people in every corner of the globe seem to have moral rules. Moral nativism is, in this respect, an optimistic doctrine—one that makes our great big world seem comfortingly smaller.

The chapter is here.