Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Language. Show all posts

Wednesday, March 27, 2019

Language analysis reveals recent and unusual 'moral polarisation' in Anglophone world

Andrew Masterson
Cosmos Magazine
Originally published March 4, 2019

Here is an excerpt:

Words conveying moral values in more specific domains, however, did not always accord to a similar pattern – revealing, say the researchers, the changing prominence of differing sets of concerns surrounding concepts such as loyalty and betrayal, individualism, and notions of authority.

Remarkably, perhaps, the study is only the second in the academic literature that uses big data to examine shifts in moral values over time. The first, by psychologists Pelin Kesebir and Selin Kesebir, and published in The Journal of Positive Psychology in 2012, used two approaches to track the frequency of morally loaded words in a corpus of US books across the twentieth century.

The results revealed a “decline in the use of general moral terms”, and significant downturns in the use of words such as honesty, patience, and compassion.

Haslam and colleagues found that at headline level their results, using a larger dataset, reflected the earlier findings. However, fine-grain investigations revealed a more complex picture. Nevertheless, they say, the changes in the frequency of use for particular types of moral terms are sufficient to allow the twentieth century to be divided into five distinct historical periods.

The words used in the search were taken from lists collated under what is known as Moral Foundations Theory (MFT), a generally supported framework that rejects the idea that morality is monolithic. Instead, the researchers explain, MFT aims to “categorise the automatic and intuitive emotional reactions that commonly occur in moral evaluation across cultures, and [identifies] five psychological systems (or foundations): Harm, Fairness, Ingroup, Authority, and Purity.”

The info is here.

Thursday, January 24, 2019

What Could Be Wrong with a Little ‘Moral Clarity’?

Frank Guan
The New York Times Magazine
Originally posted January 2, 2019

If, in politics, words are weapons, they often prove themselves double-edged. So it was when, on the summer night that Alexandria Ocasio-Cortez learned that she had won a Democratic congressional primary over a 10-term incumbent, she provided a resonant quote to a TV reporter. “I think what we’ve seen is that working-class Americans want a clear champion,” she said, “and there is nothing radical about moral clarity in 2018.” Dozens of news videos and articles would cite those words as journalists worked to interpret what Ocasio-Cortez’s triumph, repeated in November’s general election, might represent for the American left and its newest star.

Until recently, “moral clarity” was more likely to signal combativeness toward the left, not from it: It served for decades as a badge of membership among conservative hawks and cultural crusaders. But in the Trump era, militant certainty takes precedence across the political spectrum. On the left, “moral clarity” can mean taking an unyielding stand against economic inequality or social injustice, climate change or gun violence. Closer to the center, it can take on a sonorous, transpartisan tone, as when Senator Robert Menendez, a Democrat, and former Speaker Paul Ryan, a Republican, each called for “moral clarity” in the White House reaction to the murder of the journalist Jamal Khashoggi. And it can fly beyond politics altogether, as when the surgeon and author Atul Gawande writes that better health care “does not take genius. It takes diligence. It takes moral clarity.” We hear about moral clarity any time there is impatience with equivocation, delay, conciliation and confusion, whenever people long for rapid action based on truths they hold to be self-evident.

The info is here.

Wednesday, October 10, 2018

Urban Meyer, Ohio State Football, and How Leaders Ignore Unethical Behavior

David Mayer
Harvard Business Review
Originally posted September 4, 2018

Here is an excerpt:

A sizable literature in management and psychology helps us understand how people become susceptible to moral biases and make choices that are inconsistent with their values and the values of their organizations. Reading the report with that lens can help leaders better understand the biases that get in the way of ethical conduct and ethical organizations.

Performance over principles. One number may surpass all other details in this case: 90%. That’s the percentage of games the team has won under Meyer as head coach since he joined Ohio State in 2012. Psychological research shows that in almost every area of life, being moral is weighted as more important than being competent. However, in competitive environments such as work and sports, the classic findings flip: competence is prized over character. Although the report does not mention anything about the team’s performance or the resulting financial and reputational benefits of winning, the program’s success may have crowded out concerns over the allegations against Smith and about the many other problematic behaviors he showed.

Unspoken values. Another factor that can increase the likelihood of making unethical decisions is the absence of language around values. Classic research in organizations has found that leaders tend to be reluctant to use “moral language.” For example, leaders are more likely to talk about deadlines, objectives, and effectiveness than values such as integrity, respect, and compassion. Over time, this can license unethical conduct.

The info is here.

Saturday, September 1, 2018

Why Ethical People Become Unethical Negotiators

Dina Gerdeman
Forbes.com
Originally posted July 31, 2018

Here is an excerpt:

With profit and greed driving the desire to deceive, it’s not surprising that negotiators often act unethically. But it’s too simplistic to think people always enter a negotiation looking to dupe the other side.

Sometimes negotiators stretch the truth unintentionally, falling prey to what Bazerman and his colleagues call “bounded ethicality” by engaging in unethical behavior that contradicts their values without knowing it.

Why does this happen? In the heat of negotiations, “ethical fading” comes into play, and people are unable to see the ethical implications of their actions because their desire to win gets in the way. The end result is deception.

In business, with dollars at stake, many people will interpret situations in ways that naturally favor them. Take Bazerman’s former dentist, who always seemed too quick to drill. “He was overtreating my mouth, and it didn’t make sense,” he says.

In service professions, he explains, people often have conflicts of interest. For instance, a surgeon may believe that surgery is the proper course of action, but her perception is biased: She has an incentive and makes money off the decision to operate. Another surgeon might just as easily come to the conclusion that if it’s not bothering you, don’t operate. “Lawyers are affected by how long a case takes to settle,” he adds.

The info is here.

Tuesday, July 24, 2018

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.

So it matters who is writing the code because the code defines the algorithm, which makes the judgement on the basis of the data.

The information is here.

Tuesday, June 5, 2018

Is There Such a Thing as Truth?

Errol Morris
Boston Review
Originally posted April 30, 2018

Here is an excerpt:

In fiction, we are often given an imaginary world with seemingly real objects—horses, a coach, a three-cornered hat and wig. But what about the objects of science—positrons, neutrinos, quarks, gravity waves, Higgs bosons? How do we reckon with their reality?

And truth. Is there such a thing? Can we speak of things as unambiguously true or false? In history, for example, are there things that actually happened? Louis XVI guillotined on January 21, 1793, at what has become known as the Place de la Concorde. True or false? Details may be disputed—a more recent example: how large, comparatively, was Donald Trump’s victory in the electoral college in 2016, or the crowd at his inauguration the following January? But do we really doubt that Louis’s bloody head was held up before the assembled crowd? Or doubt the existence of the curved path of a positron in a bubble chamber? Even though we might not know the answers to some questions—“Was Louis XVI decapitated?” or “Are there positrons?”—we accept that there are answers.

And yet, we read about endless varieties of truth. Coherence theories of truth. Pragmatic, relative truths. Truths for me, truths for you. Dog truths, cat truths. Whatever. I find these discussions extremely distasteful and unsatisfying. To say that a philosophical system is “coherent” tells me nothing about whether it is true. Truth is not hermetic. I cannot hide out in a system and assert its truth. For me, truth is about the relation between language and the world. A correspondence idea of truth. Coherence theories of truth are of little or no interest to me. Here is the reason: they are about coherence, not truth. We are talking about whether a sentence or a paragraph or group of paragraphs is true when set up against the world. Thackeray, introducing the fictional world of Vanity Fair, evokes the objects of a world he is familiar with—“a large family coach, with two fat horses in blazing harnesses, driven by a fat coachman in a three-cornered hat and wig, at the rate of four miles an hour.”

The information is here.

Friday, May 4, 2018

Psychology will fail if it keeps using ancient words like “attention” and “memory”

Olivia Goldhill
Quartz.com
Originally published April 7, 2018

Here is an excerpt:

Then there are “jangle fallacies,” when two things that are the same are seen as different because they have different names. For example, “working memory” is used to describe the ability to keep information in mind. It’s not clear this is meaningfully different from simply “paying attention” to particular aspects of information.

Scientific concepts should be operationalized, meaning measurable and testable in experiments that produce clear-cut results. “You’d hope that a scientific concept would name something that one can use to then make predictions about how it’s going to work. It’s not clear that ‘attention’ does that for us,” says Poldrack.

It’s no surprise “attention” and “memory” don’t perfectly map onto the brain functions scientists know of today, given that they entered the lexicon centuries ago, when we knew very little about the internal workings of the brain or our own mental processes. Psychology, Poldrack argues, cannot be a precise science as long as it relies on these centuries-old, lay terms, which have broad, fluctuating usage. The field has to create new terminology that accurately describes mental processes. “It hurts us a lot because we can’t really test theories,” says Poldrack. “People can talk past one another. If one person says I’m studying ‘working memory’ and the other people says ‘attention,’ they can be finding things that are potentially highly relevant to one another but they’re talking past one another.”

The information is here.

Friday, April 27, 2018

The Mind-Expanding Ideas of Andy Clark

Larissa MacFarquhar
The New Yorker
Originally published April 2, 2018

Here is an excerpt:

Cognitive science addresses philosophical questions—What is a mind? What is the mind’s relationship to the body? How do we perceive and make sense of the outside world?—but through empirical research rather than through reasoning alone. Clark was drawn to it because he’s not the sort of philosopher who just stays in his office and contemplates; he likes to visit labs and think about experiments. He doesn’t conduct experiments himself; he sees his role as gathering ideas from different places and coming up with a larger theoretical framework in which they all fit together. In physics, there are both experimental and theoretical physicists, but there are fewer theoretical neuroscientists or psychologists—you have to do experiments, for the most part, or you can’t get a job. So in cognitive science this is a role that philosophers can play.

Most people, he realizes, tend to identify their selves with their conscious minds. That’s reasonable enough; after all, that is the self they know about. But there is so much more to cognition than that: the vast, silent cavern of underground mental machinery, with its tubes and synapses and electric impulses, so many unconscious systems and connections and tricks and deeply grooved pathways that form the pulsing substrate of the self. It is those primal mechanisms, the wiring and plumbing of cognition, that he has spent most of his career investigating. When you think about all that fundamental stuff—some ancient and shared with other mammals and distant ancestors, some idiosyncratic and new—consciousness can seem like a merely surface phenomenon, a user interface that obscures the real works below.

The article and audio file are here.

Friday, February 16, 2018

The Scientism of Psychiatry

Sami Timimi
Mad in America
Originally posted January 10, 2018

Here is an excerpt:

Mainstream psychiatry has been afflicted by at least two types of scientism. Firstly, it parodies science as ideology, liking to talk in scientific language, using the language of EBM, and carrying out research that ‘looks’ scientific (such as brain scanning). Psychiatry wants to be seen as residing in the same scientific cosmology as the rest of medicine. Yet the cupboard of actual clinically relevant findings remains pretty empty. Secondly, it ignores much of the genuine science there is and goes on supporting and perpetuating concepts and treatments that have little scientific support. This is a more harmful and deceptive form of scientism; it means that psychiatry likes to talk in the language of science and treats this as more important than the actual science.

I have had debates with fellow psychiatrists on many aspects of the actual evidence base. Two ‘defences’ have become familiar to me. The first is use of anecdote — such and such a patient got better with such and such a treatment, therefore, this treatment ‘works.’ Anecdote is precisely what EBM was trying to get away from. The second is an appeal for me to take a ‘balanced’ perspective. Of course each person’s idea of what is a ‘balanced’ position depends on where they are sitting. We get our ideas on what is ‘balanced’ from what is culturally dominant, not from what the science is telling us. At one point, to many people, Nelson Mandela was a violent terrorist; later, to many more people, he became the embodiment of peaceful reconciliation and forgiveness. What were considered ‘balanced’ views on him were almost polar opposites, depending on where and when you were examining him from. Furthermore, in science, facts are simply that. Our interpretations are of course based on our reading of these facts. Providing an interpretation consistent with the facts is more important than any one person’s notion of what a ‘balanced’ position should look like.

The article is here.

Sunday, January 7, 2018

Are human rights anything more than legal conventions?

John Tasioulas
aeon.co
Originally published April 11, 2017

We live in an age of human rights. The language of human rights has become ubiquitous, a lingua franca used for expressing the most basic demands of justice. Some are old demands, such as the prohibition of torture and slavery. Others are newer, such as claims to internet access or same-sex marriage. But what are human rights, and where do they come from? This question is made urgent by a disquieting thought. Perhaps people with clashing values and convictions can so easily appeal to ‘human rights’ only because, ultimately, they don’t agree on what they are talking about? Maybe the apparently widespread consensus on the significance of human rights depends on the emptiness of that very notion? If this is true, then talk of human rights is rhetorical window-dressing, masking deeper ethical and political divisions.

Philosophers have debated the nature of human rights since at least the 12th century, often under the name of ‘natural rights’. These natural rights were supposed to be possessed by everyone and discoverable with the aid of our ordinary powers of reason (our ‘natural reason’), as opposed to rights established by law or disclosed through divine revelation. Wherever there are philosophers, however, there is disagreement. Belief in human rights left open how we go about making the case for them – are they, for example, protections of human needs generally or only of freedom of choice? There were also disagreements about the correct list of human rights – should it include socio-economic rights, like the rights to health or work, in addition to civil and political rights, such as the rights to a fair trial and political participation?

The article is here.

Monday, August 14, 2017

AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

Mark Wilson
Co.Design
Originally posted July 14, 2017

Here is an excerpt:

But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that clear paste, by the way, was labeled on a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase, because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better, with RGB values as opposed to other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning.”
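The lowercasing fix described above is a one-line normalization step. As a rough sketch of the idea (the function and the color names here are made up for illustration, not taken from the project in the article), collapsing case makes the model see one token where it previously saw two:

```python
# A minimal sketch of case normalization: before normalizing, "Clear Paste"
# and "CLEAR PASTE" look like different names to a learner; after, they don't.
def normalize(name: str) -> str:
    return name.strip().lower()

raw_names = ["Sudden Pine", "clear paste", "CLEAR PASTE"]
unique_names = sorted({normalize(n) for n in raw_names})
print(unique_names)  # ['clear paste', 'sudden pine']
```

The same principle applies to any preprocessing choice, such as the switch to RGB values the article mentions: the goal is a representation with fewer spurious distinctions for the model to untangle.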

The article is here.

Tuesday, May 2, 2017

AI Learning Racism, Sexism and Other Prejudices from Humans

Ian Johnston
The Independent
Originally published April 13, 2017

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how close different terms were to each other in the text – the same way that automatic translators use “machine learning” to establish what language means.

Some of the results were stunning.

(cut)

“We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations,” the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

“Our work suggests that behaviour can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages,” the paper said.
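The associations the researchers describe are typically measured by comparing distances between word vectors. A toy sketch of the idea follows, using made-up three-dimensional vectors rather than real embeddings; in actual studies the vectors come from models trained on large corpora and have hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d embeddings, invented for illustration only.
emb = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def association(word):
    """Closeness to 'pleasant' minus closeness to 'unpleasant'."""
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["unpleasant"])

# With these toy vectors, 'flower' leans pleasant and 'insect' leans unpleasant.
print(association("flower") > association("insect"))  # True
```

Real bias tests aggregate such association scores over whole lists of target and attribute words, but the core measurement is this same vector comparison.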

The article is here.

Monday, July 18, 2016

How Language ‘Framing’ Influences Decision-Making

Observations
Association for Psychological Science
Published in 2016

The way information is presented, or “framed,” when people are confronted with a situation can influence decision-making. To study framing, people often use the “Asian Disease Problem.” In this problem, people are faced with an imaginary outbreak of an exotic disease and asked to choose how they will address the issue. When the problem is framed in terms of lives saved (or “gains”), people are given the choice of selecting:
Medicine A, where 200 out of 600 people will be saved
or
Medicine B, where there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.
When the problem is framed in terms of lives lost (or “losses”), people are given the option of selecting:
Medicine A, where 400 out of 600 people will die
or
Medicine B, where there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.
Although in both problems Medicine A and Medicine B lead to the same outcomes, people are more likely to choose Medicine A when the problem is presented in terms of gains and to choose Medicine B when the problem is presented in terms of losses. This difference occurs because people tend to be risk averse when the problem is presented in terms of gains, but risk tolerant when it is presented in terms of losses.
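The equivalence of the two frames can be checked directly. A short calculation (using exact fractions to avoid rounding) confirms that both medicines save the same expected number of lives under either frame:

```python
from fractions import Fraction

TOTAL = 600

# Gain frame: expected lives saved.
saved_a = 200
saved_b = Fraction(1, 3) * 600 + Fraction(2, 3) * 0

# Loss frame, restated as expected lives saved.
saved_a_loss = TOTAL - 400
saved_b_loss = Fraction(1, 3) * (TOTAL - 0) + Fraction(2, 3) * (TOTAL - 600)

print(saved_a, saved_b, saved_a_loss, saved_b_loss)  # 200 200 200 200
```

Every option has an expected value of 200 lives saved; only the description changes, which is why the shift in preferences is attributed to framing rather than to the arithmetic.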

The article is here.

Wednesday, June 22, 2016

A New Theory Explains How Consciousness Evolved

Michael Graziano
The Atlantic
Originally posted June 6, 2016

Here is an excerpt:

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
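Selective signal enhancement can be sketched as a toy winner-take-all step, in which only the strongest activations survive mutual suppression. The numbers below are illustrative, not a model of real neurons:

```python
# Ten toy neuron activations; only the strongest survive mutual suppression.
signals = [0.2, 0.9, 0.1, 0.4, 0.8, 0.3, 0.05, 0.6, 0.25, 0.15]

def enhance(signals, k=2):
    """Keep the k strongest signals; suppress the rest to zero."""
    threshold = sorted(signals, reverse=True)[k - 1]
    return [s if s >= threshold else 0.0 for s in signals]

winners = enhance(signals)
print([s for s in winners if s > 0])  # [0.9, 0.8]
```

The point of the sketch is only the competition itself: whatever the input, a small number of signals rise above the noise and everything else is silenced.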

The article is here.

Tuesday, February 16, 2016

Why You Should Stop Using the Phrase ‘the Mentally Ill’

By Tanya Basu
New York Magazine
Originally published February 2, 2016

Here is an excerpt:

What’s most surprising is the reaction that counselors have when the phrase “the mentally ill” is used: They’re more likely to believe that those suffering from mental illness should be controlled and isolated from the rest of the community. That's pretty surprising, given that these counselors are perhaps the ones most likely to be aware of the special needs and varying differences in diagnoses of the group.

Counselors also showed the largest differences in how intolerant they were based on the language, which boosted the researchers’ belief that simply changing language is important in not only understanding people who suffer from mental illness but also helping them adjust and cope. “Even counselors who work every day with people who have mental illness can be affected by language,” Granello said in a press release. “They need to be aware of how language might influence their decision-making when they work with clients.”

The entire article is here.

Friday, December 4, 2015

Researchers uncover patterns in how scientists lie about their data

Science Simplified
Originally posted November 16, 2015

Even the best poker players have "tells" that give away when they're bluffing with a weak hand. Scientists who commit fraud have similar, but even more subtle, tells, and a pair of Stanford researchers have cracked the writing patterns of scientists who attempt to pass along falsified data.

The work, published in the Journal of Language and Social Psychology, could eventually help scientists identify falsified research before it is published.

There is a fair amount of research dedicated to understanding the ways liars lie. Studies have shown that liars generally tend to express more negative emotion terms and use fewer first-person pronouns. Fraudulent financial reports typically display higher levels of linguistic obfuscation – phrasing that is meant to distract from or conceal the fake data – than accurate reports.
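Counting such cues is straightforward in principle. Here is a toy sketch, with tiny made-up word lists rather than the large validated lexicons real studies use:

```python
import re

# Toy word lists for illustration; actual studies rely on validated lexicons
# with hundreds of entries per category.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}
NEGATIVE = {"not", "never", "problem", "fail", "wrong"}

def linguistic_profile(text):
    """Rate of first-person pronouns and negative terms per word of text."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words)
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "negative_rate": sum(w in NEGATIVE for w in words) / n,
    }

print(linguistic_profile("We found that the assay never gave us a problem."))
```

The research described above goes well beyond raw counts, modeling obfuscation across many linguistic dimensions at once, but frequency profiles like this are the basic raw material.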

The entire research review is here.

Friday, June 6, 2014

Gained in translation

When moral dilemmas are posed in a foreign language, people become more coolly utilitarian

The Economist
Originally posted May 17, 2014

Here is an excerpt:

Morally speaking, this is a troubling result. The language in which a dilemma is posed should make no difference to how it is answered. Linguists have wondered whether different languages encode different assumptions about morality, which might explain the result. But the effect existed for every combination of languages that the researchers looked at, so culture does not seem to explain things. Other studies in “trolleyology” have found that East Asians are less likely to make the coldly utilitarian calculation, and indeed none of the Korean subjects said they would push the fat man when asked in Korean. But 7.5% were prepared to when asked in English.

The explanation seems to lie in the difference between being merely competent in a foreign language and being fluent. The subjects in the experiment were not native bilinguals, but had, on average, begun the study of their foreign language at age 14. (The average participant was 21.) The participants typically rated their ability with their acquired tongue at around 3.0 on a five-point scale. Their language skills were, in other words, pretty good—but not great.

The entire article is here.