Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent would be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent would be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

The article is here.

Saturday, June 16, 2018

Ivanka Trump in China: The trademarks raising an ethics firestorm

Aimee Picchi
CBS News - Money Watch
Originally published May 29, 2018

Ivanka Trump this month received trademark approval from China for a broad array of items, including baby blankets, wallpaper and carpets. That wouldn't be unusual for a global business built on consumer goods such as elegant women's clothing and shoes, but it raises numerous ethical issues given that her father is the U.S. president.

The timing appears especially fraught given President Donald Trump agreed to rescue Chinese telecom giant ZTE Corp. shortly after Ivanka Trump's brand was awarded the trademarks.

Ethics watchdogs say the approvals are problematic on a number of levels, including Ivanka Trump's role representing the U.S. at diplomatic events even though her brand's business could be impacted -- for good or bad -- by relations with foreign nations. Then there's also the conflicts that arise from her father's role as president amid rising trade tensions between the U.S. and China.

The article is here.

Friday, June 15, 2018

Tech giants need to build ethics into AI from the start

James Titcomb
The Telegraph
Originally posted May 13, 2018

Here is an excerpt:

But excitement about the software soon turned to comprehending the ethical minefield it created. Google’s initial demo gave no indication that the person on the other end of the phone would be alerted that they were talking to a robot. The software even had human-like quirks built into it, stopping to say “um” and “mm-hmm”, a quality designed to seem cute but that ended up appearing more deceptive.

Some found the whole idea that a person should have to go through an artificial conversation with a robot somewhat demeaning; insulting even.

After a day of criticism, Google attempted to play down some of the concerns. It said the technology had no fixed release date, would take into account people’s concerns and promised to ensure that the software identified itself as such at the start of every phone call.

But the fact that it did not do this immediately was not a promising sign. The last two years of massive data breaches, evidence of Russian propaganda campaigns on social media and privacy failures have proven what should always have been obvious: that the internet has as much power to do harm as good. Every frontier technology now needs to be built with at least some level of paranoia; some person asking: “How could this be abused?”

The information is here.

The danger of absolute thinking is absolutely clear

Mohammed Al-Mosaiwi
aeon.co
Originally posted May 2, 2018

Here is an excerpt:

There are generally two forms of absolutism; ‘dichotomous thinking’ and ‘categorical imperatives’. Dichotomous thinking – also referred to as ‘black-and-white’ or ‘all-or-nothing’ thinking – describes a binary outlook, where things in life are either ‘this’ or ‘that’, and nothing in between. Categorical imperatives are completely rigid demands that people place on themselves and others. The term is borrowed from Immanuel Kant’s deontological moral philosophy, which is grounded in an obligation- and rules-based ethical code.

In our research – and in clinical psychology more broadly – absolutist thinking is viewed as an unhealthy thinking style that disrupts emotion-regulation and hinders people from achieving their goals. Yet we all, to varying extents, are disposed to it – why is this? Primarily, because it’s much easier than dealing with the true complexities of life. The term cognitive miser, first introduced by the American psychologists Susan Fiske and Shelley Taylor in 1984, describes how humans seek the simplest and least effortful ways of thinking. Nuance and complexity are expensive – they take up precious time and energy – so wherever possible we try to cut corners. This is why we have biases and prejudices, and form habits. It’s why the study of heuristics (intuitive ‘gut-feeling’ judgments) is so useful in behavioural economics and political science.

But there is no such thing as a free lunch; the time and energy saved through absolutist thinking has a cost. In order to successfully navigate through life, we need to appreciate nuance, understand complexity and embrace flexibility. When we succumb to absolutist thinking for the most important matters in our lives – such as our goals, relationships and self-esteem – the consequences are disastrous.

The article is here.

Thursday, June 14, 2018

The Benefits of Admitting When You Don’t Know

Tenelle Porter
Behavioral Scientist
Originally published April 30, 2018

Here is an excerpt:

We found that the more intellectually humble students were more motivated to learn and more likely to use effective metacognitive strategies, like quizzing themselves to check their own understanding. They also ended the year with higher grades in math. We also found that the teachers, who hadn’t seen students’ intellectual humility questionnaires, rated the more intellectually humble students as more engaged in learning.

Next, we moved into the lab. Could temporarily boosting intellectual humility make people more willing to seek help in an area of intellectual weakness? We induced intellectual humility in half of our participants by having them read a brief article that described the benefits of admitting what you do not know. The other half read an article about the benefits of being very certain of what you know. We then measured their intellectual humility.

Those who read the benefits-of-humility article self-reported higher intellectual humility than those in the other group. What’s more, in a follow-up exercise 85 percent of these same participants sought extra help for an area of intellectual weakness. By contrast, only 65 percent of the participants who read about the benefits of being certain sought the extra help that they needed. This experiment provided evidence that enhancing intellectual humility has the potential to affect students’ actual learning behavior.

Together, our findings illustrate that intellectual humility is associated with a host of outcomes that we think are important for learning in school, and they suggest that boosting intellectual humility may have benefits for learning.

The article is here.

Sex robots are coming. We might even fall in love with them.

Sean Illing
www.vox.com
Originally published May 11, 2018

Here is an excerpt:

Sean Illing: Your essay poses an interesting question: Is mutual love with a robot possible? What’s the answer?

Lily Eva Frank:

Our essay tried to explore some of the core elements of romantic love that people find desirable, like the idea of being a perfect match for someone or the idea that we should treasure the little traits that make someone unique, even those annoying flaws or imperfections.

The key thing is that we love someone because there’s something about being with them that matters, something particular to them that no one else has. And we make a commitment to that person that holds even when they change, like aging, for example.

Could a robot do all these things? Our answer is, in theory, yes. But only a very advanced form of artificial intelligence could manage it because it would have to do more than just perform as if it were a person doing the loving. The robot would have to have feelings and internal experiences. You might even say that it would have to be self-aware.

But that would leave open the possibility that the sex bot might not want to have sex with you, which sort of defeats the purpose of developing these technologies in the first place.

(cut)

I think people are weird enough that it is probably possible for them to fall in love with a cat or a dog or a machine that doesn’t reciprocate the feelings. A few outspoken proponents of sex dolls and robots claim they love them. Check out the testimonials page on the websites of sex doll manufactures; they say things like, “Three years later, I love her as much as the first day I met her.” I don’t want to dismiss these people’s reports.

The information is here.

Wednesday, June 13, 2018

The Burnout Crisis in American Medicine

Rena Xu
The Atlantic
Originally published May 11, 2018

Here is an excerpt:

In medicine, burned-out doctors are more likely to make medical errors, work less efficiently, and refer their patients to other providers, increasing the overall complexity (and with it, the cost) of care. They’re also at high risk of attrition: A survey of nearly 7,000 U.S. physicians, published last year in the Mayo Clinic Proceedings, reported that one in 50 planned to leave medicine altogether in the next two years, while one in five planned to reduce clinical hours over the next year. Physicians who self-identified as burned out were more likely to follow through on their plans to quit.

What makes the burnout crisis especially serious is that it is hitting us right as the gap between the supply and demand for health care is widening: A quarter of U.S. physicians are expected to retire over the next decade, while the number of older Americans, who tend to need more health care, is expected to double by 2040. While it might be tempting to point to the historically competitive rates of medical-school admissions as proof that the talent pipeline for physicians won’t run dry, there is no guarantee. Last year, for the first time in at least a decade, the volume of medical school applications dropped—by nearly 14,000, according to data from the Association of American Medical Colleges. By the association’s projections, we may be short 100,000 physicians or more by 2030.

The article is here.

Thus Spoke Jordan Peterson

David Livingstone Smith and John Kaag
Foreign Policy
Originally published April 4, 2018

Here is an excerpt:

Peterson’s philosophy is difficult to assess because it is constructed of equal parts apocalyptic alarm and homespun advice. Like the Swiss psychiatrist Carl Jung, whom he cites as an intellectual influence, Peterson is fond of thinking in terms of grand dualities — especially the opposition of order and chaos. Order, in his telling, consists of everything that is routine and predictable, while chaos corresponds to all that is unpredictable and novel.

For Peterson, living well requires walking the line between the two. He is hardly the first thinker to make this point; another of his heroes, the German philosopher Friedrich Nietzsche, harking back to the ancient Greeks, suggested that life is best lived between the harmony of Apollo and the madness of Dionysus. But while Peterson claims both order and chaos are equally important, he is mainly concerned with the perils posed by the latter — hence his rules.

In his books and lectures, Peterson describes chaos as “feminine.” Order, of course, is “masculine.” So the threat of being overwhelmed by chaos is the threat of being overwhelmed by femininity. The tension between chaos and order plays out in both the personal sphere and the broader cultural landscape, where chaos is promoted by those “neo-Marxist postmodernists” whose nefarious influence has spawned radical feminism, political correctness, moral relativism, and identity politics.

At the core of Peterson’s social program is the idea that the onslaught of femininity must be resisted. Men need to get tough and dominant. And, in Peterson’s mind, women want this, too. He tells us in 12 Rules for Life: “If they’re healthy, women don’t want boys. They want men.… If they’re tough, they want someone tougher. If they’re smart, they want someone smarter.” “Healthy” women want men who can “outclass” them. That’s Peterson’s reason for frequently referencing the Jungian motif of the hero: the square-jawed warrior who subdues the feminine powers of chaos. Don’t be a wimp, he tells us. Be a real man.

The information is here.

Tuesday, June 12, 2018

Did Google Duplex just pass the Turing Test?

Lance Ulanoff
Medium.com
Originally published

Here is an excerpt:

In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider.

Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex?

Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more.

I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test.

It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar.

The information is here.

Is it Too Soon? The Ethics of Recovery from Grief

John Danaher
Philosophical Disquisitions
Originally published May 11, 2016

Here is an excerpt:

This raises an obvious and important question in the ethics of grief recovery. Is there a certain mourning period that should be observed following the death of a loved one? If you get back on your feet too quickly, does that say something negative about the relationship you had with the person who died (or about you)? To be more pointed: if I can re-immerse myself in my work a mere three weeks after my sister’s death, does that mean there is something wrong with me or something deficient in the relationship I had with her?

There is a philosophical literature offering answers to these questions, but from what I have read the majority of it does not deal with the ethics of recovering from a sibling’s death. Indeed, I haven’t found anything that deals directly with this issue. Instead, the majority of the literature deals with the ethics of recovery from the death of a spouse or intimate partner. What’s more, when they discuss that topic, they seem to have one scenario in mind: how soon is too soon when it comes to starting an intimate relationship with another person?

Analysing the ethical norms that should apply to that scenario is certainly of value, but it is hardly the only scenario worthy of consideration, and it is obviously somewhat distinct from the scenario that I am facing. I suspect that different norms apply to different relationships and this is likely to affect the ethics of recovery across those different relationship types.

The information is here.

Monday, June 11, 2018

Discerning bias in forensic psychological reports in insanity cases

Tess M. S. Neal
Behavioral Sciences & the Law (2018).

Abstract

This project began as an attempt to develop systematic, measurable indicators of bias in written forensic mental health evaluations focused on the issue of insanity. Although forensic clinicians observed in this study did vary systematically in their report‐writing behaviors on several of the indicators of interest, the data are most useful in demonstrating how and why bias is hard to ferret out. Naturalistic data were used in this project (i.e., 122 real forensic insanity reports), which in some ways is a strength. However, given the nature of bias and the problem of inferring whether a particular judgment is biased, naturalistic data also made arriving at conclusions about bias difficult. This paper describes the nature of bias – including why it is a special problem in insanity evaluations – and why it is hard to study and document. It details the efforts made in an attempt to find systematic indicators of potential bias, and how this effort was successful in part, but also how and why it failed. The lessons these efforts yield for future research are described. We close with a discussion of the limitations of this study and future directions for work in this area.

The research is here.

Can Morality Be Engineered In Artificial General Intelligence Systems?

Abhijeet Katte
Analytics India Magazine
Originally published May 10, 2018

Here is an excerpt:

This report, Engineering Moral Agents – from Human Morality to Artificial Morality, discusses challenges in engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide range of backgrounds, including philosophy. AGI-focused research is evolving into the formalization of moral theories to act as a base for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland talked about a project on teaching formal ethics to computer-science students, wherein the group was involved in building a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that there is a real need today for a functional system of ethical reasoning, as AI systems that function as part of our society are ready to be deployed. One of the suggestions is to have every assisted-living AI system include a “Why did you do that?” button which, when pressed, causes the robot to explain why it carried out the previous action.

The information is here.

Sunday, June 10, 2018

Can precision medicine do for depression what it’s done for cancer? It won’t be easy

Megan Thielking
Statnews.com
Originally posted May 9, 2018

At a growing number of research centers across the country, scientists are scanning brains of patients with depression, drawing their blood, asking about their symptoms, and then scouring that data for patterns. The goal: pinpoint subtypes of depression, then figure out which treatments have the best chance of success for each particular variant of the disease.

The idea of precision medicine for depression is quickly gaining ground — just last month, Stanford announced it is establishing a Center for Precision Mental Health and Wellness. And depression is one of many diseases targeted by All of Us, the National Institute of Health campaign launched this month to collect DNA and other data from 1 million Americans. Doctors have been treating cancer patients this way for years, but the underlying biology of mental illness is not as well understood.

“There’s not currently a way to match people with treatment,” said Dr. Madhukar Trivedi, a depression researcher at the University of Texas Southwestern Medical Center. “That’s why this is a very exciting field to research.”

The information is here.

Saturday, June 9, 2018

Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis

Ben Tappin and Valerio Capraro
Preprint

Abstract

Prosociality is fundamental to the success of human social life, and, accordingly, much research has attempted to explain human prosocial behavior. Capraro and Rand (2018) recently advanced the hypothesis that prosocial behavior in anonymous, one-shot interactions is not driven by outcome-based social preferences for equity or efficiency, as classically assumed, but by a generalized morality preference for “doing the right thing”. Here we argue that the key experiments reported in Capraro and Rand (2018) comprise prominent methodological confounds and open questions that bear on influential psychological theory. Specifically, their design confounds: (i) preferences for efficiency with self-interest; and (ii) preferences for action with preferences for morality. Furthermore, their design fails to dissociate the preference to do “good” from the preference to avoid doing “bad”. We thus designed and conducted a preregistered, refined and extended test of the morality preference hypothesis (N=801). Consistent with this hypothesis and the results of Capraro and Rand (2018), our findings indicate that prosocial behavior in anonymous, one-shot interactions is driven by a preference for doing the morally right thing. Inconsistent with influential psychological theory, however, our results suggest the preference to do “good” is as potent as the preference to avoid doing “bad” in prosocial choice.

The preprint is here.

Friday, June 8, 2018

The pros and cons of having sex with robots

Karen Turner
www.vox.com
Originally posted January 18, 2018

Here is an excerpt:

Karen Turner: Where does sex robot technology stand right now?

Neil McArthur:

When people have this idea of a sex robot, they think it’s going to look like a human being, it’s gonna walk around and say seductive things and so on. I think that’s actually the slowest-developing part of this whole nexus of sexual technology. It will come — we are going to have realistic sex robots. But there are a few technical hurdles to creating humanoid robots that are proving fairly stubborn. Making them walk is one of them. And if you use Siri or any of those others, you know that AI is proving sort of stubbornly resistant to becoming realistic.

But I think that when you look more broadly at what’s happening with sexual technology, virtual reality in general has just taken off. And it’s being used in conjunction with something called teledildonics, which is kind of an odd term. But all it means is actual devices that you hook up to yourself in various ways that sync with things that you see onscreen. It’s truly amazing what’s going on.

(cut)

When you look at the ethical or philosophical considerations, I think there are two strands. One is the concerns people have, and two, which I think maybe doesn’t get as much attention, in the media at least, is the potential advantages.

The concerns have to do with the psychological impact. As you saw with those Apple shareholders [who asked Apple to help protect children from digital addiction], we’re seeing a lot of concern about the impact that technology is having on people’s lives right now. Many people feel that anytime you’re dealing with sexual technology, those sorts of negative impacts really become intensified — specifically, social isolation, people cutting themselves off from the world.

The article is here.

The Ethics of Medicaid’s Work Requirements and Other Personal Responsibility Policies

Harald Schmidt and Allison K. Hoffman
JAMA. Published online May 7, 2018. doi:10.1001/jama.2018.3384

Here are two excerpts:

CMS emphasizes health improvement as the primary rationale, but the agency and interested states also favor work requirements for their potential to limit enrollment and spending and out of an ideological belief that everyone “do their part.” For example, an executive order by Kentucky’s Governor Matt Bevin announced that the state’s entire Medicaid expansion would be unaffordable if the waiver were not implemented, threatening to end expansion if courts strike down “one or more” program elements. Correspondingly, several nonexpansion states have signaled that the option of introducing work requirements might make them reconsider expansion—potentially covering more people but arguably in a way inconsistent with Medicaid’s broader objectives.

Work requirements have attracted the most attention but are just one of many policies CMS has encouraged as part of apparent attempts to promote personal responsibility in Medicaid. Other initiatives tie levels of benefits to confirming eligibility annually, paying premiums on time, meeting wellness program criteria such as completing health risk assessments, or not using the emergency department (ED) for nonemergency care.

(cut)

It is troubling that these policies could result in some portion of previously eligible individuals being denied necessary medical care because of unduly demanding requirements. Moreover, even if reduced enrollment were to decrease Medicaid costs, it might not reduce medical spending overall. Laws including the Emergency Medical Treatment and Labor Act still require stabilization of emergency medical conditions, entailing more expensive and less effective care.

The article is here.

Thursday, June 7, 2018

Embracing the robot

John Danaher
aeon.co
Originally posted March 19, 2018

Here is an excerpt:

Contrary to the critics, I believe our popular discourse about robotic relationships has become too dark and dystopian. We overstate the negatives and overlook the ways in which relationships with robots could complement and enhance existing human relationships.

In Blade Runner 2049, the true significance of K’s relationship with Joi is ambiguous. It seems that they really care for each other, but this could be an illusion. She is, after all, programmed to serve his needs. The relationship is an inherently asymmetrical one. He owns and controls her; she would not survive without his good will. Furthermore, there is a third-party lurking in the background: she has been designed and created by a corporation, which no doubt records the data from her interactions, and updates her software from time to time.

This is a far cry from the philosophical ideal of love. Philosophers emphasise the need for mutual commitment in any meaningful relationship. It’s not enough for you to feel a strong, emotional attachment to another; they have to feel a similar attachment to you. Robots might be able to perform love, saying and doing all the right things, but performance is insufficient.

The information is here.

Protecting confidentiality in genomic studies

MIT Press Release
Originally released May 7, 2018

Genome-wide association studies, which look for links between particular genetic variants and incidence of disease, are the basis of much modern biomedical research.

But databases of genomic information pose privacy risks. From people’s raw genomic data, it may be possible to infer their surnames and perhaps even the shapes of their faces. Many people are reluctant to contribute their genomic data to biomedical research projects, and an organization hosting a large repository of genomic data might conduct a months-long review before deciding whether to grant a researcher’s request for access.

In a paper published in Nature Biotechnology (https://doi.org/10.1038/nbt.4108), researchers from MIT and Stanford University present a new system for protecting the privacy of people who contribute their genomic data to large-scale biomedical studies. Where earlier cryptographic methods were so computationally intensive that they became prohibitively time consuming for more than a few thousand genomes, the new system promises efficient privacy protection for studies conducted over as many as a million genomes.
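The general idea behind cryptographic protection of pooled genomic data can be illustrated with additive secret sharing, a standard building block of secure multiparty computation. This is a minimal sketch of that generic technique, not the specific protocol of the Nature Biotechnology paper; the three-server setup and the 0/1/2 "dosage" encoding are assumptions for illustration. Each participant splits a private value into random shares that sum to it modulo a prime, each server only ever sees meaningless random shares, and only the aggregate statistic is reconstructed.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_servers=3):
    """Split `value` into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each server sums its own column locally; combining the column
    sums reconstructs only the total, never any individual value."""
    n_servers = len(all_shares[0])
    server_sums = [sum(person[s] for person in all_shares) % PRIME
                   for s in range(n_servers)]
    return sum(server_sums) % PRIME

genotypes = [0, 1, 2, 2, 1, 0, 1]    # private per-person variant dosages
shared = [share(g) for g in genotypes]
print(aggregate(shared))             # prints 7: the pooled count, individuals hidden
```

Because each share in isolation is uniformly random, no single server learns anything about a participant's genome, which is what lets the heavy review process be relaxed for aggregate queries.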

The release is here.

Wednesday, June 6, 2018

The LAPD’s Terrifying Policing Algorithm: Yes It’s Basically ‘Minority Report’

Dan Robitzski
Futurism.com
Originally posted May 11, 2018

The Los Angeles Police Department was recently forced to release documents about their predictive policing and surveillance algorithms, thanks to a lawsuit from the Stop LAPD Spying Coalition (which turned the documents over to In Justice Today). And what do you think the documents have to say?

If you guessed “evidence that policing algorithms, which require officers to keep a checklist of (and keep an eye on) 12 people deemed most likely to commit a crime, are continuing to propagate a vicious cycle of disproportionately high arrests of black Angelenos, as well as other racial minorities,” you guessed correctly.

Algorithms, no matter how sophisticated, are only as good as the information that’s provided to them. So when you feed an AI data from a city where there’s a problem of demonstrably, mathematically racist over-policing of neighborhoods with concentrations of people of color, and then have it tell you who the police should be monitoring, the result will only be as great as the process. And the process? Not so great!
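The "vicious cycle" can be made concrete with a toy simulation. This is illustrative only, not the LAPD's actual model: the two-district setup and the squared patrol weighting are assumptions, standing in for any allocation rule that concentrates patrols on the highest-count areas. Both districts have the same true crime rate, but one starts with more recorded arrests because it was historically over-patrolled, and the initial bias compounds.

```python
def run_feedback_loop(recorded=(60, 40), true_rate=(0.5, 0.5),
                      patrols_per_round=100, rounds=20):
    """Each round, patrols are allocated super-linearly toward districts
    with more recorded arrests, and new arrests scale with patrols, so
    the historical skew feeds itself."""
    counts = list(recorded)
    for _ in range(rounds):
        weights = [c ** 2 for c in counts]   # over-concentrate on the "hot" district
        total = sum(weights)
        for d in range(len(counts)):
            patrols = patrols_per_round * weights[d] / total
            counts[d] += patrols * true_rate[d]  # same true rate in both districts
    return counts

final = run_feedback_loop()
share = final[0] / sum(final)
print(f"district 0 share of recorded arrests: {share:.2f}")  # rises above the initial 0.60
```

With identical true crime rates, district 0's share of recorded arrests grows past its initial 0.60 purely because the algorithm is trained on its own skewed output, which is the feedback loop the paragraph above describes.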

The article is here.

Welcome to America, where morality is judged along partisan lines

Joan Vennochi
Boston Globe
Originally posted May 8, 2018

Here are some excerpts:

“It’s OK to lie to the press?” asked Stephanopoulos. To which, Giuliani replied: “Gee, I don’t know — you know a few presidents who did that.”

(cut)

Twenty years later, special counsel Robert Mueller has been investigating allegations of collusion between the Trump campaign and the Russian government. Trump’s lawyer, Cohen, is now entangled in the collusion investigation, as well as with the payment to Daniels, which also entangles Trump — who, according to Giuliani, might invoke the Fifth Amendment to avoid testifying under oath. That must be tempting, given Trump’s well-established contempt for truthfulness and personal accountability.

(cut)

So it goes in American politics, where morality is judged strictly along partisan lines, and Trump knows it.

The information is here.

Tuesday, June 5, 2018

Norms and the Flexibility of Moral Action

Oriel FeldmanHall, Jae-Young Son, and Joseph Heffner
Preprint

ABSTRACT

A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms: fairness, altruism, trust, and cooperation, and consider how a single social instrument—reciprocity—underpins compliance to these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.

The research is here.

Is There Such a Thing as Truth?

Errol Morris
Boston Review
Originally posted April 30, 2018

Here is an excerpt:

In fiction, we are often given an imaginary world with seemingly real objects—horses, a coach, a three-cornered hat and wig. But what about the objects of science—positrons, neutrinos, quarks, gravity waves, Higgs bosons? How do we reckon with their reality?

And truth. Is there such a thing? Can we speak of things as unambiguously true or false? In history, for example, are there things that actually happened? Louis XVI guillotined on January 21, 1793, at what has become known as the Place de la Concorde. True or false? Details may be disputed—a more recent example: how large, comparatively, was Donald Trump’s victory in the electoral college in 2016, or the crowd at his inauguration the following January? But do we really doubt that Louis’s bloody head was held up before the assembled crowd? Or doubt the existence of the curved path of a positron in a bubble chamber? Even though we might not know the answers to some questions—“Was Louis XVI decapitated?” or “Are there positrons?”—we accept that there are answers.

And yet, we read about endless varieties of truth. Coherence theories of truth. Pragmatic, relative truths. Truths for me, truths for you. Dog truths, cat truths. Whatever. I find these discussions extremely distasteful and unsatisfying. To say that a philosophical system is “coherent” tells me nothing about whether it is true. Truth is not hermetic. I cannot hide out in a system and assert its truth. For me, truth is about the relation between language and the world. A correspondence idea of truth. Coherence theories of truth are of little or no interest to me. Here is the reason: they are about coherence, not truth. We are talking about whether a sentence or a paragraph or group of paragraphs is true when set up against the world. Thackeray, introducing the fictional world of Vanity Fair, evokes the objects of a world he is familiar with—“a large family coach, with two fat horses in blazing harnesses, driven by a fat coachman in a three-cornered hat and wig, at the rate of four miles an hour.”

The information is here.

Monday, June 4, 2018

Human-sounding Google Assistant sparks ethics questions

The Straits Times
Originally published May 9, 2018

Here are some excerpts:

The new Google digital assistant converses so naturally it may seem like a real person.

The unveiling of the natural-sounding robo-assistant by the tech giant this week wowed some observers, but left others fretting over the ethics of how the human-seeming software might be used.

(cut)

The Duplex demonstration was quickly followed by debate over whether people answering phones should be told when they are speaking to human-sounding software and how the technology might be abused in the form of more convincing "robocalls" by marketers or political campaigns.

(cut)

Digital assistants making arrangements for people also raises the question of who is responsible for mistakes, such as a no-show or cancellation fee for an appointment set for the wrong time.

The information is here.

A narrative thematic analysis of moral injury in combat veterans

Held, P., Klassen, B. J., Hall, J. M., Friese, et al.
Psychological Trauma: Theory, Research, Practice, and Policy. 
Advance online publication. http://dx.doi.org/10.1037/tra0000364

Here is a portion of the Introduction:

In war, service members sometimes have to make difficult decisions, some of which may violate their deeply held beliefs and moral values. The term moral injury was coined to refer to the enduring mental health consequences that can occur from participating in, witnessing, or learning about acts that violate one’s moral code (Drescher et al., 2011; Litz et al., 2009; Shay, 1994). Some examples of potentially morally injurious events include disproportionate violence, engaging in atrocities, or violations of rules of engagement (Litz et al., 2009; Stein et al., 2012). Although consensus regarding how best to measure moral injury has not been reached, one preliminary estimate suggested that as many as 25% of a representative sample of veterans endorsed exposure to morally injurious experiences (Wisco et al., 2017). Involvement in these situations has been shown to be associated with a range of negative psychological reactions, including the development of mental health symptoms, such as posttraumatic stress disorder (PTSD), depression (Held, Klassen, Brennan, & Zalta, 2017; Maguen et al., 2010), substance use problems (Wilk et al., 2010) and suicidal ideation (Maguen et al., 2012).

Litz and colleagues (2009) have proposed the sole theoretical model of how moral transgressions result in the development of mental health symptoms. Following the morally injurious event, individuals experience a conflict between the event and their own moral beliefs. For example, a service member may believe that civilians should not be harmed during combat but is involved in an event that involves the death of noncombatants. In an attempt to resolve this cognitive conflict, self-directed attributions of the event’s cause may be made, such as service members believing that they were complicit in noncombatants being harmed. The stable, internal, and global attributions that result lead to the development of painful emotions (e.g., guilt, shame, fear of social rejection) and withdrawal from social interaction. Lack of social contact leads to missed opportunities for potentially corrective information and further strengthens the painful emotions and the stable, internal, and global attributions about the morally injurious event (e.g., Martin et al., 2017). It has been proposed that unless addressed, the moral injury continues to manifest and perpetuate itself through intrusions, avoidance, and numbing in a manner similar to PTSD (Jinkerson, 2016; Farnsworth, Drescher, Nieuwsma, Walser, & Currier, 2014; Litz, Lebowitz, Gray, & Nash, 2016; Litz et al., 2009).

The article is here.

Sunday, June 3, 2018

Hostile environment: The dark side of nudge theory

Nick Barrett
politics.co.uk
Originally posted May 1, 2018

Here is an excerpt:

Just as a website can use a big yellow button to make buying a book or signing up to a newsletter inviting, governments can use nudge theory to make saving money for your pension easy and user-friendly. But it can also establish its own dark patterns too and the biggest government dark pattern of all is the hostile environment policy established in 2012 to encourage migrants to leave the country.

The policy meant that without the right paperwork, people were deprived of health services, employment rights and access to housing and effectively excluded from mainstream society. They were not barred. The circumstances were simply created to nudge them into leaving the country.

For six years the hostile environment persecuted the least visible among us. It was only when its effects on the Windrush generation were revealed that the policy’s inherent prejudice became clear to all. What could once be seen as firm but fair suddenly looked cruel and unusual. These measures might have been defensible if the legal migration process hadn’t been turned into a painfully punitive process for anybody arriving from outside of the EU.

The information is here.

Saturday, June 2, 2018

Preventing Med School Suicides

Roger Sergel
MedPage Today
Originally posted May 2, 2018

Here is an excerpt:

The medical education community needs to acknowledge the stress imposed on our medical learners as they progress from students to faculty. One of the biggest obstacles is changing the culture of medicine to not only understand the key burnout drivers and pain points but to invest resources into developing strategies which reduce stress. These strategies must include the medical learner taking ownership of the role they play in their lack of well-being. In addition, medical schools and healthcare organizations must reflect on their policies and processes which do not promote wellness. Each group points to the other as the one that needs to change. Both are right.

We do need to change how we deliver a quality medical education AND we need our medical learners to reflect on their personal attitudes and openness to developing their resilience muscles to manage their stress. Equally important, we need to reduce the stigma of seeking help and break down the barriers which would allow our medical learners and physicians to seek help, when needed. We need to create support services which are convenient, accessible, and utilized.

What programs does your school have to support medical students' mental health?

The information is here.

Friday, June 1, 2018

CGI ‘Influencers’ Like Lil Miquela Are About to Flood Your Feeds

Miranda Katz
www.wired.com
Originally published May 1, 2018

Here is an excerpt:

There are already a number of startups working on commercial applications for what they call “digital” or “virtual” humans. Some, like the New Zealand-based Soul Machines, are focusing on using these virtual humans for customer service applications; already, the company has partnered with the software company Autodesk, Daimler Financial Services, and National Westminster Bank to create hyper-lifelike digital assistants. Others, like 8i and Quantum Capture, are working on creating digital humans for virtual, augmented, and mixed reality applications.

And those startups’ technologies, though still in their early stages, make Lil Miquela and her cohort look positively low-res. “[Lil Miquela] is just scratching the surface of what these virtual humans can do and can be,” says Quantum Capture CEO and president Morgan Young. “It’s pre-rendered, computer-generated snapshots—images that look great, but that’s about as far as it’s going to go, as far as I can tell, with their tech. We’re concentrating on a high level of visual quality and also on making these characters come to life.”

Quantum Capture is focused on VR and AR, but the Toronto-based company is also aware that those might see relatively slow adoption—and so it’s currently leveraging its 3D-scanning and motion-capture technologies for real-world applications today.

The information is here.

The toxic legacy of Canada's CIA brainwashing experiments

Ashifa Kassam
The Guardian
Originally published May 3, 2018

Here is an excerpt:

Patients were subjected to high-voltage electroshock therapy several times a day, forced into drug-induced sleeps that could last months and injected with megadoses of LSD.

After reducing them to a childlike state – at times stripping them of basic skills such as how to dress themselves or tie their shoes – Cameron would attempt to reprogram them by bombarding them with recorded messages for up to 16 hours at a time. First came negative messages about their inadequacies, followed by positive ones, in some cases repeated up to half a million times.

“He couldn’t get his patients to listen to them enough so he put speakers in football helmets and locked them on their heads,” said Johnson. “They were going crazy banging their heads into walls, so he then figured he could put them in a drug induced coma and play the tapes as long as he needed.”

Along with intensive bouts of electroshock therapy, Johnson’s grandmother was given injections of LSD on 14 occasions. “She said that made her feel like her bones were melting. She would say: ‘I don’t want these,’” said Johnson. “And the doctors and nurses would say to her: ‘You’re a bad wife, you’re a bad mother. If you wanted to get better, you would do this for your family. Think about your daughter.’”

The information is here.