Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, August 8, 2018

Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

Marshall Allen
ProPublica.org
Originally posted July 17, 2018

Here are two excerpts:

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

(cut)

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

The information is here.

The Road to Pseudoscientific Thinking

Julia Shaw
Scientific American
Originally published January 16, 2017

Here is the conclusion:

So, where to from here? Are there any cool, futuristic applications of such insights? According to McColeman, “I expect that category learning work from human learning will help computer vision moving forward, as we understand the regularities in the environment that people are picking up on. There’s still a lot of room for improvement in getting computer systems to notice the same things that people notice.” We need to help people, and computers, avoid being distracted by unimportant, attention-grabbing information.

The take-home message from this line of research seems to be: When fighting the post-truth war against pseudoscience and misinformation, make sure that important information is eye-catching and quickly understandable.

The information is here.

Tuesday, August 7, 2018

Thousands of leading AI researchers sign pledge against killer robots

Ian Sample
The Guardian
Originally posted July 18, 2018

Here is an excerpt:

The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. On Monday, the defence secretary Gavin Williamson unveiled a £2bn plan for a new RAF fighter, the Tempest, which will be able to fly without a pilot.

UK ministers have stated that Britain is not developing lethal autonomous weapons systems and that its forces will always have oversight and control of the weapons it deploys. But the campaigners warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track and fire on human targets without consent from a human controller. For many researchers, giving machines the decision over who lives and dies crosses a moral line.

“We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop,” said Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge.

The information is here.

Google’s AI ethics won't curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren't involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.

Monday, August 6, 2018

Why Should We Be Good?

Matt McManus
Quillette.com
Originally posted July 7, 2018

Here are two excerpts:

The negative motivation arises from moral dogmatism. There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose. For instance, this is often the case with religious fundamentalists who worry that secular society is increasingly unmoored from proper values and traditions. Ironically, the dark underside of this moral dogmatism is often a relativistic epistemology. Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

(cut)

These issues are all of considerable philosophical interest. In what follows, I want to press on just one issue that is often missed in debates between those who believe there are universal values, and those who believe that what is ethically correct is relative to either a culture or to the subjective preference of individuals. The issue I wish to explore is this: even if we know which values are universal, why should we feel compelled to adhere to them? Put more simply, even if we know what it is to be good, why should we bother to be good? This is one of the major questions addressed by what is often called meta-ethics.

The information is here.

False Equivalence: Are Liberals and Conservatives in the U.S. Equally “Biased”?

Jonathan Baron and John T. Jost
Invited Revision, Perspectives on Psychological Science.

Abstract

On the basis of a meta-analysis of 51 studies, Ditto, Liu, Clark, Wojcik, Chen, et al. (2018) conclude that ideological “bias” is equivalent on the left and right of U.S. politics. In this commentary, we contend that this conclusion does not follow from the review and that Ditto and colleagues are too quick to embrace a false equivalence between the liberal left and the conservative right. For one thing, the issues, procedures, and materials used in studies reviewed by Ditto and colleagues were selected for purposes other than the inspection of ideological asymmetries. Consequently, methodological choices made by researchers were systematically biased to avoid producing differences between liberals and conservatives. We also consider the broader implications of a normative analysis of judgment and decision-making and demonstrate that the “bias” examined by Ditto and colleagues is not, in fact, an irrational bias, and that it is incoherent to discuss bias in the absence of standards for assessing accuracy and consistency. We find that Jost’s (2017) conclusions about domain-general asymmetries in motivated social cognition, which suggest that epistemic virtues are more prevalent among liberals than conservatives, are closer to the truth of the matter when it comes to current American politics. Finally, we question the notion that the research literature in psychology is necessarily characterized by “liberal bias,” as several authors have claimed.

Here is the end:

If academics are disproportionately liberal, in comparison with society at large, it just might be due to the fact that being liberal in the early 21st century is more compatible with the epistemic standards, values, and practices of academia than is being conservative.

The article is here.

See Your Surgeon Is Probably a Republican, Your Psychiatrist Probably a Democrat as another example.

Sunday, August 5, 2018

How Do Expectations Shape Perception?

Floris P. de Lange, Micha Heilbron, & Peter Kok
Trends in Cognitive Sciences
Available online 29 June 2018

Abstract

Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.

Highlights

  • Expectations play a strong role in determining the way we perceive the world.
  • Prior expectations can originate from multiple sources of information, and correspondingly have different neural sources, depending on where in the brain the relevant prior knowledge is stored.
  • Recent findings from both human neuroimaging and animal electrophysiology have revealed that prior expectations can modulate sensory processing at both early and late stages, and both before and after stimulus onset. The response modulation can take the form of either dampening the sensory representation or enhancing it via a process of sharpening.
  • Theoretical computational frameworks of neural sensory processing aim to explain how the probabilistic integration of prior expectations and sensory inputs results in perception.
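The probabilistic integration the highlights describe is often illustrated with the standard Gaussian cue-combination model, in which the posterior percept is a precision-weighted average of the prior expectation and the sensory evidence. This is a textbook sketch, not code from the paper; all names and numbers are illustrative:

```python
# Sketch of Bayesian integration of a prior expectation with sensory
# evidence, assuming both are Gaussian (standard textbook model).

def integrate(prior_mean, prior_var, sensory_mean, sensory_var):
    """Combine a Gaussian prior and Gaussian likelihood.

    The posterior mean is the precision-weighted average of the two
    means; the posterior precision is the sum of the precisions.
    """
    prior_precision = 1.0 / prior_var
    sensory_precision = 1.0 / sensory_var
    post_precision = prior_precision + sensory_precision
    post_mean = (prior_mean * prior_precision +
                 sensory_mean * sensory_precision) / post_precision
    return post_mean, 1.0 / post_precision

# With equally reliable prior and input, the percept lands halfway
# between expectation (0.0) and sensory evidence (4.0).
mean, var = integrate(prior_mean=0.0, prior_var=1.0,
                      sensory_mean=4.0, sensory_var=1.0)
# mean = 2.0, var = 0.5
```

Making the prior more reliable (smaller `prior_var`) pulls the percept toward the expectation, which is the basic sense in which "expectations shape perception" in these models.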

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.
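Process dissociation, as used in these studies, estimates separate utilitarian (U) and deontological (D) tendencies from rejection rates on "congruent" dilemmas (where harm also fails to maximize outcomes) and "incongruent" trolley-type dilemmas. A minimal sketch of the arithmetic, assuming the standard Conway & Gawronski (2013)-style processing-tree equations; the response rates below are made up for illustration:

```python
def process_dissociation(p_reject_congruent, p_reject_incongruent):
    """Estimate utilitarian (U) and deontological (D) parameters.

    Standard processing-tree equations:
      P(reject harm | congruent)   = U + (1 - U) * D
      P(reject harm | incongruent) = (1 - U) * D
    Solving for the two parameters:
    """
    U = p_reject_congruent - p_reject_incongruent
    D = p_reject_incongruent / (1.0 - U)
    return U, D

# Hypothetical participant: rejects harm in 90% of congruent dilemmas
# but only 30% of incongruent (trolley-type) dilemmas.
U, D = process_dissociation(p_reject_congruent=0.9,
                            p_reject_incongruent=0.3)
# U ≈ 0.6, D ≈ 0.75
```

Because U and D are estimated independently, a trait like antisociality can relate to one parameter (here, reduced D) without implying anything about the other, which is the abstract's key analytic point.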

The research is here.

Friday, August 3, 2018

Data Citizens: Why We All Care About Data Ethics

Caitlin McDonald
InfoQ.com
Originally posted July 4, 2018

Key Takeaways

  • Data citizens are impacted by the models, methods, and algorithms created by data scientists, but they have limited agency to affect the tools which are acting on them.
  • Data science ethics can draw on the conceptual frameworks in existing fields for guidance on how to approach ethical questions; specifically, in this case, civics.
  • Data scientists are also data citizens. They are acted upon by the tools of data science as well as building them. It is often where these roles collide that people have the best understanding of the importance of developing ethical systems.
  • One model for ensuring the rights of data citizens could be seeking the same level of transparency for ethical practices in data science that there are for lawyers and legislators.
  • As with other ethical movements before, like seeking greater environmental protection or fairer working conditions, implementing new rights and responsibilities at scale will take a great deal of lobbying and advocacy.