Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, January 12, 2020

Bias in algorithmic filtering and personalization

Engin Bozdag
Ethics and Information Technology (2013) 15: 209–227.

Abstract

Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels, thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it places on the average user, these gatekeepers have recently started to introduce personalization features: algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they can also manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We draw on the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.
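
The paper's core claim, that personalization services are never purely algorithmic, is easy to picture in code. Below is a minimal, hypothetical sketch (not from the paper; the class names, scoring scheme, and boost value are all invented) of a feed filter in which an algorithm ranks items against a learned user profile while human editors can still suppress or boost specific items while the system is running:

```python
# Hypothetical sketch of algorithmic gatekeeping with a human in the loop.
from dataclasses import dataclass, field


@dataclass
class Item:
    title: str
    topics: set


@dataclass
class PersonalizedFilter:
    # Per-user topic weights, e.g. inferred from past clicks (implicit
    # personalization) or chosen by the user (explicit personalization).
    user_profile: dict
    # Manual interventions by human operators, applied even while the
    # algorithm is operational.
    blocked: set = field(default_factory=set)
    boosted: set = field(default_factory=set)

    def score(self, item: Item) -> float:
        relevance = sum(self.user_profile.get(t, 0.0) for t in item.topics)
        if item.title in self.boosted:  # human editor promotes an item
            relevance += 10.0
        return relevance

    def rank(self, items: list) -> list:
        # Human editors can also remove items outright before ranking.
        visible = [i for i in items if i.title not in self.blocked]
        return sorted(visible, key=self.score, reverse=True)


feed = PersonalizedFilter(user_profile={"politics": 0.2, "sports": 0.9})
feed.blocked.add("Spam story")  # manual suppression at runtime
items = [Item("Match report", {"sports"}),
         Item("Election news", {"politics"}),
         Item("Spam story", {"politics"})]
print([i.title for i in feed.rank(items)])  # ['Match report', 'Election news']
```

Both the learned weights and the editors' block and boost lists shape what each user sees, which is the mix of technical and human bias the paper's gatekeeping model describes.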

From the Discussion:

Today, information-seeking services can use the interpersonal contacts of users in order to tailor information and to increase relevancy. This not only introduces bias, as our model shows, but it also has serious implications for other human values, including user autonomy, transparency, objectivity, serendipity, privacy and trust. These values introduce ethical questions. Do private companies that offer information services have a social responsibility, and should they be regulated? Should they aim to promote values that the traditional media adhered to, such as transparency, accountability and answerability? How can a value such as transparency be promoted in an algorithm? How should we balance between autonomy and serendipity, and between explicit and implicit personalization? How should we define serendipity? Should relevancy be defined by what is popular in a given location or by what our primary groups find interesting? Can algorithms truly replace human filterers?

The info can be downloaded here.

Saturday, January 11, 2020

A Semblance of Aliveness

J. van Grunsven & A. van Wynsberghe
Techné: Research in Philosophy and Technology
Published on December 3, 2019

Abstract

While the design of sex robots is still in the early stages, the social implications of the potential proliferation of sex robots into our lives have been heavily debated by activists and scholars from various disciplines. What is missing in the current debate on sex robots and their potential impact on human social relations is a targeted look at the boundedness and bodily expressivity typically characteristic of humans, the role that these dimensions of human embodiment play in enabling reciprocal human interactions, and the manner in which this contrasts with sex robot-human interactions. Through a fine-grained discussion of these themes, rooted in fruitful but largely untapped resources from the field of enactive embodied cognition, we explore the unique embodiment of sex robots. We argue that the embodiment of the sex robot is constituted by what we term restricted expressivity and a lack of bodily boundedness, and that this is the locus of negative but also potentially positive implications. We discuss the possible benefits that these two dimensions of embodiment may have for people within a specific demographic, namely some persons on the autism spectrum. Our preliminary conclusion—that the benefits and the downsides of sex robots reside in the same capability of the robot, its restricted expressivity and lack of bodily boundedness as we call it—demands that we take stock of future developments in the design of sex robot embodiment. Given the importance of evidence-based research on sex robots for drawing correlations and making claims, as reinforced by Nature (2017), the analysis is intended to set the stage for future research.

The info is here.

Friday, January 10, 2020

Ethically Adrift: How Others Pull Our Moral Compass from True North, and How We Can Fix It

Moore, C., and F. Gino.
Research in Organizational Behavior 
33 (2013): 53–77.

Abstract

This chapter is about the social nature of morality. Using the metaphor of the moral compass to describe individuals' inner sense of right and wrong, we offer a framework to help us understand social reasons why our moral compass can come under others' control, leading even good people to cross ethical boundaries. Departing from prior work focusing on the role of individuals' cognitive limitations in explaining unethical behavior, we focus on the socio-psychological processes that function as triggers of moral neglect, moral justification and immoral action, and their impact on moral behavior. In addition, our framework discusses organizational factors that exacerbate the detrimental effects of each trigger. We conclude by discussing implications and recommendations for organizational scholars to take a more integrative approach to developing and evaluating theory about unethical behavior.

From the Summary:

Even when individuals are aware of the ethical dimensions of the choices they are making, they may still engage in unethical behavior as long as they recruit justifications for it. In this section, we discussed the role of two social–psychological processes – social comparison and self-verification – that facilitate moral justification, which will lead to immoral behavior. We also discussed three characteristics of organizational life that amplify these social–psychological processes. Specifically, we discussed how organizational identification, group loyalty, and framing or euphemistic language can all affect the likelihood and extent to which individuals justify their actions, by judging them as ethical when in fact they are morally contentious. Finally, we discussed moral disengagement, moral hypocrisy, and moral licensing as intrapersonal consequences of these social facilitators of moral justification.

The paper can be downloaded here.

The Complicated Ethics of Genetic Engineering

Brit McCandless Farmer
cbsnews.com
Originally posted 8 Dec 19

Here is an excerpt:

A 2017 survey at the University of Wisconsin-Madison asked 1,600 members of the general public about their attitudes toward gene editing. The results showed 65 percent of respondents think gene editing is acceptable for therapeutic purposes. But when it comes to whether scientists should use technology for genetic enhancement, only 26 percent agreed.

Going forward, Church thinks genetic engineering needs government oversight. He is also concerned about reversibility—he does not want to create anything in his lab that cannot be reversed if it creates unintended consequences.

"A lot of the technology we develop, we try to make them reversible, containable," Church said. "So the risks are that some people get excited, so excited that they ignore well-articulated risks."

Back in his Harvard lab, Church's colleagues showed Pelley their work on "mini brains," tiny dots with millions of cells each. The cells, which come from a patient, can be grown into many types of organ tissue in a matter of days, making it possible for drugs to be tested on that patient's unique genome. Church aims to use genetic engineering to reverse aging and grow human organs for transplant.

Pelley said he was struck by the speed with which medical advancements are coming.

The info is here.

Thursday, January 9, 2020

How implicit bias harms patient care

Jeff Bendix
medicaleconomics.com
Originally posted 25 Nov 19

Here is an excerpt:

While many people have difficulty acknowledging that their actions are influenced by unconscious biases, the concept is particularly troubling for doctors, who have been trained to view—and treat—patients equally, and the vast majority of whom sincerely believe that they do.

“Doctors have been molded throughout medical school and all our training to be non-prejudiced when it comes to treating patients,” says James Allen, MD, a pulmonologist and medical director of University Hospital East, part of Ohio State University’s Wexner Medical Center. “It’s not only asked of us, it’s demanded of us, so many physicians would like to think they have no biases. But it’s not true. All human beings have biases.”

“Among physicians, there’s a stigma attached to any suggestion of racial bias,” adds Penner. “And were a person to be identified that way, there could be very severe consequences in terms of their career prospects or even maintaining their license.”

Ironically, as Penner and others point out, the conditions under which most doctors practice today—high levels of stress, frequent distractions, and brief visits that allow little time to get to know patients—are the ones most likely to heighten their vulnerability to unintentional biases.

“A doctor under time pressure from a backlog of overdue charting and whatever else they’re dealing with will have a harder time treating all patients with the same level of empathy and concern,” van Ryn says.

The info is here.

Artificial Intelligence Is Superseding Well-Paying Wall Street Jobs

Jack Kelly
forbes.com
Originally posted 10 Dec 19

Here is an excerpt:

Compliance people run the risk of being replaced too. “As bad actors become more sophisticated, it is vital that financial regulators have the funding resources, technological capacity and access to AI and automated technologies to be a strong and effective cop on the beat,” said Martina Rejsjö, head of Nasdaq Surveillance North America Equities.

Nasdaq, a tech-driven trading platform, has an associated regulatory body that offers over 40 different algorithms, using 35,000 parameters, to spot possible market abuse and manipulation in real time. “The massive and, in many cases, exponential growth in market data is a significant challenge for surveillance professionals,” Rejsjö said. “Market abuse attempts have become more sophisticated, putting more pressure on surveillance teams to find the proverbial needle in the data haystack.” In layman’s terms, she believes that the future is in tech overseeing trading activities, as the human eye is unable to keep up with the rapid-fire, sophisticated global trading dominated by algorithms.
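
Nasdaq's production surveillance algorithms are proprietary, but the underlying idea of automated, real-time monitoring can be illustrated with a toy rule. The sketch below is entirely hypothetical (the window size, threshold, and data are invented): it flags a trader whose per-minute order count spikes far above their own rolling baseline, one simple needle-in-the-haystack check of the kind a surveillance platform might run in parallel across many rules:

```python
# Hypothetical toy surveillance rule: flag order-rate spikes measured
# against a trader's own rolling baseline.
from collections import deque
from statistics import mean, stdev


def make_spike_detector(window: int = 60, threshold: float = 4.0):
    history = deque(maxlen=window)  # rolling baseline of per-minute counts

    def check(order_count: int) -> bool:
        alert = False
        if len(history) >= 10:  # wait until a minimal baseline exists
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (order_count - mu) / sigma > threshold:
                alert = True  # activity far outside the trader's norm
        history.append(order_count)
        return alert

    return check


check = make_spike_detector()
stream = [12, 9, 11, 10, 13, 8, 12, 10, 11, 9, 10, 95]  # last minute spikes
for minute, count in enumerate(stream):
    if check(count):
        print(f"minute {minute}: possible manipulation, {count} orders")
```

A real system layers many such parameterized checks across instruments and venues, which is where figures like 40 algorithms and 35,000 parameters come from.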

When people say not to worry, that’s the precise time to worry. Companies—whether they are McDonald’s, introducing self-serve kiosks and firing hourly workers to cut costs, or top-tier investment banks that rely on software instead of traders to make million-dollar bets on the stock market—will continue to implement technology and downsize people in an effort to enhance profits and cut expenses. This trend will be hard to stop and will have serious consequences for workers at all levels and salaries.

The info is here.

Wednesday, January 8, 2020

Can expert bias be reduced in medical guidelines?

Sheldon Greenfield
BMJ 2019; 367
https://doi.org/10.1136/bmj.l6882 

Here are two excerpts:

Despite robust study designs, even double-blind randomised controlled trials can be subject to subtle forms of bias. This can be because of the financial conflicts of interest of the authors, intellectual or discipline-based opinions, pressure on researchers from sponsors, or conflicting values. For example, some researchers may favour mortality over quality of life as a primary outcome, demonstrating a value conflict. The quality of evidence is often uneven and can include underappreciated sources of bias. This makes interpreting the evidence difficult, which results in guideline developers turning to “experts” to translate it into clinical practice recommendations.

Can we be confident that these experts are objective and free of bias? A 2011 Institute of Medicine (now known as the National Academy of Medicine) report challenged the assumption of objectivity among guideline development experts.

(cut)

The science that supports clinical medicine is constantly evolving. The pace of that evolution is increasing.

There is an urgent imperative to generate and update accurate, unbiased, clinical practice guidelines. So, what can we do now? I have two suggestions.

Firstly, the public, which may include physicians, nurses, and other healthcare providers dependent on guidelines, should advocate for organisations like the ECRI Institute and its international counterparts to be supported and looked to for setting standards.

Secondly, we should continue to examine the details and principles of “shared decision making” and other initiatives like it, so that doctors and patients can be as clear as possible in the face of uncertain evidence about medical treatments and recommendations.

It is an uphill battle, but one worth fighting.

Many Public Universities Refuse to Reveal Professors’ Conflicts of Interest

Annie Waldman and David Armstrong
The Chronicle of Higher Education and ProPublica
Originally posted 6 Dec 19

Here is an excerpt:

All too often, what’s publicly known about faculty members’ outside activities, even those that could influence their teaching, research, or public-policy views, depends on where they teach. Academic conflicts of interest elude scrutiny because transparency varies from one university and one state to the next. ProPublica discovered those inconsistencies over the past year as we sought faculty outside-income forms from at least one public university in all 50 states.

About 20 state universities complied with our requests. The rest didn't, often citing exemptions from public-information laws for personnel records, or offering to provide the documents only if ProPublica first paid thousands of dollars. And even among those that released at least some records, there’s a wide range in what types of information are collected and disclosed, and whether faculty members actually fill out the forms as required. Then there's the universe of private universities that aren't subject to public-records laws and don't disclose professors’ potential conflicts at all. While researchers are supposed to acknowledge industry ties in scientific journals, those caveats generally don’t list compensation amounts.

We've accumulated by far the largest collection of university faculty and staff conflict-of-interest reports available anywhere, with more than 29,000 disclosures from state schools, which you can see in our new Dollars for Profs database. But there are tens of thousands that we haven't been able to get from other public universities, and countless more from private universities.

Sheldon Krimsky, a bioethics expert and professor of urban and environmental planning and policy at Tufts University, said that the fractured disclosure landscape deprives the public of key information for understanding potential bias in research. “Financial conflicts of interest influence outcomes,” he said. “Even if the researchers are honorable people, they don’t know how the interests affect their own research. Even honorable people can’t figure out why they have a predilection toward certain views. It’s because they internalize the values of people from whom they are getting funding, even if it’s not on the surface.”

The info is here.

Tuesday, January 7, 2020

Can Artificial Intelligence Increase Our Morality?

Matthew Hutson
psychologytoday.com
Originally posted 9 Dec 19

Here is an excerpt:

For sure, designing technologies to encourage ethical behavior raises the question of which behaviors are ethical. Vallor noted that paternalism can preclude pluralism, but just to play devil’s advocate I raised the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures—Western, educated, industrialized, rich, democratic—and so China’s social credit system feels Orwellian, but many in China don’t mind it.

The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy?

The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.

The info is here.