Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, January 8, 2020

Can expert bias be reduced in medical guidelines?

Sheldon Greenfield
BMJ 2019; 367
https://doi.org/10.1136/bmj.l6882 

Here are two excerpts:

Despite robust study designs, even double blind randomised controlled trials can be subject to subtle forms of bias. This can be because of the financial conflicts of interest of the authors, intellectual or disciplinary based opinions, pressure on researchers from sponsors, or conflicting values. For example, some researchers may favour mortality over quality of life as a primary outcome, demonstrating a value conflict. The quality of evidence is often uneven and can include underappreciated sources of bias. This makes interpreting the evidence difficult, which results in guideline developers turning to “experts” to translate it into clinical practice recommendations.

Can we be confident that these experts are objective and free of bias? A 2011 Institute of Medicine (now known as the National Academy of Medicine) report challenged the assumption of objectivity among guideline development experts.

(cut)

The science that supports clinical medicine is constantly evolving. The pace of that evolution is increasing.

There is an urgent imperative to generate and update accurate, unbiased, clinical practice guidelines. So, what can we do now? I have two suggestions.

Firstly, the public, which may include physicians, nurses, and other healthcare providers dependent on guidelines, should advocate for organisations like the ECRI Institute and its international counterparts to be supported and looked to for setting standards.

Secondly, we should continue to examine the details and principles of “shared decision making” and other initiatives like it, so that doctors and patients can be as clear as possible in the face of uncertain evidence about medical treatments and recommendations.

It is an uphill battle, but one worth fighting.

Many Public Universities Refuse to Reveal Professors’ Conflicts of Interest

Annie Waldman and David Armstrong
The Chronicle of Higher Education and ProPublica
Originally posted 6 Dec 19

Here is an excerpt:

All too often, what’s publicly known about faculty members’ outside activities, even those that could influence their teaching, research, or public-policy views, depends on where they teach. Academic conflicts of interest elude scrutiny because transparency varies from one university and one state to the next. ProPublica discovered those inconsistencies over the past year as we sought faculty outside-income forms from at least one public university in all 50 states.

About 20 state universities complied with our requests. The rest didn’t, often citing exemptions from public-information laws for personnel records, or offering to provide the documents only if ProPublica first paid thousands of dollars. And even among those that released at least some records, there’s a wide range in what types of information are collected and disclosed, and whether faculty members actually fill out the forms as required. Then there’s the universe of private universities that aren’t subject to public-records laws and don’t disclose professors’ potential conflicts at all. While researchers are supposed to acknowledge industry ties in scientific journals, those caveats generally don’t list compensation amounts.

We’ve accumulated by far the largest collection of university faculty and staff conflict-of-interest reports available anywhere, with more than 29,000 disclosures from state schools, which you can see in our new Dollars for Profs database. But there are tens of thousands that we haven’t been able to get from other public universities, and countless more from private universities.

Sheldon Krimsky, a bioethics expert and professor of urban and environmental planning and policy at Tufts University, said that the fractured disclosure landscape deprives the public of key information for understanding potential bias in research. “Financial conflicts of interest influence outcomes,” he said. “Even if the researchers are honorable people, they don’t know how the interests affect their own research. Even honorable people can’t figure out why they have a predilection toward certain views. It’s because they internalize the values of people from whom they are getting funding, even if it’s not on the surface.”

The info is here.

Tuesday, January 7, 2020

Can Artificial Intelligence Increase Our Morality?

Matthew Hutson
psychologytoday.com
Originally posted 9 Dec 19

Here is an excerpt:

For sure, designing technologies to encourage ethical behavior raises the question of which behaviors are ethical. Vallor noted that paternalism can preclude pluralism, but just to play devil’s advocate I raised the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures—Western, educated, industrialized, rich, democratic—and so China’s social credit system feels Orwellian, but many in China don’t mind it.

The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy?

The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.

The info is here.

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”
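
As an editorial aside, here is a minimal sketch, not from the article, of the kind of small-perturbation attack described above (often called the fast gradient sign method). It assumes a trained PyTorch image classifier; the model, image, and label here are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that may be misclassified.

    model:      any torch.nn.Module image classifier (hypothetical)
    image:      tensor of shape (1, C, H, W) with values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon:    perturbation size; small values look unchanged to a human
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage sketch: the perturbed "banana" image may now receive a different label.
# adv = fgsm_perturb(model, banana_image, banana_label)
# print(model(adv).argmax(dim=1))
```

The point of the sketch is the asymmetry Watson describes: a change imperceptible to people can flip the network’s prediction.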

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

The info is here.

Monday, January 6, 2020

The Majority Does Not Determine Morality

Michael Brown
Townhall.com
Originally posted 9 Dec 19

Here is an excerpt:

During the time period from 2003 to 2017, support for polygamy in America rose from 7 percent to 17 percent, an even more dramatic shift from a statistical point of view. And it’s up to 18 percent in 2019.

Gallup noted that this “may simply be the result of the broader leftward shift on moral issues Americans have exhibited in recent years. Or, as conservative columnist Ross Douthat notes in his New York Times blog, ‘Polygamy is bobbing forward in social liberalism's wake ...’ To Douthat and other social conservatives, warming attitudes toward polygamy is a logical consequence of changing social norms -- that values underpinning social liberalism offer ‘no compelling grounds for limiting the number of people who might wish to marry.’”

Gallup also observed that, “It is certainly true that moral perceptions have significantly, fundamentally changed on a number of social issues or behaviors since 2001 -- most notably, gay/lesbian relations, having a baby outside of wedlock, sex between unmarried men and women, and divorce.”

Interestingly, Gallup also noted that there were social reasons that help to explain some of this larger leftward shift (including the rise in divorce and changes in laws; another obvious reason is that people have friends and family members who identify as gay or lesbian).

The info is here.

Pa. prison psychologist loses license after 3 ‘preventable and foreseeable’ suicides

Samantha Melamed
inquirer.com
Originally posted 4 Dec 19

Nearly a decade after a 1½-year stretch during which three prisoners at State Correctional Institution Cresson died by suicide and 17 others attempted it, the Pennsylvania Board of Psychology has revoked the license of the psychologist then in charge at the now-shuttered prison in Cambria County and imposed $17,233 in investigation costs.

An order filed Tuesday said the suicides were foreseeable and preventable and castigated the psychologist, James Harrington, for abdicating his ethical responsibility to intervene when mentally ill prisoners were kept in inhumane conditions — including solitary confinement — and were prevented from leaving their cells for treatment.

Harrington still holds an administrative position with the Department of Corrections, with an annual salary of $107,052.

The info is here.

Sunday, January 5, 2020

The Big Change Coming to Just About Every Website on New Year’s Day

Aaron Mak
Slate.com
Originally published 30 Dec 19

Starting New Year’s Day, you may notice a small but momentous change to the websites you visit: a button or link, probably at the bottom of the page, reading “Do Not Sell My Personal Information.”

The change is one of many going into effect Jan. 1, 2020, thanks to a sweeping new data privacy law known as the California Consumer Privacy Act. The California law essentially empowers consumers to access the personal data that companies have collected on them, to demand that it be deleted, and to prevent it from being sold to third parties. Since it’s a lot more work to create a separate infrastructure just for California residents to opt out of the data collection industry, these requirements will transform the internet for everyone.

Ahead of the January deadline, tech companies are scrambling to update their privacy policies and figure out how to comply with the complex requirements. The CCPA will only apply to businesses that earn more than $25 million in gross revenue, that collect data on more than 50,000 people, or for which selling consumer data accounts for more than 50 percent of revenue. The companies that meet these qualifications are expected to collectively spend a total of $55 billion upfront to meet the new standards, in addition to $16 billion over the next decade.

Major tech firms have already added a number of user features over the past few months in preparation. In early December, Twitter rolled out a privacy center where users can learn more about the company’s approach to the CCPA and navigate to a dashboard for customizing the types of info that the platform is allowed to use for ad targeting. Google has also created a protocol that blocks websites from transmitting data to the company, which users can take advantage of by downloading an opt-out add-on. Facebook, meanwhile, is arguing that it does not need to change anything because it does not technically “sell” personal information. Companies must at least set up a webpage and a toll-free phone number for fielding data requests.
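
As a rough, non-authoritative illustration of the thresholds quoted above (the function name and figures below simply restate the article’s numbers; this is not legal guidance), the “any one of these” test can be read like this:

```python
def ccpa_applies(gross_revenue_usd: float,
                 consumers_with_data: int,
                 share_of_revenue_from_selling_data: float) -> bool:
    """A business qualifies if it meets any one of the three thresholds."""
    return (gross_revenue_usd > 25_000_000
            or consumers_with_data > 50_000
            or share_of_revenue_from_selling_data > 0.50)

# Example: a small firm that mostly sells consumer data still qualifies,
# even though it is far below the revenue threshold.
print(ccpa_applies(2_000_000, 10_000, 0.80))  # True
```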

The info is here.

Saturday, January 4, 2020

Robots in Finance Could Wipe Out Some of Its Highest-Paying Jobs

Lananh Nguyen
Bloomberg.com
Originally posted 6 Dec 19

Robots have replaced thousands of routine jobs on Wall Street. Now, they’re coming for higher-ups.

That’s the contention of Marcos Lopez de Prado, a Cornell University professor and the former head of machine learning at AQR Capital Management LLC, who testified in Washington on Friday about the impact of artificial intelligence on capital markets and jobs. The use of algorithms in electronic markets has automated the jobs of tens of thousands of execution traders worldwide, and it’s also displaced people who model prices and risk or build investment portfolios, he said.

“Financial machine learning creates a number of challenges for the 6.14 million people employed in the finance and insurance industry, many of whom will lose their jobs -- not necessarily because they are replaced by machines, but because they are not trained to work alongside algorithms,” Lopez de Prado told the U.S. House Committee on Financial Services.

During the almost two-hour hearing, lawmakers asked experts about racial and gender bias in AI, competition for highly skilled technology workers, and the challenges of regulating increasingly complex, data-driven financial markets.

The info is here.

Friday, January 3, 2020

Robotics researchers have a duty to prevent autonomous weapons

Christoffer Heckman
theconversation.com
Originally posted 4 Dec 19

Here is an excerpt:

As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, the ability for a computer to identify objects in an image: in 2010, the state of the art was successful only about half of the time, and it was stuck there for years. Today, though, the best algorithms as shown in published papers are now at 86% accuracy. That advance alone allows autonomous robots to understand what they are seeing through the camera lenses. It also shows the rapid pace of progress over the past decade due to developments in AI.

This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.

But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions related to privacy and security have been fundamentally altered. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.

The info is here.