Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Policy.

Wednesday, April 10, 2024

Why the world cannot afford the rich

R. G. Wilkinson & K. E. Pickett
Nature.com
Originally published 12 March 24

Here is an excerpt:

Inequality also increases consumerism. Perceived links between wealth and self-worth drive people to buy goods associated with high social status and thus enhance how they appear to others — as US economist Thorstein Veblen set out more than a century ago in his book The Theory of the Leisure Class (1899). Studies show that people who live in more-unequal societies spend more on status goods [14].

Our work has shown that the amount spent on advertising as a proportion of gross domestic product is higher in countries with greater inequality. The well-publicized lifestyles of the rich promote standards and ways of living that others seek to emulate, triggering cascades of expenditure for holiday homes, swimming pools, travel, clothes and expensive cars.

Oxfam reports that, on average, each of the richest 1% of people in the world produces 100 times the emissions of the average person in the poorest half of the world’s population [15]. That is the scale of the injustice. As poorer countries raise their material standards, the rich will have to lower theirs.

Inequality also makes it harder to implement environmental policies. Changes are resisted if people feel that the burden is not being shared fairly. For example, in 2018, the gilets jaunes (yellow vests) protests erupted across France in response to President Emmanuel Macron’s attempt to implement an ‘eco-tax’ on fuel by adding a few percentage points to pump prices. The proposed tax was seen widely as unfair — particularly for the rural poor, for whom diesel and petrol are necessities. By 2019, the government had dropped the idea. Similarly, Brazilian truck drivers protested against rises in fuel tax in 2018, disrupting roads and supply chains.

Do unequal societies perform worse when it comes to the environment, then? Yes. For rich, developed countries for which data were available, we found a strong correlation between levels of equality and a score on an index we created of performance in five environmental areas: air pollution; recycling of waste materials; the carbon emissions of the rich; progress towards the United Nations Sustainable Development Goals; and international cooperation (UN treaties ratified and avoidance of unilateral coercive measures).


The article argues that rising economic inequality is a major threat to the world's well-being. Here are the key points:

The rich are capturing a growing share of wealth: The richest 1% are accumulating wealth much faster than everyone else, and their lifestyles contribute heavily to environmental damage.

Inequality harms everyone: High levels of inequality are linked to social problems like crime, mental health issues, and lower social mobility. It also makes it harder to address environmental challenges because people resist policies seen as unfair.

More equal societies perform better: Countries with a more even distribution of wealth tend to have better social and health outcomes, as well as stronger environmental performance.

Policymakers need to take action: The article proposes progressive taxation, closing tax havens, and encouraging more equitable business practices like employee ownership.

The overall message is that reducing inequality is essential for solving a range of environmental, social, and health problems.

Saturday, January 27, 2024

Alcohol overuse causes 140,000 American deaths annually. Why is it so undertreated?

Melinda Fawcett
Psychiatry.ufl.edu
Originally posted 28 Nov 23

Here is an excerpt:

How to treat the disorder

In the last decade, the medical community has come to recognize AUD as a disease that (like all others) needs medical treatment through a range of interventions. With new treatments coming out every day, hope exists that in the years to come more and more people will receive the care they need.

For those with the most severe forms of AUD, treatment aims at stopping the individual’s alcohol consumption entirely (while recognizing that having a drink or breaking abstinence isn’t a failure, but an almost inevitable part of the recovery cycle).

“What’s happened in the last probably 50 years or so is there’s a more medicalized understanding,” said Humphreys. “So there’s been the rise of neuroscience that looks at things like how the brain changes with repeated administration of alcohol, how that limits things like self-control, how that increases phenomena like craving.”

And as with any other mental health diagnosis, successful treatment for AUD often boils down to a combination of therapy and medication, the experts Vox spoke to said. Just as depression is treated with medication to balance chemicals in the brain, and therapy to help patients unlearn harmful behaviors, AUD often needs the same combination of treatments, said Disselkoen.

The Food and Drug Administration approved the first medication to treat AUD, disulfiram, in 1951. Disulfiram, whose brand name is Antabuse, is a daily pill that causes someone to fall ill — face redness, headache, nausea, sweating, and more — if they drink even a small amount of alcohol. Disulfiram is safe and effective, but the same characteristic that makes it successful (the way it induces illness) also makes it unpopular among patients, said Nixon.


Key points:
  • Alarming death toll: 140,000 Americans die annually from alcohol overuse, highlighting a major public health crisis.
  • Undertreatment disparity: Unlike other dangerous substances, alcohol issues lack the same attention and treatment resources.
  • Neurological changes: Repeated alcohol misuse alters the brain, making it a serious health condition, not just a social issue.
  • Market forces: The powerful alcohol industry and its growing revenue contribute to lax regulations and limited intervention.
  • Policy gap: Inadequate taxation fails to curb consumption, while other harmful substances face stricter controls.
  • Blind spot in drug policy: Recognizing alcohol as a harmful drug with addiction potential is crucial for tackling the problem.

Saturday, January 6, 2024

Worth the Risk? Greater Acceptance of Instrumental Harm Befalling Men than Women

Graso, M., Reynolds, T. & Aquino, K.
Arch Sex Behav 52, 2433–2445 (2023).

Abstract

Scientific and organizational interventions often involve trade-offs whereby they benefit some but entail costs to others (i.e., instrumental harm; IH). We hypothesized that the gender of the persons incurring those costs would influence intervention endorsement, such that people would more readily support interventions inflicting IH onto men than onto women. We also hypothesized that women would exhibit greater asymmetries in their acceptance of IH to men versus women. Three experimental studies (two pre-registered) tested these hypotheses. Studies 1 and 2 granted support for these predictions using a variety of interventions and contexts. Study 3 tested a possible boundary condition of these asymmetries using contexts in which women have traditionally been expected to sacrifice more than men: caring for infants, children, the elderly, and the ill. Even in these traditionally female contexts, participants still more readily accepted IH to men than women. Findings indicate people (especially women) are less willing to accept instrumental harm befalling women (vs. men). We discuss the theoretical and practical implications and limitations of our findings.

Here is my summary:

This research investigated the societal acceptance of "instrumental harm" (IH) based on the gender of the person experiencing it. Three studies found that people are more likely to tolerate IH when it happens to men than when it happens to women. This bias is especially pronounced among women and those holding egalitarian or feminist beliefs. Even in contexts traditionally associated with women's vulnerability, IH inflicted on men is seen as more acceptable.

These findings highlight a potential blind spot in our perception of harm and raise concerns about how policies might be influenced by this bias. Further research is needed to understand the underlying reasons for this bias and develop strategies to address it.

Sunday, June 25, 2023

Harvard Business School Professor Francesca Gino Accused of Committing Data Fraud

Rahem D. Hamid
Crimson Staff Writer
Originally published 24 June 23

Here is an excerpt:

But in a post on June 17, Data Colada wrote that they found evidence of additional data fabrication in that study in a separate experiment that Gino was responsible for.

Harvard has also been internally investigating “a series of papers” for more than a year, according to the Chronicle of Higher Education. Data Colada wrote last week that the University’s internal report may be around 1,200 pages.

The professors added that Harvard has requested that three other papers co-authored by Gino — which Data Colada flagged — also be retracted and that the 2012 paper’s retraction be amended to include Gino’s fabrications.

Last week, Bazerman told the Chronicle of Higher Education that he was informed by Harvard that the experiments he co-authored contained additional fraudulent data.

Bazerman called the evidence presented to him by the University “compelling,” but he denied to the Chronicle that he was at all involved with the data manipulation.

According to Data Colada, Gino was “the only author involved in the data collection and analysis” of the experiment in question.

“To the best of our knowledge, none of Gino’s co-authors carried out or assisted with the data collection for the studies in question,” the professors wrote.

In their second post on Tuesday, the investigators wrote that a 2015 study co-authored by Gino also contains manipulations to prove the paper’s hypothesis.

Observations in the paper, the three wrote, “were altered to produce the desired effect.”

“And if these observations were altered, then it is reasonable to suspect that other observations were altered as well,” they added.


Science is a part of a healthy society:
  • Scientific research relies on the integrity of the researchers. When researchers fabricate or falsify data, they undermine the trust that is necessary for scientific progress.
  • Data fraud can have serious consequences. It can lead to the publication of false or misleading findings, which can have a negative impact on public policy, business decisions, and other areas.

Sunday, February 12, 2023

The scientific study of consciousness cannot, and should not, be morally neutral

Mazor, M., Brown, S., et al. (2021, November 12). 
Perspectives on Psychological Science.
Advance online publication.

Abstract

A target question for the scientific study of consciousness is how dimensions of consciousness, such as the ability to feel pain and pleasure or reflect on one’s own experience, vary in different states and animal species. Considering the tight link between consciousness and moral status, answers to these questions have implications for law and ethics. Here we point out that given this link, the scientific community studying consciousness may face implicit pressure to carry out certain research programmes or interpret results in ways that justify current norms rather than challenge them. We show that since consciousness largely determines moral status, the use of non-human animals in the scientific study of consciousness introduces a direct conflict between scientific relevance and ethics – the more scientifically valuable an animal model is for studying consciousness, the more difficult it becomes to ethically justify compromises to its well-being for consciousness research. Lastly, in light of these considerations, we call for a discussion of the immediate ethical corollaries of the body of knowledge that has accumulated, and for a more explicit consideration of the role of ideology and ethics in the scientific study of consciousness.

Here is how the article ends:

Finally, we believe consciousness researchers, including those working only with consenting humans, should take an active role in the ethical discussion about these issues, including the use of animal models for the study of consciousness. Studying consciousness, the field has the responsibility of leading the way on these ethical questions and of making strong statements when such statements are justified by empirical findings. Recent examples include discussions of ethical ramifications of neuronal signs of fetal consciousness (Lagercrantz, 2014) and a consolidation of evidence for consciousness in vertebrate animals, with a focus on livestock species, ordered by the European Food Safety Authority (Le Neindre et al., 2017). In these cases, the science of consciousness provided empirical evidence to weigh on whether a fetus or a livestock animal is conscious. The question of animal models of consciousness is simpler because the presence of consciousness is a prerequisite for the model to be valid. Here, researchers can skip the difficult question of whether the entity is indeed conscious and directly ask, “Do we believe that consciousness, or some specific form or dimension of consciousness, entails moral status?”

It is useful to remind ourselves that ethical beliefs and practices are dynamic: things that were considered acceptable in the past are no longer acceptable today. A relatively recent change is that to the status of nonhuman great apes (gorillas, bonobos, chimpanzees, and orangutans), such that research on great apes is banned in some countries today, including all European Union member states and New Zealand. In these countries, drilling a hole in chimpanzees’ heads, keeping them in isolation, or restricting their access to drinking water are forbidden by law. It is a fundamental question of the utmost importance which differences between animals make some practices acceptable with respect to some animals and not others. If consciousness is a determinant of moral status, consciousness researchers have a responsibility to take an active part in this discussion — by providing scientific observations that either justify current ethical standards or induce the scientific and legal communities to revise these standards.

Thursday, July 14, 2022

What nudge theory got wrong

Tim Harford
The Financial Times
Originally posted 

Here is an excerpt:

Chater and Loewenstein argue that behavioural scientists naturally fall into the habit of seeing problems in the same way. Why don’t people have enough retirement savings? Because they are impatient and find it hard to save rather than spend. Why are so many greenhouse gases being emitted? Because it’s complex and tedious to switch to a green electricity tariff. If your problem is basically that fallible individuals are making bad choices, behavioural science is an excellent solution.

If, however, the real problem is not individual but systemic, then nudges are at best limited, and at worst, a harmful diversion. Historians such as Finis Dunaway now argue that the Crying Indian campaign was a deliberate attempt by corporate interests to change the subject. Is behavioural public policy, accidentally or deliberately, a similar distraction?

A look at climate change policy suggests it might be. Behavioural scientists themselves are clear enough that nudging is no real substitute for a carbon price — Thaler and Sunstein say as much in Nudge. Politicians, by contrast, have preferred to bypass the carbon price and move straight to the pain-free nudging.

Nudge enthusiast David Cameron, in a speech given shortly before he became prime minister, declared that “the best way to get someone to cut their electricity bill” was to cleverly reformat the bill itself. This is politics as the art of avoiding difficult decisions. No behavioural scientist would suggest that it was close to sufficient. Yet they must be careful not to become enablers of the One Weird Trick approach to making policy.

-------

Behavioural science has a laudable focus on rigorous evidence, yet even this can backfire. It is much easier to produce a quick randomised trial of bill reformatting than it is to evaluate anything systemic. These small quick wins are only worth having if they lead us towards, rather than away from, more difficult victories.

Another problem is that empirically tested, behaviourally rigorous bad policy can be bad policy nonetheless. For example, it has become fashionable to argue that people should be placed on an organ donor registry by default, because this dramatically expands the number of people registered as donors. But, as Thaler and Sunstein themselves keep having to explain, this is a bad idea. Most organ donation happens only after consultation with a grieving family — and default-bloated donor registries do not help families work out what their loved one might have wanted.


Wednesday, April 7, 2021

Actionable Principles for Artificial Intelligence Policy: Three Pathways

Stix, C. 
Sci Eng Ethics 27, 15 (2021). 
https://doi.org/10.1007/s11948-020-00277-3

Abstract

In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission’s “High Level Expert Group on AI”. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of 'Actionable Principles for AI'. The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and, (3) mechanisms to support implementation and operationalizability.

(cut)

Actionable Principles

In many areas, including AI, it has proven challenging to bridge ethics and governmental policy-making (Müller 2020, 1.3). To be clear, many AI Ethics Principles, such as those developed by industry actors or researchers for self-governance purposes, are not aimed at directly informing governmental policy-making, and therefore the challenge of bridging this gulf may not apply. Nonetheless, a significant subset of AI Ethics Principles are addressed to governmental actors, from the 2019 OECD Principles on AI (OECD 2019) to the US Defence Innovation Board’s AI Principles adopted by the Department of Defence (DIB 2019). Without focussing on any single effort in particular, the aggregate success of many AI Ethics Principles remains limited (Rességuier and Rodrigues 2020). Clear shifts in governmental policy which can be directly traced back to preceding and corresponding sets of AI Ethics Principles remain few and far between. This could mean, for example, concrete textual references reflecting a specific section of the AI Ethics Principles, or the establishment of (both enabling and preventative) policy actions building on relevant recommendations. A charitable interpretation could be that, as governmental policy-making takes time, and given that the vast majority of AI Ethics Principles were published within the last two years, it may simply be premature to gauge (or dismiss) their impact. However, another interpretation could be that the current versions of AI Ethics Principles have fallen short of their promise and reached their limit for impact in governmental policy-making (henceforth: policy).

It is worth noting that successful actionability in policy goes well beyond AI Ethics Principles acting as a reference point. Actionable Principles could shape policy by influencing funding decisions, taxation, public education measures or social security programs. Concretely, this could mean increased funding into societally relevant areas, education programs to raise public awareness and increase vigilance, or to rethink retirement structures with regard to increased automation. To be sure, actionability in policy does not preclude impact in other adjacent domains, such as influencing codes of conduct for practitioners, clarifying what demands workers and unions should pose, or shaping consumer behaviour. Moreover, during political shifts or in response to a crisis, Actionable Principles may often prove to be the only (even if suboptimal) available governance tool to quickly inform precautionary and remedial (legal and) policy measures.


Tuesday, December 29, 2020

Internal Google document reveals campaign against EU lawmakers

Javier Espinoza
ft.com
Originally published 28 OCT 20

Here is an excerpt:

The leak of the internal document lays bare the tactics that big tech companies employ behind the scenes to manipulate public discourse and influence lawmakers. The presentation is watermarked as “privileged and need-to-know” and “confidential and proprietary”.

The revelations are set to create new tensions between the EU and Google, which are already engaged in tough discussions about how the internet should be regulated. They are also likely to trigger further debate within Brussels, where regulators hold divergent positions on the possibility of breaking up big tech companies.

Margrethe Vestager, the EU’s executive vice-president in charge of competition and digital policy, on Tuesday argued to MEPs that structural separation of big tech is not “the right thing to do”. However, in a recent interview with the FT, Thierry Breton, the EU’s internal market commissioner, accused such companies of being “too big to care”, and suggested that they should be broken up in extreme circumstances.

Among the other tactics outlined in the report were objectives to “undermine the idea DSA has no cost to Europeans” and “show how the DSA limits the potential of the internet . . . just as people need it the most”.

The campaign document also shows that Google will seek out “more allies” in its fight to influence the regulation debate in Brussels, including enlisting the help of Europe-based platforms such as Booking.com.

Booking.com told the FT: “We have no intention of co-operating with Google on upcoming EU platform regulation. Our interests are diametrically opposed.”


Saturday, October 17, 2020

New Texas rule lets social workers turn away clients who are LGBTQ or have a disability

Edgar Walters
Texas Tribune
Originally posted 14 Oct 2020

Texas social workers are criticizing a state regulatory board’s decision this week to remove protections for LGBTQ clients and clients with disabilities who seek social work services.

The Texas State Board of Social Work Examiners voted unanimously Monday to change a section of its code of conduct that establishes when a social worker may refuse to serve someone. The code will no longer prohibit social workers from turning away clients on the basis of disability, sexual orientation or gender identity.

Gov. Greg Abbott’s office recommended the change, board members said, because the code’s nondiscrimination protections went beyond protections laid out in the state law that governs how and when the state may discipline social workers.

“It’s not surprising that a board would align its rules with statutes passed by the Legislature,” said Abbott spokesperson Renae Eze. A state law passed last year gave the governor’s office more control over rules governing state-licensed professions.

The nondiscrimination policy change drew immediate criticism from a professional association. Will Francis, executive director of the Texas chapter of the National Association of Social Workers, called it “incredibly disheartening.”

He also criticized board members for removing the nondiscrimination protections without input from the social workers they license and oversee.


Note: All psychotherapy services are founded on the principle of beneficence: the desire to help others and do right by them.  This decision from the Texas State Board of Social Work Examiners is terrifyingly unethical.  The unanimous decision demonstrates the highest levels of incompetence and bigotry.

Thursday, January 16, 2020

Ethics In AI: Why Values For Data Matter

Marc Teerlink
forbes.com
Originally posted 18 Dec 19

Here is an excerpt:

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced cases of (AI-infused) predictive analytics.

According to a recent study that SAP conducted in conjunction with the Economist’s Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren’t using AI and ML at all — or aren’t using AI well.

One of their secrets: They treat data as an asset. The same way organizations treat inventory, fleet, and manufacturing assets.

They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset, because, no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.

The info is here.

Tuesday, December 31, 2019

Our Brains Are No Match for Our Technology

Tristan Harris
The New York Times
Originally posted 5 Dec 19

Here is an excerpt:

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

The info is here.

Sunday, September 1, 2019

There is no Universal Objective Morality

An Interview with Homi Bhabha
Interviewer: Paula Erizanu

Here is an excerpt:

What does that imply for human rights conventions?

Those who assert the absolute nature of morality are not aware of how much power – political and personal power – gets mixed into the moral idea. The same person who would kneel in church and pray for God’s notion of universal love and brotherhood would go and lynch a person of colour, or would do violence to an untouchable in India. So, morality has to be understood in terms of power and authority in addition to circumstances, cases, forms of interpretation.

Moralities are enlightened recommendations about how to live your life in a way that is fair and responsible towards others. But at the same time, the question of morality gets so mixed in with political power, with issues of affect, making people anxious, nervous about their conditions, making people feel like they live in a world of insecurity and threat. You know, it’s a much more complex package than can be understood in terms of universal moralities on the one hand, or objective and subjective moralities on the other.

For a number of important legal and political reasons, we want to go with the UDHR and absolutely support the view that all people are born equal, that all people have a foundational dignity, and therefore deserve the protections and provisions of human rights. Having said that, we know from long and bitter experience that only too often states find ways of violating the human rights of their own peoples and the rights of other peoples and countries. They do so with a kind of international impunity, and if I may say so, a collusive insouciance.

There always seem to be forms of legal architecture – however well-intentioned – that make the perfect the enemy of the good, and that is putting it generously. On the negative side, executive orders and states of exception are the enemies of the good. Look at the way in which the basic human rights of Mexicans on the Texan border are being violated on a daily basis. Let me simply refer to a recent comment from the NYT that puts the issue poignantly and pointedly:

“In fact the migrants are mostly victims of the broken immigration system. They are not by and large killers, rapists or gang members. Most do not carry drugs. They have learned how to make asylum claims, just as the law allows them to do.”

The info is here.

Wednesday, January 16, 2019

Debate ethics of embryo models from stem cells

Nicolas Rivron, Martin Pera, Janet Rossant, Alfonso Martinez Arias, and others
Nature
Originally posted December 12, 2018

Here are some excerpts:

Four questions

Future progress depends on addressing now the ethical and policy issues that could arise.

Ultimately, individual jurisdictions will need to formulate their own policies and regulations, reflecting their values and priorities. However, we urge funding bodies, along with scientific and medical societies, to start an international discussion as a first step. Bioethicists, scientists, clinicians, legal and regulatory specialists, patient advocates and other citizens could offer at least some consensus on an appropriate trajectory for the field.

Two outputs are needed. First, guidelines for researchers; second, a reliable source of information about the current state of the research, its possible trajectory, its potential medical benefits and the key ethical and policy issues it raises. Both guidelines and information should be disseminated to journalists, ethics committees, regulatory bodies and policymakers.

Four questions in particular need attention.

Should embryo models be treated legally and ethically as human embryos, now or in the future?

Which research applications involving human embryo models are ethically acceptable?

How far should attempts to develop an intact human embryo in a dish be allowed to proceed?

Does a modelled part of a human embryo have an ethical and legal status similar to that of a complete embryo?

The info is here.

Friday, January 11, 2019

10 ways to detect health-care lies

Lawton R. Burns and Mark V. Pauly
thehill.com
Originally posted December 9, 2018

Here is an excerpt:

Why does this kind of behavior occur? While flat-out dishonesty for short-term financial gains is an obvious answer, a more common explanation is the need to say something positive when there is nothing positive to say.

This problem is acute in health care. Suppose you are faced with the assignment of solving the ageless dilemma of reducing costs while simultaneously raising quality of care. You could respond with a message of failure or a discussion of inevitable tradeoffs.

But you could also pick an idea with some internal plausibility and political appeal, fashion some careful but conditional language and announce the launch of your program. Of course, you will add that it will take a number of years before success appears, but you and your experts will argue for the idea in concept, with the details to be worked out later.

At minimum, unqualified acceptance of such proposed ideas, even (and especially) by apparently qualified people, will waste resources and will lead to enormous frustration for your audience of politicians and outraged critics of the current system. The incentives to generate falsehoods are not likely to diminish — if anything, rising spending and stagnant health outcomes strengthen them — so it is all the more important to have an accurate and fast way to detect and deter lies in health care.

The info is here.

Tuesday, December 25, 2018

Medical Ethicist Calls Trump-Approved Medicaid Work Requirements Cruel

Jason Turesky
www.wgbh.org
Originally posted November 26, 2018

Here is an excerpt:

Medical ethicist Art Caplan called the idea of Medicaid work requirements “cruel” on Boston Public Radio Monday, and believes there are no clear benefits to these new rules. “It’s not really something that I think is going to instill good habits or get people off Medicaid,” Caplan said.

Caplan pointed out that many of the people on Medicaid in Kentucky may not be physically able to fulfill the 80 hour requirement.

“Remember, the overwhelming majority of people on Medicaid in Kentucky, and every state, are disabled or children or single head of household females, so getting them out 80 hours per month to do anything is very difficult, unless we are going to re-institute child labor,” he said.

The info is here.

Thursday, August 23, 2018

Designing a Roadmap to Ethical AI in Government

Joshua Entsminger, Mark Esposito, Terence Tse and Danny Goh
www.thersa.org
Originally posted July 23, 2018

Here is an excerpt:

When a decision is made using AI, we may not know whether the data was faulty; regardless, there will come a time when someone appeals a decision made by, or influenced by, AI-driven insights. People have the right to be informed that a significant decision concerning their lives was carried out with the help of an AI. To enforce this policy, governments will need a better record of which companies and institutions use AI to make significant decisions.

When specifically assessing a decision-making process of concern, the first step should be to determine whether or not the data set represents what the organisation wanted the AI to understand and make decisions about.

However, data sets, particularly easily available ones, cover a limited range of situations, and inevitably most AI systems will be confronted with situations they have not encountered before. The ethical issue is the framework by which decisions occur, and good data alone cannot secure that kind of ethical behavior.

The blog post is here.

Friday, July 20, 2018

How to Look Away

Megan Garber
The Atlantic
Originally published June 20, 2018

Here is an excerpt:

It is a dynamic—the democratic alchemy that converts seeing things into changing them—that the president and his surrogates have been objecting to, as they have defended their policy. They have been, this week (with notable absences), busily appearing on cable-news shows and giving disembodied quotes to news outlets, insisting that things aren’t as bad as they seem: that the images and the audio and the evidence are wrong not merely ontologically, but also emotionally. Don’t be duped, they are telling Americans. Your horror is incorrect. The tragedy is false. Your outrage about it, therefore, is false. Because, actually, the truth is so much more complicated than your easy emotions will allow you to believe. Actually, as Fox News host Laura Ingraham insists, the holding pens that seem to house horrors are “essentially summer camps.” And actually, as Fox & Friends’ Steve Doocy instructs, the pens are not cages so much as “walls” that have merely been “built … out of chain-link fences.” And actually, Kirstjen Nielsen wants you to remember, “We provide food, medical, education, all needs that the child requests.” And actually, too—do not be fooled by your own empathy, Tom Cotton warns—think of the child-smuggling. And of MS-13. And of sexual assault. And of soccer fields. There are so many reasons to look away, so many other situations more deserving of your outrage and your horror.

It is a neat rhetorical trick: the logic of not in my backyard, invoked not merely despite the fact that it is happening in our backyard, but because of it. With seed and sod that we ourselves have planted.

Yes, yes, there are tiny hands, reaching out for people who are not there … but those are not the point, these arguments insist and assure. To focus on those images—instead of seeing the system, a term that Nielsen and even Trump, a man not typically inclined to think in networked terms, have been invoking this week—is to miss the larger point.

The article is here.

Friday, June 8, 2018

The Ethics of Medicaid’s Work Requirements and Other Personal Responsibility Policies

Harald Schmidt and Allison K. Hoffman
JAMA. Published online May 7, 2018. doi:10.1001/jama.2018.3384

Here are two excerpts:

CMS emphasizes health improvement as the primary rationale, but the agency and interested states also favor work requirements for their potential to limit enrollment and spending and out of an ideological belief that everyone “do their part.” For example, an executive order by Kentucky’s Governor Matt Bevin announced that the state’s entire Medicaid expansion would be unaffordable if the waiver were not implemented, threatening to end expansion if courts strike down “one or more” program elements. Correspondingly, several nonexpansion states have signaled that the option of introducing work requirements might make them reconsider expansion—potentially covering more people but arguably in a way inconsistent with Medicaid’s broader objectives.

Work requirements have attracted the most attention but are just one of many policies CMS has encouraged as part of apparent attempts to promote personal responsibility in Medicaid. Other initiatives tie levels of benefits to confirming eligibility annually, paying premiums on time, meeting wellness program criteria such as completing health risk assessments, or not using the emergency department (ED) for nonemergency care.

(cut)

It is troubling that these policies could result in some portion of previously eligible individuals being denied necessary medical care because of unduly demanding requirements. Moreover, even if reduced enrollment were to decrease Medicaid costs, it might not reduce medical spending overall. Laws including the Emergency Medical Treatment and Labor Act still require stabilization of emergency medical conditions, entailing more expensive and less effective care.

The article is here.

Sunday, April 15, 2018

What If There Is No Ethical Way to Act in Syria Now?

Sigal Samuel
The Atlantic
Originally posted April 13, 2018

For seven years now, America has been struggling to understand its moral responsibility in Syria. For every urgent argument to intervene against Syrian President Bashar al-Assad to stop the mass killing of civilians, there were ready responses about the risks of causing more destruction than could be averted, or even escalating to a major war with other powers in Syria. In the end, American intervention there has been tailored mostly to a narrow perception of American interests in stopping the threat of terror. But the fundamental questions are still unresolved: What exactly was the moral course of action in Syria? And more urgently, what—if any—is the moral course of action now?

The war has left roughly half a million people dead—the UN has stopped counting—but the question of moral responsibility has taken on new urgency in the wake of a suspected chemical attack over the weekend. As President Trump threatened to launch retaliatory missile strikes, I spoke about America’s ethical responsibility with some of the world’s leading moral philosophers. These are people whose job it is to ascertain the right thing to do in any given situation. All of them suggested that, years ago, America might have been able to intervene in a moral way to stop the killing in the Syrian civil war. But asked what America should do now, they all gave the same startling response: They don’t know.

The article is here.

Saturday, March 10, 2018

Universities Rush to Roll Out Computer Science Ethics Courses

Natasha Singer
The New York Times
Originally posted February 12, 2018

Here is an excerpt:

“Technology is not neutral,” said Professor Sahami, who formerly worked at Google as a senior research scientist. “The choices that get made in building technology then have social ramifications.”

The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.

“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”

Computer science programs are required to make sure students have an understanding of ethical issues related to computing in order to be accredited by ABET, a global accreditation group for university science and engineering programs. Some computer science departments have folded the topic into a broader class, and others have stand-alone courses.

But until recently, ethics did not seem relevant to many students.

The article is here.