Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Fairness.

Tuesday, May 26, 2020

Rebuilding the Economy Around Good Jobs

Zeynep Ton
Harvard Business Review
Originally posted 22 May 20

One thing we can predict: Customers who are struggling economically will be looking more than ever for good value. This will give the companies that start building a good jobs system a competitive advantage over those that don’t. After the financial crisis of 2008, Mercadona — Spain’s largest grocery chain and a model good jobs company — reduced prices for its hard-pressed customers by 10% while remaining profitable and gaining significant market share. Hard work and input from empowered front lines had a lot to do with it.

The pandemic is likely to accelerate the ongoing shakeup of U.S. retailing. The United States has 24.5 square feet of retail space per person versus 16.4 square feet in Canada and 4.5 square feet in Europe. This is almost certainly too much, and the mediocre — the ones that don’t make their customers want to keep coming back — will not survive.

The pandemic is also likely to speed up the adoption of new technologies. Although these technologies are typically seen as a way to reduce headcount, adopting, scaling, and leveraging them requires a capable and motivated (even if smaller) workforce.

There is an alternative: A good jobs system that has already proven successful. Long before the pandemic, there were successful companies — including Costco and QuikTrip — that knew their frontline workers were essential personnel and treated and paid them as such. Even in very competitive, low-cost retail sectors, these companies adopted a good jobs system and used it to win.

There’s a strong financial case for good jobs. Offering good jobs lowers costs by reducing employee turnover, operational mistakes, and wasted time. It improves service, which increases sales both in the short term and — through customer loyalty — in the long term.

The info is here.

Sunday, May 17, 2020

Veil-of-Ignorance Reasoning Favors Allocating Resources to Younger Patients During the COVID-19 Crisis

Huang, K., Bernhard, R., and others
(2020, April 22).
https://doi.org/10.31234/osf.io/npm4v

Abstract

The COVID-19 crisis has forced healthcare professionals to make tragic decisions concerning which patients to save. A utilitarian principle favors allocating scarce resources such as ventilators toward younger patients, as this is expected to save more years of life. Some view this as ageist, instead favoring age-neutral principles, such as “first come, first served”. Which approach is fairer? Veil-of-ignorance reasoning is a decision procedure designed to produce fair outcomes. Here we apply veil-of-ignorance reasoning to the COVID-19 ventilator dilemma, asking participants which policy they would prefer if they did not know whether they are younger or older. Two studies (pre-registered; online samples; Study 1, N=414; Study 2 replication, N=1,276) show that veil-of-ignorance reasoning shifts preferences toward saving younger patients. The effect on older participants is dramatic, reversing their opposition toward favoring the young. These findings provide concrete guidance to healthcare policymakers and front line personnel charged with allocating scarce medical resources during times of crisis.
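
A toy expected-value calculation may make the veil's logic concrete. The ages and probabilities below are invented for illustration, not taken from the paper: behind the veil, your chance of surviving a two-patient, one-ventilator standoff is the same under either policy, but the expected life-years saved differ.

# Invented numbers: one ventilator, two patients, and you are equally
# likely to be either patient behind the veil of ignorance.
LIFE_EXPECTANCY = 80
AGES = {"younger": 20, "older": 70}

def expected_life_years(policy):
    """Expected life-years saved for a random person behind the veil."""
    remaining = {who: LIFE_EXPECTANCY - age for who, age in AGES.items()}
    if policy == "prioritize_younger":
        # The ventilator always goes to the younger patient.
        return 0.5 * remaining["younger"]
    if policy == "coin_flip":  # an age-neutral lottery
        return 0.5 * (0.5 * remaining["younger"]) + 0.5 * (0.5 * remaining["older"])
    raise ValueError(policy)

# Survival probability behind the veil is 0.5 under both policies; only
# the expected life-years differ (30.0 vs. 17.5), which is the
# life-years-versus-lives distinction the authors highlight.
for policy in ("prioritize_younger", "coin_flip"):
    print(policy, expected_life_years(policy))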

From the General Discussion

In two pre-registered studies, we show that veil-of-ignorance reasoning favors allocating scarce medical resources to younger patients during the COVID-19 crisis. A strong majority of participants who engaged in veil-of-ignorance reasoning concluded that a policy of maximizing the number of life-years saved is what they would want for themselves if they did not know whom they were going to be. Importantly, engaging in veil-of-ignorance reasoning subsequently produced increased moral approval of this utilitarian policy. These findings, though predicted based on prior research (Huang, Greene, & Bazerman, 2019), make three new contributions. First, they apply directly to an ongoing crisis in which competing claims to fairness must be resolved. While the ventilator shortage in the developed world has been less acute than many feared, it may reemerge in greater force as the COVID-19 crisis spreads to the developing world (Woodyatt, 2020). Second, the dilemma considered here differs from those considered previously because it concerns maximizing the number of life-years saved, rather than the number of lives saved. Finally, the results show the power of the veil to eliminate self-serving bias. In the control condition, only a minority of older participants (33%) favored prioritizing younger patients. But after engaging in veil-of-ignorance reasoning, most older participants (62%) favored doing so, just like younger participants.

The research is here.

Friday, May 1, 2020

During the Pandemic, the FCC Must Provide Internet for All

Gigi Sohn
Wired.com
Originally published 28 April 20

If anyone believed access to the internet was not essential prior to the Covid-19 pandemic, nobody is saying that today. With ongoing stay-at-home orders in most states, high-speed broadband internet access has become a necessity to learn, work, engage in commerce and culture, keep abreast of news about the virus, and stay connected to neighbors, friends, and family. Yet nearly a third of American households do not have this critical service, either because it is not available to them, or, as is more often the case, they cannot afford it.

Lifeline is a government program that seeks to ensure that all Americans are connected, regardless of income. Started by the Reagan administration and placed into law by Congress in 1996, Lifeline was expanded by the George W. Bush administration and expanded further during the Obama administration. The program provides a $9.25 a month subsidy per household to low-income Americans for phone and/or broadband service. Because the subsidy is so minimal, most Lifeline customers use it for mobile voice and data services.

The Federal Communications Commission sets Lifeline’s policies, including rules about who is eligible to receive the subsidy, its amount, and which companies can provide the service. Americans whose income is below a certain level or who receive government assistance—such as Medicaid, the Supplemental Nutrition Assistance Program, or SNAP, and Supplemental Security Income, or SSI—are eligible.

During this crisis, President Donald Trump’s FCC could make an enormous dent in the digital divide if it expanded Lifeline, even if just on a temporary basis. The FCC could increase the subsidy so that it can be used to pay for robust fixed internet access. It could also make Lifeline available to a broader subset of Americans, specifically the tens of millions who have just filed for unemployment benefits. But that’s unlikely to be a priority for this FCC and its chairman, Ajit Pai, who has spent nearly his entire tenure trying to destroy the program.

The info is here.

Monday, March 2, 2020

Folk standards of sound judgment: Rationality vs. Reasonableness

Igor Grossmann and others
PsyArXiv Preprints
Last edited on 10 Jan 20

Abstract

Normative theories of judgment either focus on rationality – decontextualized preference maximization, or reasonableness – the pragmatic balance of preferences and socially-conscious norms. Despite centuries of work on such concepts, a critical question appears overlooked: How do people’s intuitions and behavior align with the concepts of rationality from game theory and reasonableness from legal scholarship? We show that laypeople view rationality as abstract and preference-maximizing, simultaneously viewing reasonableness as social-context-sensitive and socially-conscious, as evidenced in spontaneous descriptions, social perceptions, and linguistic analyses of the terms in cultural products (news, soap operas, legal opinions, and Google books). Further, experiments among North Americans and Pakistani bankers, street merchants, and samples engaging in exchange (vs. market-) economy show that rationality and reasonableness lead people to different conclusions about what constitutes good judgment in Dictator Games, Commons Dilemma and Prisoner’s Dilemma: Lay rationality is reductionist and instrumental, whereas reasonableness integrates preferences with particulars and moral concerns.

The research is here.

Thursday, February 20, 2020

Sharing Patient Data Without Exploiting Patients

McCoy MS, Joffe S, Emanuel EJ.
JAMA. Published online January 16, 2020.
doi:10.1001/jama.2019.22354

Here is an excerpt:

The Risks of Data Sharing

When health systems share patient data, the primary risk to patients is the exposure of their personal health information, which can result in a range of harms including embarrassment, stigma, and discrimination. Such exposure is most obvious when health systems fail to remove identifying information before sharing data, as is alleged in the lawsuit against Google and the University of Chicago. But even when shared data are fully deidentified in accordance with the requirements of the Health Insurance Portability and Accountability Act (HIPAA), reidentification is possible, especially when patient data are linked with other data sets. Indeed, even new data privacy laws such as Europe's General Data Protection Regulation and California's Consumer Privacy Act do not eliminate reidentification risk.
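
As a toy illustration of that linkage risk, the sketch below joins a deidentified record set to a public roster on shared quasi-identifiers, the kind of attack long demonstrated in privacy research. Everything here (field names, records, data sets) is invented for the example, not drawn from the JAMA article.

# Hypothetical linkage sketch: no real data or APIs involved.
deidentified_visits = [
    {"zip": "15237", "birth_year": 1962, "sex": "F", "diagnosis": "depression"},
    {"zip": "16801", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]
public_roster = [
    {"name": "Jane Roe", "zip": "15237", "birth_year": 1962, "sex": "F"},
    {"name": "John Doe", "zip": "16801", "birth_year": 1990, "sex": "M"},
]
QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(visits, roster, keys=QUASI_IDS):
    """Join the two data sets on quasi-identifiers, reattaching names."""
    index = {tuple(r[k] for k in keys): r["name"] for r in roster}
    return [
        {**visit, "name": index[tuple(visit[k] for k in keys)]}
        for visit in visits
        if tuple(visit[k] for k in keys) in index
    ]

for match in reidentify(deidentified_visits, public_roster):
    print(match["name"], "->", match["diagnosis"])

The fewer people who share a given combination of quasi-identifiers, the more reliably such a join reattaches identities, which is why linking data sets undermines deidentification.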

Companies that acquire patient data also accept risk by investing in research and development that may not result in marketable products. This risk is less ethically concerning, however, than that borne by patients. While companies usually can abandon unpromising ventures, patients’ lack of control over data-sharing arrangements makes them vulnerable to exploitation. Patients lack control, first, because they may have no option other than to seek care in a health system that plans to share their data. Second, even if patients are able to authorize sharing of their data, they are rarely given the information and opportunity to ask questions needed to give meaningful informed consent to future uses of their data.

Thus, for the foreseeable future, data sharing will entail ethically concerning risks to patients whose data are shared. But whether these exchanges are exploitative depends on how much benefit patients receive from data sharing.

The info is here.

Friday, December 27, 2019

Affordable treatment for mental illness and substance abuse gets harder to find

Jenny Gold
The Washington Post
Originally published 1 Dec 19

Here is an excerpt:

A report published by Milliman, a risk management and health-care consulting company, found that patients were dramatically more likely to resort to out-of-network providers for mental health and substance abuse treatment than for other conditions. The disparities have grown since Milliman published a similarly grim study two years ago.

The latest study examined the claims data of 37 million individuals with commercial preferred provider organization (PPO) health insurance plans in all 50 states from 2013 to 2017.

Among the findings:

●People seeking inpatient care for behavioral health issues were 5.2 times more likely to be relegated to an out-of-network provider than for medical or surgical care in 2017, up from 2.8 times in 2013.

●For substance abuse treatment, the numbers were even worse: Treatment at an inpatient facility was 10 times more likely to be provided out-of-network — up from 4.7 times in 2013.

●In 2017, a child was 10 times more likely to go out-of-network for a behavioral health office visit than for a primary care office visit.

●Spending for all types of substance abuse treatment was just 0.9 percent of total health-care spending in 2017. Mental health treatment accounted for 2.4 percent of total spending.

In 2017, 70,237 Americans died of drug overdoses, and 47,173 from suicide, according to the Centers for Disease Control and Prevention. In 2018, nearly 20 percent of adults — more than 47 million people — experienced a mental illness, according to the National Alliance on Mental Illness.

“I thought maybe we would have seen some progress here. It’s very depressing to see that it’s actually gotten worse,” said Henry Harbin, former chief executive of Magellan Health, a managed behavioral health-care company, and adviser to the Bowman Family Foundation, which commissioned the report. “Employers and insurance plans need to quadruple their efforts.”

The info is here.

Tuesday, December 17, 2019

Create an Ethics Committee to Keep Your AI Initiative in Check

Steven Tiell
Harvard Business Review
Originally posted 15 Nov 19

Here is an excerpt:

Establishing this level of ethical governance is critical to helping executives mitigate downside risks, because addressing AI bias can be extremely complex. Data scientists and software engineers have biases just like everyone else, and when they allow these biases to creep into algorithms or the data sets used to train them — however unintentionally — it can leave those subjected to the AI feeling like they have been treated unfairly. But eliminating bias to make fair decisions is not a straightforward equation.

While many colloquial definitions of “bias” involve “fairness,” there is an important distinction between the two. Bias is a feature of statistical models, while fairness is a judgment against the values of a community. Shared understandings of fairness are different across cultures. But the most critical thing to understand is their relationship. The gut feeling may be that fairness requires a lack of bias, but in fact, data scientists must often introduce bias in order to achieve fairness.

Consider a model built to streamline hiring or promotions. If the algorithm learns from historic data, where women have been under-represented in the workforce, myriad biases against women will emerge in the model. To correct for this, data scientists might choose to introduce bias — balancing gender representation in historic data, creating synthetic data to fill in gaps, or correcting for balanced treatment (fairness) in the application of data-informed decisions. In many cases, there’s no possible way to be both unbiased and fair.
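
A minimal sketch of one such correction, under invented data and column choices (nothing below comes from the article): inverse-frequency sample weights give an under-represented group equal total influence on training, a deliberate statistical bias introduced in pursuit of fairness.

from collections import Counter

# Toy historic records, skewed toward one group: (gender, promoted) pairs.
records = [("M", 1)] * 70 + [("M", 0)] * 10 + [("F", 1)] * 15 + [("F", 0)] * 5

counts = Counter(gender for gender, _ in records)   # {'M': 80, 'F': 20}
total, n_groups = len(records), len(counts)

# Inverse-frequency weights: each group ends up with equal total weight.
weights = {g: total / (n_groups * c) for g, c in counts.items()}

# Attach a sample weight to every record before model fitting.
weighted_records = [(g, y, weights[g]) for g, y in records]
print(weights)  # {'M': 0.625, 'F': 2.5} -> each group's weights sum to 50.0

Oversampling the minority group or generating synthetic records, as the excerpt mentions, are alternative routes to the same end.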

An Ethics Committee can help not only maintain an organization’s values-based intentions but also increase transparency into how it uses AI. Even when it’s addressed, AI bias can still be maddening and frustrating for end users, and most companies deploying AI today subject people to it without giving them much agency in the process. Consider the experience of using a mapping app. When travelers are simply told which route to take, the experience is stripped of agency; but when users are offered a set of alternate routes, they feel more confident in the selected route because they enjoyed more agency, or self-determination, in choosing it. Maximizing agency when AI is being used is another safeguard strong governance can help to ensure.

The info is here.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.

Tuesday, December 3, 2019

AI Ethics is All About Power

Khari Johnson
venturebeat.com
Originally published Nov 11, 2019

Here is an excerpt:

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has made only incremental progress, and funding for startups with Latinx or black founders still lags behind that for startups with white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

The info is here.

Editor's Note: The article covers a huge swath of information.

Thursday, November 7, 2019

Digital Ethics and the Blockchain

Dan Blum
ISACA Journal, Volume 2, 2018

Here is an excerpt:

Integrity and Transparency

Integrity and transparency are core values for delivering trust to prosperous markets. Blockchains can provide immutable land title records to improve property rights and growth in small economies, such as Honduras. In smart power grids, blockchain-enabled meters can replace inefficient centralized record-keeping systems for transparent energy trading. Businesses can keep transparent records for product provenance, production, distribution and sales. Forward-thinking governments are exploring use cases through which transparent, immutable blockchains could facilitate a lighter, more effective regulatory touch to holding industry accountable.

However, trade secrets and personal information should not be published openly on blockchains. Blockchain miners may reorder transactions to increase fees or delay certain business processes at the expense of others. Architects must leaven accountability and transparency with confidentiality and privacy. Developers (or regulators) should sometimes add a human touch to smart contracts to avoid rigid systems operating without any consumer safeguards.

The info is here.

Monday, October 28, 2019

The Ethics of Contentious Hard Forks in Blockchain Networks With Fixed Features

Tae Wan Kim and Ariel Zetlin-Jones
Front. Blockchain, 28 August 2019
https://doi.org/10.3389/fbloc.2019.00009

An advantage of blockchain protocols is that a decentralized community of users may each update and maintain a public ledger without the need for a trusted third party. Such modifications introduce important economic and ethical considerations that we believe have not been considered among the community of blockchain developers. We clarify the problem and provide one implementable ethical framework that such developers could use to determine which aspects should be immutable and which should not.

(cut)

3. A Normative Framework for Blockchain Design With Fixed Features

Which features of a blockchain protocol should or should not be alterable? To answer this question, we need a normative framework. Our framework is twofold: the substantive and the procedural. The substantive consists of two ethical principles: the generalization principle and the utility-enhancement principle. The procedural has three principles: publicity, revision and appeals, and regulation. All the principles are necessary conditions. The procedural principles help to collectively examine whether any application of the two substantive principles is reasonable. The set of the five principles as a whole is in line with the broadly Kantian deontological approach to justice and democracy (Kant, 1785). In particular, we are partly indebted to Daniels and Sabin's (2002) procedural approach to fair allocations of limited resources. Yet, our framework differs from theirs in several ways: the particular context we deal with is different, we replace the controversial “relevance” condition with our own representation of the Kantian generalization principle, and we add the utility-enhancement principle. Although we do not offer a fully fledged normative analysis of the given issue, we propose a possible normative framework for cryptocurrency communities.

Monday, October 14, 2019

Why we don’t always punish: Preferences for non-punitive responses to moral violations

Joseph Heffner & Oriel FeldmanHall
Scientific Reports, volume 9, Article number: 13219 (2019)

Abstract

While decades of research demonstrate that people punish unfair treatment, recent work illustrates that alternative, non-punitive responses may also be preferred. Across five studies (N = 1,010) we examine non-punitive methods for restoring justice. We find that in the wake of a fairness violation, compensation is preferred to punishment, and once maximal compensation is available, punishment is no longer the favored response. Furthermore, compensating the victim—as a method for restoring justice—also generalizes to judgments of more severe crimes: participants allocate more compensation to the victim as perceived severity of the crime increases. Why might someone refrain from punishing a perpetrator? We investigate one possible explanation, finding that punishment acts as a conduit for different moral signals depending on the social context in which it arises. When choosing partners for social exchange, there are stronger preferences for those who previously punished as third-party observers but not those who punished as victims. This is in part because third-parties are perceived as relatively more moral when they punish, while victims are not. Together, these findings demonstrate that non-punitive alternatives can act as effective avenues for restoring justice, while also highlighting that moral reputation hinges on whether punishment is enacted by victims or third-parties.

The research is here.

Readers may want to think about patients in psychotherapy and licensing board actions.

Monday, October 7, 2019

Ethics a distant second to profits in Silicon Valley

Gabriel Fairman
www.sdtimes.com
Originally published September 9, 2019

Here is an excerpt:

For ethics to become a part of the value system that drives behavior in Silicon Valley, it would have to be incentivized as such. I have a hard time envisioning a world where ethics can offer shareholders huge returns. Ethics is about doing the right thing, and the right thing and the lucrative thing don’t necessarily go hand in hand.

Everyone can understand ethics. Basic questions such as “Will this be good for the world in a year, 10 years or 20 years?” and “Would I want this for my kids?” are easy litmus tests to differentiate between ethical and unethical conduct. The challenge is that ethical considerations slow down development by raising challenges and concerns early on. Ethics is about amplifying potential problems that can be foreseen down the road.

On the other hand, venture-funded start-ups are about minimizing the ramifications of these problems as they move on quickly. How can ethics compete with billion-dollar exits? It can’t. Ethics are just this thing that we read about in articles or hear about in lectures. It is not driving day-to-day decision-making. You listen to people in boardrooms asking, “How will this impact our valuation?,” or “What is the ROI of this initiative?” but you don’t hear top-level execs brainstorming about how their product or company could be more ethical because there is no compensation tied to that. The way we have built our world, ethics are just fluff.

We are also extraordinarily good at separating private and public lives. Many people working at tech companies don’t allow their kids to use electronic devices ubiquitously or would not want their kids bossed around by an algorithm as they let go of full-time employee benefits. But they promote these things and further them because these things are highly profitable, not because they are fundamentally good. This key distinction between private and public behavior allows people to behave in wildly hypocritical ways, by helping advance the very things they do not want in their own homes.

The info is here.

Monday, August 12, 2019

Rural hospitals foundering in states that declined Obamacare

Michael Braga, Jennifer F. A. Borresen, Dak Le and Jonathan Riley
GateHouse Media
Originally published July 28, 2019

Here is an excerpt:

While experts agree embracing Obamacare is not a cure-all for rural hospitals and would not have saved many of those that closed, few believe it was wise to turn the money down.

The crisis facing rural America has been raging for decades and the carnage is not expected to end any time soon.

High rates of poverty in rural areas, combined with the loss of jobs, aging populations, lack of health insurance and competition from other struggling institutions will make it difficult for some rural hospitals to survive regardless of what government policies are implemented.

For some, there’s no point in trying. They say the widespread closures are the result of the free market economy doing its job and a continued shakeout would be helpful. But no rural community wants that shakeout to happen in its backyard.

“A hospital closure is a frightening thing for a small town,” said Patti Davis, president of the Oklahoma Hospital Association. “It places lives in jeopardy and has a domino effect on the community. Health care professionals leave, pharmacies can’t stay open, nursing homes have to close and residents are forced to rely on ambulances to take them to the next closest facility in their most vulnerable hours.”

The info is here.

Wednesday, August 7, 2019

Veil-of-Ignorance Reasoning Favors the Greater Good

Karen Huang, Joshua D. Greene, and Max Bazerman
PsyArXiv
Originally posted July 2, 2019

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

The research is here.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There's nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems you can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve to a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems to ensure such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.

Friday, May 10, 2019

An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Friday, April 26, 2019

EU beats Google to the punch in setting strategy for ethical A.I.

Elizabeth Schulze
www.CNBC.com
Originally posted April 8, 2019

Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving “trustworthy” artificial intelligence.

On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology.

“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. “It is only with trust that our society can fully benefit from technologies.”

The EU defines artificial intelligence as systems that show “intelligent behavior,” allowing them to analyze their environment and perform tasks with some degree of autonomy. AI is already transforming businesses in a variety of functions, like automating repetitive tasks and analyzing troves of data. But the technology raises a series of ethical questions, such as how to ensure algorithms are programmed without bias and how to hold AI accountable if something goes wrong.

The info is here.

Monday, April 15, 2019

Tech giants are seeking help on AI ethics. Where they seek it matters.

Dave Gershgorn
quartz.com
Originally posted March 30, 2019

Here is an excerpt:

Tech giants are starting to create mechanisms for outside experts to help them with AI ethics—but not always in the ways ethicists want. Google, for instance, announced the members of its new AI ethics council this week—such boards promise to be a rare opportunity for underrepresented groups to be heard. It faced criticism, however, for selecting Kay Coles James, the president of the conservative Heritage Foundation. James has made statements against the Equality Act, which would protect sexual orientation and gender identity as federally protected classes in the US. Those and other comments would seem to put her at odds with Google’s pitch as being a progressive and inclusive company. (Google declined Quartz’s request for comment.)

AI ethicist Joanna Bryson, one of the few members of Google’s new council who has an extensive background in the field, suggested that the inclusion of James helped the company make its ethics oversight more appealing to Republicans and conservative groups. Also on the council is Dyan Gibbens, who heads drone company Trumbull Unmanned and sat next to Donald Trump at a White House roundtable in 2017.

The info is here.