Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Accountability.

Friday, April 26, 2019

EU beats Google to the punch in setting strategy for ethical A.I.

Elizabeth Schulze
www.CNBC.com
Originally posted April 8, 2019

Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving “trustworthy” artificial intelligence.

On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology.

“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. “It is only with trust that our society can fully benefit from technologies.”

The EU defines artificial intelligence as systems that show “intelligent behavior,” allowing them to analyze their environment and perform tasks with some degree of autonomy. AI is already transforming businesses in a variety of functions, like automating repetitive tasks and analyzing troves of data. But the technology raises a series of ethical questions, such as how to ensure algorithms are programmed without bias and how to hold AI accountable if something goes wrong.

The info is here.

Social media giants no longer can avoid moral compass

Don Hepburn
thehill.com
Originally published April 1, 2019

Here is an excerpt:

There are genuine moral, legal and technical dilemmas in addressing the challenges raised by the ubiquitous nature of the not-so-new social media conglomerates. Why, then, are social media giants avoiding the moral compass, evading legal guidelines and ignoring technical solutions available to them? The answer is, their corporate culture refuses to be held accountable to the same standards the public has applied to all other global corporations for the past five decades.

A wholesale change of culture and leadership is required within the social media industry. The culture of “everything goes” because “we are the future” needs to be more than tweaked; it must come to an end. Like any large conglomerate, social media platforms cannot ignore the public’s demand that they act with some semblance of responsibility. Just like the early stages of the U.S. coal, oil and chemical industries, the social media industry is impacting not only our physical environment but the social good and public safety. No serious journalism organization would ever allow a stranger to write their own hate-filled stories (with photos) for their newspaper’s daily headline — that’s why there’s a position called editor-in-chief.

If social media giants insist they are open platforms, then anyone can purposefully exploit them for good or evil. But if social media platforms demonstrate no moral or ethical standards, they should be subject to some form of government regulation. We have regulatory environments where we see the need to protect the public good against the need for profit-driven enterprises; why should social media platforms be given preferential treatment?

The info is here.

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justification of unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level, and I wondered about systemic factors that might influence whether or not people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether or not people inside a company will be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control and improving them can make your company more honest, and help avert the reputation and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture’s Competitive Agility Index (a 7,000-company, 20-industry analysis) for the first time tangibly quantified how a decline in stakeholder trust impacts a company’s financial performance. The analysis reveals that more than half (54%) of companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches and C-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company’s index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10% on average.

The info is here.

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play, but because they are occupation- or discipline-specific, they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app with a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Wednesday, April 17, 2019

A New Model For AI Ethics In R&D

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

The traditional ethics oversight and compliance model has two major problems, whether it is used in biomedical research or in AI. First, a list of guiding principles—whether four or 40—just summarizes important ethical concerns without resolving the conflicts between them.

Say, for example, that the development of a life-saving AI diagnostic tool requires access to large sets of personal data. The principle of respecting autonomy—that is, respecting every individual’s rational, informed, and voluntary decision making about herself and her life—would demand consent for using that data. But the principle of beneficence—that is, doing good—would require that this tool be developed as quickly as possible to help those who are suffering, even if this means neglecting consent. Any board relying solely on these principles for guidance will inevitably face an ethical conflict, because no hierarchy ranks these principles.

Second, decisions handed down by these boards are problematic in themselves. Ethics boards are far removed from researchers, acting as all-powerful decision-makers. Once ethics boards make a decision, typically no appeals process exists and no other authority can validate their decision. Without effective guiding principles and appropriate due process, this model uses ethics boards to police researchers. It implies that researchers cannot be trusted and it focuses solely on blocking what the boards consider to be unethical.

We can develop a better model for AI ethics, one in which ethics complements and enhances research and development and where researchers are trusted collaborators with ethicists. This requires shifting our focus from principles and boards to ethical reasoning and teamwork, from ethics policing to ethics integration.

The info is here.

Wednesday, April 3, 2019

Artificial Morality

Robert Koehler
www.citywatchla.com
Originally posted March 21, 2019

Here is an excerpt:

What I see here is moral awakening scrambling for sociopolitical traction: Employees are standing for something larger than sheer personal interests, in the process pushing the Big Tech brass to think beyond their need for an endless flow of capital, consequences be damned.

This is happening across the country. A movement is percolating: Tech won’t build it!

“Across the technology industry,” the New York Times reported in October, “rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they built. At Google, Amazon, Microsoft and Salesforce, as well as at tech start-ups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.

“That’s a change from the past, when Silicon Valley workers typically developed products with little questioning about the social costs.”

What if moral thinking — not in books and philosophical tracts, but in the real world, both corporate and political — were as large and complex as technical thinking? It could no longer hide behind the cliché of the just war (and surely the next one we’re preparing for will be just), but would have to evaluate war itself — all wars, including the ones of the past 70 years or so, in the fullness of their costs and consequences — as well as look ahead to the kind of future we could create, depending on what decisions we make today.

Complex moral thinking doesn’t ignore the need to survive, financially and otherwise, in the present moment, but it stays calm in the face of that need and sees survival as a collective, not a competitive, enterprise.

The info is here.

Friday, March 22, 2019

Pop Culture, AI And Ethics

Phaedra Boinodiris
Forbes.com
Originally published February 24, 2019

Here is an excerpt:


5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – There is a group responsible for ensuring that REAL guests in the hotel are interviewed to determine their needs. When feedback is negative, this group implements a feedback loop to better understand preferences. They ensure that, at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to train with a larger, more diverse set of data. Ensure that the data collected about a user’s race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics.

Explainability and Enforced Transparency – If a guest doesn’t like the AI’s answer, she can ask how it made that recommendation and which dataset it used. A user must explicitly opt in to use the assistant, and the guest must be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest’s data, and a guest has the right to have it purged from the system at any time. Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience to the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto-deleted. Ensure that the AI can speak in the guest’s respective language.

The info is here.

Saturday, March 16, 2019

How Should AI Be Developed, Validated, and Implemented in Patient Care?

Michael Anderson and Susan Leigh Anderson
AMA J Ethics. 2019;21(2):E125-130.
doi: 10.1001/amajethics.2019.125.

Abstract

Should an artificial intelligence (AI) program that appears to have a better success rate than human pathologists be used to replace or augment humans in detecting cancer cells? We argue that some concerns—the “black-box” problem (ie, the unknowability of how output is derived from input) and automation bias (overreliance on clinical decision support systems)—are not significant from a patient’s perspective but that expertise in AI is required to properly evaluate test results.

Here is an excerpt:

Automation bias. Automation bias refers generally to a kind of complacency that sets in when a job once done by a health care professional is transferred to an AI program. We see nothing ethically or clinically wrong with automation, if the program achieves a virtually 100% success rate. If, however, the success rate is lower than that—92%, as in the case presented—it’s important that we have assurances that the program has quality input; in this case, that probably means that the AI program “learned” from a cross section of female patients of diverse ages and races. With diversity of input secured, what matters most, ethically and clinically, is that the AI program has a higher cancer-cell detection success rate than human pathologists.

Friday, March 15, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally posted February 15, 2019

Here is an excerpt:

Unjust accountability systems. When an organization’s processes for measuring employee contributions are perceived as unfair or unjust, we found it is 3.77 times more likely to have people withhold or distort information. We intentionally excluded compensation in our research, because incentive structures can sometimes play disproportionate roles in influencing behavior, and simply looked at how contribution was measured and evaluated through performance management systems, routine feedback processes, and cultural recognition. One interviewee captured a pervasive sentiment about how destructive these systems can be: “I don’t know why I work so hard. My boss doesn’t have a clue what I do. I fill out the appraisal forms at the end of the year, he signs them and sends them to HR. We pretend to have a discussion, and then we start over. It’s a rigged system.” Our study showed that when accountability processes are seen as unfair, people feel forced to embellish their accomplishments and hide, or make excuses for, their shortfalls. That sets the stage for dishonest behavior. Research on organizational injustice shows a direct correlation between an employee’s sense of fairness and a conscious choice to sabotage the organization. And more recent research confirms that unfair comparison among employees leads directly to unethical behavior.

Fortunately, our statistical models show that even a 20% improvement in performance management consistency, as evidenced by employees’ belief that their contributions have been fairly assessed against known standards, can improve truth-telling behavior by 12%.

The info is here.

Sunday, March 10, 2019

Rethinking Medical Ethics

Insights Team
Forbes.com
Originally posted February 11, 2019

Here is an excerpt:

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges.

Avoiding Bias

In 2017, the data analytics team at University of Chicago Medicine (UCM) used AI to predict how long a patient might stay in the hospital. The goal was to identify patients who could be released early, freeing up hospital resources and providing relief for the patient. A case manager would then be assigned to help sort out insurance, make sure the patient had a ride home, and otherwise smooth the way for early discharge.

In testing the system, the team found that the most accurate predictor of a patient’s length of stay was his or her ZIP code. This immediately raised red flags for the team: ZIP codes, they knew, were strongly correlated with a patient’s race and socioeconomic status. Relying on them would disproportionately affect African-Americans from Chicago’s poorest neighborhoods, who tended to stay in the hospital longer. The team decided that using the algorithm to assign case managers would be biased and unethical.
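The screening step the UCM team describes can be sketched in a few lines of Python. The example below is hypothetical: the dataset, column names, and the 0.3 cutoff are illustrative assumptions, not details from the article. It simply measures how strongly a candidate feature such as ZIP code is associated with a protected attribute before that feature is allowed into a predictive model.

# Hypothetical sketch: screen a candidate feature (e.g., ZIP code) for
# association with a protected attribute before using it as a model input.
# Column names and the 0.3 threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    """Strength of association between two categorical variables (0 to 1)."""
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

def screen_feature(df: pd.DataFrame, feature: str, protected: str,
                   threshold: float = 0.3) -> bool:
    """Return True if the feature passes this crude proxy-variable screen."""
    v = cramers_v(df[feature], df[protected])
    print(f"{feature} vs. {protected}: Cramer's V = {v:.2f}")
    return v < threshold

# Usage (with a hypothetical file and columns):
# df = pd.read_csv("length_of_stay.csv")
# if not screen_feature(df, "zip_code", "race"):
#     df = df.drop(columns=["zip_code"])   # drop the proxy feature

A screen like this would have flagged ZIP code as a proxy for race and socioeconomic status, which is the same judgment the UCM team reached by inspection.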

The info is here.

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In a case, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken into consideration before consulting Watson. First, it’s important for physicians such as Dr O to understand the technical challenges of accessing quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health care record systems are one culprit, causing missing or incomplete data. If some of the data that is available to Watson is inaccurate, then it could result in diagnosis and treatment recommendations that are flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns that not even human experts might be aware of, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.
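The automated inconsistency-checking Luxton alludes to can be illustrated with a short sketch. This is a hypothetical example rather than anything specific to Watson; the field names and plausibility rules are assumptions chosen for illustration.

# Hypothetical sketch: flag missing or implausible fields in a patient
# record before handing it to a clinical decision-support system.
# Field names and rules are illustrative assumptions, not Watson's.
from typing import Any

REQUIRED_FIELDS = ["age", "sex", "wbc_count", "diagnosis_date"]

def check_record(record: dict[str, Any]) -> list[str]:
    """Return human-readable warnings for one patient record."""
    warnings = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            warnings.append(f"missing value for '{field}'")
    age = record.get("age")
    if isinstance(age, (int, float)) and not 0 <= age <= 120:
        warnings.append(f"implausible age: {age}")
    wbc = record.get("wbc_count")
    if isinstance(wbc, (int, float)) and wbc < 0:
        warnings.append(f"negative white-blood-cell count: {wbc}")
    return warnings

# Usage:
# issues = check_record({"age": 230, "sex": "F", "wbc_count": 85000})
# if issues:
#     print("Review before relying on automated recommendations:", issues)

The point is the one made in the excerpt: a system can surface input errors (an impossible age, a negative lab value) that a person might overlook, but flawed or missing input still degrades whatever recommendation comes out the other end.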

Monday, February 18, 2019

State ethics director resigns after porn, misconduct allegations

Richard Belcher
WSB-TV2
Originally published February 8, 2019

The director of the state Ethics Commission has resigned -- with a $45,000 severance -- and it’s still unknown whether accusations against him have been substantiated.

In January, Channel 2 Action News and The Atlanta Journal-Constitution broke the story that staff members at the Ethics Commission wrote letters accusing Stefan Ritter of poor work habits and of watching pornography in the office.

Ritter was placed on leave with pay to allow time to investigate the complaints.

Ritter continued to draw his $181,000 salary while the accusations against him were investigated, but he and the commission cut a deal before the investigation was over.

The info is here.

Saturday, February 16, 2019

There’s No Such Thing as Free Will

Stephen Cave
The Atlantic
Originally published June 2016

Here is an excerpt:

What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.

This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?

(cut)

Determinism not only undermines blame, Smilansky argues; it also undermines praise. Imagine I do risk my life by jumping into enemy territory to perform a daring mission. Afterward, people will say that I had no choice, that my feats were merely, in Smilansky’s phrase, “an unfolding of the given,” and therefore hardly praiseworthy. And just as undermining blame would remove an obstacle to acting wickedly, so undermining praise would remove an incentive to do good. Our heroes would seem less inspiring, he argues, our achievements less noteworthy, and soon we would sink into decadence and despondency.

The info is here.

Thursday, January 31, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

Caroline Lester
The New Yorker
Originally posted January 24, 2019

Here is an excerpt:

The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The info is here.

Wednesday, January 9, 2019

'Should we even consider this?' WHO starts work on gene editing ethics

Agence France-Presse
Originally published 3 Dec 2018

The World Health Organization is creating a panel to study the implications of gene editing after a Chinese scientist controversially claimed to have created the world’s first genetically edited babies.

“It cannot just be done without clear guidelines,” Tedros Adhanom Ghebreyesus, the head of the UN health agency, said in Geneva.

The organisation was gathering experts to discuss rules and guidelines on “ethical and social safety issues”, added Tedros, a former Ethiopian health minister.

Tedros made the comments after a medical trial led by Chinese scientist He Jiankui claimed to have successfully altered the DNA of twin girls, whose father is HIV-positive, to prevent them from contracting the virus.

His experiment has prompted widespread condemnation from the scientific community in China and abroad, as well as a backlash from the Chinese government.

The info is here.

Wednesday, January 2, 2019

When Fox News staffers break ethics rules, discipline follows — or does it?

Margaret Sullivan
The Washington Post
Originally published November 29, 2018

There are ethical standards at Fox News, we’re told.

But just what they are, or how they’re enforced, is an enduring mystery.

When Sean Hannity and Jeanine Pirro appeared onstage with President Trump at a Missouri campaign rally, the network publicly acknowledged that this ran counter to its practices.

“Fox News does not condone any talent participating in campaign events,” the network said in a statement. “This was an unfortunate distraction and has been addressed.”

Or take what happened this week.

When the staff of “Fox & Friends” was found to have provided a pre-interview script for Scott Pruitt, then the Environmental Protection Agency head, the network frowned: “This is not standard practice whatsoever and the matter is being addressed internally with those involved.”

“Not standard practice” is putting it mildly, as the Daily Beast’s Maxwell Tani — who broke the story — noted, quoting David Hawkins, formerly of CBS News and CNN, who teaches journalism at Fordham University...

The info is here.

Tuesday, January 1, 2019

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Floridi, L., Cowls, J., Beltrametti, M. et al.
Minds & Machines (2018).
https://doi.org/10.1007/s11023-018-9482-5

Abstract

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Sunday, December 30, 2018

AI thinks like a corporation—and that’s worrying

Jonnie Penn
The Economist
Originally posted November 26, 2018

Here is an excerpt:

Perhaps as a result of this misguided impression, public debates continue today about what value, if any, the social sciences could bring to artificial-intelligence research. In Simon’s view, AI itself was born in social science.

David Runciman, a political scientist at the University of Cambridge, has argued that to understand AI, we must first understand how it operates within the capitalist system in which it is embedded. “Corporations are another form of artificial thinking-machine in that they are designed to be capable of taking decisions for themselves,” he explains.

“Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years,” says Mr Runciman. The worry is, these are systems we “never really learned how to control.”

After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.

Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc. It may be, given the costly labour required to identify and address these harms, that something akin to “ethics as a service” will emerge as a new cottage industry. Ms O’Neil, for example, now runs her own service that audits algorithms.

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.