Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Responsibility.

Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be free and responsible more than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or as lacking: conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.

The info is here.

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.

Monday, November 12, 2018

7 Ways Marketers Can Use Corporate Morality to Prepare for Future Data Privacy Laws

Patrick Hogan
Adweek.com
Originally posted October 10, 2018

Here is an excerpt:

Many organizations have already made responsible adjustments in how they communicate with users about data collection and use and have become compliant with recent laws. However, compliance does not always equal responsibility, and even though companies do require consent and provide information as required, linking to the terms of use, clicking a checkbox or double opting-in still may not be enough to stay ahead or protect consumers.

The best way to reduce the impact of the potential legislation is to take proactive steps now that set a new standard of responsibility in data use for your organization. Below are some measurable ways marketers can lead the way for the changing industry and create a foundational perception shift away from data and back to the acknowledgment of putting other humans first.

Create an action plan for complete data control and transparency

Set standards and protocols for your internal teams to determine how you are going to communicate with each other and your clients about data privacy, thus creating a path for all employees to follow and abide by moving forward.

Map data in your organization from receipt to storage to expulsion

Accountability is key. As a business, you should be able to know and speak to what is being done with the data that you are collecting throughout each stage of the process.

The info is here.

Monday, October 29, 2018

We hold people with power to account. Why not algorithms?

Hannah Fry
The Guardian
Originally published September 17, 2018

Here is an excerpt:

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

The info is here.

Saturday, October 20, 2018

Who should answer the ethical questions surrounding artificial intelligence?

Jack Karsten
Brookings.edu
Originally published September 14, 2018

Continuing advancements in artificial intelligence (AI) for use in both the public and private sectors warrant serious ethical consideration. As the capability of AI improves, the issues of transparency, fairness, privacy, and accountability associated with using these technologies become more serious. Many developers in the private sector acknowledge the threats AI poses and have created their own codes of ethics to monitor AI development responsibly. However, many experts believe government regulation may be required to resolve issues ranging from racial bias in facial recognition software to the use of autonomous weapons in warfare.

On Sept. 14, the Center for Technology Innovation hosted a panel discussion at the Brookings Institution to consider the ethical dilemmas of AI. Brookings scholars Christopher Meserole, Darrell West, and William Galston were joined by Charina Chou, the global policy lead for emerging technologies at Google, and Heather Patterson, a senior research scientist at Intel.

Enjoy the video


Monday, October 15, 2018

Big Island considers adding honesty policy to ethics code

Associated Press
Originally posted September 14, 2018

Big Island officials are considering adding language to the county's ethics code requiring officers and employees to provide the public with information that is accurate and factual.

The county council voted last week in support of the measure, requiring county employees to provide honest information to "the best of each officer's or employee's abilities and knowledge," West Hawaii Today reported. It's set to go before the council for final approval next week.

The current measure has changed from Puna Councilwoman Eileen O'Hara's original bill that simply stated "officers and employees should be truthful."

She introduced the measure in response to residents' concerns, but amended it to gain the support of her colleagues, she said.

The info is here.

Sunday, September 23, 2018

The radical moral implications of luck in human life

David Roberts
vox.com
Originally posted August 21, 2018

Here is an excerpt:

So, then, here you are. You turn 18. You are no longer a child; you are an adult, a moral agent, responsible for who you are and what you do.

By that time, your inheritance is enormous. You’ve not only been granted a genetic makeup, an ethnicity and appearance, by accidents of nature and parentage. You’ve also had your latent genetic traits “activated” in a very specific way through a specific upbringing, in a specific environment, with a specific set of experiences.

Your basic mental and emotional wiring is in place; you have certain instincts, predilections, fears, and cravings. You have a certain amount of money, certain social connections and opportunities, a certain family lineage. You’ve had a certain amount and quality of education. You’re a certain kind of person.

You are not responsible for any of that stuff; you weren’t yet capable of being responsible. You were just a kid (or worse, a teen). You didn’t choose your genes or your experiences. Both nature and the vast bulk of the nurture that matters happened to you.

And yet when you turn 18, it’s all yours — the whole inheritance, warts and all. By the time you are an autonomous, responsible moral agent, you have effectively been fired out of a cannon, on a particular trajectory. You wake up, morally speaking, midflight.

The info is here.

Friday, August 24, 2018

Government Ethics In The Trump Administration

Scott Simon
Host, Weekend Edition, NPR
Originally posted August 11, 2018

President Trump appointed what's considered the richest Cabinet in U.S. history, and reportedly, more than half of the president's Cabinet, current and former, have been the subject of ethics allegations. There's HUD Secretary Carson's pricey dining table, VA Secretary Shulkin's seats at Wimbledon, Scott Pruitt's housing sublet from a lobbyist, Interior Secretary Zinke's charter planes, Treasury Secretary Mnuchin taking a government plane to see the solar eclipse, and Commerce Secretary Wilbur Ross might need his own category. Forbes magazine reports on many people who have accused him of outright theft, saying - Forbes magazine - quote, "if even half of the accusations are legitimate, the current United States secretary of commerce could rank among the biggest grifters in American history."

The interview is here.

Friday, August 10, 2018

SAS officers given lessons in ‘morality’

Paul Maley
The Australian
Originally posted July 9, 2018

SAS officers are being given additional training in ethics, morality and courage in leadership as the army braces itself for a potentially damning report expected to find that a small number of troops may have committed war crimes during the decade-long fight in Afghanistan.

With the Inspector-General of the Australian Defence Force due within months to hand down his report into alleged battlefield atrocities committed by Diggers, The Australian can reveal that the SAS Regiment has been quietly instituting a series of reforms ahead of the findings.

The changes to special forces training reflect a widely held view within the army that any alleged misconduct committed by Australian troops was in part the result of a failure of leadership, as well as the transgression of individual soldiers.

Many of the reforms are focused on strengthening operational leadership and regimental culture, while others are designed to help special operations officers make ethical decisions even under the most challenging conditions.

Friday, July 27, 2018

Morality in the Machines

Erick Trickery
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.
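As a purely illustrative aside (not from the article): part of what transparency means here is that a risk tool's factors, weights, and cut-offs are published and inspectable, so a defendant or a court can see exactly how a "5 out of 7" was produced. A minimal sketch of such a tool might look like the following; every factor and weight below is invented for illustration and is not any jurisdiction's actual instrument.

```python
# Hypothetical illustration of a fully transparent pretrial risk score:
# every factor, weight, and threshold is published, so anyone can check
# the arithmetic behind a given score. Factors and weights are invented.

RISK_FACTORS = {
    "prior_failures_to_appear": 2,  # points per prior failure, capped below
    "pending_charge": 1,
    "unstable_housing": 1,
}
MAX_SCORE = 7

def risk_score(record):
    """Return (score, breakdown) so the reasoning behind the score is visible."""
    breakdown = {
        "prior_failures_to_appear":
            min(record.get("prior_failures_to_appear", 0), 2)
            * RISK_FACTORS["prior_failures_to_appear"],
        "pending_charge":
            RISK_FACTORS["pending_charge"] if record.get("pending_charge") else 0,
        "unstable_housing":
            RISK_FACTORS["unstable_housing"] if record.get("unstable_housing") else 0,
    }
    score = min(sum(breakdown.values()), MAX_SCORE)
    return score, breakdown

score, breakdown = risk_score(
    {"prior_failures_to_appear": 1, "pending_charge": True, "unstable_housing": False}
)
print(score)      # 3
print(breakdown)  # {'prior_failures_to_appear': 2, 'pending_charge': 1, 'unstable_housing': 0}
```

The point of the sketch is simply the contrast: unlike a proprietary black box, there is a hood to lift.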

Informed Consent and the Role of the Treating Physician

Holly Fernandez Lynch, Steven Joffe, and Eric A. Feldman
Originally posted June 21, 2018
N Engl J Med 2018; 378:2433-2438
DOI: 10.1056/NEJMhle1800071

Here are a few excerpts:

In 2017, the Pennsylvania Supreme Court ruled that informed consent must be obtained directly by the treating physician. The authors discuss the potential implications of this ruling and argue that a team-based approach to consent is better for patients and physicians.

(cut)

Implications in Pennsylvania and Beyond

Shinal has already had a profound effect in Pennsylvania, where it represents a substantial departure from typical consent practice.  More than half the physicians who responded to a recent survey conducted by the Pennsylvania Medical Society (PAMED) reported a change in the informed-consent process in their work setting; of that group, the vast majority expressed discontent with the effect of the new approach on patient flow and the way patients are served.  Medical centers throughout the state have changed their consent policies, precluding nonphysicians from obtaining patient consent to the procedures specified in the MCARE Act and sometimes restricting the involvement of physician trainees.  Some Pennsylvania institutions have also applied the Shinal holding to research, in light of the reference in the MCARE Act to experimental products and uses, despite the clear policy of the Food and Drug Administration (FDA) allowing investigators to involve other staff in the consent process.

(cut)

Selected State Informed-Consent Laws.

Although the Shinal decision is not binding outside of Pennsylvania, cases bearing on critical ethical dimensions of consent have a history of influence beyond their own jurisdictions.

The information is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.

The information is here.

Wednesday, July 11, 2018

The Lifespan of a Lie

Ben Blum
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.

(cut)

But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.

The article is here.

Tuesday, July 10, 2018

Google to disclose ethical framework on use of AI

Richard Walters
The Financial Times
Originally published June 3, 2018

Here is an excerpt:

However, Google already uses AI in other ways that have drawn criticism, leading experts in the field and consumer activists to call on it to set far more stringent ethical guidelines that go well beyond not working with the military.

Stuart Russell, a professor of AI at the University of California, Berkeley, pointed to the company’s image search feature as an example of a widely used service that perpetuates preconceptions about the world based on the data in Google’s search index. For instance, a search for “CEOs” returns almost all white faces, he said.

“Google has a particular responsibility in this area because the output of its algorithms is so pervasive in the online world,” he said. “They have to think about the output of their algorithms as a kind of ‘speech act’ that has an effect on the world, and to work out how to make that effect beneficial.”

The information is here.

Monday, June 11, 2018

Can Morality Be Engineered In Artificial General Intelligence Systems?

Abhijeet Katte
Analytics India Magazine
Originally published May 10, 2018

Here is an excerpt:

This report, Engineering Moral Agents – from Human Morality to Artificial Morality, discusses challenges in engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide background, including philosophy. AGI-focused research is evolving into the formalization of moral theories to act as a base for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland talked about a project on teaching formal ethics to computer-science students wherein the group was involved in building a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers, states that today there is a real need for a functional system of ethical reasoning, as AI systems that function as part of our society are ready to be deployed. One of the suggestions is that every assisted-living AI system have a “Why did you do that?” button which, when pressed, causes the robot to explain why it carried out the previous action.

The information is here.
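A purely hypothetical sketch of that suggestion (the names and structure below are mine, not the study's): the system keeps a human-readable reason alongside every action it takes, so pressing the button simply replays the justification for the most recent action.

```python
# Minimal, hypothetical sketch of the "Why did you do that?" idea: the
# system records a reason with every action, so the button can replay
# the justification for the latest action it took.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LoggedAction:
    action: str
    reason: str

@dataclass
class AssistedLivingAgent:
    log: List[LoggedAction] = field(default_factory=list)

    def act(self, action: str, reason: str) -> None:
        # Every action must be recorded together with the reason for taking it.
        self.log.append(LoggedAction(action, reason))

    def why_did_you_do_that(self) -> str:
        if not self.log:
            return "I have not taken any actions yet."
        last = self.log[-1]
        return f"I {last.action} because {last.reason}."

agent = AssistedLivingAgent()
agent.act("closed the blinds",
          "the indoor light level exceeded the resident's preferred evening setting")
print(agent.why_did_you_do_that())
# I closed the blinds because the indoor light level exceeded the resident's
# preferred evening setting.
```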

Monday, June 4, 2018

Human-sounding Google Assistant sparks ethics questions

The Straits Times
Originally published May 9, 2018

Here are some excerpts:

The new Google digital assistant converses so naturally it may seem like a real person.

The unveiling of the natural-sounding robo-assistant by the tech giant this week wowed some observers, but left others fretting over the ethics of how the human-seeming software might be used.

(cut)

The Duplex demonstration was quickly followed by debate over whether people answering phones should be told when they are speaking to human-sounding software and how the technology might be abused in the form of more convincing "robocalls" by marketers or political campaigns.

(cut)

Digital assistants making arrangements for people also raises the question of who is responsible for mistakes, such as a no-show or cancellation fee for an appointment set for the wrong time.

The information is here.

Sunday, June 3, 2018

Hostile environment: The dark side of nudge theory

Nick Barrett
politics.co.uk
Originally posted May 1, 2018

Here is an excerpt:

Just as a website can use a big yellow button to make buying a book or signing up to a newsletter inviting, governments can use nudge theory to make saving money for your pension easy and user-friendly. But government can also establish dark patterns of its own, and the biggest government dark pattern of all is the hostile environment policy, established in 2012 to encourage migrants to leave the country.

The policy meant that without the right paperwork, people were deprived of health services, employment rights and access to housing and effectively excluded from mainstream society. They were not barred. The circumstances were simply created to nudge them into leaving the country.

For six years the hostile environment persecuted the least visible among us. It was only when its effects on the Windrush generation were revealed that the policy’s inherent prejudice became clear to all. What could once be seen as firm but fair suddenly looked cruel and unusual. These measures might have been defensible if the legal migration process hadn’t been turned into a painfully punitive process for anybody arriving from outside of the EU.

The information is here.

Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable. But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions. And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision. It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making then, what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.

The article is here.
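To make that criterion concrete, here is a minimal sketch (my own illustration, with hypothetical data) of how "sufficiently accurate" could be checked in practice: compare the system's predictions against a trained human's on the same held-out cases, and against the best readily available alternative.

```python
# Illustrative sketch only: one way to operationalise the "sufficiently
# accurate" justification described in the article. All names and figures
# here are hypothetical.

def accuracy(predictions, outcomes):
    """Fraction of predictions that matched the observed outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

def prediction_use_is_justified(system_preds, human_preds, outcomes,
                                best_alternative_accuracy=None):
    """Treat the system as justified for decision support if
    (1) it is at least as accurate as a trained human on the same cases, and
    (2) no readily available alternative system is more accurate."""
    system_acc = accuracy(system_preds, outcomes)
    human_acc = accuracy(human_preds, outcomes)
    if system_acc < human_acc:
        return False
    if best_alternative_accuracy is not None and best_alternative_accuracy > system_acc:
        return False
    return True

# Hypothetical held-out cases: 1 = condition recurred, 0 = it did not.
outcomes     = [1, 0, 1, 1, 0, 0, 1, 0]
system_preds = [1, 0, 1, 1, 0, 1, 1, 0]   # 7/8 correct
human_preds  = [1, 0, 0, 1, 0, 1, 1, 0]   # 6/8 correct

print(prediction_use_is_justified(system_preds, human_preds, outcomes))  # True
```

On this view, the justification lives entirely in the comparison of track records, not in an account of how the system arrives at any individual prediction.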

Friday, May 11, 2018

AI experts want government algorithms to be studied like environmental hazards

Dave Gershgorn
Quartz (www.qz.com)
Originally published April 9, 2018

Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions.

AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.

“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” the report said. “The public will have less insight into how agencies function, and have less power to question or appeal decisions.”

The information is here.