Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Accountability.

Wednesday, October 23, 2019

Supreme Court Ethics Reform

Johanna Kalb and Alicia Bannon
Brennan Center for Justice
Originally published September 24, 2019

Today, the nine justices on the Supreme Court are the only U.S. judges — state or federal — not governed by a code of ethical conduct. But that may be about to change. Justice Elena Kagan recently testified during a congressional budget hearing that Chief Justice John Roberts is exploring whether to develop an ethical code for the Court. This was big news, given that the chief justice has previously rejected the need for a Supreme Court ethics code.

In fact, however, the Supreme Court regularly faces challenging ethical questions, and because of their crucial and prominent role, the justices receive intense public scrutiny for their choices. Over the last two decades, almost all members of the Supreme Court have been criticized for engaging in behaviors that are forbidden to other federal court judges, including participating in partisan convenings or fundraisers, accepting expensive gifts or travel, making partisan comments at public events or in the media, or failing to recuse themselves from cases involving apparent conflicts of interest, either financial or personal. Congress has also taken notice of the problem. The For the People Act, which was passed in March 2019 by the House of Representatives, included the latest of a series of proposals by both Republican and Democratic legislators to clarify the ethical standards that apply to the justices’ behavior.

The info is here.

Sunday, September 29, 2019

The brain, the criminal and the courts

[Figure: Mentions of neuroscience in judicial opinions in US cases, 2005-2015, for capital homicides, noncapital homicides, and other felonies. Combined mentions rose from 101 in 2005 to more than 400 in 2015, with growth in all three categories.]

Eryn Brown
knowablemagazine.org
Originally posted August 30, 2019

Here is an excerpt:

It remains to be seen if all this research will yield actionable results. In 2018, Hoffman, who has been a leader in neurolaw research, wrote a paper discussing potential breakthroughs and dividing them into three categories: near term, long term and “never happening.” He predicted that neuroscientists are likely to improve existing tools for chronic pain detection in the near future, and in the next 10 to 50 years he believes they’ll reliably be able to detect memories and lies, and to determine brain maturity.

But brain science will never gain a full understanding of addiction, he suggested, or lead courts to abandon notions of responsibility or free will (a prospect that gives many philosophers and legal scholars pause).

Many realize that no matter how good neuroscientists get at teasing out the links between brain biology and human behavior, applying neuroscientific evidence to the law will always be tricky. One concern is that brain studies ordered after the fact may not shed light on a defendant’s motivations and behavior at the time a crime was committed — which is what matters in court. Another concern is that studies of how an average brain works do not always provide reliable information on how a specific individual’s brain works.
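
Editor's note: The second concern, that group-level findings transfer poorly to individuals, is a general statistical point. The short simulation below (with made-up numbers, not drawn from any actual brain study) is a minimal sketch of how a clear difference between group averages can still leave the classification of any single person close to a coin flip.

```python
import random
import statistics

random.seed(1)

# Hypothetical numbers: a brain measure differs between two groups on average,
# yet the distributions overlap so much that it says little about any one person.
group_a = [random.gauss(100, 15) for _ in range(10_000)]   # e.g., one group of defendants
group_b = [random.gauss(108, 15) for _ in range(10_000)]   # e.g., a comparison group

print(round(statistics.mean(group_a)), round(statistics.mean(group_b)))  # ~100 vs ~108

# Classify individuals by the midpoint between the group means.
cutoff = 104
correct = sum(x < cutoff for x in group_a) + sum(x >= cutoff for x in group_b)
print(correct / 20_000)  # roughly 0.60 -- far from reliable for a single individual
```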

“The most important question is whether the evidence is legally relevant. That is, does it help answer a precise legal question?” says Stephen J. Morse, a scholar of law and psychiatry at the University of Pennsylvania. He is in the camp that believes that neuroscience will never revolutionize the law, because “actions speak louder than images,” and that in a legal setting, “if there is a disjunct between what the neuroscience shows and what the behavior shows, you’ve got to believe the behavior.” He worries about the prospect of “neurohype” and of attorneys who overstate the scientific evidence.

The info is here.

Monday, September 16, 2019

Sex misconduct claims up 62% against California doctors

Vandana Ravikumar
USAToday.com
Originally posted August 12, 2019

The number of complaints against California physicians for sexual misconduct has risen by 62% since the fall of 2017, according to a Los Angeles Times investigation.

The investigation, published Monday, found that the rise in complaints coincides with the beginning of the #MeToo movement, which encouraged victims of sexual misconduct or assault to speak out about their experiences. Though complaints of sexual misconduct against physicians are small in number, they are among the fastest growing types of allegations.

Recent high-profile incidents of sexual misconduct involving medical professionals were also a catalyst, the Times reported. Those cases include the abuses of Larry Nassar, a former USA Gymnastics doctor who was sentenced in 2018 to 40 to 175 years in prison for molesting hundreds of young athletes.

That same year, hundreds of women accused former University of Southern California gynecologist George Tyndall of inappropriate behavior. Tyndall, who worked at the university for nearly three decades, was recently charged with sexually assaulting 16 women.

The info is here.

Thursday, September 5, 2019

Allegations of sexual assault, cocaine use among SEAL teams prompt 'culture' review

Barbara Starr
CNN.com
Originally posted August 12, 2019

In the wake of several high-profile scandals, including allegations of sexual assault and cocaine use against Navy SEAL team members, the four-star general in charge of all US special operations has ordered a review of the culture and ethics of the elite units.

"Recent incidents have called our culture and ethics into question and threaten the trust placed in us," Gen. Richard Clarke, head of Special Operations Command, said in a memo to the entire force.
While the memo did not mention specific incidents, it comes after an entire SEAL platoon was recently sent home from Iraq following allegations of sexual assault and drinking alcohol during their down time -- which is against regulations.

Another recent case involved an internal Navy investigation that found members of SEAL Team 10 allegedly abused cocaine and other illicit substances while they were stationed in Virginia last year. The members were subsequently disciplined.

(cut)

"I don't know yet if we have a culture problem, I do know that we have a good order and discipline problem that must be addressed immediately," Green said.

In early July, a military court decided Navy SEAL team leader Eddie Gallagher, a one-time member of SEAL Team 7, would be demoted in rank and have his pay reduced for posing for a photo with a dead ISIS prisoner while he was serving in Iraq. Another SEAL was sentenced in June for his role in the 2017 death of Army Staff Sgt. Logan Melgar, a Green Beret, in Bamako, Mali.

The info is here.

Wednesday, September 4, 2019

AI Ethics Guidelines Every CIO Should Read

John McClurg
www.informationweek.com
Originally posted August 7, 2019

Here is an excerpt:

Because AI technology and use cases are changing so rapidly, chief information officers and other executives are going to find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework.

The framework won’t be able to account for all the situations an enterprise will encounter on its journey to increased AI adoption. But it can lay the groundwork for future executive discussions. With a framework in hand, they can confidently chart a sensible path forward that aligns with the company’s culture, risk tolerance, and business objectives.

The good news is that CIOs and executives don’t need to come up with an AI ethics framework out of thin air. Many smart thinkers in the AI world have been mulling over ethics issues for some time and have published several foundational guidelines that an organization can use to draft a framework that makes sense for their business. Here are five of the best resources to get technology and ethics leaders started.
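
Editor's note: None of the guidelines mentioned in the article prescribe a particular format, but as a rough, hypothetical illustration, an internal AI ethics framework can start life as something as simple as a structured pre-deployment checklist. The field names and the approval rule below are invented for this sketch, not taken from any published guideline.

```python
from dataclasses import dataclass, field

@dataclass
class AIEthicsReview:
    """Hypothetical pre-deployment checklist a CIO's office might require."""
    project: str
    data_sources_documented: bool = False   # provenance and consent for training data
    bias_testing_performed: bool = False    # disparate-impact checks across key groups
    human_in_the_loop: bool = False         # can a person override the model's decision?
    explanation_available: bool = False     # can outcomes be explained to affected users?
    open_issues: list = field(default_factory=list)

    def approved(self) -> bool:
        # A simple gate: every item must be satisfied and no issues left open.
        return all([
            self.data_sources_documented,
            self.bias_testing_performed,
            self.human_in_the_loop,
            self.explanation_available,
        ]) and not self.open_issues


review = AIEthicsReview(project="loan-scoring-model",
                        data_sources_documented=True,
                        bias_testing_performed=True,
                        human_in_the_loop=True,
                        explanation_available=False,
                        open_issues=["No appeal process defined"])
print(review.approved())  # False -- the framework blocks deployment until gaps are closed
```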

The info is here.

Friday, August 9, 2019

Advice for technologists on promoting AI ethics

Joe McKendrick
www.zdnet.com
Originally posted July 13, 2019

Ethics looms as a vexing issue when it comes to artificial intelligence (AI). Where does AI bias spring from, especially when it's unintentional? Are companies paying enough attention to it as they plunge full-force into AI development and deployment? Are they doing anything about it? Do they even know what to do about it?

Wringing bias and unintended consequences out of AI is making its way into the job descriptions of technology managers and professionals, especially as business leaders turn to them for guidance and judgment. The drive to ethical AI means an increased role for technologists in the business, as described in a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. The survey was able to make direct connections between AI ethics and business growth: if consumers sense a company is employing AI ethically, they'll keep coming back; if they sense unethical AI practices, their business is gone.

Competitive pressure is the reason businesses are pushing AI to its limits and risking crossing ethical lines. "The pressure to implement AI is fueling ethical issues," the Capgemini authors, led by Anne-Laure Thieullent, managing director of Capgemini's Artificial Intelligence & Analytics Group, state. "When we asked executives why ethical issues resulting from AI are an increasing problem, the top-ranked reason was the pressure to implement AI." Thirty-four percent of executives cited this pressure to stay ahead of AI trends.

The info is here.

Saturday, August 3, 2019

When Do Robots Have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Eddy Nahmias, Corey Allen, & Bradley Loveall
Georgia State University

From the Conclusion:

If future research bolsters our initial findings, then it would appear that when people consider whether agents are free and responsible, they are considering whether the agents have capacities to feel emotions more than whether they have conscious sensations or even capacities to deliberate or reason. It’s difficult to know whether people assume that phenomenal consciousness is required for or enhances capacities to deliberate and reason. And of course, we do not deny that cognitive capacities for self-reflection, imagination, and reasoning are crucial for free and responsible agency (see, e.g., Nahmias 2018). For instance, when considering agents that are assumed to have phenomenal consciousness, such as humans, it is likely that people’s attributions of free will and responsibility decrease in response to information that an agent has severely diminished reasoning capacities. But people seem to have intuitions that support the idea that an essential condition for free will is the capacity to experience conscious emotions. And we find it plausible that these intuitions indicate that people take it to be essential to being a free agent that one can feel the emotions involved in reactive attitudes and in genuinely caring about one’s choices and their outcomes.

(cut)

Perhaps fiction points us toward the truth here. In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, the robots do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own or others’ deaths, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation about how humans treat them, or our feeling such attitudes towards them, for instance when they harm humans.

The research paper is here.

Tuesday, July 16, 2019

Experts Recommend SCOTUS Adopt Code of Ethics to Promote Accountability

Jerry Lambe
www.lawandcrime.com
Originally posted June 24, 2019


Here is an excerpt:

While the high court’s justices must already abide by an ethical code, many of the experts noted that the current code does not sufficiently address modern ethical standards.

“The impartiality of our judiciary should be beyond reproach, so having a basic ethics code for its members to follow is a natural outgrowth of that common value, one that should be no less rigorously applied to our nation’s highest court,” Roth testified.

He added that disclosures from the court are particularly opaque, especially when sought out by the general public.

“To the outside observer, the current protocol makes it seem as if the judiciary is hiding something. […] With members of the judiciary already filling out and filing their reports digitally, the public should obtain them the same way, without having my organization act as the middleman,” Roth said.

Frost told the subcommittee that holding a hearing on the topic was a good first step in the process.

“Part of what I care about is not just the reality of impartial and fair justice, but the public’s perception of the courts,” she said, adding, “There have been signals by the justices that the court is considering rethinking adopting a code of ethics.”

The info is here.

Monday, July 1, 2019

House Panel Subpoenas Kellyanne Conway over ‘Egregious’ Ethics Violations

Jack Crowe
The National Review
Originally posted June 26, 2019


Here is an excerpt:

Henry J. Kerner, the special counsel, whose role is unrelated to Robert Mueller’s investigation, argued in his Wednesday testimony that Conway’s repeated violations of the Hatch Act — which stem from her endorsement of Republican congressional candidates during television interviews and on Twitter — created an “unprecedented challenge” to his office’s ability to enforce federal law.

Conway has dismissed the accusations of ethics violations as an unprecedented and politically motivated attack on the administration.

“If you’re trying to silence me through the Hatch Act, it’s not going to work,” Conway said when asked about her alleged violations during a May interview, adding “let me know when the jail sentence starts.”

Kerner, in his letter to the president and in his testimony, argued that Conway’s refusal to accept responsibility created a dangerous precedent and was further reason to dismiss her.

Conway’s repeated violations, “combined with her unrepentant attitude, are unacceptable from any federal employee, let alone one in such a prominent position,” Kerner testified.

Representative Elijah Cummings (D., Md.), who chairs the Committee, said he is prepared to hold Conway in contempt if she defies the subpoena.

The info is here.

How do you teach a machine right from wrong? Addressing the morality within Artificial Intelligence

Joseph Brean
The Kingston Whig Standard
Originally published May 30, 2019

Here is an excerpt:

AI “will touch or transform every sector and industry in Canada,” the government of Canada said in a news release in mid-May, as it named 15 experts to a new advisory council on artificial intelligence, focused on ethical concerns. Their goal will be to “increase trust and accountability in AI while protecting our democratic values, processes and institutions,” and to ensure Canada has a “human-centric approach to AI, grounded in human rights, transparency and openness.”

It is a curious project, helping computers be more accountable and trustworthy. But here we are. Artificial intelligence has disrupted the basic moral question of how to assign responsibility after decisions are made, according to David Gunkel, a philosopher of robotics and ethics at Northern Illinois University. He calls this the “responsibility gap” of artificial intelligence.

“Who is able to answer for something going right or wrong?” Gunkel said. The answer, increasingly, is no one.

It is a familiar problem that is finding new expressions. One example was the 2008 financial crisis, which reflected the disastrous scope of automated decisions. Gunkel also points to the success of Google’s AlphaGo, a computer program that has beaten the world’s best players at the famously complex board game Go. Go has too many possible moves for a computer to calculate and evaluate them all, so the program uses a strategy of “deep learning” to reinforce promising moves, thereby approximating human intuition. So when it won against the world’s top players, such as top-ranked Ke Jie in 2017, there was confusion about who deserved the credit. Even the programmers could not account for the victory. They had not taught AlphaGo to play Go. They had taught it to learn Go, which it did all by itself.
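
Editor's note: AlphaGo itself combines deep neural networks with Monte Carlo tree search, which is far beyond a short example, but the underlying idea of preferring moves that win more often in simulated self-play can be sketched with a toy game. The game (a tiny Nim variant) and the flat Monte Carlo search below are illustrative only and are not DeepMind's method.

```python
import random

def random_playout(pile, to_move):
    """Play out 'take 1-3 stones; whoever takes the last stone wins' at random."""
    player = to_move
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return player            # this player took the last stone and wins
        player = 1 - player

def best_move(pile, player, playouts=5000):
    """Flat Monte Carlo: rate each legal move by how often random self-play wins from it."""
    win_rates = {}
    for take in range(1, min(3, pile) + 1):
        if pile - take == 0:
            win_rates[take] = 1.0    # taking the last stone wins outright
            continue
        wins = sum(random_playout(pile - take, 1 - player) == player
                   for _ in range(playouts))
        win_rates[take] = wins / playouts
    return max(win_rates, key=win_rates.get), win_rates

move, rates = best_move(pile=10, player=0)
print(move, rates)  # the estimates should usually favor taking 2, leaving a multiple of 4
```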

The info is here.

Tuesday, June 18, 2019

A tech challenge? Fear not, many AI issues boil down to ethics

Peter Montagnon
www.ft.com
Originally posted June 3, 2019

Here is an excerpt:

Ethics are particularly important when technology enters the governance agenda. Machines may be capable of complex calculation but they are so far unable to make qualitative or moral judgments.

Also, the use and manipulation of a massive amount of data creates an information asymmetry. This confers power on those who control it at the potential expense of those who are the subject of it.

Ultimately there must always be human accountability for the decisions that machines originate.

In the corporate world, the board is where accountability resides. No one can escape this. To exercise their responsibilities, directors do not need to be as expert as tech teams. For sure, they need to be familiar with the scope of technology used by their companies, what it can and cannot do, and where the risks and opportunities lie.

For that they may need trustworthy advice from either the chief technology officer or external experts, but the decisions will generally be about what is acceptable and what is not.

The risks may well be of a human rather than a tech kind. With the motor industry, one risk with semi-automated vehicles is that the owners of such cars will think they can do more on autopilot than they can. It seems most of us are bad at reading instructions and will need clear warnings, perhaps to the point where the car may even seem disappointing.

The info is here.


Psychologists Mitchell and Jessen called to testify about ‘torture’ techniques in 9/11 tribunals

Thomas Clouse
www.spokesman.com
Originally posted May 20, 2019

Two Spokane psychologists who devised the “enhanced interrogation” techniques that a federal judge later said constituted torture could testify publicly for the first time at a military tribunal at Guantanamo Bay, Cuba, that is trying five men charged with helping to plan and assist in the 9/11 attacks.

James E. Mitchell and John “Bruce” Jessen are among a dozen government-approved witnesses for the defense at the military tribunal. Mitchell and Jessen’s company was paid about $81 million by the CIA for providing and sometimes carrying out the interrogation techniques, which included waterboarding, during the early days of the post 9/11 war on terror.

“This will be the first time Dr. Mitchell and Dr. Jessen will have to testify in a criminal proceeding about the torture program they implemented,” said James Connell, a lawyer for Ammar al Baluchi, one of the five Guantanamo prisoners.

Both Mitchell and Jessen were deposed but were never forced to testify as part of a civil suit filed in 2015 in Spokane by the ACLU on behalf of three former CIA prisoners, Gul Rahman, Suleiman Abdullah Salim and Mohamed Ahmed Ben Soud.

According to court records, Rahman was interrogated in a dungeon-like Afghanistan prison in isolation, subjected to darkness and extreme cold water, and eventually died of hypothermia. The other two men are now free.

The U.S. government settled that civil suit in August 2017 just weeks before it was scheduled for trial in Spokane before U.S. District Court Judge Justin Quackenbush.

The info is here.

Thursday, May 30, 2019

How Big Tech is struggling with the ethics of AI

Madhumita Murgia & Siddarth Shrikanth
Financial Times
Originally posted April 28, 2019

Here is an excerpt:

The development and application of AI is causing huge divisions both inside and outside tech companies, and Google is not alone in struggling to find an ethical approach.

The companies that are leading research into AI in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.

For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do so. They have also been attacked for the algorithmic bias of their programmes, where computers inadvertently propagate bias through unfair or corrupt data inputs.
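
Editor's note: The mechanism described here, bias propagating through unfair or skewed data inputs, is easy to demonstrate with a synthetic example. The "historical" data and the naive model below are entirely hypothetical.

```python
import random

random.seed(0)

# Synthetic "historical" decisions: qualification is distributed identically in both
# groups, but past decision-makers approved group B less often -- that is the
# unfairness baked into the data.
def historical_decision(group, qualified):
    base = 0.7 if qualified else 0.2
    penalty = 0.25 if group == "B" else 0.0
    return random.random() < base - penalty

data = [(g, q, historical_decision(g, q))
        for g in ("A", "B") for q in (True, False) for _ in range(5000)]

# A naive "model": learn the historical approval rate for each (group, qualified) cell.
def fit(rows):
    counts = {}
    for g, q, approved in rows:
        n, k = counts.get((g, q), (0, 0))
        counts[(g, q)] = (n + 1, k + approved)
    return {key: k / n for key, (n, k) in counts.items()}

model = fit(data)
print("P(approve | qualified, group A):", round(model[("A", True)], 2))  # ~0.70
print("P(approve | qualified, group B):", round(model[("B", True)], 2))  # ~0.45
# Equally qualified applicants get different scores: the model has faithfully
# reproduced the bias that was already present in its training data.
```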

In response to criticism not only from campaigners and academics but also their own staff, companies have begun to self-regulate by trying to set up their own “AI ethics” initiatives that perform roles ranging from academic research — as in the case of Google-owned DeepMind’s Ethics and Society division — to formulating guidelines and convening external oversight panels.

The info is here.

Wednesday, May 29, 2019

The Problem with Facebook


Making Sense Podcast

Originally posted on March 27, 2019

In this episode of the Making Sense podcast, Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee has been a Silicon Valley investor for thirty-five years. He has cofounded successful venture funds including Elevation with U2’s Bono. He was a former mentor to Facebook CEO Mark Zuckerberg and helped recruit COO Sheryl Sandberg to the company. He holds a B.A. from Yale University and an M.B.A. from the Tuck School of Business at Dartmouth College.

The podcast is here.

The fundamental ethical problems with social media companies like Facebook and Google start about 20 minutes into the podcast.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There's nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems you can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve to a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems to ensure such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.

Friday, May 10, 2019

An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.

Friday, May 3, 2019

Real or artificial? Tech titans declare AI ethics concerns

Matt O'Brien and Rachel Lerman
Associated Press
Originally posted April 7, 2019

Here is an excerpt:

"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?

Google was hit with both questions when it formed a new board of outside advisers in late March to help guide how it uses AI in products. But instead of winning over potential critics, it sparked internal rancor. A little more than a week later, Google bowed to pressure from the backlash and dissolved the council.

The outside board fell apart in stages. One of the board's eight inaugural members quit within days and another quickly became the target of protests from Google employees who said her conservative views don't align with the company's professed values.

As thousands of employees called for the removal of Heritage Foundation President Kay Coles James, Google disbanded the board last week.

"It's become clear that in the current environment, (the council) can't function as we wanted," the company said in a statement.

The info is here.

Thursday, May 2, 2019

A Facebook request: Write a code of tech ethics

Mike Godwin
www.latimes.com
Originally published April 30, 2019

Facebook is preparing to pay a multi-billion-dollar fine and dealing with ongoing ire from all corners for its user privacy lapses, the viral transmission of lies during elections, and delivery of ads in ways that skew along gender and racial lines. To grapple with these problems (and to get ahead of the bad PR they created), Chief Executive Mark Zuckerberg has proposed that governments get together and set some laws and regulations for Facebook to follow.

But Zuckerberg should be aiming higher. The question isn’t just what rules should a reformed Facebook follow. The bigger question is what all the big tech companies’ relationships with users should look like. The framework needed can’t be created out of whole cloth just by new government regulation; it has to be grounded in professional ethics.

Doctors and lawyers, as they became increasingly professionalized in the 19th century, developed formal ethical codes that became the seeds of modern-day professional practice. Tech-company professionals should follow their example. An industry-wide code of ethics could guide companies through the big questions of privacy and harmful content.

The info is here.

Editor's note: Many social media companies engage in unethical behavior on a regular basis, typically revolving around lack of consent, lack of privacy standards, filter bubble (personalized algorithms) issues, lack of accountability, lack of transparency, harmful content, and third party use of data.

Tuesday, April 30, 2019

Ethics in AI Are Not Optional

Rob Daly
www.marketsmedia.com
Originally posted April 12, 2019

Artificial intelligence is a critical feature in the future of financial services, but firms should not be penny-wise and pound-foolish in their race to develop the most advanced offering possible, experts caution.

“You do not need to be on the frontier of technology if you are not a technology company,” said Greg Baxter, the chief digital officer at MetLife, in his keynote address during Celent’s annual Innovation and Insight Day. “You just have to permit your people to use the technology.”

More effort should be spent on developing the various policies that will govern the deployment of the technology, he added.

MetLife spends more time on ethics and legal than it does with technology, according to Baxter.

Firms should be wary when implementing AI in such a fashion that it alienates clients by being too intrusive and ruining the customer experience. “If data is the new currency, its credit line is trust,” said Baxter.

The info is here.

Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be free and responsible more than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or as lacking: conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.

The info is here.