Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, November 8, 2018

Code of Ethics Doesn’t Influence Decisions of Software Developers

Emerson Murphy-Hill, Justin Smith, & Matt Shipman
NC State Press Release
Originally released October 8, 2018

The world’s largest computing society, the Association for Computing Machinery (ACM), updated its code of ethics in July 2018 – but new research from North Carolina State University shows that the code of ethics does not appear to affect the decisions made by software developers.

“We applauded the decision to update the ACM code of ethics, but wanted to know whether it would actually make a difference,” says Emerson Murphy-Hill, co-author of a paper on the work and an adjunct associate professor of computer science at NC State.

“This issue is timely, given the tech-related ethics scandals in the news in recent years, such as when Volkswagen manipulated its technology that monitored vehicle emissions. And developers will continue to face work-related challenges that touch on ethical issues, such as the appropriate use of artificial intelligence.”

For the study, researchers developed 11 written scenarios involving ethical challenges, most of which were drawn from real-life ethical questions posted by users on the website Stack Overflow. The study included 105 U.S. software developers with five or more years of experience and 63 software engineering graduate students at a university. Half of the study participants were shown a copy of the ACM code of ethics; the other half were simply told that ethics are important as part of an introductory overview of the study. All study participants were then asked to read each scenario and state how they would respond to it.

“There was no significant difference in the results – having people review the code of ethics beforehand did not appear to influence their responses,” Murphy-Hill says.

The press release is here.

The research is here.

Monday, October 29, 2018

We hold people with power to account. Why not algorithms?

Hannah Fry
The Guardian
Originally published September 17, 2018

Here is an excerpt:

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

The info is here.

Friday, August 3, 2018

How AI is transforming the NHS

Ian Sample
The Guardian
Originally posted July 4, 2018

Here is an excerpt:

With artificial intelligence (AI), the painstaking task can be completed in minutes. For the past six months, Jena has used a Microsoft system called InnerEye to mark up scans automatically for prostate cancer patients. Men make up a third of the 2,500 cancer patients his department treats every year. When a scan is done, the images are anonymised, encrypted and sent to the InnerEye program. It outlines the prostate on each image, creates a 3D model, and sends the information back. For prostate cancer, the entire organ is irradiated.

The software learned how to mark up organs and tumours by training on scores of images from past patients that had been seen by experienced consultants. It already saves time for prostate cancer treatment. Brain tumours are next on the list.

Automating the process does more than save time. Because InnerEye trains on images marked up by leading experts, it should perform as well as a top consultant every time. The upshot is that treatment is delivered faster and more precisely. “We know that how well we do the contouring has an impact on the quality of the treatment,” Jena says. “The difference between good and less good treatment is how well we hit the tumour and how well we avoid the healthy tissues.”

The article is here.

Friday, December 30, 2016

Programmers are having a huge discussion about the unethical and illegal things they’ve been asked to do

Julie Bort
Business Insider
Originally published November 20, 2016

Here is an excerpt:

He pointed out that "there are hints" that developers will increasingly face some real heat in the years to come. He cited Volkswagen America's CEO, Michael Horn, who at first blamed software engineers for the company's emissions-cheating scandal during a Congressional hearing, claiming the coders had acted on their own "for whatever reason." Horn later resigned after US prosecutors accused the company of making this decision at the highest levels and then trying to cover it up.

But Martin pointed out, "The weird thing is, it was software developers who wrote that code. It was us. Some programmers wrote cheating code. Do you think they knew? I think they probably knew."

Martin finished with a fire-and-brimstone call to action in which he warned that one day, some software developer will do something that will cause a disaster that kills tens of thousands of people.

But Sourour points out that it's not just about accidentally killing people or deliberately polluting the air. Software has already been used by Wall Street firms to manipulate stock quotes.

The article is here.

Thursday, November 3, 2016

In the World of A.I. Ethics, the Answers Are Murky

Mike Brown
Inverse
Originally posted October 12, 2016

Here is an excerpt:

“We’re not issuing a formal code of ethics. No hard-coded rules are really possible,” Raja Chatila, chair of the initiative’s executive committee, tells Inverse. “The final aim is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.”

It all sounds lovely, but surely a lot of this is ignoring cross-cultural differences. What if, culturally, you hold different values about how your money app should manage your checking account? A 2014 YouGov poll found that 63 percent of British citizens believed that, morally, people have a duty to contribute money to public services through taxation. In the United States, that figure was just 37 percent, with a majority instead responding that there was a stronger moral argument that people have a right to the money they earn. Is it even possible to come up with a single, universal code of ethics that could translate across cultures for advanced A.I.?

The article is here.

Friday, July 18, 2014

Electronic Health Records: First, Do No Harm?

EHRs are commonly promoted as boosting patient safety, but are we all being fooled?

By David F. Carr
InformationWeek
Originally published June 26, 2014

One of the top stated goals of the federal Meaningful Use program encouraging adoption of electronic health records (EHR) technology is to improve patient safety. But is there really a cause-and-effect relationship between digitizing health records and reducing medical errors? Poorly implemented health information technology can also introduce new errors, whether from scrambled data or confusing user interfaces, sometimes causing harm to flesh-and-blood patients.

The entire article is here.

Wednesday, March 27, 2013

How Does Technology Affect Business Ethics?

By Hans Fredrick
azcentral.com

The more integrated a piece of technology becomes in the way we do business, the more apparent the potential ethical conundrums posed by that technology become. Ethical business practices need to grow and evolve in step with technology. While new devices and advances may make the day-to-day operations of running a business easier, they also create challenges that the ethical businessperson must contend with.

Privacy

Privacy has become a much larger concern in the modern technological age. Business ethicists are still learning and debating how much privacy people are entitled to in the digital age, as are lawmakers. For instance, many employers had taken to the practice of requiring potential employees to provide them with the password to their Facebook pages. This opened up the door to potential privacy issues, not to mention discriminatory hiring practices. In 2012, a law was passed in California to prohibit this particular breach of privacy; but in some jurisdictions, the decision whether or not to ask for this information is still an ethical, rather than a legal matter.

The entire story is here.