Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Corporations.

Sunday, August 4, 2019

First Steps Towards an Ethics of Robots and Artificial Intelligence

John Tasioulas
King's College London

Abstract

This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognize that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.

From the section: Ethical Questions: Frames and Levels

Difficult questions arise as to how best to integrate these three modes of regulating RAIs, and there is a serious worry about the tendency of industry-based codes of ethics to upstage democratically enacted law in this domain, especially given the considerable political clout wielded by the small number of technology companies that are driving RAI-related developments. However, this very clout creates the ever-present danger that powerful corporations may be able to shape any resulting laws in ways favourable to their interests rather than the common good (Nemitz 2018, 7). Part of the difficulty here stems from the fact that three levels of ethical regulation inter-relate in complex ways. For example, it may be that there are strong moral reasons against adults creating or using a robot as a sexual partner (third level). But, out of respect for their individual autonomy, they should be legally free to do so (first level). However, there may also be good reasons to cultivate a social morality that generally frowns upon such activities (second level), so that the sale and public display of sex robots is legally constrained in various ways (through zoning laws, taxation, age and advertising restrictions, etc.) akin to the legal restrictions on cigarettes or gambling (first level, again). Given this complexity, there is no a priori assurance of a single best way of integrating the three levels of regulation, although there will nonetheless be an imperative to converge on some universal standards at the first and second levels where the matter being addressed demands a uniform solution across different national jurisdictional boundaries.

The paper is here.

Thursday, July 11, 2019

The Business of Health Care Depends on Exploiting Doctors and Nurses

Danielle Ofri
The New York Times
Originally published June 8, 2019

One resource seems infinite and free: the professionalism of caregivers.

You are at your daughter’s recital and you get a call that your elderly patient’s son needs to talk to you urgently.  A colleague has a family emergency and the hospital needs you to work a double shift.  Your patient’s M.R.I. isn’t covered and the only option is for you to call the insurance company and argue it out.  You’re only allotted 15 minutes for a visit, but your patient’s medical needs require 45.

These quandaries are standard issue for doctors and nurses.  Luckily, the response is usually standard issue as well: An overwhelming majority do the right thing for their patients, even at a high personal cost.

It is true that health care has become corporatized to an almost unrecognizable degree.  But it is also true that most clinicians remain committed to the ethics that brought them into the field in the first place.  This makes the hospital an inspiring place to work.

Increasingly, though, I’ve come to the uncomfortable realization that this ethic that I hold so dear is being cynically manipulated.

By now, corporate medicine has milked just about all the “efficiency” it can out of the system.  With mergers and streamlining, it has pushed the productivity numbers about as far as they can go.

But one resource that seems endless — and free — is the professional ethic of medical staff members.

This ethic holds the entire enterprise together.  If doctors and nurses clocked out when their paid hours were finished, the effect on patients would be calamitous.  Doctors and nurses know this, which is why they don’t shirk.  The system knows it, too, and takes advantage.

The demands on medical professionals have escalated relentlessly in the past few decades, without a commensurate expansion of time and resources.  For starters, patients are sicker these days.  The medical complexity per patient — the number and severity of chronic conditions — has steadily increased, meaning that medical encounters are becoming ever more involved.  They typically include more illnesses to treat, more medications to administer, more complications to handle — all in the same-length office or hospital visit.

The information is here.

Sunday, December 30, 2018

AI thinks like a corporation—and that’s worrying

Jonnie Penn
The Economist
Originally posted November 26, 2018

Here is an excerpt:

Perhaps as a result of this misguided impression, public debates continue today about what value, if any, the social sciences could bring to artificial-intelligence research. In Simon’s view, AI itself was born in social science.

David Runciman, a political scientist at the University of Cambridge, has argued that to understand AI, we must first understand how it operates within the capitalist system in which it is embedded. “Corporations are another form of artificial thinking-machine in that they are designed to be capable of taking decisions for themselves,” he explains.

“Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years,” says Mr Runciman. The worry is, these are systems we “never really learned how to control.”

After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.

Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc. It may be, given the costly labour required to identify and address these harms, that something akin to “ethics as a service” will emerge as a new cottage industry. Ms O’Neil, for example, now runs her own service that audits algorithms.
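Audits of the kind Ms O'Neil now sells typically begin with simple group-level comparisons of a model's outputs. The sketch below is only an illustration of that first step, written in Python; the predictions, group labels, and threshold are invented for this example and are not drawn from any audit described in the article.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Compare positive-outcome rates across demographic groups.

    predictions: iterable of 0/1 model decisions (e.g., "invite to interview").
    groups: iterable of group labels, same length as predictions.
    Returns (gap between highest and lowest rates, per-group rates).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical toy decisions from some scoring model.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A real audit goes much further (error rates by group, proxy variables, data provenance), but a crude gap like this is often where the conversation starts.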

The info is here.

Monday, November 12, 2018

7 Ways Marketers Can Use Corporate Morality to Prepare for Future Data Privacy Laws

Patrick Hogan
Adweek.com
Originally posted October 10, 2018

Here is an excerpt:

Many organizations have already made responsible adjustments in how they communicate with users about data collection and use, and have become compliant with recent laws. However, compliance does not always equal responsibility, and even though companies do require consent and provide information as required, linking to the terms of use, clicking a checkbox or double opting-in still may not be enough to stay ahead or protect consumers.

The best way to reduce the impact of the potential legislation is to take proactive steps now that set a new standard of responsibility in data use for your organization. Below are some measurable ways marketers can lead the way for the changing industry and create a foundational shift in perception away from data and back toward putting other humans first.

Create an action plan for complete data control and transparency

Set standards and protocols for your internal teams to determine how you are going to communicate with each other and your clients about data privacy, thus creating a path for all employees to follow and abide by moving forward.

Map data in your organization from receipt to storage to expulsion

Accountability is key. As a business, you should be able to know and speak to what is being done with the data that you are collecting throughout each stage of the process.
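One way to make that stage-by-stage accountability concrete is a machine-readable inventory of what happens to each category of data from receipt to expulsion. The sketch below is only an illustration; the lifecycle stages, field names, and retention policy are hypothetical and not something the article specifies.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical lifecycle stages, from receipt through expulsion.
STAGES = ("collected", "stored", "processed", "shared", "deleted")

@dataclass
class DataRecord:
    """Tracks what happens to one category of personal data."""
    name: str              # e.g. "email_address"
    purpose: str           # why it was collected
    retention_days: int    # how long it may be kept
    history: list = field(default_factory=list)

    def log(self, stage: str, when: date, note: str = "") -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.history.append((stage, when, note))

# Usage: a record the privacy team can query or export on request.
email = DataRecord("email_address", "newsletter delivery", retention_days=365)
email.log("collected", date(2018, 10, 1), "web signup form")
email.log("stored", date(2018, 10, 1), "CRM database")
email.log("deleted", date(2019, 10, 1), "retention period expired")
print(email.history)
```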

The info is here.

Friday, September 29, 2017

How Silicon Valley is erasing your individuality

Franklin Foer
Washington Post
Originally posted September 8, 2017

Here is an excerpt:

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, and that isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, it’s clear that their worldview is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies think we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. (“Facebook stands for bringing us closer together and building a global community,” Zuckerberg wrote in one of his many manifestos.) By stitching the world together, they can cure its ills.

Rhetorically, the tech companies gesture toward individuality — to the empowerment of the “user” — but their worldview rolls over it. Even the ubiquitous invocation of users is telling: a passive, bureaucratic description of us. The big tech companies (the Europeans have lumped them together as GAFA: Google, Apple, Facebook, Amazon) are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility toward intellectual property. In the realm of economics, they justify monopoly by suggesting that competition merely distracts from the important problems like erasing language barriers and building artificial brains. Companies should “transcend the daily brute struggle for survival,” as Facebook investor Peter Thiel has put it.

The article is here.

Sunday, May 28, 2017

CRISPR Makes it Clear: US Needs a Biology Strategy, FAST

Amy Webb
Wired
Originally published

Here is an excerpt:

Crispr can be used to engineer agricultural products like wheat, rice, and animals to withstand the effects of climate change. Seeds can be engineered to produce far greater yields in tiny spaces, while animals can be edited to create triple their usual muscle mass. This could dramatically change global agricultural trade and cause widespread geopolitical destabilization. Or, with advance planning, this technology could help the US forge new alliances.

How comfortable do you feel knowing that there is no group coordinating a national biology strategy in the US, and that a single for-profit company holds a critical mass of intellectual property rights to the future of genomic editing?

While I admire Zhang's undeniable smarts and creativity, for-profit companies don't have a mandate to balance the tension between commercial interests and what's good for humanity; there is no mechanism to ensure that they'll put our longer-term best interests first.

The article is here.

Wednesday, April 5, 2017

Canada passes genetic ‘anti-discrimination’ law

Xavier Symons
BioEdge
Originally published 10 March 2017

Canada’s House of Commons has passed a controversial new law that prevents corporations from demanding genetic information from potential employees or customers.

The law, known as ‘Bill S-201’, makes it illegal for companies to deny someone a job if they refuse a genetic test, and also prevents insurance companies from making new customer policies conditional on the supply of genetic information. Insurance companies will no longer be able to solicit genetic tests so as to determine customer premiums.

Critics of the bill said that insurance premiums would skyrocket, in some cases up to 30 or 50 per cent, if companies are prevented from obtaining genetic data. And Prime Minister Justin Trudeau labelled the proposed legislation “unconstitutional” as it impinges on what he believes should be a matter for individual provinces to regulate.

The article is here.

Saturday, December 3, 2016

Data Ethics: The New Competitive Advantage

Gry Hasselbalch
Tech Crunch
Originally posted November 14, 2016

Here is an excerpt:

What is data ethics?

Ethical companies in today’s big data era are doing more than just complying with data protection legislation. They also follow the spirit and vision of the legislation by listening closely to their customers. They’re implementing credible and clear transparency policies for data management. They’re only processing necessary data and developing privacy-aware corporate cultures and organizational structures. Some are developing products and services using Privacy by Design.
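"Only processing necessary data" can be enforced mechanically as well as culturally, for instance by stripping every field that is not on an explicit allow-list before a record is stored or logged. A small illustrative sketch; the field names and allow-list are hypothetical, not drawn from the article.

```python
# Hypothetical allow-list of the only fields this service actually needs.
NECESSARY_FIELDS = {"user_id", "email", "consent_timestamp"}

def minimize(record: dict) -> dict:
    """Drop every field not strictly required before storage or logging."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "user_id": 42,
    "email": "a@example.com",
    "consent_timestamp": "2016-11-01T12:00:00Z",
    "browser_fingerprint": "f3a9c0",   # not needed, discarded
    "location": "55.68,12.57",         # not needed, discarded
}
print(minimize(raw))  # only the three allow-listed fields survive
```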

A data-ethical company sustains ethical values relating to data, asking: Is this something I myself would accept as a consumer? Is this something I want my children to grow up with? A company’s degree of “data ethics awareness” is not only crucial for survival in a market where consumers progressively set the bar, it’s also necessary for society as a whole. It plays a similar role as a company’s environmental conscience — essential for company survival, but also for the planet’s welfare. Yet there isn’t a one-size-fits-all solution, perfect for every ethical dilemma. We’re in an age of experimentation where laws, technology and, perhaps most importantly, our limits as individuals are tested and negotiated on a daily basis.

The article is here.

Tuesday, December 29, 2015

Is Anyone Competent to Regulate Artificial Intelligence?

By John Danaher
Philosophical Disquisitions
Posted November 21, 2015

Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon. If developed recklessly and improperly, it could pose a significant risk. Typically, we try to manage this risk/reward ratio through various regulatory mechanisms. But AI poses significant regulatory challenges. In a previous post, I outlined eight of these challenges. They were arranged into three main groups. The first consisted of definitional problems: what is AI anyway? The second consisted of ex ante problems: how could you safely guide the development of AI technology? And the third consisted of ex post problems: what happens once the technology is unleashed into the world? They are depicted in a diagram in the original post.

The entire blog post is here.

Wednesday, October 22, 2014

Are Workplace Personality Tests Fair?

By Lauren Weber and Elizabeth Dwoskin
The Wall Street Journal
Originally posted September 29, 2014

Here is an excerpt:

Workplace personality testing has become a $500 million-a-year business and is growing by 10% to 15% a year, estimates Hogan Assessment Systems Inc., a Tulsa, Okla., testing company. Xerox Corp. says tests have reduced attrition in high-turnover customer-service jobs by 20 or more days in some cases. Dialog Direct, of Highland Park, Mich., says the testing software allows the call-center operator and manager to predict with 80% accuracy which employees will get the highest performance scores.

But the rise of personality tests has sparked growing scrutiny of their effectiveness and fairness. Some companies have scaled back, changed or eliminated their use of such tests. Civil-rights groups long focused on overt forms of workplace discrimination claim that data-driven algorithms powering the tests could make jobs harder to get for people who don't conform to rigid formulas.
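A common first screen for the kind of disparate impact civil-rights groups worry about is the "four-fifths rule": flag any group whose selection rate falls below 80 percent of the highest group's rate. The sketch below uses invented applicant and selection counts purely for illustration; it is not data from the article.

```python
def four_fifths_check(selected, applicants):
    """Flag groups whose selection rate is below 80% of the highest rate.

    selected, applicants: dicts mapping group name -> counts.
    Returns {group: (selection_rate, passes_four_fifths_rule)}.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (rate, rate >= 0.8 * top) for g, rate in rates.items()}

# Invented applicant pools for a personality-screened job.
applicants = {"group_a": 200, "group_b": 150}
selected   = {"group_a": 80,  "group_b": 36}
print(four_fifths_check(selected, applicants))
# {'group_a': (0.4, True), 'group_b': (0.24, False)} -> group_b is flagged
```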

The entire article is here.

Wednesday, July 16, 2014

Executive Beware: The SEC Now Wants To Police Unethical Corporate Conduct

By John Carney and Jenna Felz
Forbes
Originally posted on June 26, 2014

With the appointment of Chairwoman Mary Jo White, President Obama made clear that a tough cop would run the Securities and Exchange Commission (SEC) and make enforcement a top priority.  This pro-enforcement, “tough cop,” stance is nothing new to an agency with a storied history of investigating and civilly prosecuting some of the biggest frauds on Wall Street.  But what is new is the Chairwoman’s tactical decision to redeploy significant enforcement resources to small, non-criminal violations.  Chairwoman White underscored the importance of the SEC’s role as “tough cop” especially in cases “when there is no criminal violation,” declaring that the SEC “is the only agency that can play that role.”  These bold statements signal the SEC’s renewed focus on policing not only illegal, but also unethical, conduct.

Saturday, December 7, 2013

Executive whistle blowing: what to do when no one listens

By Andrea Bonime-Blanc on Nov 5, 2013
The Ethical Corporation

I recently heard a keynote address by the former chief executive of Olympus, Michael Woodford. Woodford was the Olympus boss who within months of his appointment blew the whistle on the company’s multi-year $1bn-plus financial fraud. After exposing the company’s fraud, Woodford wrote about it in the book Exposure, soon to become a movie.

This example underscores the difficulty that all whistleblowers (or people who dare to speak up) experience within their organisations. Speaking up about perceived or actual wrongdoing can be one of the most difficult and vexing ethical, moral, legal and personal dilemmas anyone can face in their lifetime. The stories of those who have blown the whistle only to be ostracised, demoted or terminated are the stuff of the bestseller lists and box office blockbusters.

The entire article is here.