Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 9, 2018

How To Deliver Moral Leadership To Employees

John Baldoni
Forbes.com
Originally posted April 12, 2018

Here is an excerpt:

When it comes to moral authority, there is a disconnect between what is expected and what is delivered. So what can managers do to fulfill their employees' expectations?

First, let’s cover what not to do – preach! Employees don’t want words; they want actions. They also do not expect to have to follow a particular religious creed at work. Just as with the separation of church and state, there is an implied separation in the workplace, especially now with employees of many different (or no) faiths. (There are exceptions within privately held, family-run businesses.)

LRN advocates doing two things: first, pause to reflect on the situation as a means of connecting with values; second, act with humility. The former may be easier than the latter, but it is only with humility that leaders connect more realistically with others. If you act your title, you set up barriers to understanding. If you act as a leader, you open the door to greater understanding.

Dov Seidman, CEO of LRN, advises leaders to instill purpose, to elevate and inspire individuals, and to live their values. Most importantly, in this report Seidman challenges leaders to embrace moral challenges through what he calls “constant wrestling with the questions of right and wrong, fairness and justice, and with ethical dilemmas.”

The information is here.

Getting Ethics Training Right for Leaders and Employees

Deloitte
The Wall Street Journal
Originally posted April 9, 2018

Here is an excerpt:

Ethics training has needed a serious redesign for some time, and we are seeing three changes to make training more effective. First, many organizations recognize that compliance training is not enough. Simply knowing the rules and how to call the ethics helpline does not necessarily mean employees will raise their voice when they see ethical issues in the workplace. Even if employees want to say something, they often hesitate, worried that they may not be heard or, even worse, that voicing may lead to formal or informal retaliation. Overcoming this hesitation requires training that helps employees learn how to voice their values, with in-person, experiential practice in everyday workplace situations. More and more organizations are investing in this training as a way to simultaneously support employees, reduce risk, and proactively reshape their culture.

Another significant change in ethics training is a focus on helping senior leaders consider how their own ethical leadership shapes the culture. This requires leaders to examine the signals they send in their everyday behaviors, and how these signals make employees feel safe to voice ideas and concerns. In my training sessions with senior leaders, we use exercises that help them identify the leadership behaviors that create such trust, and those that may be counterproductive. We then redesign everyday processes, such as the weekly meeting or decision-making models, so that they encourage voice and explicitly elevate ethical concerns.

Third, more organizations are seeing the connection between ethics and a greater sense of purpose in the workplace. Employee engagement, performance, and retention often increase when employees feel they are contributing something beyond profit creation. Ethics training can help employees see this connection and practice the so-called giver strategies that help others, their organizations, and their own careers at the same time.

The article is here.

Tuesday, May 8, 2018

AI Without Borders: How To Create Universally Moral Machines

Abinash Tripathy
Forbes.com
Originally posted April 11, 2018

Here is an excerpt:

Ultimately, developing moral machines will be a learning process. It’s not surprising that early versions of advanced machine learning have adopted undesirable human traits. It is promising, however, that immense thought and care are being put into these issues. Pioneers including DeepMind, researchers at Duke University, the German government, and the Leverhulme Centre for the Future of Intelligence have invested research, experimentation, and thought into determining how best to model machines not after humans as they exist but after an ideal version of human intelligence.

Despite this care, there will always be those who use technological advancements with malicious intent. Organizations will need to prepare for the potential harm that can arise both from competitors and from internal AI developments. From bots to AI assistants, to AI lawyers, to simple automated technologies such as those used in manufacturing, we must decide what is right, what is wrong and what aspects of humanity we are truly willing to hand over to machines.

The information is here.

Many People Taking Antidepressants Discover They Cannot Quit

Benedict Carey & Robert Gebeloff
The New York Times
Originally posted April 7, 2018

Here is an excerpt:

Dr. Peter Kramer, a psychiatrist and author of several books about antidepressants, said that while he generally works to wean patients with mild-to-moderate depression off medication, some report that they do better on it.

“There is a cultural question here, which is how much depression should people have to live with when we have these treatments that give so many a better quality of life,” Dr. Kramer said. “I don’t think that’s a question that should be decided in advance.”

Antidepressants are not harmless; they commonly cause emotional numbing, sexual problems like a lack of desire or erectile dysfunction, and weight gain. Long-term users report in interviews a creeping unease that is hard to measure: Daily pill-popping leaves them doubting their own resilience, they say.

“We’ve come to a place, at least in the West, where it seems every other person is depressed and on medication,” said Edward Shorter, a historian of psychiatry at the University of Toronto. “You do have to wonder what that says about our culture.”

Patients who try to stop taking the drugs often say they cannot. In a recent survey of 250 long-term users of psychiatric drugs — most commonly antidepressants — about half who wound down their prescriptions rated the withdrawal as severe. Nearly half who tried to quit could not do so because of these symptoms.

In another study of 180 longtime antidepressant users, withdrawal symptoms were reported by more than 130. Almost half said they felt addicted to antidepressants.

The information is here.

Monday, May 7, 2018

Microsoft is cutting off some sales over AI ethics

Alan Boyle
www.geekwire.com
Originally published April 9, 2018

Concerns over the potential abuse of artificial intelligence technology have led Microsoft to cut off some of its customers, says Eric Horvitz, technical fellow and director at Microsoft Research Labs.

Horvitz laid out Microsoft’s commitment to AI ethics today during the Carnegie Mellon University – K&L Gates Conference on Ethics and AI, presented in Pittsburgh.

One of the key groups focusing on the issue at Microsoft is the Aether Committee, where “Aether” stands for AI and Ethics in Engineering and Research.

“It’s been an intensive effort … and I’m happy to say that this committee has teeth,” Horvitz said during his lecture.

He said the committee reviews how Microsoft’s AI technology could be used by its customers, and makes recommendations that go all the way up to senior leadership.

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’ ”

Horvitz didn’t go into detail about which customers or specific applications have been ruled out as the result of the Aether Committee’s work, although he referred to Microsoft’s human rights commitments.

The information is here.

A revolution in our sense of self

Nick Chater
The Guardian
Originally posted April 1, 2018

Here is an excerpt:

One crucial clue that the inner oracle is an illusion comes, on closer analysis, from the fact that our explanations are less than watertight. Indeed, they are systematically and spectacularly leaky. Now it is hardly controversial that our thoughts seem fragmentary and contradictory. I can’t quite tell you how a fridge works or how electricity flows around the house. I continually fall into confusion and contradiction when struggling to explain rules of English grammar, how quantitative easing works or the difference between a fruit and a vegetable.

But can’t the gaps be filled in and the contradictions somehow resolved? The only way to find out is to try. And try we have. Two thousand years of philosophy have been devoted to the problem of “clarifying” many of our commonsense ideas: causality, the good, space, time, knowledge, mind and many more; clarity has, needless to say, not been achieved. Moreover, science and mathematics began with our commonsense ideas, but ended up having to distort them so drastically – whether discussing heat, weight, force, energy and many more – that they were refashioned into entirely new, sophisticated concepts, with often counterintuitive consequences. This is one reason why “real” physics took centuries to discover and presents a fresh challenge to each generation of students.

Philosophers and scientists have found that beliefs, desires and similar everyday psychological concepts turn out to be especially puzzling and confused. We project them liberally: we say that ants "know" where the food is and "want" to bring it back to the nest; cows "believe" it is about to rain; Tamagotchis "want" to be fed; autocomplete "thinks" I meant to type gristle when I really wanted grist. We project beliefs and desires just as wildly onto ourselves and others; since Freud, we even create multiple inner selves (id, ego, superego), each with its own motives and agendas. But such rationalisations are never more than convenient fictions. Indeed, psychoanalysis is projection at its apogee: stories of the greatest possible complexity can be spun from the barest fragments of behaviours or snippets of dreams.

The information is here.

Saturday, May 5, 2018

Deep learning: Why it’s time for AI to get philosophical

Catherine Stinson
The Globe and Mail
Originally published March 23, 2018

Here is an excerpt:

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build on the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.

The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.

The information is here.

Friday, May 4, 2018

Will Tech Companies Ever Take Ethics Seriously?

Evan Selinger
www.medium.com
Originally published April 9, 2018

Here are two excerpts:

And let’s face it, tech companies are in a structural bind because they simultaneously serve many masters who can have competing priorities: shareholders, regulators, and consumers. Indeed, while “conscientious capitalism” sounds nice, anyone who takes political economy seriously knows we should be wary of civics being conflated with keeping markets going, and of companies appealing to ethics as an end-run strategy to avoid robust regulation.

But what if there is reason — even if just a sliver of practical optimism — to be more hopeful? What if the responses to the Cambridge Analytica scandal have already set in motion a reckoning throughout the tech world that’s moving history to a tipping point? What would it take for tech companies to do some real soul searching and embrace Spider-Man’s maxim that great responsibility comes with great power?

(cut)

Responsibility has many dimensions. But as far as Hartzog is concerned — and the “values in design” literature supports this contention — the three key ideals that tech companies should be prioritizing are: promoting genuine trust (through greater transparency and less manipulation), respecting obscurity (the ability for people to be more selective when sharing personal information in public and semipublic spaces), and treating dignity as sacrosanct (by fostering genuine autonomy and not treating illusions of user control as the real deal). At the very least, embracing these goals means that companies will have to come up with better answers to two fundamental questions: What signals do their design choices send to users about how their products should be perceived and used? What socially significant consequences follow from their design choices lowering transaction costs and making it easier or harder to do things, such as communicate and be observed?

The information is here.