Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Crowdsourced.

Wednesday, July 21, 2021

The Parliamentary Approach to Moral Uncertainty

Toby Newberry & Toby Ord
Future of Humanity Institute
University of Oxford, 2021

Abstract

We introduce a novel approach to the problem of decision-making under moral uncertainty, based on an analogy to a parliament. The appropriate choice under moral uncertainty is the one that would be reached by a parliament comprised of delegates representing the interests of each moral theory, who number in proportion to your credence in that theory. We present what we see as the best specific approach of this kind (based on proportional chances voting), and also show how the parliamentary approach can be used as a general framework for thinking about moral uncertainty, where extant approaches to addressing moral uncertainty correspond to parliaments with different rules and procedures.

Here is an excerpt:

Moral Parliament

Imagine that each moral theory in which you have credence got to send delegates to an internal parliament, where the number of delegates representing each theory was proportional to your credence in that theory. Now imagine that these delegates negotiate with each other, advocating on behalf of their respective moral theories, until eventually the parliament reaches a decision by the delegates voting on the available options. This would provide a novel approach to decision-making under moral uncertainty that may avoid some of the problems that beset the others, and it may even provide a new framework for thinking about moral uncertainty more broadly.

(cut)

Here, we endorse a common-sense approach to the question of scale which has much in common with standard decision-theoretic conventions. The suggestion is that one should convene Moral Parliament for those decision-situations to which it is intuitively appropriate, such as those involving non-trivial moral stakes, where the possible options are relatively well-defined, and so on. Normatively speaking, if Moral Parliament is the right approach to take to moral uncertainty, then it may also be right to apply it to all decision-situations (however this is defined). But practically speaking, this would be very difficult to achieve. This move has essentially the same implications as the approach of sidestepping the question but comes with a positive endorsement of Moral Parliament’s application to ‘the kinds of decision-situations typically described in papers on moral uncertainty’. This is the sense in which the common-sense approach resembles standard decision-theoretic conventions. 
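
As an editorial aside, the "proportional chances voting" mentioned in the abstract can be made concrete with a minimal sketch. Everything below is an assumption for illustration: the theory names, credences, options, and preference orderings are invented, and in the paper's proposal the delegates also negotiate rather than simply voting for their top choice. The idea captured here is that delegates are allocated in proportion to your credence in each theory, each delegate votes for its theory's preferred option, and the winning option is drawn by a lottery weighted by vote share.

```python
import random

# Hypothetical credences in moral theories (assumed to sum to 1).
credences = {"utilitarianism": 0.5, "kantianism": 0.3, "virtue_ethics": 0.2}

# Hypothetical preference orderings over the available options, best first.
preferences = {
    "utilitarianism": ["donate", "volunteer", "abstain"],
    "kantianism":     ["volunteer", "abstain", "donate"],
    "virtue_ethics":  ["volunteer", "donate", "abstain"],
}

def proportional_chances_vote(credences, preferences, n_delegates=100, seed=None):
    """Allocate delegates in proportion to credence, let each delegate vote
    for its theory's top option, then pick the winner by a lottery in which
    each option's chance of being enacted equals its share of the votes."""
    rng = random.Random(seed)
    votes = {}
    for theory, credence in credences.items():
        top_choice = preferences[theory][0]
        votes[top_choice] = votes.get(top_choice, 0) + round(credence * n_delegates)
    options = list(votes)
    weights = [votes[option] for option in options]
    return rng.choices(options, weights=weights, k=1)[0]

print(proportional_chances_vote(credences, preferences, seed=42))
```

In the full proposal the delegates can also bargain and trade influence across decisions; the sketch only captures the final lottery step, in which an option's probability of being chosen equals its share of delegate votes.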

Wednesday, May 26, 2021

Before You Answer, Consider the Opposite Possibility—How Productive Disagreements Lead to Better Outcomes

Ian Leslie
The Atlantic
Originally published April 25, 2021

Here is an excerpt:

This raises the question of how a wise inner crowd can be cultivated. Psychologists have investigated various methods. One, following Stroop, is to harness the power of forgetting. Reassuringly for those of us who are prone to forgetting, people with poor working memories have been shown to have a wiser inner crowd; their guesses are more independent of one another, so they end up with a more diverse set of estimates and a more accurate average. The same effect has been achieved by spacing the guesses out in time.

More sophisticated methods harness the mind’s ability to inhabit different perspectives and look at a problem from more than one angle. People generate more diverse estimates when prompted to base their second or third guess on alternative assumptions; one effective technique is simply asking people to “consider the opposite” before giving a new answer. A fascinating recent study in this vein harnesses the power of disagreement itself. A pair of Dutch psychologists, Philippe Van de Calseyde and Emir Efendić, asked people a series of questions with numerical answers, such as the percentage of the world’s airports located in the U.S. Then they asked participants to think of someone in their life with whom they often disagreed—that uncle with whom they always argue about politics—and to imagine what that person would guess.

The respondents came up with second estimates that were strikingly different from their first estimate, producing a much more accurate inner crowd. The same didn’t apply when they were asked to imagine how someone they usually agree with would answer the question, which suggests that the secret is to incorporate the perspectives of people who think differently from us. That the respondents hadn’t discussed that particular question with their disagreeable uncle did not matter. Just the act of thinking about someone with whom they argued a lot was enough to jog them out of habitual assumptions.
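
The claim that more independent guesses average out to a more accurate answer is, at bottom, a point about error cancellation, and a small simulation can illustrate it. Everything below is invented for illustration; the true value, noise level, and correlation figures are assumptions, not data from the study. When the second guess's error is highly correlated with the first (an "anchored" second guess), averaging helps little; when it is nearly independent, as when imagining a disagreeing uncle's answer, the averaged estimate lands closer to the truth.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 30.0   # the (made-up) true answer to some numerical question
NOISE_SD = 10.0     # spread of each individual guess around the truth
N_PEOPLE = 10_000

def mean_error_of_average(correlation):
    """Average two guesses whose errors share the given correlation and
    return the mean absolute error of the averaged estimate."""
    errors = []
    for _ in range(N_PEOPLE):
        first_error = random.gauss(0, NOISE_SD)
        # Build a second error with the same spread but only partly
        # correlated with the first.
        second_error = (correlation * first_error
                        + (1 - correlation ** 2) ** 0.5 * random.gauss(0, NOISE_SD))
        average_guess = TRUE_VALUE + (first_error + second_error) / 2
        errors.append(abs(average_guess - TRUE_VALUE))
    return statistics.mean(errors)

print("anchored second guess (r=0.9):   ", round(mean_error_of_average(0.9), 2))
print("independent second guess (r=0.1):", round(mean_error_of_average(0.1), 2))
```

With these made-up numbers, the near-independent second guess cuts the expected error by roughly a quarter relative to the anchored one, which is the whole point of recruiting a disagreeing voice, real or imagined.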

Saturday, May 5, 2018

Deep learning: Why it’s time for AI to get philosophical

Catherine Stinson
The Globe and Mail
Originally published March 23, 2018

Here is an excerpt:

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.

The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.

The information is here.

Monday, November 6, 2017

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence

Dom Galeon
Futurism.com
Originally published October 17, 2017

Here is an excerpt:

Crowdsourced Morality

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google’s parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions. These researchers believe that aggregating the collective moral views of a crowd on various issues — like the Moral Machine does with self-driving cars — to create this framework would result in a system that’s better than one built by an individual.
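
To give a concrete, if deliberately simplistic, picture of what aggregating a crowd's moral views could look like computationally, the sketch below tallies hypothetical votes on Moral Machine-style dilemmas and reports each scenario's majority choice. The scenario names, options, and counts are invented for illustration, and the Duke team's actual framework is far more sophisticated than a majority count.

```python
from collections import Counter

# Hypothetical crowd responses: each entry is (scenario_id, chosen_outcome).
responses = [
    ("brake_failure_1", "swerve"), ("brake_failure_1", "stay"),
    ("brake_failure_1", "swerve"), ("brake_failure_2", "stay"),
    ("brake_failure_2", "stay"),   ("brake_failure_2", "swerve"),
]

def aggregate_majority(responses):
    """Group responses by scenario and return each scenario's majority choice
    together with the share of the crowd that picked it."""
    by_scenario = {}
    for scenario, choice in responses:
        by_scenario.setdefault(scenario, Counter())[choice] += 1
    result = {}
    for scenario, counts in by_scenario.items():
        choice, votes = counts.most_common(1)[0]
        result[scenario] = (choice, votes / sum(counts.values()))
    return result

for scenario, (choice, share) in aggregate_majority(responses).items():
    print(f"{scenario}: {choice} ({share:.0%} of respondents)")
```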

The article is here.