Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, May 5, 2018

Deep learning: Why it’s time for AI to get philosophical

Catherine Stinson
The Globe and Mail
Originally published March 23, 2018

Here is an excerpt:

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.

The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.
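The predictive-policing point is, at bottom, a claim about a feedback loop, and a toy simulation makes the mechanism concrete. The sketch below is a hypothetical illustration, not any real policing system: the neighbourhood labels, crime rates, and arrest counts are all assumptions chosen to show how patrols allocated from a biased historical record keep reinforcing that record, even when the underlying crime rates are identical.

```python
# Hypothetical toy model (not any real predictive-policing system).
# Two neighbourhoods, A and B, have the same true crime rate, but A
# starts with more recorded incidents because it was patrolled more.
# A "hotspot" policy sends patrols wherever the record says crime is
# highest, and only patrolled areas add to the record, so the historical
# disparity grows even though actual crime never differs.

true_crime_rate = {"A": 0.10, "B": 0.10}   # identical by assumption
recorded = {"A": 200, "B": 100}            # unequal historical record
patrols_per_year = 100

for year in range(1, 6):
    hotspot = max(recorded, key=recorded.get)   # the model's "prediction"
    recorded[hotspot] += patrols_per_year * true_crime_rate[hotspot]
    share_a = recorded["A"] / (recorded["A"] + recorded["B"])
    print(f"year {year}: recorded incidents {recorded}, share in A = {share_a:.2f}")
```

Each patrol cycle confirms the system’s own prediction, which is exactly the dynamic that consulting residents of heavily policed neighbourhoods would have flagged.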

The information is here.

Friday, May 4, 2018

Will Tech Companies Ever Take Ethics Seriously?

Evan Selinger
www.medium.com
Originally published April 9, 2018

Here are two excerpts:

And let’s face it, tech companies are in a structural bind, because they simultaneously serve many masters who can have competing priorities: shareholders, regulators, and consumers. Indeed, while “conscientious capitalism” sounds nice, anyone who takes political economy seriously knows we should be wary of civics being conflated with keeping markets going and companies appealing to ethics as an end-run strategy to avoid robust regulation.

But what if there is reason — even if just a sliver of practical optimism — to be more hopeful? What if the responses to the Cambridge Analytica scandal have already set in motion a reckoning throughout the tech world that’s moving history to a tipping point? What would it take for tech companies to do some real soul searching and embrace Spider-Man’s maxim that great responsibility comes with great power?

(cut)

Responsibility has many dimensions. But as far as Hartzog is concerned — and the “values in design” literature supports this contention — the three key ideals that tech companies should be prioritizing are: promoting genuine trust (through greater transparency and less manipulation), respecting obscurity (the ability for people to be more selective when sharing personal information in public and semipublic spaces), and treating dignity as sacrosanct (by fostering genuine autonomy and not treating illusions of user control as the real deal). At the very least, embracing these goals means that companies will have to come up with better answers to two fundamental questions: What signals do their design choices send to users about how their products should be perceived and used? What socially significant consequences follow from their design choices lowering transaction costs and making it easier or harder to do things, such as communicate and be observed?

The information is here.

Psychology will fail if it keeps using ancient words like “attention” and “memory”

Olivia Goldhill
Quartz.com
Originally published April 7, 2018

Here is an excerpt:

Then there are “jangle fallacies,” when two things that are the same are seen as different because they have different names. For example, “working memory” is used to describe the ability to keep information in mind. It’s not clear this is meaningfully different from simply “paying attention” to particular aspects of information.

Scientific concepts should be operationalized, meaning measurable and testable in experiments that produce clear-cut results. “You’d hope that a scientific concept would name something that one can use to then make predictions about how it’s going to work. It’s not clear that ‘attention’ does that for us,” says Poldrack.

It’s no surprise “attention” and “memory” don’t perfectly map onto the brain functions scientists know of today, given that they entered the lexicon centuries ago, when we knew very little about the internal workings of the brain or our own mental processes. Psychology, Poldrack argues, cannot be a precise science as long as it relies on these centuries-old, lay terms, which have broad, fluctuating usage. The field has to create new terminology that accurately describes mental processes. “It hurts us a lot because we can’t really test theories,” says Poldrack. “People can talk past one another. If one person says I’m studying ‘working memory’ and the other person says ‘attention,’ they can be finding things that are potentially highly relevant to one another but they’re talking past one another.”

The information is here.

Thursday, May 3, 2018

Why Pure Reason Won’t End American Tribalism

Robert Wright
www.wired.com
Originally published April 9, 2018

Here is an excerpt:

Pinker also understands that cognitive biases can be activated by tribalism. “We all identify with particular tribes or subcultures,” he notes—and we’re all drawn to opinions that are favored by the tribe.

So far so good: These insights would seem to prepare the ground for a trenchant analysis of what ails the world—certainly including what ails an America now famously beset by political polarization, by ideological warfare that seems less and less metaphorical.

But Pinker’s treatment of the psychology of tribalism falls short, and it does so in a surprising way. He pays almost no attention to one of the first things that springs to mind when you hear the word “tribalism.” Namely: People in opposing tribes don’t like each other. More than Pinker seems to realize, the fact of tribal antagonism challenges his sunny view of the future and calls into question his prescriptions for dispelling some of the clouds he does see on the horizon.

I’m not talking about the obvious downside of tribal antagonism—the way it leads nations to go to war or dissolve in civil strife, the way it fosters conflict along ethnic or religious lines. I do think this form of antagonism is a bigger problem for Pinker’s thesis than he realizes, but that’s a story for another day. For now the point is that tribal antagonism also poses a subtler challenge to his thesis. Namely, it shapes and drives some of the cognitive distortions that muddy our thinking about critical issues; it warps reason.

The article is here.

We can train AI to identify good and evil, and then use it to teach us morality

Ambarish Mitra
Quartz.com
Originally published April 5, 2018

Here is an excerpt:

To be fair, because this AI Hercules will be relying on human inputs, it will also be susceptible to human imperfections. Unsupervised data collection and analysis could have unintended consequences and produce a system of morality that actually represents the worst of humanity. However, this line of thinking tends to treat AI as an end goal. We can’t rely on AI to solve our problems, but we can use it to help us solve them.

If we could use AI to improve morality, we could program that improved moral structure output into all AI systems—a moral AI machine that effectively builds upon itself over and over again and improves and proliferates our morality capabilities. In that sense, we could eventually even have AI that monitors other AI and prevents it from acting immorally.

While a theoretically perfect AI morality machine is just that, theoretical, there is hope for using AI to improve our moral decision-making and our overall approach to important, worldly issues.

The information is here.

Wednesday, May 2, 2018

How Do You Know You Are Reading This?

Jason Pontin
www.wired.com
Originally published April 2, 2018

Here are two excerpts:

Understanding consciousness better would solve some urgent, practical problems. It would be useful, for instance, to know whether patients locked in by stroke are capable of thought. Similarly, one or two patients in a thousand later recall being in pain under general anesthesia, though they seemed to be asleep. Could we reliably measure whether such people are conscious? Some of the heat of the abortion debate might dissipate if we knew when and to what degree fetuses are conscious. We are building artificial intelligences whose capabilities rival or exceed our own. Soon, we will have to decide: Are our machines conscious, to even a small degree, and do they have rights, which we are bound to respect? These are questions of more than academic philosophical interest.

(cut)

IIT doesn’t try to answer the hard problem. Instead, it does something more subtle: It posits that consciousness is a feature of the universe, like gravity, and then tries to solve the pretty hard problem of determining which systems are conscious with a mathematical measurement of consciousness represented by the Greek letter phi (Φ). Until Massimini’s test, which was developed in partnership with Tononi, there was little experimental evidence of IIT, because calculating the phi value of a human brain with its tens of billions of neurons was impractical. PCI is “a poor man’s phi” according to Tononi. “The poor man’s version may be poor, but it works better than anything else. PCI works in dreaming and dreamless sleep. With general anesthesia, PCI is down, and with ketamine it’s up more. Now we can tell, just by looking at the value, whether someone is conscious or not. We can assess consciousness in nonresponsive patients.”
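The “impractical” remark is easy to make concrete. The snippet below is only a back-of-envelope sketch, not the actual IIT procedure (which also searches over mechanisms and system states): it merely counts the two-way partitions that an exact calculation would, at minimum, have to consider, and that count alone grows exponentially with system size.

```python
# Back-of-envelope sketch only; the real phi calculation is far more
# involved. Counting just the two-way splits of an n-element system
# (2**(n - 1) - 1 of them) already shows why an exact computation is out
# of reach for anything remotely brain-sized.

def bipartitions(n: int) -> int:
    """Number of ways to split n elements into two non-empty groups."""
    return 2 ** (n - 1) - 1

# Around 300 neurons is roughly a nematode worm, let alone a human brain
# with tens of billions of neurons.
for n in (10, 20, 40, 300):
    print(f"{n:>4} elements: {bipartitions(n):,} bipartitions to evaluate")
```

That scaling is why a practical proxy such as PCI is needed at all.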

The information is here.

Institutional Research Misconduct Reports Need More Credibility

Gunsalus CK, Marcus AR, Oransky I.
JAMA. 2018;319(13):1315–1316.
doi:10.1001/jama.2018.0358

Institutions have a central role in protecting the integrity of research. They employ researchers, own the facilities where the work is conducted, receive grant funding, and teach many students about the research process. When questions arise about research misconduct associated with published articles, scientists and journal editors usually first ask the researchers’ institution to investigate the allegations and then report the outcomes, under defined circumstances, to federal oversight agencies and other entities, including journals.

Depending on institutions to investigate their own faculty presents significant challenges. Misconduct reports, the mandated product of institutional investigations for which US federal dollars have been spent, have a wide range of problems. These include lack of standardization, inherent conflicts of interest that must be addressed directly to ensure credibility, little quality control or peer review, and limited oversight. Even when institutions act, the information they release to the public is often limited and unhelpful.

As a result, as with most elements of research misconduct, little is known about institutions’ responses to potential misconduct by their own members. The community that relies on the integrity of university research does not have access to information about how often such claims arise, or how they are resolved. Nonetheless, there are some indications that many internal reviews are deficient.

The article is here.

Tuesday, May 1, 2018

'They stole my life away': women forcibly sterilised by Japan speak out

Daniel Hurst
The Guardian
Originally published April 3, 2018

Here is an excerpt:

Between 1948 and 1996, about 25,000 people were sterilised under the law, including 16,500 who did not consent to the procedure. The youngest known patients were just nine or 10 years old. About 70% of the cases involved women or girls.

Yasutaka Ichinokawa, a sociology professor at the University of Tokyo, says psychiatrists identified patients whom they thought needed sterilisation. Carers at nursing homes for people with intellectual disabilities also had sterilisation initiatives. Outside such institutions, the key people were local welfare officers known as Minsei-iin.

“All of them worked with goodwill, and they thought sterilisations were for the interests of the people for whom they cared, but today we must see this as a violation of the reproductive rights of people with disabilities,” Ichinokawa says.

After peaking at 1,362 cases in a single year in the mid-1950s, the figures began to decline in tandem with a shift in public attitudes.

In 1972, the government triggered protests by proposing an amendment to the Eugenic Protection Law to allow pregnant women with disabled foetuses to have induced abortions.

The information is here.

If we want moral AI, we need to teach it right from wrong

Emma Kendrew
Management Today
Originally posted April 3, 2018

Here is an excerpt:

Ethical constructs need to come before, not after, developing other skills. We teach children morality before maths. When they can be part of a social environment, we teach them language skills and reasoning. All of this happens before they enter a formal classroom.

Four out of five executives see AI working next to humans in their organisations as a co-worker within the next two years. It’s imperative that we learn to nurture AI to address many of the same challenges faced in human education: fostering an understanding of right and wrong, and what it means to behave responsibly.

AI Needs to Be Raised to Benefit Business and Society

AI is becoming smarter and more capable than ever before. With neural networks giving AI the ability to learn, the technology is evolving into an independent problem solver.

Consequently, we need to create learning-based AI that fosters ethics and behaves responsibly – imparting knowledge without bias, so that AI will be able to operate more effectively in the context of its situation. It will also be able to adapt to new requirements based on feedback from both its artificial and human peers. This feedback loop is an essential part of human learning.
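As a rough sketch of what such a feedback loop might look like in code (a hypothetical toy, not any vendor’s actual training pipeline): an agent proposes an action, a human reviewer scores it, and the agent shifts its preferences toward the behaviour that received better feedback.

```python
import random

# Hypothetical toy feedback loop (a sketch, not a real training system):
# the "AI" picks between two canned behaviours, a simulated human
# reviewer scores the choice, and the preference weights drift toward
# the behaviour that receives better feedback.

actions = ["share user data without asking", "ask for consent first"]
weights = [1.0, 1.0]                 # start with no preference
learning_rate = 0.5

def human_feedback(action: str) -> float:
    """Stand-in for a human reviewer: reward the consent-seeking choice."""
    return 1.0 if action == "ask for consent first" else -1.0

for step in range(20):
    # Sample an action in proportion to the current preference weights.
    idx = random.choices(range(len(actions)), weights=weights)[0]
    reward = human_feedback(actions[idx])
    weights[idx] = max(0.01, weights[idx] + learning_rate * reward)

print("learned preference weights:", dict(zip(actions, weights)))
```

Real systems differ enormously in scale and detail, but the structure of act, receive human feedback, and update is the loop the article is describing.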

The information is here.