Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Machine Learning.

Friday, May 11, 2018

AI experts want government algorithms to be studied like environmental hazards

Dave Gershgorn
Quartz (www.qz.com)
Originally published April 9, 2018

Artificial intelligence experts are urging governments to require assessments of AI implementation that mimic the environmental impact reports now required by many jurisdictions.

AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would ensure that the public and governments understand the scope, capabilities, and secondary impacts an algorithm could have, and would let people voice concerns if an algorithm is behaving in a biased or unfair way.

“If governments deploy systems on human populations without frameworks for accountability, they risk losing touch with how decisions have been made, thus rendering them unable to know or respond to bias, errors, or other problems,” the report said. “The public will have less insight into how agencies function, and have less power to question or appeal decisions.”

The information is here.

Tuesday, May 8, 2018

AI Without Borders: How To Create Universally Moral Machines

Abinash Tripathy
Forbes.com
Originally posted April 11, 2018

Here is an excerpt:

Ultimately, developing moral machines will be a learning process. It’s not surprising that early versions of advanced machine learning have adopted undesirable human traits. It is promising, however, that immense thought and care are being put into these issues. Pioneers including DeepMind, researchers at Duke University, the German government, and the Leverhulme Centre for the Future of Intelligence have invested research, experimentation, and thought into determining how best to model machines not after humans as they exist but after an ideal version of human intelligence.

Despite this care, there will always be those who use technological advancements with malicious intent. Organizations will need to prepare for the potential harm that can arise both from competitors and from internal AI developments. From bots to AI assistants, to AI lawyers, to simple automated technologies such as those used in manufacturing, we must decide what is right, what is wrong and what aspects of humanity we are truly willing to hand over to machines.

The information is here.

Friday, April 20, 2018

Making a Thinking Machine

Lea Winerman
The Monitor on Psychology - April 2018

Here is an excerpt:

A 'Top Down' Approach

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

This more “top-down” approach to AI relies less on identifying patterns in data and more on figuring out mathematical ways to describe the rules that govern human cognition. Researchers can then write those rules into the learning algorithms that power the AI system. One promising avenue for this method is called Bayesian modeling, which uses probability to model how people reason and learn about the world. Brenden Lake, PhD, a psychologist and AI researcher at New York University, and his colleagues, for example, have developed a Bayesian AI system that can accomplish a form of one-shot learning. Humans, even children, are very good at this—a child only has to see a pineapple once or twice to understand what the fruit is, pick it out of a basket and maybe draw an example.

Likewise, adults can learn a new character in an unfamiliar language almost immediately.
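Lake’s actual system is a full Bayesian Program Learning model of handwritten characters; purely as a much simpler illustration of the underlying idea, here is a minimal Python sketch of one-shot classification by Bayes’ rule, where each category is represented by a single exemplar feature vector and an assumed Gaussian likelihood (the feature vectors and category names below are hypothetical):

```python
import numpy as np

# Minimal sketch of Bayesian one-shot classification (not Lake's actual
# Bayesian Program Learning model): each category is represented by a single
# exemplar feature vector, the likelihood is an isotropic Gaussian around
# that exemplar, and Bayes' rule turns one observation into a posterior.

def one_shot_posterior(x, exemplars, sigma=1.0):
    """Posterior over categories for a new item x, given one exemplar each."""
    names = list(exemplars)
    # Log-likelihood of x under a Gaussian centered on each exemplar.
    log_lik = np.array([
        -np.sum((x - exemplars[name]) ** 2) / (2 * sigma ** 2) for name in names
    ])
    # Uniform prior over categories; normalize with the log-sum-exp trick.
    log_post = log_lik - np.logaddexp.reduce(log_lik)
    return dict(zip(names, np.exp(log_post)))

# Hypothetical 3-D feature vectors, one per fruit category.
exemplars = {
    "pineapple": np.array([0.9, 0.2, 0.7]),
    "apple":     np.array([0.1, 0.8, 0.3]),
}
print(one_shot_posterior(np.array([0.8, 0.3, 0.6]), exemplars))
```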

The article is here.

Thursday, April 19, 2018

Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality

Sandra Upson
Wired.com
Originally posted February 16, 2018

Here is an excerpt:

But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?

A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.
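The excerpt does not give the details of Yao’s network, but a minimal character-level LSTM text generator of the general kind described might look like the following PyTorch sketch (untrained here; in practice it would be fit on a large corpus of real reviews before it could produce convincing blurbs):

```python
import torch
import torch.nn as nn

# Minimal sketch (an assumption, not Yao's actual system): a character-level
# LSTM that, once trained on a corpus of real reviews, samples new
# review-like text one character at a time.

class CharGenerator(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

def sample(model, stoi, itos, seed="our favorite spot", length=200, temp=0.8):
    model.eval()
    ids = torch.tensor([[stoi[c] for c in seed]])
    chars, state = list(seed), None
    with torch.no_grad():
        logits, state = model(ids, state)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temp, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            chars.append(itos[nxt])
            logits, state = model(torch.tensor([[nxt]]), state)
    return "".join(chars)

# Toy vocabulary; a real run would build this from the training corpus and
# train the model with cross-entropy loss on next-character prediction.
vocab = sorted(set("our favorite spot for sure! abcdefghijklmnopqrstuvwxyz.,"))
stoi = {c: i for i, c in enumerate(vocab)}
itos = {i: c for c, i in stoi.items()}
model = CharGenerator(len(vocab))
print(sample(model, stoi, itos))  # gibberish until trained
```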

The information is here.

Tuesday, April 3, 2018

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”
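For readers wondering what these “subtle changes” look like in code, the fast gradient sign method (FGSM) is one standard attack in this family, though not necessarily the one used against the papers above. A minimal PyTorch sketch with a toy stand-in classifier:

```python
import torch
import torch.nn as nn

# Minimal sketch of the fast gradient sign method (FGSM), one standard way
# of crafting small perturbations that fool a classifier. The tiny model
# here is a stand-in; real attacks target large image networks.

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return x perturbed to increase the model's loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy "image" model
x = torch.rand(1, 1, 28, 28)          # a fake 28x28 grayscale image
y = torch.tensor([3])                 # its (pretend) true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())        # perturbation is at most epsilon
```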

The article is here.

Sunday, March 18, 2018

Machine Theory of Mind

Neil C. Rabinowitz, F. Perbet, H. F. Song, C. Zhang, S.M. Ali Eslami, M. Botvinick
Artificial Intelligence
Submitted February 2018

Abstract

Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.

The research is here.

Tuesday, February 20, 2018

This Cat Sensed Death. What if Computers Could, Too?

Siddhartha Mukherjee
The New York Times
Originally published January 3, 2018

Here are two excerpts:

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.
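Avati’s actual model was a deep network trained on electronic health record codes; purely as an illustration of the framing, the 3-to-12-month window can be posed as a binary classification problem, sketched here on synthetic stand-in features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch of the framing described above (not Avati's actual model): predict
# whether a patient dies within the 3-to-12-month palliative-care window,
# posed as binary classification over synthetic stand-in features.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 20))                        # stand-in EHR-derived features
risk = X[:, 0] * 1.5 + X[:, 1] - 0.5 * X[:, 2]      # hypothetical risk signal
y = (risk + rng.normal(size=n) > 1.0).astype(int)   # 1 = dies within the window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, probs), 3))
# In the deployed setting, patients above a probability threshold would be
# flagged for palliative-care review instead of manual chart scouring.
```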

(cut)

So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.

The article is here.

Friday, January 5, 2018

Implementation of Moral Uncertainty in Intelligent Machines

Kyle Bogosian
Minds and Machines
December 2017, Volume 27, Issue 4, pp 591–608

Abstract

The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.

Introduction

Advances in artificial intelligence have led to research into methods by which sufficiently intelligent systems, generally referred to as artificial moral agents (AMAs), can be guaranteed to follow ethically defensible behavior. Successful implementation of moral reasoning may be critical for managing the proliferation of autonomous vehicles, workers, weapons, and other systems as they increase in intelligence and complexity.

Approaches towards moral decision-making generally fall into two camps: “top-down” and “bottom-up” approaches (Allen et al. 2005). Top-down morality is the explicit implementation of decision rules into artificial agents. Schemes for top-down decision-making that have been proposed for intelligent machines include Kantian deontology (Arkoudas et al. 2005) and preference utilitarianism (Oesterheld 2016). Bottom-up morality avoids reference to specific moral theories by developing systems that can implicitly learn to distinguish between moral and immoral behaviors, such as cognitive architectures designed to mimic human intuitions (Bello and Bringsjord 2013). There are also hybrid approaches that merge insights from the two frameworks, such as the one given by Wiltshire (2015).
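Bogosian’s own framework is developed formally in the paper; the general idea of decision making under moral uncertainty, however, can be illustrated with a small sketch in which an agent holds credences over competing theories and maximizes expected choice-worthiness (the theories, credences, and scores below are hypothetical, and the scores are assumed to be comparable across theories):

```python
# Illustration of decision making under moral uncertainty (the general idea,
# not Bogosian's formal framework): the agent holds credences over competing
# moral theories, each theory scores every action, and the agent picks the
# action with the highest expected choice-worthiness.

credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

# Hypothetical choice-worthiness scores, assumed comparable across theories.
scores = {
    "utilitarian":   {"divert_trolley": 0.9, "do_nothing": 0.1},
    "deontological": {"divert_trolley": 0.2, "do_nothing": 0.8},
    "virtue":        {"divert_trolley": 0.6, "do_nothing": 0.5},
}

def expected_choiceworthiness(action):
    return sum(credences[t] * scores[t][action] for t in credences)

actions = ["divert_trolley", "do_nothing"]
best = max(actions, key=expected_choiceworthiness)
print({a: round(expected_choiceworthiness(a), 2) for a in actions}, "->", best)
```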

The article is here.

Thursday, January 4, 2018

Artificial Intelligence Seeks An Ethical Conscience

Tom Simonite
wired.com
Originally published December 7, 2017

Here is an excerpt:

Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.

Hanna Wallach, co-chair of NIPS, cofounder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals and make AI technology better. “If you have a diversity of perspectives and backgrounds, you might be more likely to check for bias against different groups,” she says—meaning code that calls black people gorillas would be less likely to reach the public. Wallach also points to behavioral research showing that diverse teams consider a broader range of ideas when solving problems.

Ultimately, AI researchers alone can’t and shouldn’t decide how society puts their ideas to use. “A lot of decisions about the future of this field cannot be made in the disciplines in which it began,” says Terah Lyons, executive director of Partnership on AI, a nonprofit launched last year by tech companies to mull the societal impacts of AI. (The organization held a board meeting on the sidelines of NIPS this week.) She says companies, civic-society groups, citizens, and governments all need to engage with the issue.

The article is here.

Thursday, December 21, 2017

An AI That Can Build AI

Dom Galeon and Kristin Houser
Futurism.com
Originally published on December 1, 2017

Here is an excerpt:

Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.

Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.

Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.

The information is here.

Tuesday, December 12, 2017

Can AI Be Taught to Explain Itself?

Cliff Kuang
The New York Times Magazine
Originally published November 21, 2017

Here are two excerpts:

In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad and fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.

(cut)

“Artificial intelligence” is a misnomer, an airy and evocative term that can be shaded with whatever notions we might have about what “intelligence” is in the first place. Researchers today prefer the term “machine learning,” which better describes what makes such algorithms powerful. Let’s say that a computer program is deciding whether to give you a loan. It might start by comparing the loan amount with your income; then it might look at your credit history, marital status or age; then it might consider any number of other data points. After exhausting this “decision tree” of possible variables, the computer will spit out a decision. If the program were built with only a few examples to reason from, it probably wouldn’t be very accurate. But given millions of cases to consider, along with their various outcomes, a machine-learning algorithm could tweak itself — figuring out when to, say, give more weight to age and less to income — until it is able to handle a range of novel situations and reliably predict how likely each loan is to default.
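As a toy version of that loan example (synthetic applicants, not any real lender’s model), a decision-tree learner can be fit in a few lines and its learned splits printed:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy version of the loan example above (synthetic data, not any real
# lender's model): a decision-tree learner weighs income, age, credit
# history, and loan size to predict default, and its splits can be printed.
rng = np.random.default_rng(1)
n = 10_000
income = rng.normal(50_000, 15_000, n)
age = rng.integers(21, 70, n)
credit_score = rng.normal(650, 80, n)
loan_amount = rng.normal(20_000, 8_000, n)

# Hypothetical ground truth: defaults rise with loan size relative to income
# and with poor credit history.
default = ((loan_amount / income > 0.5) & (credit_score < 640)).astype(int)

X = np.column_stack([income, age, credit_score, loan_amount])
tree = DecisionTreeClassifier(max_depth=3).fit(X, default)
print(export_text(tree, feature_names=["income", "age", "credit", "loan"]))
```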

The article is here.

Wednesday, December 6, 2017

What the heck is machine learning, and why is it everywhere these days?

Luke Dormehl
Digital Trends
Originally published November 18, 2017

Here is an excerpt:

Which programming languages do machine learners use?

Like the question above, there’s no one answer to this. Machine learning is a big field and, with so much ground to cover, there’s no one language that does absolutely everything.

Due to its simplicity, and the availability of deep learning libraries such as TensorFlow and PyTorch, Python is currently the number one language. If you’re thinking about delving into machine learning for the first time, it’s also one of the most accessible languages — and there are loads of online resources available.

Java is a good option, too, and comes with a great community of its own, while C++ and R are also worth checking out.
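To give a sense of why Python plus a library such as PyTorch is considered so accessible, a complete, if trivial, training loop fits in a few lines; this sketch fits a tiny network to the function y = 2x + 1:

```python
import torch
import torch.nn as nn

# A complete (if trivial) PyTorch training loop, to illustrate why Python
# plus a deep learning library is so accessible: fit a tiny network to the
# function y = 2x + 1 from noisy samples.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for step in range(500):
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```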

Is machine learning the perfect solution to all our AI problems?

You can probably guess where we’re going with this. No, machine learning isn’t infallible. Algorithms can still be subject to human biases, and the rule of “garbage in, garbage out” holds as true here as it does in any other data-driven field.

There are also questions about transparency, particularly when you’re dealing with the kind of “black boxes” that are an essential part of neural networks.

But as a tool that’s helping to revolutionize technology as we know it, and making AI available to the masses? You bet that it’s a great tool!

The article is here.

Thursday, November 30, 2017

Artificial Intelligence & Mental Health

Smriti Joshi
Chatbot News Daily
Originally posted

Here is an excerpt:

There are many barriers to getting quality mental healthcare, from searching for a provider who practices in a user's geographical location to screening multiple potential therapists in order to find someone you feel comfortable speaking with. The stigma associated with seeking mental health treatment often leaves people silently suffering from a psychological issue. These barriers stop many people from finding help, and AI is being looked at as a potential tool to bridge this gap between service providers and service users.

Imagine how many people would benefit if artificial intelligence could bring quality, affordable mental health support to anyone with an internet connection. A psychiatrist or psychologist examines a person's tone, word choice, phrase length, and similar cues, all of which are crucial to understanding what's going on in someone's mind. Researchers are now applying machine learning to diagnose people with mental disorders. Harvard University and University of Vermont researchers are working on integrating machine learning tools with Instagram to improve depression screening. Using color analysis, metadata, and algorithmic face detection, they were able to reach 70 percent accuracy in detecting signs of depression. The research wing at IBM is using transcripts and audio from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech that help clinicians accurately predict and monitor psychosis, schizophrenia, mania, and depression. Research led by John Pestian, a professor at Cincinnati Children's Hospital Medical Center, showed that machine learning is up to 93 percent accurate in identifying a suicidal person.
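The excerpt does not specify the Harvard/Vermont pipeline, but the "color analysis" step can be sketched as follows: reduce each photo to its mean hue, saturation, and brightness and feed those features to a simple classifier (the images and labels below are random stand-ins):

```python
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Sketch of a "color analysis" step (an illustration, not the Harvard/Vermont
# pipeline): reduce each photo to its mean hue, saturation, and brightness,
# then feed those features to a simple classifier.

def color_features(img: Image.Image) -> np.ndarray:
    hsv = np.asarray(img.convert("HSV"), dtype=float) / 255.0
    return hsv.reshape(-1, 3).mean(axis=0)  # [mean hue, mean sat, mean value]

# Stand-in "photos" (random RGB arrays); real inputs would be a user's
# Instagram images, with labels from a depression screening instrument.
rng = np.random.default_rng(0)
photos = [Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
          for _ in range(20)]
labels = np.array([1] * 10 + [0] * 10)  # 1 = screened positive, 0 = not

X = np.array([color_features(p) for p in photos])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X)[:, 1].round(2))
```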

The post is here.

Tuesday, November 14, 2017

What is consciousness, and could machines have it?

Stanislas Dehaene, Hakwan Lau, & Sid Kouider
Science  27 Oct 2017: Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

The article is here.

Thursday, October 5, 2017

Biased Algorithms Are Everywhere, and No One Seems to Care

Will Knight
MIT Technology Review
Originally published July 12, 2017

Here is an excerpt:

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.

The article is here.

Monday, September 4, 2017

Teaching A.I. Systems to Behave Themselves

Cade Metz
The New York Times
Originally published August 13, 2017

Here is an excerpt:

Many specialists in the A.I. field believe a technique called reinforcement learning — a way for machines to learn specific tasks through extreme trial and error — could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn’t. When OpenAI trained its bot to play Coast Runners, the reward was more points.
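OpenAI’s Coast Runners bot used deep reinforcement learning on game pixels; the same reward-driven trial-and-error idea can be shown in a minimal tabular Q-learning sketch, where an agent on a one-dimensional track learns that moving right earns the reward at the final cell:

```python
import numpy as np

# Minimal tabular Q-learning sketch of the trial-and-error idea described
# above (OpenAI's actual bot used deep reinforcement learning on pixels).
# The agent walks a 1-D track of 6 cells; the reward sits at the last cell.
n_states, n_actions = 6, 2          # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore when uncertain (or with probability epsilon), else exploit.
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge Q toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy action per state; 1 (right) for non-terminal cells
```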

This video game training has real-world implications.

If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don’t stray from the task at hand.

Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. Spanning two of the world’s top A.I. labs — and two that hadn’t really worked together in the past — these algorithms are considered a notable step forward in A.I. safety research.

The article is here.

Monday, August 28, 2017

Maintaining cooperation in complex social dilemmas using deep reinforcement learning

Adam Lerer and Alexander Peysakhovich
(2017)

Abstract

In social dilemmas individuals face a temptation to increase their payoffs in the short run at a cost to the long run total welfare. Much is known about how cooperation can be stabilized in the simplest of such settings: repeated Prisoner’s Dilemma games. However, there is relatively little work on generalizing these insights to more complex situations. We start to fill this gap by showing how to use modern reinforcement learning methods to generalize a highly successful Prisoner’s Dilemma strategy: tit-for-tat. We construct artificial agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (following a bad turn try to return to mutual cooperation). We show both theoretically and experimentally that generalized tit-for-tat agents can maintain cooperation in more complex environments. In contrast, we show that employing purely reactive training techniques can lead to agents whose behavior results in socially inefficient outcomes.
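The paper’s agents use deep reinforcement learning in complex environments; the baseline strategy they generalize, tit-for-tat in the repeated Prisoner’s Dilemma, is itself only a few lines, sketched here with the standard payoff matrix:

```python
import random

# The baseline Lerer and Peysakhovich generalize: tit-for-tat in the repeated
# Prisoner's Dilemma. It is nice (cooperates first), provokable (defects after
# a defection), and forgiving (returns to cooperation when the opponent does).
C, D = "cooperate", "defect"
PAYOFFS = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_history):
    return C if not opponent_history else opponent_history[-1]

def random_agent(_opponent_history):
    return random.choice([C, D])

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))   # sustained cooperation
print("TFT vs random:", play(tit_for_tat, random_agent))
```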

The paper is here.

Monday, August 14, 2017

AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

Mark Wilson
Co.Design
Originally posted July 14, 2017

Here is an excerpt:

But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that clear paste, by the way, was labeled on a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase–because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better, with RGB values as opposed to other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amenable to machine learning.”

The article is here.

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”

The article is here.

Monday, July 17, 2017

The ethics of brain implants and ‘brainjacking’

Chelsey Ballarte
GeekWire
Originally published June 29, 2017

Here is an excerpt:

Fetz and the report’s other authors say we should regard advancements in machine learning and artificial intelligence with the same measure of caution we use when we consider accountability for self-driving cars and privacy for smartphones.

Fetz recalled the time security researchers proved they could hack into a Jeep Cherokee over the internet and disable it as it drove on the freeway. He said that in the world of prosthetics, a hacker could conceivably take over someone’s arm.

“The hack could override the signals,” he said. It could even override a veto, and that’s the danger. The strategy to head off that scenario would have to be to make sure the system can’t be influenced from the outside.

Study co-author John Donoghue, a director of the Wyss Center for Bio and Neuroengineering in Geneva, said these are just a few things we would have to think about if these mechanisms became the norm.

“We must carefully consider the consequences of living alongside semi-intelligent, brain-controlled machines, and we should be ready with mechanisms to ensure their safe and ethical use,” he said in a news release.

Donoghue said that as technology advances, we need to be ready to think about how our current laws would apply. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field,” he said.

The article is here.