Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Learning.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders' goals for the system.
  • Security: applying cyber security paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification (a minimal sketch follows this list).
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.
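For a concrete sense of what the Verification item above refers to, here is a minimal sketch (my own toy example in Python, not part of the Institute's map): a tiny hand-written controller is checked, state by state, against the formal specification that it never enters a hazard cell. The gridworld, policy, and spec are invented for illustration.

# Minimal verification sketch (illustrative only): check an implemented
# controller against the formal spec "the agent never enters the hazard cell"
# by simulating its deterministic trajectory and asserting the spec at each step.
GRID = 4                      # 4x4 gridworld; states are (x, y)
HAZARD = (2, 2)               # formal specification: this state must never be visited
GOAL = (3, 3)

def policy(state):
    """The implemented controller under test: move right, then up."""
    x, y = state
    return (x + 1, y) if x < GRID - 1 else (x, y + 1)

def verify(start=(0, 0), max_steps=50):
    """Walk the trajectory, asserting the spec at every visited state."""
    state, seen = start, set()
    for _ in range(max_steps):
        assert state != HAZARD, f"Spec violated: reached hazard at {state}"
        if state == GOAL or state in seen:
            return True
        seen.add(state)
        state = policy(state)
    return True

print("Specification holds:", verify())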

Saturday, October 6, 2018

Certainty Is Primarily Determined by Past Performance During Concept Learning

Louis Martí, Francis Mollica, Steven Piantadosi and Celeste Kidd
Open Mind: Discoveries in Cognitive Science
Posted Online August 16, 2018

Abstract

Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.

Download the pdf here.

Key Points: Ideally, learners would base their certainty on all the evidence they have accumulated, not just feedback about their most recent performance. In practice, though, this research suggests that feedback, rather than the accumulated evidence, drives a person's sense of certainty when learning new things, including how to tell right from wrong.
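To make the contrast concrete, here is a minimal sketch (my own illustration in Python, not the authors' code; the hypothesis space, trial setup, and accuracy window are invented) of an idealized learner whose certainty reflects all accumulated evidence, alongside a certainty signal that simply tracks recent performance feedback.

# Illustrative sketch: ideal-observer certainty (uses all observed data) versus
# performance-based certainty (uses only recent feedback) in a toy Boolean
# concept-learning task. Everything here is assumed for illustration.
import random

# Stimuli are pairs of binary features; the true concept is "feature 0 AND feature 1".
HYPOTHESES = {
    "f0":        lambda x: x[0] == 1,
    "f1":        lambda x: x[1] == 1,
    "f0_and_f1": lambda x: x[0] == 1 and x[1] == 1,
    "f0_or_f1":  lambda x: x[0] == 1 or x[1] == 1,
}
TRUE_CONCEPT = "f0_and_f1"

def bayesian_certainty(history):
    """Ideal-observer certainty: posterior mass on the best hypothesis, given all data."""
    posterior = {h: 1.0 for h in HYPOTHESES}              # uniform prior
    for x, label in history:
        for h, rule in HYPOTHESES.items():
            if rule(x) != label:                          # rule out inconsistent hypotheses
                posterior[h] = 0.0
    total = sum(posterior.values()) or 1.0
    return max(v / total for v in posterior.values())

def performance_certainty(feedback, window=4):
    """Certainty as recent accuracy: fraction correct over the last few trials."""
    recent = feedback[-window:]
    return sum(recent) / len(recent) if recent else 0.5

random.seed(0)
history, feedback = [], []
for trial in range(12):
    x = (random.randint(0, 1), random.randint(0, 1))
    label = HYPOTHESES[TRUE_CONCEPT](x)
    correct = random.random() < 0.7                       # noisy participant responses
    feedback.append(correct)
    history.append((x, label))
    print(trial, round(bayesian_certainty(history), 2), round(performance_certainty(feedback), 2))

Early on the two signals can diverge sharply: the ideal learner may already be nearly certain while a streak of lucky or unlucky responses pulls the performance-based signal around, which is the pattern the paper reports participants' confidence actually follows.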

Fascinating research, I hope I am interpreting it correctly.  I am not all that certain.

Saturday, September 22, 2018

The Business Case for Curiosity

Francesca Gino
Harvard Business Review
September-October 2018 Issue

Here are two excerpts:

The Benefits of Curiosity

New research reveals a wide range of benefits for organizations, leaders, and employees.

Fewer decision-making errors.

In my research I found that when our curiosity is triggered, we are less likely to fall prey to confirmation bias (looking for information that supports our beliefs rather than for evidence suggesting we are wrong) and to stereotyping people (making broad judgments, such as that women or minorities don’t make good leaders). Curiosity has these positive effects because it leads us to generate alternatives.

(cut)

It’s natural to concentrate on results, especially in the face of tough challenges. But focusing on learning is generally more beneficial to us and our organizations, as some landmark studies show. For example, when U.S. Air Force personnel were given a demanding goal for the number of planes to be landed in a set time frame, their performance decreased. Similarly, in a study led by Southern Methodist University’s Don VandeWalle, sales professionals who were naturally focused on performance goals, such as meeting their targets and being seen by colleagues as good at their jobs, did worse during a promotion of a product (a piece of medical equipment priced at about $5,400) than reps who were naturally focused on learning goals, such as exploring how to be a better salesperson. That cost them, because the company awarded a bonus of $300 for each unit sold.

A body of research demonstrates that framing work around learning goals (developing competence, acquiring skills, mastering new situations, and so on) rather than performance goals (hitting targets, proving our competence, impressing others) boosts motivation. And when motivated by learning goals, we acquire more-diverse skills, do better at work, get higher grades in college, do better on problem-solving tasks, and receive higher ratings after training. Unfortunately, organizations often prioritize performance goals.

The information is here.

Wednesday, August 8, 2018

The Road to Pseudoscientific Thinking

Julia Shaw
Scientific American
Originally published January 16, 2017

Here is the conclusion:

So, where to from here? Are there any cool, futuristic applications of such insights? According to McColeman, “I expect that category learning work from human learning will help computer vision moving forward, as we understand the regularities in the environment that people are picking up on. There’s still a lot of room for improvement in getting computer systems to notice the same things that people notice.” We need to help people, and computers, avoid being distracted by unimportant, attention-grabbing information.

The take-home message from this line of research seems to be: When fighting the post-truth war against pseudoscience and misinformation, make sure that important information is eye-catching and quickly understandable.

The information is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values (specifically fairness preferences) during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate the acquisition of these moral values is governed by a reinforcement mechanism, revealing it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.
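As a rough illustration of the kind of reinforcement mechanism the abstract describes, a generic delta-rule sketch (my own, not the authors' fitted model; the learning rate and feedback sequence are invented) updates an estimate of the receiver's punishment preference from trial-by-trial feedback.

# Illustrative delta-rule sketch: learn a receiver's preference for punishment
# from trial-by-trial feedback, moving the estimate toward each observation.
def update_preference(estimate, observed_punish, alpha=0.3):
    """Reinforcement-style update: estimate += learning_rate * prediction_error."""
    prediction_error = observed_punish - estimate
    return estimate + alpha * prediction_error

estimate = 0.2                      # start near one's own (nonpunitive) baseline
feedback = [1, 1, 0, 1, 1, 1]       # 1 = the receiver chose punishment on this trial
for trial, observed in enumerate(feedback, start=1):
    estimate = update_preference(estimate, observed)
    print(f"trial {trial}: estimated punishment preference = {estimate:.2f}")

Even this toy version shows how quickly exposure to a single punitive individual can shift the learned preference, consistent with the paper's point about the lability of fairness preferences.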

The research is here.

Monday, July 9, 2018

Learning from moral failure

Matthew Cashman & Fiery Cushman
In press: Becoming Someone New: Essays on Transformative Experience, Choice, and Change

Introduction

Pedagogical environments are often designed to minimize the chance of people acting wrongly; surely this is a sensible approach. But could it ever be useful to design pedagogical environments to permit, or even encourage, moral failure? If so, what are the circumstances where moral failure can be beneficial?  What types of moral failure are helpful for learning, and by what mechanisms? We consider the possibility that moral failure can be an especially effective tool in fostering learning. We also consider the obvious costs and potential risks of allowing or fostering moral failure. We conclude by suggesting research directions that would help to establish whether, when and how moral pedagogy might be facilitated by letting students learn from moral failure.

(cut)

Conclusion

Errors are an important source of learning, and educators often exploit this fact.  Failing helps to tune our sense of balance; Newtonian mechanics sticks better when we witness the failure of our folk physics. We consider the possibility that moral failure may also prompt especially strong or distinctive forms of learning.  First, and with greatest certainty, humans are designed to learn from moral failure through the feeling of guilt.  Second, and more speculatively, humans may be designed to experience moral failures by “testing limits” in a way that ultimately fosters an adaptive moral character.  Third—and highly speculatively—there may be ways to harness learning by moral failure in pedagogical contexts. Minimally, this might occur by imagination, observational learning, or the exploitation of spontaneous wrongful acts as “teachable moments”.

The book chapter is here.

Friday, June 29, 2018

The Surprising Power of Questions

Alison Wood Brooks and Leslie K. John
Harvard Business Review
May-June 2018 Issue

Here are two excerpts:

Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding. In Alison’s studies, for example, though people could accurately recall how many questions had been asked in their conversations, they didn’t intuit the link between questions and liking. Across four studies, in which participants were engaged in conversations themselves or read transcripts of others’ conversations, people tended not to realize that question asking would influence—or had influenced—the level of amity between the conversationalists.

The New Socratic Method

The first step in becoming a better questioner is simply to ask more questions. Of course, the sheer number of questions is not the only factor that influences the quality of a conversation: The type, tone, sequence, and framing also matter.

(cut)

Not all questions are created equal. Alison’s research, using human coding and machine learning, revealed four types of questions: introductory questions (“How are you?”), mirror questions (“I’m fine. How are you?”), full-switch questions (ones that change the topic entirely), and follow-up questions (ones that solicit more information). Although each type is abundant in natural conversation, follow-up questions seem to have special power. They signal to your conversation partner that you are listening, care, and want to know more. People interacting with a partner who asks lots of follow-up questions tend to feel respected and heard.

An unexpected benefit of follow-up questions is that they don’t require much thought or preparation—indeed, they seem to come naturally to interlocutors. In Alison’s studies, the people who were told to ask more questions used more follow-up questions than any other type without being instructed to do so.

The article is here.

This article clearly relates to psychotherapy communication.

Thursday, June 14, 2018

The Benefits of Admitting When You Don’t Know

Tenelle Porter
Behavioral Scientist
Originally published April 30, 2018

Here is an excerpt:

We found that the more intellectually humble students were more motivated to learn and more likely to use effective metacognitive strategies, like quizzing themselves to check their own understanding. They also ended the year with higher grades in math. We also found that the teachers, who hadn’t seen students’ intellectual humility questionnaires, rated the more intellectually humble students as more engaged in learning.

Next, we moved into the lab. Could temporarily boosting intellectual humility make people more willing to seek help in an area of intellectual weakness? We induced intellectual humility in half of our participants by having them read a brief article that described the benefits of admitting what you do not know. The other half read an article about the benefits of being very certain of what you know. We then measured their intellectual humility.

Those who read the benefits-of-humility article self-reported higher intellectual humility than those in the other group. What’s more, in a follow-up exercise 85 percent of these same participants sought extra help for an area of intellectual weakness. By contrast, only 65 percent of the participants who read about the benefits of being certain sought the extra help that they needed. This experiment provided evidence that enhancing intellectual humility has the potential to affect students’ actual learning behavior.

Together, our findings illustrate that intellectual humility is associated with a host of outcomes that we think are important for learning in school, and they suggest that boosting intellectual humility may have benefits for learning.

The article is here.

Monday, June 11, 2018

Can Morality Be Engineered In Artificial General Intelligence Systems?

Abhijeet Katte
Analytics India Magazine
Originally published May 10, 2018

Here is an excerpt:

This report Engineering Moral Agents – from Human Morality to Artificial Morality discusses challenges in engineering computational ethics and how mathematically oriented approaches to ethics are gaining traction among researchers from a wide background, including philosophy. AGI-focused research is evolving into the formalization of moral theories to act as a base for implementing moral reasoning in machines. For example, Kevin Baum from the University of Saarland talked about a project about teaching formal ethics to computer-science students wherein the group was involved in building a database of moral-dilemma examples from the literature to be used as benchmarks for implementing moral reasoning.

Another study, titled Towards Moral Autonomous Systems, from a group of European researchers states that there is a real need today for a functional system of ethical reasoning, as AI systems that function as part of our society are ready to be deployed. One of the suggestions is that every assisted-living AI system have a “Why did you do that?” button which, when pressed, causes the robot to explain why it carried out the previous action.

The information is here.

Tuesday, June 5, 2018

Norms and the Flexibility of Moral Action

Oriel FeldmanHall, Jae-Young Son, and Joseph Heffner
Preprint

ABSTRACT

A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms: fairness, altruism, trust, and cooperation, and consider how a single social instrument—reciprocity—underpins compliance to these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.
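As a toy illustration of the game-theoretic framing (my own sketch, not the authors' analysis; the endowment, multiplier, and reciprocity estimates are invented), a one-shot trust game shows how expected reciprocity can make norm-compliant trusting worthwhile.

# Illustrative trust-game sketch: trusting pays only when reciprocity is expected.
ENDOWMENT = 10          # investor's starting money
MULTIPLIER = 3          # the invested amount is tripled before reaching the trustee

def investor_payoff(invested, expected_return_share):
    """Expected payoff if the trustee returns a share of the tripled investment."""
    kept = ENDOWMENT - invested
    returned = MULTIPLIER * invested * expected_return_share
    return kept + returned

for share in (0.5, 0.0):
    best = max(range(ENDOWMENT + 1), key=lambda inv: investor_payoff(inv, share))
    print(f"expected return share {share}: best investment = {best}")

When the trustee is expected to return half, investing everything is best; when no reciprocity is expected, the trusting choice no longer pays, which is the basic logic of reciprocity underpinning the norms the review discusses.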

The research is here.

Sunday, May 20, 2018

Robot cognition requires machines that both think and feel

Luiz Pessoa
www.aeon.com
Originally published April 13, 2018

Here is an excerpt:

Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.

Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
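A minimal sketch of what such a bolted-on module might look like (purely illustrative; the situation features, urgency thresholds, and candidate fixes are invented) is below: an affective urgency signal caps how long the agent deliberates, so it settles for a good-enough repair instead of the perfect one.

# Illustrative sketch: an "emotion module" produces an urgency signal that
# limits deliberation time, biasing the agent toward fast, good-enough actions.
def urgency(situation):
    """Crude affective appraisal: hotter and more stranded means more urgent."""
    score = 0.0
    if situation.get("temperature_c", 20) > 40:
        score += 0.5
    if situation.get("vehicle_broken", False):
        score += 0.5
    return min(score, 1.0)

def choose_repair(situation, candidate_fixes):
    """Pick a repair; high urgency trades solution quality for speed."""
    time_budget = 10.0 * (1.0 - urgency(situation)) + 0.5   # hours available to act
    feasible = [f for f in candidate_fixes if f["hours"] <= time_budget]
    return max(feasible, key=lambda f: f["quality"]) if feasible else None

fixes = [
    {"name": "duct-tape the hose", "hours": 0.5, "quality": 0.4},
    {"name": "full engine overhaul", "hours": 8.0, "quality": 0.95},
]
print(choose_repair({"temperature_c": 45, "vehicle_broken": True}, fixes))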

The information is here.

Friendly note: I don't agree with everything I post.  In this case, I do not believe that AI needs emotions and feelings.  Rather, AI will have a different form of consciousness.  We don't need to try to reproduce our experiences exactly.  AI consciousness will likely have flaws, like we do.  We need to be able to manage AI given the limitations we create.

Tuesday, May 1, 2018

If we want moral AI, we need to teach it right from wrong

Emma Kendrew
Management Today
Originally posted April 3, 2018

Here is an excerpt:

Ethical constructs need to come before, not after, developing other skills. We teach children morality before maths. When they can be part of a social environment, we teach them language skills and reasoning. All of this happens before they enter a formal classroom.

Four out of five executives see AI working next to humans in their organisations as a co-worker within the next two years. It’s imperative that we learn to nurture AI to address many of the same challenges faced in human education: fostering an understanding of right and wrong, and what it means to behave responsibly.

AI Needs to Be Raised to Benefit Business and Society

AI is becoming smarter and more capable than ever before. With neural networks giving AI the ability to learn, the technology is evolving into an independent problem solver.

Consequently, we need to create learning-based AI that fosters ethics and behaves responsibly – imparting knowledge without bias, so that AI will be able to operate more effectively in the context of its situation. It will also be able to adapt to new requirements based on feedback from both its artificial and human peers. This feedback loop is an essential and fundamental part of human learning.

The information is here.

Monday, April 16, 2018

Psychotherapy Is 'The' Biological Treatment

Robert Berezin
Medscape.com
Originally posted March 16, 2018

Neuroscience surprisingly teaches us that not only is psychotherapy purely biological, but it is the only real biological treatment. It addresses the brain in the way it actually develops, matures, and operates. It follows the principles of evolutionary adaptation. It is consonant with genetics. And it specifically heals the problematic adaptations of the brain in precisely the ways that they evolved in the first place. Psychotherapy deactivates maladaptive brain mappings and fosters new and constructive pathways. Let me explain.

The operations of the brain are purely biological. The brain maps our experiences and memories through the linking of trillions of neuronal connections. These interconnected webs create larger circuits that map all throughout the architecture of the cortex. This generates high-level symbolic neuronal maps that take form as images in our consciousness. The play of consciousness is the highest level of symbolic form. It is a living theater of "image-ination," a representational world that consists of a cast of characters who relate together by feeling as well as scenarios, plots, set designs, and landscape.

As we adapt to our environment, the brain maps our emotional experience through cortical memory. This starts very early in life. If a baby is startled by a loud noise, his arms and legs will flail. His heart pumps adrenaline, and he cries. This "startle" maps a fight-or-flight response in his cortex, which is mapped through serotonin and cortisol. The baby is restored by his mother's holding. Her responsive repair once again re-establishes and maintains his well-being, which is mapped through oxytocin. These ongoing formative experiences of life are mapped into memory in precisely these two basic ways.

The article is here.

Monday, March 5, 2018

Donald Trump and the rise of tribal epistemology

David Roberts
Vox.com
Originally posted May 19, 2017 and still extremely important

Here is an excerpt:

Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

Now tribal epistemology has found its way to the White House.

Donald Trump and his team represent an assault on almost every American institution — they make no secret of their desire to “deconstruct the administrative state” — but their hostility toward the media is unique in its intensity.

It is Trump’s obsession and favorite target. He sees himself as waging a “running war” on the mainstream press, which his consigliere Steve Bannon calls “the opposition party.”

The article is here.

Saturday, February 17, 2018

Fantasy and Dread: The Demand for Information and the Consumption Utility of the Future

Ananda R. Ganguly and Joshua Tasoff
Management Science
Last revised: 1 Jun 2016

Abstract

We present evidence that intrinsic demand for information about the future is increasing in expected future consumption utility. In the first experiment, subjects may resolve a lottery now or later. The information is useless for decision making but the larger the reward, the more likely subjects are to pay to resolve the lottery early. In the second experiment subjects may pay to avoid being tested for HSV-1 and the more highly feared HSV-2. Subjects are three times more likely to avoid testing for HSV-2, suggesting that more aversive outcomes lead to more information avoidance. In a third experiment, subjects make choices about when to get tested for a fictional disease. Some subjects behave in a way consistent with expected utility theory and others exhibit greater delay of information for more severe diseases. We also find that information choice is correlated with positive affect, ambiguity aversion, and time preference as some theories predict.

The research is here.

Monday, February 5, 2018

A Robot Goes to College

Lindsay McKenzie
Inside Higher Ed
Originally published December 21, 2017

A robot called Bina48 has successfully taken a course in the philosophy of love at Notre Dame de Namur University, in California.

According to course instructor William Barry, associate professor of philosophy and director of the Mixed Reality Immersive Learning and Research Lab at NDNU, Bina48 is the world’s first socially advanced robot to complete a college course, a feat he described as “remarkable.” The robot took part in class discussions, gave a presentation with a student partner and participated in a debate with students from another institution.

(cut)

Barry said that working with Bina48 had been a valuable experience for him and his students. “We need to get over our existential fear about robots and see them as an opportunity,” he said. “If we approach artificial intelligence with a sense of the dignity and sacredness of all life, then we will produce robots with those same values,” he said.

The information is here.

Thursday, October 12, 2017

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
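For readers who want the formal statement: the 1999 information bottleneck objective seeks an encoding T of the input X that is as compressed as possible while staying predictive of the label Y, by minimizing I(X;T) - β·I(T;Y) over encodings p(t|x). Here is a minimal sketch computing those two terms for a toy discrete problem (the distributions and the candidate encoder are invented for illustration).

# Illustrative sketch of the information-bottleneck objective, I(X;T) - beta*I(T;Y),
# for a tiny discrete toy problem with a hand-picked deterministic encoder.
import math
from collections import defaultdict

def mutual_information(joint):
    """I(A;B) in bits for a joint distribution given as {(a, b): probability}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

# Toy data: X takes four values, Y marks whether X is "large", T compresses X to two bins.
p_x = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
y_of_x = {0: 0, 1: 0, 2: 1, 3: 1}           # the label the bottleneck should preserve
t_of_x = {0: 0, 1: 0, 2: 1, 3: 1}           # candidate bottleneck encoding of X

joint_xt = {(x, t_of_x[x]): p for x, p in p_x.items()}
joint_ty = defaultdict(float)
for x, p in p_x.items():
    joint_ty[(t_of_x[x], y_of_x[x])] += p

beta = 2.0
i_xt = mutual_information(joint_xt)
i_ty = mutual_information(dict(joint_ty))
print("I(X;T) =", i_xt, " I(T;Y) =", i_ty, " objective =", i_xt - beta * i_ty)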

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Saturday, September 30, 2017

What is New In Psychotherapy & Counseling in the Last 10 Years

Sam Knapp and I will be presenting this unique blend of small group learning, research, and lecture.

It has been estimated that the half-life of knowledge for a professional psychologist is nine years. Thus, professional psychologists need to work assiduously to keep up to date with changes in the field. This continuing education program strives to do that by having participants reflect on the most significant changes in the field in the last 10 years. To facilitate this reflection, the presenter offers his update on the psychotherapy and counseling literature of the last 10 years as an opportunity for participants to reflect on and consider their perceptions of the important developments in the field. The program focuses on changes in psychotherapy and counseling and does not consider changes in other fields, except as they influence psychotherapy or counseling. There will be considerable participant interaction.

Thursday, July 6, 2017

What the Rise of Sentient Robots Will Mean for Human Beings

George Musser
NBC
Originally posted June 19, 2017

Here is an excerpt:

“People expect that self-awareness is going to be this end game of artificial intelligence when really there are no scientific pursuits where you start at the end,” says Justin Hart, a computer scientist at the University of Texas. He and other researchers are already building machines with rudimentary minds. One robot wriggles like a newborn baby to understand its body. Another robot babbles about what it sees and cries when you hit it. Another sets off to explore its world on its own.

No one claims that robots have a rich inner experience — that they have pride in floors they've vacuumed or delight in the taste of 120-volt current. But robots can now exhibit some similar qualities to the human mind, including empathy, adaptability, and gumption.

Beyond it just being cool to create robots, researchers design these cybernetic creatures because they’re trying to fix flaws in machine-learning systems. Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.
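A tiny sketch of that "column A to column B" point (purely illustrative, with invented examples): a lookup table reproduces its training associations perfectly yet has no deeper logic to inspect and nothing to say about inputs it has never seen.

# Illustrative sketch: memorized input-output associations with no reasoning behind them.
memorized = {"cat photo": "cat", "dog photo": "dog", "stop sign": "stop sign"}

def classify(x):
    # Nothing to explain: either a stored association exists, or it does not.
    return memorized.get(x, "no idea")

print(classify("cat photo"))    # "cat"
print(classify("cat drawing"))  # "no idea" -- and no way to ask why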

Humans can also be hard to read. We spend an inordinate amount of time analyzing ourselves and others, and arguably, that’s the main role of our conscious minds. If machines had minds, they might not be so inscrutable. We could simply ask them why they did what they did.

The article is here.

Wednesday, June 14, 2017

Should We Outsource Our Moral Beliefs to Others?

Grace Boey
3 Quarks Daily
Originally posted May 29, 2017

Here is an excerpt:

Setting aside the worries above, there is one last matter that many philosophers take to be the most compelling candidate for the oddity of outsourcing our moral beliefs to others. As moral agents, we’re interested in more than just accumulating as many true moral beliefs as possible, such as ‘abortion is permissible’, or ‘killing animals for sport is wrong’. We also value things such as developing moral understanding, cultivating virtuous characters, having appropriate emotional reactions, and the like. Although moral deference might allow us to acquire bare moral knowledge from others, it doesn’t allow us to reflect or cultivate these other moral goods which are central to our moral identity.

Consider the value we place on understanding why we think our moral beliefs are true. Alison Hills notes that pure moral deference can’t get us to such moral understanding. When Bob defers unquestioningly to Sally’s judgment that abortion is morally permissible, he lacks an understanding of why this might be true. Amongst other things, this prevents Bob from being able to articulate, in his own words, the reasons behind this claim. This seems strange enough in itself, and Hills argues for at least two reasons why Bob’s situation is a bad one. For one, Bob’s lack of moral understanding prevents him from acting in a morally worthy way. Bob wouldn’t deserve any moral praise for, say, shutting down someone who harasses women who undergo the procedure.

Moreover, Bob’s lack of moral understanding seems to reflect a lack of good moral character, or virtue. Bob’s belief that ‘late-term abortion is permissible’ isn’t integrated with the rest of his thoughts, motivations, emotions, and decisions. Moral understanding, of course, isn’t all that matters for virtue and character. But philosophers who disagree with Hills on this point, like Robert Howell and Errol Lord, also note that moral deference reflects a lack of virtue and character in other ways, and can prevent the cultivation of these traits.

The article is here.