Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, September 10, 2019

Can Ethics Be Taught?

Peter Singer
Project Syndicate
Originally published August 7, 2019

Can taking a philosophy class – more specifically, a class in practical ethics – lead students to act more ethically?

Teachers of practical ethics have an obvious interest in the answer to that question. The answer should also matter to students thinking of taking a course in practical ethics. But the question also has broader philosophical significance, because the answer could shed light on the ancient and fundamental question of the role that reason plays in forming our ethical judgments and determining what we do.

Plato, in the Phaedrus, uses the metaphor of a chariot pulled by two horses; one represents rational and moral impulses, the other irrational passions or desires. The role of the charioteer is to make the horses work together as a team. Plato thinks that the soul should be a composite of our passions and our reason, but he also makes it clear that harmony is to be found under the supremacy of reason.

In the eighteenth century, David Hume argued that this picture of a struggle between reason and the passions is misleading. Reason on its own, he thought, cannot influence the will. Reason is, he famously wrote, “the slave of the passions.”

The info is here.

Tuesday, July 30, 2019

Is belief superiority justified by superior knowledge?

Michael P. Hall & Kaitlin T. Raimi
Journal of Experimental Social Psychology
Volume 76, May 2018, Pages 290-306

Abstract

Individuals expressing belief superiority—the belief that one's views are superior to other viewpoints—perceive themselves as better informed about that topic, but no research has verified whether this perception is justified. The present research examined whether people expressing belief superiority on four political issues demonstrated superior knowledge or superior knowledge-seeking behavior. Despite perceiving themselves as more knowledgeable, knowledge assessments revealed that the belief superior exhibited the greatest gaps between their perceived and actual knowledge. When given the opportunity to pursue additional information in that domain, belief-superior individuals frequently favored agreeable over disagreeable information, but also indicated awareness of this bias. Lastly, experimentally manipulated feedback about one's knowledge had some success in affecting belief superiority and resulting information-seeking behavior. Specifically, when belief superiority is lowered, people attend to information they may have previously regarded as inferior. Implications of unjustified belief superiority and biased information pursuit for political discourse are discussed.

The research is here.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.
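The Buolamwini and Gebru finding above is, at bottom, a disaggregation exercise: compute error rates per subgroup rather than in aggregate. Below is a minimal illustrative sketch of that audit step in Python, using invented toy data (the group names echo the study's categories, but the numbers are fabricated to mirror the reported 1% vs. 34% pattern):

```python
# Hypothetical illustration of a per-group error-rate audit, in the spirit
# of Buolamwini and Gebru's "Gender Shades" study. The records below are
# invented; the point is only the disaggregation step.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data mirroring the reported pattern: ~1% error on one subgroup,
# far higher error on another.
data = (
    [("lighter-skinned men", "M", "M")] * 99
    + [("lighter-skinned men", "M", "F")] * 1
    + [("darker-skinned women", "F", "F")] * 66
    + [("darker-skinned women", "F", "M")] * 34
)

rates = error_rates_by_group(data)
print(rates)
```

The substantive work in the real audit is assembling a balanced, well-labeled benchmark; the disaggregation itself is the easy part, which is exactly why aggregate accuracy figures can hide such large subgroup gaps.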

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games, simple robotics tasks, and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.
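The training setup described in this excerpt — ask humans many "better or worse" questions, then fit a model to those judgements — is commonly formalized as learning a scalar score from pairwise comparisons. Here is a minimal, hypothetical sketch of that idea (not the authors' actual system; the items, features, and comparisons are all invented):

```python
# Minimal sketch of learning a scalar "better or worse" score from pairwise
# human comparisons (a Bradley-Terry / logistic model over score differences).
# Everything here is a toy stand-in for illustration.

import math
import random

random.seed(0)

def score(w, x):
    # Linear score: the model's judgement of how "good" item x is.
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(comparisons, dim, lr=0.5, steps=2000):
    """comparisons: list of (winner_features, loser_features) pairs.
    Fits w so that P(winner beats loser) = sigmoid(score difference) is high."""
    w = [0.0] * dim
    for _ in range(steps):
        a, b = random.choice(comparisons)  # a was judged better than b
        p = sigmoid(score(w, a) - score(w, b))
        g = 1.0 - p  # gradient of the log-likelihood wrt the score difference
        for i in range(dim):
            w[i] += lr * g * (a[i] - b[i])
    return w

# Toy data: two "statements" with 2 features each; the human raters
# consistently prefer the first one.
good = (1.0, 0.0)
bad = (0.0, 1.0)
w = train([(good, bad)] * 10, dim=2)

# The learned model should now rank "good" above "bad".
print(score(w, good) > score(w, bad))
```

In practice the score function would be a large neural network and the comparisons would come from human labelers at scale, but the logistic loss over score differences is the same underlying idea.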

The info is here.

Saturday, December 22, 2018

Complexities for Psychiatry's Identity as a Medical Specialty

Mohammed Abouelleil Rashed
Kan Zaman Blog
Originally posted November 23, 2018

Here is an excerpt:

Doctors, researchers, governments, pharmaceutical companies, and patient groups each have their own interests and varying abilities to influence the construction of disease categories. This creates the possibility for disagreement over the legitimacy of certain conditions, something we can see playing out in the ongoing debates surrounding Chronic Fatigue Syndrome, a condition that “receives much more attention from its sufferers and their supporters than from the medical community” (Simon 2011: 91). And, in psychiatry, it has long been noted that some major pharmaceutical companies influence the construction of disorder in order to create a market for the psychotropic drugs they manufacture. From the perspective of medical anti-realism (in the constructivist form presented here), these influences are no longer seen as a hindrance to the supposedly objective, ‘natural kind’ status of disease categories, but as key factors involved in their construction. Thus, the lobbying power of the American Psychiatric Association, the vested interests of pharmaceutical companies, and the desire of psychiatrists as a group to maintain their prestige do not undermine the identity of psychiatry as a medical specialty; what they do is highlight the importance of emphasizing the interests of patient groups as well as utilitarian and economic criteria to counteract and respond to the other interests. Medical constructivism is not a uniquely psychiatric ontology, it is a medicine-wide ontology; it applies to schizophrenia as it does to hypertension, appendicitis, and heart disease. Owing to the normative complexity of psychiatry (outlined earlier) and to the fact that loss of freedom is often involved in psychiatric practice, the vested interests involved in psychiatry are more complex and harder to resolve than in many other medical specialties. But that in itself is not a hindrance to psychiatry’s identity as a medical speciality.

The info is here.

Tuesday, December 18, 2018

The Psychology of Political Polarization

Daniel Yudkin
The New York Times - Opinion
Originally posted November 17, 2018

Here is an excerpt:

Our analysis revealed seven groups in the American population, which we categorized as progressive activists, traditional liberals, passive liberals, politically disengaged, moderates, traditional conservatives and devoted conservatives. (Curious which group you belong to? Take our quiz to find out.) We found stark differences in attitudes across groups: For example, only 1 percent of progressive activists, but 97 percent of devoted conservatives, approve of Donald Trump’s performance as president.

Furthermore, our results revealed a connection between core beliefs and political views. Consider the core belief of how safe or threatening you feel the world to be. Forty-seven percent of devoted conservatives strongly believed that the world was becoming an increasingly dangerous place. By contrast, only 19 percent of progressive activists held this view.

In turn, those who viewed the world as a dangerous place were three times more likely to strongly support the building of a border wall between the United States and Mexico, and twice as likely to view Islam as a national threat. By contrast, those who did not see the world as dangerous were 50 percent more likely to believe that people were too worried about terrorism and 50 percent more likely to believe that immigration was good for America.

The info is here.

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Why Health Professionals Should Speak Out Against False Beliefs on the Internet

Joel T. Wu and Jennifer B. McCormick
AMA J Ethics. 2018;20(11):E1052-1058.
doi: 10.1001/amajethics.2018.1052.

Abstract

Broad dissemination and consumption of false or misleading health information, amplified by the internet, poses risks to public health and problems for both the health care enterprise and the government. In this article, we review government power for, and constitutional limits on, regulating health-related speech, particularly on the internet. We suggest that government regulation can only partially address false or misleading health information dissemination. Drawing on the American Medical Association’s Code of Medical Ethics, we argue that health care professionals have responsibilities to convey truthful information to patients, peers, and communities. Finally, we suggest that all health care professionals have essential roles in helping patients and fellow citizens obtain reliable, evidence-based health information.

Here is an excerpt:

We would suggest that health care professionals have an ethical obligation to correct false or misleading health information, share truthful health information, and direct people to reliable sources of health information within their communities and spheres of influence. After all, health and well-being are values shared by almost everyone. Principle V of the AMA Principles of Ethics states: “A physician shall continue to study, apply, and advance scientific knowledge, maintain a commitment to medical education, make relevant information available to patients, colleagues, and the public, obtain consultation, and use the talents of other health professionals when indicated” (italics added). And Principle VII states: “A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health” (italics added). Taken together, these principles articulate an ethical obligation to make relevant information available to the public to improve community and public health. In the modern information age, wherein the unconstrained and largely unregulated proliferation of false health information is enabled by the internet and medical knowledge is no longer privileged, these 2 principles have a special weight and relevance.

Monday, November 19, 2018

Why Facts Don’t Change Our Minds

James Clear
www.jamesclear.com
Undated

Facts Don't Change Our Minds. Friendship Does.

Convincing someone to change their mind is really the process of convincing them to change their tribe. If they abandon their beliefs, they run the risk of losing social ties. You can’t expect someone to change their mind if you take away their community too. You have to give them somewhere to go. Nobody wants their worldview torn apart if loneliness is the outcome.

The way to change people’s minds is to become friends with them, to integrate them into your tribe, to bring them into your circle. Now, they can change their beliefs without the risk of being abandoned socially.

The British philosopher Alain de Botton suggests that we simply share meals with those who disagree with us:
“Sitting down at a table with a group of strangers has the incomparable and odd benefit of making it a little more difficult to hate them with impunity. Prejudice and ethnic strife feed off abstraction. However, the proximity required by a meal – something about handing dishes around, unfurling napkins at the same moment, even asking a stranger to pass the salt – disrupts our ability to cling to the belief that the outsiders who wear unusual clothes and speak in distinctive accents deserve to be sent home or assaulted. For all the large-scale political solutions which have been proposed to salve ethnic conflict, there are few more effective ways to promote tolerance between suspicious neighbours than to force them to eat supper together.” 
Perhaps it is not difference, but distance that breeds tribalism and hostility. As proximity increases, so does understanding. I am reminded of Abraham Lincoln's quote, “I don't like that man. I must get to know him better.”

Facts don't change our minds. Friendship does.

Friday, November 9, 2018

Believing without evidence is always morally wrong

Francisco Mejia Uribe
aeon.co
Originally posted November 5, 2018

Here are two excerpts:

But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.

(cut)

Translating Clifford’s warning to our interconnected times, what he tells us is that careless believing turns us into easy prey for fake-news peddlers, conspiracy theorists and charlatans. And letting ourselves become hosts to these false beliefs is morally wrong because, as we have seen, the error cost for society can be devastating. Epistemic alertness is a much more precious virtue today than it ever was, since the need to sift through conflicting information has exponentially increased, and the risk of becoming a vessel of credulity is just a few taps of a smartphone away.

Clifford’s third and final argument as to why believing without evidence is morally wrong is that, in our capacity as communicators of belief, we have the moral responsibility not to pollute the well of collective knowledge. In Clifford’s time, the way in which our beliefs were woven into the ‘precious deposit’ of common knowledge was primarily through speech and writing. Because of this capacity to communicate, ‘our words, our phrases, our forms and processes and modes of thought’ become ‘common property’. Subverting this ‘heirloom’, as he called it, by adding false beliefs is immoral because everyone’s lives ultimately rely on this vital, shared resource.

The info is here.

Thursday, November 8, 2018

Code of Ethics Doesn’t Influence Decisions of Software Developers

Emerson Murphy-Hill, Justin Smith, & Matt Shipman
NC State Press Release
Originally released October 8, 2018

The world’s largest computing society, the Association for Computing Machinery (ACM), updated its code of ethics in July 2018 – but new research from North Carolina State University shows that the code of ethics does not appear to affect the decisions made by software developers.

“We applauded the decision to update the ACM code of ethics, but wanted to know whether it would actually make a difference,” says Emerson Murphy-Hill, co-author of a paper on the work and an adjunct associate professor of computer science at NC State.

“This issue is timely, given the tech-related ethics scandals in the news in recent years, such as when Volkswagen manipulated its technology that monitored vehicle emissions. And developers will continue to face work-related challenges that touch on ethical issues, such as the appropriate use of artificial intelligence.”

For the study, researchers developed 11 written scenarios involving ethical challenges, most of which were drawn from real-life ethical questions posted by users on the website Stack Overflow. The study included 105 U.S. software developers with five or more years of experience and 63 software engineering graduate students at a university. Half of the study participants were shown a copy of the ACM code of ethics, the other half were simply told that ethics are important as part of an introductory overview of the study. All study participants were then asked to read each scenario and state how they would respond to the scenario.

“There was no significant difference in the results – having people review the code of ethics beforehand did not appear to influence their responses,” Murphy-Hill says.

The press release is here.

The research is here.

Wednesday, October 31, 2018

Learning Others’ Political Views Reduces the Ability to Assess and Use Their Expertise in Nonpolitical Domains

Joseph Marks, Eloise Copland, Eleanor Loh, Cass R. Sunstein, & Tali Sharot
Harvard Public Law Working Paper No. 18-22 (April 13, 2018)

Abstract

On political questions, many people are especially likely to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. Here, we test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that person is less skilled in that domain than someone with dissimilar political views. Participants had multiple opportunities to learn about others’ (1) political opinions and (2) ability to categorize geometric shapes. They then decided to whom to turn for advice when solving an incentivized shape categorization task. We find that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. The results demonstrate that knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement. Our findings have implications for political polarization and social learning in the midst of political divisions.

You can download the paper here.

Probably a good resource to contemplate before discussing politics in psychotherapy.

Tuesday, October 2, 2018

Philosophy of Multicultures

Owen Flanagan
Philosophers Magazine
Originally published August 19, 2018

Here is an excerpt:

First, as I have been insisting, we live increasingly in multicultural, multiethnic, cosmopolitan worlds. Depending on one’s perspective these worlds are grand experiments in tolerant living, worlds in which prejudices break down; or they are fractured, wary, tense ethnic and religious cohousing projects; or they are melting pots where differences are thinned out and homogenised over time; or they are admixtures or collages of the best values, norms, and practices, the sociomoral equivalent of fine fusion cuisine or excellent world music that creates flavours or sounds from multiple fine sources; or on the other side, a blend of the worst of incommensurable value systems and practices, clunky and degenerate. It is good for ethicists to know more about people who are not from the North Atlantic (or its outposts). Or even if they are from the North Atlantic are not from elites or are not from “around here”. It matters how members of original displaced communities or people who were brought here or came here as chattel slaves or indentured workers or political refugees or for economic opportunity, have thought about virtues, values, moral psychology, normative ethics, and good human lives.

Second, most work in empirical moral psychology has been done on WEIRD people (Western Educated Industrialised Rich Democratic) and there is every reason to think WEIRD people are unrepresentative, possibly the most unrepresentative group imaginable, less representative than our ancestors when the ice melted at the end of the Pleistocene. It may be that the assumptions we make about the nature of persons and the human good in the footnotes-to-Plato lineage, and which seem secure, are in fact parochial and worth re-examining.

Third, the methods of genetics, empirical psychology, evolutionary psychology, and neuroscience get lots of attention recently in moral psychology, as if they can ground an entirely secular and neutral form of common life. But it would be a mistake to think that these sciences are superior to the wisdom of the ages in gaining deep knowledge about human nature and the human good or that they are robust enough to provide a picture of a good life.

The info is here.

Wednesday, July 25, 2018

Descartes was wrong: ‘a person is a person through other persons’

Abeba Birhane
aeon.co
Originally published April 7, 2017

Here is an excerpt:

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals.

The information is here.

Friday, July 20, 2018

How to Look Away

Megan Garber
The Atlantic
Originally published June 20, 2018

Here is an excerpt:

It is a dynamic—the democratic alchemy that converts seeing things into changing them—that the president and his surrogates have been objecting to, as they have defended their policy. They have been, this week (with notable absences), busily appearing on cable-news shows and giving disembodied quotes to news outlets, insisting that things aren’t as bad as they seem: that the images and the audio and the evidence are wrong not merely ontologically, but also emotionally. Don’t be duped, they are telling Americans. Your horror is incorrect. The tragedy is false. Your outrage about it, therefore, is false. Because, actually, the truth is so much more complicated than your easy emotions will allow you to believe. Actually, as Fox News host Laura Ingraham insists, the holding pens that seem to house horrors are “essentially summer camps.” And actually, as Fox & Friends’ Steve Doocy instructs, the pens are not cages so much as “walls” that have merely been “built … out of chain-link fences.” And actually, Kirstjen Nielsen wants you to remember, “We provide food, medical, education, all needs that the child requests.” And actually, too—do not be fooled by your own empathy, Tom Cotton warns—think of the child-smuggling. And of MS-13. And of sexual assault. And of soccer fields. There are so many reasons to look away, so many other situations more deserving of your outrage and your horror.

It is a neat rhetorical trick: the logic of not in my backyard, invoked not merely despite the fact that it is happening in our backyard, but because of it. With seed and sod that we ourselves have planted.

Yes, yes, there are tiny hands, reaching out for people who are not there … but those are not the point, these arguments insist and assure. To focus on those images—instead of seeing the system, a term that Nielsen and even Trump, a man not typically inclined to think in networked terms, have been invoking this week—is to miss the larger point.

The article is here.

Friday, July 6, 2018

People who think their opinions are superior to others are most prone to overestimating their relevant knowledge and ignoring chances to learn more

Tom Stafford
Blog Post: Research Digest
Originally posted May 31, 2018

Here is an excerpt:

Finally and more promisingly, the researchers found some evidence that belief superiority can be dented by feedback. If participants were told that people with beliefs like theirs tended to score poorly on topic knowledge, or if they were directly told that their score on the topic knowledge quiz was low, this not only reduced their belief superiority, it also caused them to seek out the kind of challenging information they had previously neglected in the headlines task (though the evidence for this behavioural effect was mixed).

The studies all involved participants accessed via Amazon’s Mechanical Turk, allowing the researchers to work with large samples of Americans for each experiment. Their findings mirror the well-known Dunning-Kruger effect – Kruger and Dunning showed that for domains such as judgments of grammar, humour or logic, the most skilled tend to underestimate their ability, while the least skilled overestimate it. Hall and Raimi’s research extends this to the realm of political opinions (where objective assessment of correctness is not available), showing that the belief your opinion is better than other people’s tends to be associated with overestimation of your relevant knowledge.

The article is here.

Monday, June 25, 2018

The primeval tribalism of American politics

The Economist
Originally posted May 24, 2018

Here is an excerpt:

The problem is structural: the root of tribalism is human nature, and the current state of American democracy is distinctly primeval. People have an urge to belong to exclusive groups and to affirm their membership by beating other groups. A new book by the political scientist Lilliana Mason, “Uncivil Agreement”, describes the psychology experiments that proved this. In one, members of randomly selected groups were told to share a pile of cash between their group and another. Given the choice of halving the sum, or of keeping a lesser portion for themselves and handing an even smaller portion to the other group, they preferred the second option. The common good meant nothing. Winning was all. This is the logic of American politics today.

How passion got strained

The main reason for that, Ms Mason argues, is a growing correlation between partisan and other important identities, concerning race, religion and so on. When the electorate was more jumbled (for example, when the parties had similar numbers of racists and smug elitists) most Americans had interests in both camps. That allowed people to float between, or at least to respect them. The electorate is now so sorted—with Republicans the party of less well-educated and socially conservative whites and Democrats for everyone else—as to provide little impediment to a deliciously self-affirming intertribal dust-up.

The article is here.

Monday, June 4, 2018

Human-sounding Google Assistant sparks ethics questions

The Straits Times
Originally published May 9, 2018

Here are some excerpts:

The new Google digital assistant converses so naturally it may seem like a real person.

The unveiling of the natural-sounding robo-assistant by the tech giant this week wowed some observers, but left others fretting over the ethics of how the human-seeming software might be used.

(cut)

The Duplex demonstration was quickly followed by debate over whether people answering phones should be told when they are speaking to human-sounding software and how the technology might be abused in the form of more convincing "robocalls" by marketers or political campaigns.

(cut)

Digital assistants making arrangements for people also raises the question of who is responsible for mistakes, such as a no-show or cancellation fee for an appointment set for the wrong time.

The information is here.

Sunday, May 20, 2018

Robot cognition requires machines that both think and feel

Luiz Pessoa
aeon.co
Originally published April 13, 2018

Here is an excerpt:

Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.

Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
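The urgency idea above can be made concrete with a toy sketch. The code below is not from Pessoa's article; it is a minimal illustration, with made-up function names and a made-up quality measure, of how an "urgency" signal might make an agent satisfice (stop at a good-enough fix) rather than keep optimizing:

```python
import random

def repair_quality(candidate):
    # Toy objective standing in for evaluating a candidate repair plan;
    # higher is better, with the best plan near candidate = 0.8.
    return 1.0 - abs(candidate - 0.8)

def choose_repair(urgency, budget=1000):
    """Anytime search where an emotion-like urgency signal trades
    solution quality for speed. High urgency (the desert-heat scenario)
    lowers the acceptance bar so the agent stops at a workable fix;
    low urgency keeps searching for something closer to optimal."""
    good_enough = 1.0 - urgency  # more urgency -> lower bar
    best, best_q = None, float("-inf")
    for _ in range(budget):
        candidate = random.random()
        q = repair_quality(candidate)
        if q > best_q:
            best, best_q = candidate, q
        if best_q >= good_enough:  # satisfice under pressure
            break
    return best, best_q

# A stressed agent settles quickly; a relaxed one optimizes longer.
quick_fix = choose_repair(urgency=0.9)
careful_fix = choose_repair(urgency=0.1)
```

In this sketch the urgency signal is just a number that rescales the stopping criterion, which is one simple way a bolted-on "emotion module" could influence an otherwise unchanged search process.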

The information is here.

Friendly note: I don't agree with everything I post.  In this case, I do not believe that AI needs emotions and feelings.  Rather, AI will have a different form of consciousness.  We don't need to try to reproduce our experiences exactly.  AI consciousness will likely have flaws, like we do.  We need to be able to manage AI given the limitations we create.

Friday, May 18, 2018

You don’t have a right to believe whatever you want to

Daniel DeNicola
aeon.co
Originally published May 14, 2018

Here is the conclusion:

Unfortunately, many people today seem to take great licence with the right to believe, flouting their responsibility. The wilful ignorance and false knowledge that are commonly defended by the assertion ‘I have a right to my belief’ do not meet James’s requirements. Consider those who believe that the lunar landings or the Sandy Hook school shooting were unreal, government-created dramas; that Barack Obama is Muslim; that the Earth is flat; or that climate change is a hoax. In such cases, the right to believe is proclaimed as a negative right; that is, its intent is to foreclose dialogue, to deflect all challenges; to enjoin others from interfering with one’s belief-commitment. The mind is closed, not open for learning. They might be ‘true believers’, but they are not believers in the truth.

Believing, like willing, seems fundamental to autonomy, the ultimate ground of one’s freedom. But, as Clifford also remarked: ‘No one man’s belief is in any case a private matter which concerns himself alone.’ Beliefs shape attitudes and motives, guide choices and actions. Believing and knowing are formed within an epistemic community, which also bears their effects. There is an ethic of believing, of acquiring, sustaining, and relinquishing beliefs – and that ethic both generates and limits our right to believe. If some beliefs are false, or morally repugnant, or irresponsible, some beliefs are also dangerous. And to those, we have no right.

The information is here.

Wednesday, May 16, 2018

Escape the Echo Chamber

C Thi Nguyen
www.medium.com
Originally posted April 12, 2018

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making — wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subverts the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission. That omission might be purposeful: we might be selectively avoiding contact with contrary views because, say, they make us uncomfortable. As social scientists tell us, we like to engage in selective exposure, seeking out information that confirms our own worldview. But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests. When we take networks built for social reasons and start using them as our information feeds, we tend to miss out on contrary views and run into exaggerated degrees of agreement.

The information is here.