Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Control.

Wednesday, April 4, 2018

Musk and Zuckerberg are fighting over whether we rule technology—or it rules us

Michael Coren
Quartz.com
Originally posted April 1, 2018

Here is an excerpt:

Musk wants to rein in AI, which he calls “a fundamental risk to the existence of human civilization.” Zuckerberg has dismissed such views, calling their proponents “naysayers.” During a Facebook live stream last July, he added, “In some ways I actually think it is pretty irresponsible.” Musk was quick to retort on Twitter. “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”

Both men’s views on the risks and rewards of technology are embodied in their respective companies. Zuckerberg has famously embraced the motto “Move fast and break things.” That served Facebook well as it exploded from a college campus experiment in 2004 to an aggregator of the internet for more than 2 billion users.

Facebook has treated the world as an infinite experiment, a game of low-stakes, high-volume tests that reliably generate profits, if not always progress. Zuckerberg’s main concern has been to deliver the fruits of digital technology to as many people as possible, as soon as possible. “I have pretty strong opinions on this,” Zuckerberg has said. “I am optimistic. I think you can build things and the world gets better.”

The information is here.

Tuesday, March 27, 2018

Neuroblame?

Stephen Rainey
Practical Ethics
Originally posted February 15, 2018

Here is an excerpt:

Rather than bio-mimetic prostheses, replacement limbs and so on, we can predict that technologies superior to the human body will be developed. Controlled by the brains of users, these enhancements will amount to extensions of the human body, and allow greater projection of human will and intentions in the world. We might imagine a cohort of brain controlled robots carrying out mundane tasks around the home, or buying groceries and so forth, all while the user gets on with something altogether more edifying (or does nothing at all but trigger and control their bots). Maybe a highly skilled, and well-practised, user could control legions of such bots, each carrying out separate tasks.

Before getting too carried away with this line of thought, it’s probably worth getting to the point. The issue worth looking at concerns what happens when things go wrong. It’s one thing to imagine someone sending out a neuro-controlled assassin-bot to kill a rival. Regardless of the unusual route taken, this would be a pretty simple case of causing harm. It would be akin to someone simply assassinating their rival with their own hands. However, it’s another thing to consider how sloppily framing the goal for a bot, such that it ends up causing harm, ought to be parsed.

The blog post is here.

Monday, March 19, 2018

#MeToo in Medicine: Waiting for the Reckoning

Elizabeth Chuck
NBC News
Originally posted February 21, 2018

Here is an excerpt:

Health care organizations make clear that they do not condone inappropriate behavior. The American Medical Association calls workplace sexual harassment unethical and specifically states in its Code of Medical Ethics that “Sexual relationships between medical supervisors and trainees are not acceptable, even if consensual.”

Westchester Medical Center Health Network, where Jenkins says she was sexually harassed as a resident, maintains that it has never tolerated workplace harassment. In a statement to NBC News, it said that the surgeon in question "has not worked at Westchester Medical Center for years and we have no record of a report."

"Our policies on harassment are strict, clear and presented to all employees consistently," it said.

"Mechanisms have been and continue to be in place to enable confidential reporting and allegations involving staff are investigated swiftly and thoroughly. Disciplinary actions are taken, as appropriate, after internal review," the statement said, adding that Westchester Medical Center's policies were "continuously examined and enhanced" and that reporting sexual harassment was encouraged through its confidential 24-hour hotline.

More than a hotline is needed, said many females in medicine, who want to see an overhaul of their entire profession — with men made aware of what's unacceptable and women looking out for one another and supporting each other.

The article is here.

Saturday, January 6, 2018

The Myth of Responsibility

Raoul Martinez
RSA.org
Originally posted December 7, 2017

Are we wholly responsible for our actions? We don’t choose our brains, our genetic inheritance, our circumstances, our milieu – so how much control do we really have over our lives? Philosopher Raoul Martinez argues that no one is truly blameworthy.  Our most visionary scientists, psychologists and philosophers have agreed that we have far less free will than we think, and yet most of society’s systems are structured around the opposite principle – that we are all on a level playing field, and we all get what we deserve.

This 4-minute video is worth watching.

Tuesday, December 12, 2017

Regulation of AI: Not If But When and How

Ben Loewenstein
RSA.org
Originally published November 21, 2017

Here is an excerpt:

Firstly, AI is already embedded in today’s world, albeit in infant form. Fully autonomous vehicles are not for sale yet but self-parking cars have been in the market for years. We already rely on biometric technology like facial recognition to grant us entry into a country and robots are giving us banking advice.

Secondly, there is broad consensus that controls are needed. For example, a report issued last December by the office of former US President Barack Obama concluded that “aggressive policy action” would be required in the event of large job losses due to automation to ensure it delivers prosperity. If the American Government is no longer a credible source of accurate information for you, take the word of heavyweights like Bill Gates and Elon Musk, both of whom have called for AI to be regulated.

Finally, the building blocks of AI regulation are already looming in the form of rules like the European Union’s General Data Protection Regulation, which will take effect next year. The UK government’s independent review’s recommendations are also likely to become government policy. This means that we could see a regime established where firms within the same sector share data with each other under prescribed governance structures in an effort to curb the monopolies big tech companies currently enjoy on consumer information.

The latter characterises the threat facing the AI industry: the prospect of lawmakers making bold decisions that alter the trajectory of innovation. This is not an exaggeration.

The article is here.

Saturday, December 9, 2017

The Root of All Cruelty?

Paul Bloom
The New Yorker
Originally published November 20, 2017

Here are two excerpts:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

(cut)

But “Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships” (Cambridge), by the anthropologist Alan Fiske and the psychologist Tage Rai, argues that these standard accounts often have it backward. In many instances, violence is neither a cold-blooded solution to a problem nor a failure of inhibition; most of all, it doesn’t entail a blindness to moral considerations. On the contrary, morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.

The article is here.

Friday, July 7, 2017

Is The Concern Artificial Intelligence — Or Autonomy?

Alva Noë
npr.org
Originally posted June 16, 2017

Here is an excerpt:

The big problem AI faces is not the intelligence part, really. It's the autonomy part. Finally, at the end of the day, even the smartest computers are tools, our tools — and their intentions are our intentions. Or, to the extent that we can speak of their intentions at all — for example of the intention of a self-driving car to avoid an obstacle — we have in mind something it was designed to do.

Even the most primitive organism, in contrast, at least seems to have a kind of autonomy. It really has its own interests. Light. Food. Survival. Life.

The danger of our growing dependence on technologies is not really that we are losing our natural autonomy in quite this sense. Our needs are still our needs. But it is a loss of autonomy, nonetheless. Even auto mechanics these days rely on diagnostic computers and, in the era of self-driving cars, will any of us still know how to drive? Think what would happen if we lost electricity, or if the grid were really and truly hacked? We'd be thrown back into the 19th century, as Dennett says. But in many ways, things would be worse. We'd be thrown back — but without the knowledge and know-how that made it possible for our ancestors to thrive in the olden days.

I don't think this fear is unrealistic. But we need to put it in context.

The article is here.

Tuesday, February 28, 2017

Google Doesn't Want to Accidentally Make Skynet, So It's Creating an AI Off Switch

Darren Orf
Gizmodo
Originally posted June 3, 2016

There are two unmistakable sides to the debate concerning the future of artificial intelligence. In the “boom” corner are companies like Google, Facebook, Amazon, and Microsoft aggressively investing in technology to make AI systems smarter and smarter. And in the “doom” corner are prominent thinkers like Elon Musk and Stephen Hawking who’ve said that AI is like “summoning the demon.”

Now, one of the most advanced AI outfits, Google’s DeepMind, is taking safety measures in case human operators need to “take control of a robot that is misbehaving [that] may lead to irreversible consequences,” which I assume includes but is not limited to killing all humans. However, this paper doesn’t get nearly so apocalyptic and keeps examples simple, like intelligent robots working in a factory.

The article is here.

Monday, October 17, 2016

Affective nudging

Eric Schliesser
Digressions and Impressions blog
Originally published September 30, 2016

Here is an excerpt:

Nudging is paternalist. But by making exit easy and avoidance cheap, nudges are thought to avoid the worst moral and political problems of paternalism and (other) manipulative practices. (What counts as a significant change of economic incentives is, of course, very contestable, but we leave that aside here.) Nudges may, in fact, sometimes enhance autonomy and freedom, but the way Sunstein & Thaler define 'nudge', one may also nudge for immoral ends. Social engineering does not question the ends.

The modern administrative state is, however, not just a rule-following Weberian bureaucracy where the interaction between state and citizen is governed by the exchange of forms, information, and money. Many civil servants, including ones with very distinct expertise (physicians, psychologists, lawyers, engineers, social service workers, therapists, teachers, correction officers, etc.) enter quite intimately into the lives of lots of citizens. Increasingly (within the context of new public management), government professionals and hired consultants are given broad autonomy to meet certain targets (quotas, budget or volume numbers, etc.) within constrained parameters. (So, for example, a physician is not just a care provider, but also somebody who can control costs.) Bureaucratic management and the political class are agnostic about how the desired outcomes are met, as long as it is legal, efficient and does not generate bad media or adverse political push-back.

The blog post is here.

Tuesday, August 30, 2016

Here Are the Feels That Make Internet Things Go Viral

By Drake Baer
The Science of Us
Originally posted May 25, 2016

Here is an excerpt:

Across the two languages, the researchers found, the stories that were most widely shared were high in “dominance,” or the feeling of being in control. Posts that make you feel happy or inspired are high in dominance, the research says, while stories that make you feel sad are disempowering. (This is also why “21 Pictures That Will Restore Your Faith In Humanity” is perhaps the finest BuzzFeed post of all, and like all quality vintages, it only gets better with age).

While dominance led to sharing in this data set, arousal (the feeling of being upset or excited, as indicated by giving angry affective feedback) predicted commenting. So if a story makes you really upset — as perhaps may be exploited by a presidential candidate or two — you’ll be more likely to comment, providing further explanation for why internet comments tend toward viciousness.

Wednesday, May 25, 2016

Should we be afraid of AI?

by Luciano Floridi
Aeon
Originally posted May 9, 2016

Here is an excerpt:

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AI’s predictive power work for freedom and autonomy. Marketing products, influencing behaviours, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that ‘we shape our buildings and afterwards our buildings shape us’. This applies to the infosphere and its smart technologies as well.

The article is here.

Thursday, May 5, 2016

We are zombies rewriting our mental history to feel in control

By Matthew Hutson
Daily News
Originally posted April 15, 2016

Here is an excerpt:

Another possibility, one Bear prefers, is that we misperceive the order of events in the moment due to inherent limitations in perceptual processing. To put it another way, it’s not that our brain is trying to trick us into believing we are in control – just that it struggles to process a rapid sequence of events in the correct order.

Such findings may also imply that many of the choices we believe we make only appear to be signs of free will after the fact.

Everyday examples of this “postdictive illusion of choice” abound. You only think that you consciously decided to scratch an itch, make a deft football play, or blurt out an insult, when really you’re just taking credit for reflexive actions.

The article is here.

Thursday, April 21, 2016

The Science of Choosing Wisely — Overcoming the Therapeutic Illusion

David Casarett
New England Journal of Medicine 2016; 374:1203-1205
March 31, 2016
DOI: 10.1056/NEJMp1516803

Here are two excerpts:

The success of such efforts, however, may be limited by the tendency of human beings to overestimate the effects of their actions. Psychologists call this phenomenon, which is based on our tendency to infer causality where none exists, the “illusion of control.” In medicine, it may be called the “therapeutic illusion” (a label first applied in 1978 to “the unjustified enthusiasm for treatment on the part of both patients and doctors”). When physicians believe that their actions or tools are more effective than they actually are, the results can be unnecessary and costly care. Therefore, I think that efforts to promote more rational decision making will need to address this illusion directly.

(cut)

The outcome of virtually all medical decisions is at least partly outside the physician’s control, and random chance can encourage physicians to embrace mistaken beliefs about causality. For instance, joint lavage is overused for relief of osteoarthritis-related knee pain, despite a recommendation against it from the American Academy of Orthopaedic Surgeons. Knee pain tends to wax and wane, so many patients report improvement in symptoms after lavage, and it’s natural to conclude that the intervention was effective.

The article is here.

Wednesday, January 13, 2016

The A.I. Anxiety

by Joel Achenbach
The Washington Post
Originally published December 27, 2015

Here is an excerpt:

But the discussion reflects a broader truth: We live in an age in which machine intelligence has become a part of daily life. Computers fly planes and soon will drive cars. Computer algorithms anticipate our needs and decide which advertisements to show us. Machines create news stories without human intervention. Machines can recognize your face in a crowd.

New technologies — including genetic engineering and nanotechnology — are cascading upon one another and converging. We don’t know how this will play out. But some of the most serious thinkers on Earth worry about potential hazards — and wonder whether we remain fully in control of our inventions.

The article is here.

Editor's Note: What if a form of consciousness emerges from AI? There are many reasons, aside from anthropomorphic bias, to expect a form of consciousness to surface from highly complex, synthetic artificial intelligence. What then? This concern is not addressed in the article.

Friday, January 1, 2016

Why we forgive what can’t be controlled

Martin, J.W. & Cushman, F.A.
Cognition, 147, 133-143

Abstract

Volitional control matters greatly for moral judgment: Coerced agents receive less condemnation for outcomes they cause. Less well understood is the psychological basis of this effect. Control may influence perceptions of intent for the outcome that occurs or perceptions of causal role in that outcome. Here, we show that an agent who chooses to do the right thing but accidentally causes a bad outcome receives relatively more punishment than an agent who is forced to do the “right” thing but causes a bad outcome. Thus, having good intentions ironically leads to greater condemnation. This surprising effect does not depend upon perceptions of increased intent for harm to occur, but rather upon perceptions of causal role in the obtained outcome. Further, this effect is specific to punishment: An agent who chooses to do the right thing is rated as having better moral character than a forced agent, even though they cause the same bad outcome. These results clarify how, when and why control influences moral judgment.

The article is here.

Friday, December 11, 2015

Why do we intuitively believe we have free will?

By Tom Stafford
BBC.com
Originally published August 7, 2015

It is perhaps the most famous experiment in neuroscience. In 1983, Benjamin Libet sparked controversy with his demonstration that our sense of free will may be an illusion, a controversy that has only increased ever since.

Libet’s experiment has three vital components: a choice, a measure of brain activity and a clock.

The choice is to move either your left or right arm. In the original version of the experiment this is by flicking your wrist; in some versions of the experiment it is to raise your left or right finger. Libet’s participants were instructed to “let the urge [to move] appear on its own at any time without any pre-planning or concentration on when to act”. The precise time at which you move is recorded from the muscles of your arm.

The article is here.

Sunday, November 22, 2015

A Driverless Car Dystopia? Technology and the Lives We Want to Live

By Anthony Painter
RSA
Originally published November 6, 2015

Here is an excerpt:

There needs to be a bigger public debate about the type of society we want, how technology can help us, and what institutions we need to help us all interface with the changes we are likely to see. Could block-chain, bitcoin and digital currencies help us spread new forms of collective ownership and give us more power over the public services we use? How do we find a sweet-spot where consumers and workers – and we are both – share equally in the benefits of the ‘sharing economy’? Is a universal Basic Income a necessary foundation for a world of varying frequency and diverse work arrangements and obligations to others such as elderly relatives and our kids? What do we want to be private and what are we happy to share with companies or the state? Should this be a security conversation or bigger question of ethics? How should we plan transport, housing, work and services around our needs and the types of lives we want to live in communities that have human worth?

The entire article is here.

Thursday, November 12, 2015

Neuroscientific Prediction and Free Will: What do ordinary people think?

By Gregg D. Caruso
Psychology Today Blog
Originally published October 26, 2015

Some theorists have argued that our knowledge of the brain will one day advance to the point where the perfect neuroscientific prediction of all human choices is theoretically possible. Whether or not such prediction ever becomes a reality, this possibility raises an interesting philosophical question: Would such perfect neuroscientific prediction be compatible with the existence of free will? Philosophers have long debated such questions. The historical debate between compatibilists and incompatibilists, for example, has centered on whether determinism and free will can be reconciled. Determinism is the thesis that every event or action, including human action, is the inevitable result of preceding events and actions and the laws of nature. The question of perfect neuro-prediction is just a more recent expression of this much older debate. While philosophers have their arguments for the compatibility or incompatibility of free will and determinism (or perfect neuroscientific prediction), they also often claim that their intuitions are in general agreement with commonsense judgments. To know whether this is true, however, we first need to know what ordinary folk think about these matters. Fortunately, recent research in psychology and experimental philosophy has begun to shed some light on this.

The entire article is here.

Monday, November 2, 2015

Does Disbelief in Free Will Increase Anti-Social Behavior?

By Gregg Caruso
Psychology Today Blog
Originally published October 16, 2015

Here is an excerpt:

Rather than defend free will skepticism, however, I would like to examine an important practical question: What if we came to disbelieve in free will and basic desert moral responsibility? What would this mean for our interpersonal relationships, society, morality, meaning, and the law? What would it do to our standing as human beings? Would it cause nihilism and despair as some maintain? Or perhaps increase anti-social behavior as some recent studies have suggested (more on this in a moment)? Or would it rather have a humanizing effect on our practices and policies, freeing us from the negative effects of free will belief? These questions are of profound pragmatic importance and should be of interest independent of the metaphysical debate over free will. As public proclamations of skepticism continue to rise, and as the media continues to run headlines proclaiming that free will is an illusion, we need to ask what effects this will have on the general public and what the responsibility is of professionals.

In recent years a small industry has actually grown up around precisely these questions. In the skeptical community, for example, a number of different positions have been developed and advanced—including Saul Smilansky’s illusionism, Thomas Nadelhoffer’s disillusionism, Shaun Nichols’ anti-revolution, and the optimistic skepticism of Derk Pereboom, Bruce Waller, and myself.

The entire article is here.

Tuesday, July 14, 2015

Consciousness has less control than believed

San Francisco State University
Press Release
Originally released June 23, 2015

Consciousness — the internal dialogue that seems to govern one's thoughts and actions — is far less powerful than people believe, serving as a passive conduit rather than an active force that exerts control, according to a new theory proposed by an SF State researcher.

Associate Professor of Psychology Ezequiel Morsella's "Passive Frame Theory" suggests that the conscious mind is like an interpreter helping speakers of different languages communicate.

"The interpreter presents the information but is not the one making any arguments or acting upon the knowledge that is shared," Morsella said. "Similarly, the information we perceive in our consciousness is not created by conscious processes, nor is it reacted to by conscious processes. Consciousness is the middle-man, and it doesn't do as much work as you think."

Morsella and his coauthors' groundbreaking theory, published online on June 22 by the journal Behavioral and Brain Sciences, contradicts intuitive beliefs about human consciousness and the notion of self.

The entire press release is here.