Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Control. Show all posts

Monday, July 15, 2019

How the concept of forgiveness is used to gaslight women

Sophie King
Medium.com
Originally posted June 13, 2019

I’m not against the concept of forgiveness; I’ve chosen to forgive people countless times. However, what I’m definitely against is pressuring people to forgive and shaming them if they don’t. I’ve found there’s a lot of stigma attached to those who choose not to forgive, especially if you’re a woman.

Women who don’t forgive are assumed to be “scorned” or “bitter and twisted”. The stereotypes that surround “unforgiving” women are used to gaslight them.

When women express that they’re upset or angry (and justifiably so) as a result of being hurt, people dismiss them as “bitter”, and the validity of their feelings and experiences is questioned.

She isn’t psychologically traumatised because she’s been wronged, she’s just a “scorned woman”, “got an axe to grind”, “holding a grudge” and “unable to move on”. The fault lies with her, not the perpetrator because she won’t “let it go” and “get over it”. She’s not the victim, she’s bringing it on herself by not forgiving. The blame is shifted from the wrongdoer to the victim.

The info is here.

Saturday, June 1, 2019

Does It Matter Whether You or Your Brain Did It?

Uri Maoz, K. R. Sita, J. J. A. van Boxtel, and L. Mudrik
Front. Psychol., 30 April 2019
https://doi.org/10.3389/fpsyg.2019.00950

Abstract

Despite progress in cognitive neuroscience, we are still far from understanding the relations between the brain and the conscious self. We previously suggested that some neuroscientific texts that attempt to clarify these relations may in fact make them more difficult to understand. Such texts—ranging from popular science to high-impact scientific publications—position the brain and the conscious self as two independent, interacting subjects, capable of possessing opposite psychological states. We termed such writing ‘Double Subject Fallacy’ (DSF). We further suggested that such DSF language, besides being conceptually confusing and reflecting dualistic intuitions, might affect people’s conceptions of moral responsibility, lessening the perception of guilt over actions. Here, we empirically investigated this proposition with a series of three experiments (pilot and two preregistered replications). Subjects were presented with moral scenarios where the defendant was either (1) clearly guilty, (2) ambiguous, or (3) clearly innocent, while the accompanying neuroscientific evidence about the defendant was presented using DSF or non-DSF language. Subjects were instructed to rate the defendant’s guilt in all experiments. Subjects rated the defendant in the clearly guilty scenario as guiltier than in the two other scenarios, and the defendant in the ambiguously described scenario as guiltier than in the innocent scenario, as expected. In Experiment 1 (N = 609), an effect was further found for DSF language in the expected direction: subjects rated the defendant less guilty when the neuroscientific evidence was described using DSF language, across all levels of culpability. However, this effect did not replicate in Experiment 2 (N = 1794), which focused on a different moral scenario, nor in Experiment 3 (N = 1810), which was an exact replication of Experiment 1. Bayesian analyses yielded strong evidence against the existence of an effect of DSF language on the perception of guilt. Our results thus challenge the claim that DSF language affects subjects’ moral judgments. They further demonstrate the importance of good scientific practice, including preregistration and—most critically—replication, to avoid reaching erroneous conclusions based on false-positive results.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in: Caruso, G. (ed.), June 2013, Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003), or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuroscientific findings as irrelevant to their views on free will. They do not believe that deterministic processes are incompatible with free will to begin with, and hence do not understand why deterministic processes in our brains would be (see Sie and Wouters 2008, 2010). The latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it), and the main question discussed is whether that assumption is compatible with determinism. In this chapter we want to steer clear of this ‘metaphysical’ discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the ‘pragmatic sentimentalist’ approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it assumes that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take toward one another. This is why it is called ‘sentimentalist.’ In this approach, the practical purposes of the concept of free will take center stage. This is why it is called ‘pragmatist.’

A draft of the book chapter can be downloaded here.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Tuesday, April 9, 2019

N.J. approves bill giving terminally ill people the right to end their lives

Susan Livio
www.nj.com
Originally posted March 25, 2019

New Jersey is poised to become the eighth state to allow doctors to write a lethal prescription for terminally ill patients who want to end their lives.

The state Assembly voted 41-33 with four abstentions Monday to pass the “Medical Aid in Dying for the Terminally Ill Act." Minutes later, the state Senate approved the bill 21-16.

Gov. Phil Murphy later issued a statement saying he would sign the measure into law.

“Allowing terminally ill and dying residents the dignity to make end-of-life decisions according to their own consciences is the right thing to do," the Democratic governor said. "I look forward to signing this legislation into law.”

The measure (A1504) would take effect four months after it is signed.

Susan Boyce, 55, of Rumson, smiled and wept after the final vote.

“I’ve been working on this quite a while,” said Boyce, who has been diagnosed with a terminal autoimmune disease, Alpha-1 antitrypsin deficiency, and needs an oxygen tank to breathe.

The info is here.

Saturday, February 16, 2019

There’s No Such Thing as Free Will

Stephen Cave
The Atlantic
Originally published June 2016

Here is an excerpt:

What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.

This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?

(cut)

Determinism not only undermines blame, Smilansky argues; it also undermines praise. Imagine I do risk my life by jumping into enemy territory to perform a daring mission. Afterward, people will say that I had no choice, that my feats were merely, in Smilansky’s phrase, “an unfolding of the given,” and therefore hardly praiseworthy. And just as undermining blame would remove an obstacle to acting wickedly, so undermining praise would remove an incentive to do good. Our heroes would seem less inspiring, he argues, our achievements less noteworthy, and soon we would sink into decadence and despondency.

The info is here.

Wednesday, January 16, 2019

What Is the Right to Privacy?

Andrei Marmor
(2015) Philosophy & Public Affairs, 43, 1, pp 3-26

The right to privacy is a curious kind of right. Most people think that we have a general right to privacy. But when you look at the kind of issues that lawyers and philosophers label as concerns about privacy, you see widely differing views about the scope of the right and the kind of cases that fall under its purview. Consequently, it has become difficult to articulate the underlying interest that the right to privacy is there to protect—so much so that some philosophers have come to doubt that there is any underlying interest protected by it. According to Judith Thomson, for example, privacy is a cluster of derivative rights, some of them derived from rights to own or use your property, others from the right to your person or your right to decide what to do with your body, and so on. Thomson’s position starts from a sound observation, and I will begin by explaining why. The conclusion I will reach, however, is very different. I will argue that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. I will strive to show that this underlying interest justifies the right to privacy and explains its proper scope, though the scope of the right might be narrower, and fuzzier in its boundaries, than is commonly understood.

The info is here.

Monday, December 31, 2018

How free is our will?

Kevin Mitchell
Wiring The Brain Blog
Originally posted November 25, 2018

Here is an excerpt:

Being free – to my mind at least – doesn’t mean making decisions for no reasons, it means making them for your reasons. Indeed, I would argue that this is exactly what is required to allow any kind of continuity of the self. If you were just doing things on a whim all the time, what would it mean to be you? We accrue our habits and beliefs and intentions and goals over our lifetime, and they collectively affect how actions are suggested and evaluated.

Whether we are conscious of that is another question. Most of our reasons for doing things are tacit and implicit – they’ve been wired into our nervous systems without our even being aware of them. But they’re still part of us ­– you could argue they’re precisely what makes us us. Even if most of that decision-making happens subconsciously, it’s still you doing it.

Ultimately, whether you think you have free will or not may depend less on the definition of “free will” and more on the definition of “you”. If you identify just as the president – the decider-in-chief – then maybe you’ll be dismayed at how little control you seem to have or how rarely you really exercise it. (Not never, but maybe less often than your ego might like to think).

But that brings us back to a very dualist position, identifying you with only your conscious mind, as if it can somehow be separated from all the underlying workings of your brain. Perhaps it’s more appropriate to think that you really comprise all of the machinery of government, even the bits that the president never sees or is not even aware exist.

The info is here.

Thursday, December 27, 2018

You Snooze, You Lose: Insurers Make The Old Adage Literally True

Justin Volz
ProPublica
Originally published November 21, 2018

Here is an excerpt:

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

The info is here.

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Friday, November 30, 2018

To regulate AI we need new laws, not just a code of ethics

Paul Chadwick
The Guardian
Originally posted October 28, 2018

Here is an excerpt:

To Nemitz, “the absence of such framing for the internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call”.

Nemitz identifies four bases of digital power which create and then reinforce its unhealthy concentration in too few hands: lots of money, which means influence; control of “infrastructures of public discourse”; collection of personal data and profiling of people; and domination of investment in AI, most of it a “black box” not open to public scrutiny.

The key question is which of the challenges of AI “can be safely and with good conscience left to ethics” and which need law. Nemitz sees much that needs law.

In an argument both biting and sophisticated, Nemitz sketches a regulatory framework for AI that will seem to some like the GDPR on steroids.

Among several large claims, Nemitz argues that “not regulating these all pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships.”

The info is here.

Tuesday, November 20, 2018

How tech employees are pushing Silicon Valley to put ethics before profit

Alexia Fernández Campbell
vox.com
Originally published October 18, 2018

The chorus of tech workers demanding American tech companies put ethics before profit is growing louder.

In recent days, employees at Google and Microsoft have been pressuring company executives to drop bids for a $10 billion contract to provide cloud computing services to the Department of Defense.

As part of the contract, known as JEDI, engineers would build cloud storage for military data; there are few public details about what else it would entail. But one thing is clear: The project would involve using artificial intelligence to make the US military a lot deadlier.

“This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said at a March industry event about JEDI.

Thousands of Google employees reportedly pressured the company to drop its bid for the project, and many had said they would refuse to work on it. They pointed out that such work may violate the company’s new ethics policy on the use of artificial intelligence. Google has pledged not to use AI to make “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” a policy company employees had pushed for.

The info is here.

Thursday, November 1, 2018

How much control do you really have over your actions?

Michael Price
Sciencemag.org
Originally posted October 1, 2018

Here is an excerpt:

Philosophers have wrestled with questions of free will—that is, whether we are active drivers or passive observers of our decisions—for millennia. Neuroscientists tap-dance around it, asking instead why most of us feel like we have free will. They do this by looking at rare cases in which people seem to have lost it.

Patients with both alien limb syndrome and akinetic mutism have lesions in their brains, but there doesn’t seem to be a consistent pattern. So Darby and his colleagues turned to a relatively new technique known as lesion network mapping.

They combed the literature for brain imaging studies of both types of patients and mapped out all of their reported brain lesions. Then they plotted those lesions onto maps of brain regions that reliably activate together at the same time, better known as brain networks. Although the individual lesions in patients with the rare movement disorders appeared to occur without rhyme or reason, the team found, those seemingly arbitrary locations fell within distinct brain networks.

The researchers compared their results with those from people who lost some voluntary movement after receiving temporary brain stimulation, which uses low-voltage electrodes or targeted magnetic fields to temporarily “knock offline” brain regions.

The networks that caused loss of voluntary movement or agency in those studies matched Darby and colleagues’ new lesion networks. This suggests these networks are involved in voluntary movement and the perception that we’re in control of, and responsible for, our actions, the researchers report today in the Proceedings of the National Academy of Sciences.

The info is here.

Friday, October 19, 2018

If Humility Is So Important, Why Are Leaders So Arrogant?

Bill Taylor
Harvard Business Review
Originally published October 15, 2018

Here is an excerpt:

With all due modesty, I’d offer a few answers to these vexing questions. For one thing, too many leaders think they can’t be humble and ambitious at the same time. One of the great benefits of becoming CEO of a company, head of a business unit, or leader of a team, the prevailing logic goes, is that you’re finally in charge of making things happen and delivering results. Edgar Schein, professor emeritus at MIT Sloan School of Management, and an expert on leadership and culture, once asked a group of his students what it means to be promoted to the rank of manager. “They said without hesitation, ‘It means I can now tell others what to do.’” Those are the roots of the know-it-all style of leadership. “Deep down, many of us believe that if you are not winning, you are losing,” Schein warns. The “tacit assumption” among executives “is that life is fundamentally and always a competition” — between companies, but also between individuals within companies. That’s not exactly a mindset that recognizes the virtues of humility.

In reality, of course, humility and ambition need not be at odds. Indeed, humility in the service of ambition is the most effective and sustainable mindset for leaders who aspire to do big things in a world filled with huge unknowns. Years ago, a group of HR professionals at IBM embraced a term to capture this mindset. The most effective leaders, they argued, exuded a sense of “humbition,” which they defined as “one part humility and one part ambition.” We “notice that by far the lion’s share of world-changing luminaries are humble people,” they wrote. “They focus on the work, not themselves. They seek success — they are ambitious — but they are humbled when it arrives…They feel lucky, not all-powerful.”

The info is here.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Forbes.com
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders' goals for the system.
  • Security: applying cyber security paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification.
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Friday, September 28, 2018

A Debate Over ‘Rational Suicide’

Paula Span
The New York Times
Originally posted August 31, 2018

Here is an excerpt:

Is suicide by older adults ever a rational choice? It’s a topic many older people discuss among themselves, quietly or loudly — and one that physicians increasingly encounter, too. Yet most have scant training or experience in how to respond, said Dr. Meera Balasubramaniam, a geriatric psychiatrist at the New York University School of Medicine.

“I found myself coming across individuals who were very old, doing well, and shared that they wanted to end their lives at some point,” said Dr. Balasubramaniam. “So many of our patients are confronting this in their heads.”

She has not taken a position on whether suicide can be rational — her views are “evolving,” she said. But hoping to generate more medical discussion, she and a co-editor explored the issue in a 2017 anthology, “Rational Suicide in the Elderly,” and she revisited it recently in an article in the Journal of the American Geriatrics Society.

The Hastings Center, the ethics institute in Garrison, N.Y., also devoted much of its latest Hastings Center Report to a debate over “voluntary death” to forestall dementia.

Every part of this idea, including the very phrase “rational suicide,” remains intensely controversial. (Let’s leave aside the related but separate issue of physician aid in dying, currently legal in seven states and the District of Columbia, which applies only to mentally competent people likely to die of a terminal illness within six months.)

The info is here.

Thursday, September 27, 2018

Superstition predicts perception of illusory control

Oren Griffiths, Noor Shehabi, Robin A. Murphy, and Mike E. Le Pelley
British Journal of Psychology
First published August 24, 2018

Abstract

Superstitions are common, yet we have little understanding of the cognitive mechanisms that bring them about. This study used a laboratory‐based analogue for superstitious beliefs that involved people monitoring the relationship between undertaking an action (pressing a button) and an outcome occurring (a light illuminating). The task was arranged such that there was no objective contingency between pressing the button and the light illuminating – the light was just as likely to illuminate whether the button was pressed or not. Nevertheless, most people rated the causal relationship between the button press and the light illuminating to be moderately positive, demonstrating an illusion of causality. This study found that the magnitude of this illusion was predicted by people's level of endorsement of common superstitious beliefs (measured using a novel Superstitious Beliefs Questionnaire), but was not associated with mood variables or their self‐rated locus of control. This observation is consistent with a more general individual difference or bias to overweight conjunctive events over disjunctive events during causal reasoning in those with a propensity for superstitious beliefs.
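The zero-contingency arrangement described in the abstract can be illustrated with a short simulation (a sketch only; the trial counts and probabilities here are illustrative assumptions, not the study's actual parameters). Objective contingency is commonly measured as delta-P, the difference between P(outcome | action) and P(outcome | no action), which in this design sits near zero:

```python
import random

random.seed(0)

def simulate_trials(n_trials=1000, p_light=0.5, p_press=0.5):
    # Each trial: the participant may press the button; the light
    # illuminates with the same probability whether or not the button
    # was pressed, i.e., there is no objective contingency.
    records = []
    for _ in range(n_trials):
        pressed = random.random() < p_press
        light = random.random() < p_light  # independent of the press
        records.append((pressed, light))
    return records

def delta_p(records):
    # Objective contingency: P(light | press) - P(light | no press)
    press = [light for pressed, light in records if pressed]
    no_press = [light for pressed, light in records if not pressed]
    return sum(press) / len(press) - sum(no_press) / len(no_press)

trials = simulate_trials()
print(f"Delta-P: {delta_p(trials):+.3f}")  # typically close to zero
```

Even when delta-P is near zero, participants in tasks like this often rate the causal relationship as moderately positive; the study's finding is that endorsement of everyday superstitions predicted the size of that illusion.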

The research is here.

Saturday, June 30, 2018

The Ethics of Ceding More Power To Machines

Brhmie Balaram
www.theRSA.org
Originally posted May 31, 2018

Here is an excerpt:

This gets to the crux of people’s fears about AI – there is a perception that we may be ceding too much power to AI, regardless of the reality. The public’s concerns seem to echo that of the academic Virginia Eubanks, who argues that the fundamental problem with these systems is that they enable the ethical distance needed “to make inhuman choices about who gets food and who starves, who has housing and who remains homeless, whose family stays together and whose is broken up by the state.”

Yet, these systems also have the potential to increase the fairness of outcomes if they are able to improve accuracy and minimise biases. They may also increase efficiency and savings for both the organisation that deploys the systems, as well as the people subject to the decision.

These are the sorts of trade-offs that a public dialogue, and in particular, a long-form deliberative process like a citizens’ jury, can address.

The info is here.

Tuesday, June 19, 2018

British Public Fears the Day When "Computer Says No"

Jasper Hamill
The Metro
Originally published May 31, 2018

Governments and tech companies risk a popular backlash against artificial intelligence (AI) unless they open up about how it will be used, according to a new report.

A poll conducted for the Royal Society of Arts (RSA) revealed widespread concern that AI will create a ‘Computer Says No’ culture, in which crucial decisions are made automatically without consideration of individual circumstances.

If the public feels ‘victimised or disempowered’ by intelligent machines, they may resist the introduction of new technologies, even if it holds back progress which could benefit them, the report warned.

Fear of inflexible and unfeeling automatic decision-making was a greater concern than robots taking humans’ jobs among those taking part in a survey by pollsters YouGov for the RSA.

The information is here.

Friday, June 1, 2018

The toxic legacy of Canada's CIA brainwashing experiments

Ashifa Kassam
The Guardian
Originally published May 3, 2018

Here is an excerpt:

Patients were subjected to high-voltage electroshock therapy several times a day, forced into drug-induced sleeps that could last months and injected with megadoses of LSD.

After reducing them to a childlike state – at times stripping them of basic skills such as how to dress themselves or tie their shoes – Cameron would attempt to reprogram them by bombarding them with recorded messages for up to 16 hours at a time. First came negative messages about their inadequacies, followed by positive ones, in some cases repeated up to half a million times.

“He couldn’t get his patients to listen to them enough so he put speakers in football helmets and locked them on their heads,” said Johnson. “They were going crazy banging their heads into walls, so he then figured he could put them in a drug induced coma and play the tapes as long as he needed.”

Along with intensive bouts of electroshock therapy, Johnson’s grandmother was given injections of LSD on 14 occasions. “She said that made her feel like her bones were melting. She would say: ‘I don’t want these,’” said Johnson. “And the doctors and nurses would say to her: ‘You’re a bad wife, you’re a bad mother. If you wanted to get better, you would do this for your family. Think about your daughter.’”

The information is here.