Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Help. Show all posts

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist,"
Sam also guesses that the text format is one with which young people are most comfortable.
"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.
She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.

Monday, January 15, 2024

The man helping prevent suicide with Google adverts

Looi, M.-K. (2023).
BMJ.

Here are two excerpts:

Always online

A big challenge in suicide prevention is that people often experience suicidal crises at times when they’re away from clinical facilities, says Nick Allen, professor of psychology at the University of Oregon.

“It’s often in the middle of the night, so one of the great challenges is how can we be there for someone when they really need us, which is not necessarily when they’re engaged with clinical services.”

Telemedicine and other digital interventions came to prominence at the height of the pandemic, but “there’s an app for that” does not always match the patient in need at the right time. Says Onie, “The missing link is using existing infrastructure and habits to meet them where they are.”

Where they are is the internet. “When people are going through suicidal crises they often turn to the internet for information. And Google has the lion’s share of the search business at the moment,” says Allen, who studies digital mental health interventions (and has had grants from Google for his research).

Google’s core business stores information from searches, using it to fuel a highly effective advertising network in which companies pay to have links to their websites and products appear prominently in the “sponsored” sections at the top of all relevant search results.

Google holds 27.5% of the digital advertising market, earning around $224bn from search advertising alone in 2022.

If it knows enough about us to serve up relevant adverts, then it knows when a user is displaying red flag behaviour for suicide. Onie set out to harness this.

“It’s about the ‘attention economy,’” he says, “There’s so much information, there’s so much noise. How do we break through and make sure that the first thing that people see when they’re contemplating suicide is something that could be helpful?”

(cut)

At its peak the campaign was responding to over 6000 searches a day for each country. And the researchers saw a high level of response.

Typically, most advertising campaigns see low engagement in terms of clickthrough rates (the proportion of people who actually click on an advert when they see it). Industry benchmarks consider 3.17% a success. The Black Dog campaign saw 5.15% in Australia and 4.02% in the US. Preliminary data show Indonesia to be even higher—as much as 12%.

Because this is an advertising campaign, another measure is cost effectiveness. Google charges the advertiser per click on its advert, so the more engaged an audience is (and thus the more relevant Google considers the advert to be for a given user), the higher the charge. Black Dog's campaign saw so many users seeing the ads, and so many clicking through, that the cost came in below the industry average of $2.69 a click: $2.06 for the US campaign. Australia was higher than the industry average, but early data indicate Indonesia was delivering $0.86 a click.
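
For readers who want to see the arithmetic behind these two metrics, here is a minimal sketch in Python. The benchmark and campaign figures are the ones quoted above; the raw impression, click, and spend counts in the worked example are hypothetical, chosen only so the formulas reproduce the reported US numbers.

# Minimal sketch of the two advertising metrics discussed above:
# click-through rate (CTR) and cost per click (CPC). The benchmarks and
# campaign CTRs are from the article; the raw counts are hypothetical.

def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR as a percentage: clicks / impressions * 100."""
    return 100.0 * clicks / impressions

def cost_per_click(total_spend: float, clicks: int) -> float:
    """CPC: total advertising spend divided by number of clicks."""
    return total_spend / clicks

# Hypothetical worked example: 1,000,000 impressions, 40,200 clicks,
# $82,800 total spend -> CTR of 4.02% and CPC of about $2.06,
# matching the US figures quoted above.
print(click_through_rate(40_200, 1_000_000))        # 4.02
print(round(cost_per_click(82_800, 40_200), 2))     # 2.06

# Reported campaign CTRs versus the 3.17% industry benchmark.
for country, ctr in {"Australia": 5.15, "US": 4.02, "Indonesia": 12.0}.items():
    print(f"{country}: {ctr}% ({'above' if ctr > 3.17 else 'below'} benchmark)")
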

-------
I could not find a free PDF. The link above works, but is paywalled. Sorry. :(

Thursday, December 21, 2023

Chatbot therapy is risky. It’s also not useless

A.W. Ohlheiser
vox.com
Originally posted 14 Dec 23

Here is an excerpt:

So what are the risks of chatbot therapy?

There are some obvious concerns here: Privacy is a big one. That includes the handling of the training data used to make generative AI tools better at mimicking therapy as well as the privacy of the users who end up disclosing sensitive medical information to a chatbot while seeking help. There are also the biases built into many of these systems as they stand today, which often reflect and reinforce the larger systemic inequalities that already exist in society.

But the biggest risk of chatbot therapy — whether it’s poorly conceived or provided by software that was not designed for mental health — is that it could hurt people by not providing good support and care. Therapy is more than a chat transcript and a set of suggestions. Honos-Webb, who uses generative AI tools like ChatGPT to organize her thoughts while writing articles on ADHD but not for her practice as a therapist, noted that therapists pick up on a lot of cues and nuances that AI is not prepared to catch.

Stade, in her working paper, notes that while large language models have a “promising” capacity to conduct some of the skills needed for psychotherapy, there’s a difference between “simulating therapy skills” and “implementing them effectively.” She noted specific concerns around how these systems might handle complex cases, including those involving suicidal thoughts, substance abuse, or specific life events.

Honos-Webb gave the example of an older woman who recently developed an eating disorder. One level of treatment might focus specifically on that behavior: If someone isn’t eating, what might help them eat? But a good therapist will pick up on more of that. Over time, that therapist and patient might make the connection between recent life events: Maybe the patient’s husband recently retired. She’s angry because suddenly he’s home all the time, taking up her space.

“So much of therapy is being responsive to emerging context, what you’re seeing, what you’re noticing,” Honos-Webb explained. And the effectiveness of that work is directly tied to the developing relationship between therapist and patient.


Here is my take:

The promise of AI in mental health care dances on a delicate knife's edge. Chatbot therapy, with its alluring accessibility and anonymity, tempts us with a quick fix for the ever-growing burden of mental illness. Yet, as with any powerful tool, its potential can be both a balm and a poison, demanding a wise touch for its ethical wielding.

On the one hand, imagine a world where everyone, regardless of location or circumstance, can find a non-judgmental ear, a gentle guide through the labyrinth of their own minds. Chatbots, tireless and endlessly patient, could offer a first step of support, a bridge to human therapy when needed. In the hushed hours of isolation, they could remind us we're not alone, providing solace and fostering resilience.

But let us not be lulled into a false sense of ease. Technology, however sophisticated, lacks the warmth of human connection, the nuanced understanding of a shared gaze, the empathy that breathes life into words. We must remember that a chatbot can never replace the irreplaceable – the human relationship at the heart of genuine healing.

Therefore, our embrace of chatbot therapy must be tempered with prudence. We must ensure adequate safeguards, preventing them from masquerading as a panacea, neglecting the complex needs of human beings. Transparency is key – users must be aware of the limitations, of the algorithms whispering behind the chatbot's words. Above all, let us never sacrifice the sacred space of therapy for the cold efficiency of code.

Chatbot therapy can be a bridge, a stepping stone, but never the destination. Let us use technology with wisdom, acknowledging its potential good while holding fast to the irreplaceable value of human connection in the intricate tapestry of healing. Only then can we, as mental health professionals, navigate the ethical tightrope and make technology safe and effective, when and where possible.

Wednesday, November 8, 2023

Everything you need to know about artificial wombs

Cassandra Willyard
MIT Technology Review
Originally posted 29 SEPT 23

Here is an excerpt:

What is an artificial womb?

An artificial womb is an experimental medical device intended to provide a womblike environment for extremely premature infants. In most of the technologies, the infant would float in a clear “biobag,” surrounded by fluid. The idea is that preemies could spend a few weeks continuing to develop in this device after birth, so that “when they’re transitioned from the device, they’re more capable of surviving and having fewer complications with conventional treatment,” says George Mychaliska, a pediatric surgeon at the University of Michigan.

One of the main limiting factors for survival in extremely premature babies is lung development. Rather than breathing air, babies in an artificial womb would have their lungs filled with lab-made amniotic fluid that mimics the fluid they would have had in utero. Neonatologists would insert tubes into blood vessels in the umbilical cord so that the infant’s blood could cycle through an artificial lung to pick up oxygen.

The device closest to being ready to be tested in humans, called the EXTrauterine Environment for Newborn Development, or EXTEND, encases the baby in a container filled with lab-made amniotic fluid. It was invented by Alan Flake and Marcus Davey at the Children’s Hospital of Philadelphia and is being developed by Vitara Biomedical.


Here is my take:

Artificial wombs are experimental medical devices that aim to provide a womb-like environment for extremely premature infants. The technology is still in its early stages of development, but it has the potential to save the lives of many babies who would otherwise not survive.

Overall, artificial wombs are a promising new technology with the potential to revolutionize the care of premature infants. However, more research is needed to fully understand the risks and benefits of the technology before it can be widely used.

Here are some additional ethical concerns that have been raised about artificial wombs:
  • The potential for artificial wombs to be used to create designer babies or to prolong the lives of fetuses with severe disabilities.
  • The potential for artificial wombs to be used to exploit or traffick babies.
  • The potential for artificial wombs to exacerbate existing social and economic inequalities.
It is important to have a public conversation about these ethical concerns before artificial wombs become widely available. We need to develop clear guidelines for how the technology should be used and ensure that it is used in a way that benefits all of society.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Wednesday, April 26, 2023

A Prosociality Paradox: How Miscalibrated Social Cognition Creates a Misplaced Barrier to Prosocial Action

Epley, N., Kumar, A., Dungan, J., &
Echelbarger, M. (2023).
Current Directions in Psychological Science,
32(1), 33–41. 
https://doi.org/10.1177/09637214221128016

Abstract

Behaving prosocially can increase well-being among both those performing a prosocial act and those receiving it, and yet people may experience some reluctance to engage in direct prosocial actions. We review emerging evidence suggesting that miscalibrated social cognition may create a psychological barrier that keeps people from behaving as prosocially as would be optimal for both their own and others’ well-being. Across a variety of interpersonal behaviors, those performing prosocial actions tend to underestimate how positively their recipients will respond. These miscalibrated expectations stem partly from a divergence in perspectives, such that prosocial actors attend relatively more to the competence of their actions, whereas recipients attend relatively more to the warmth conveyed. Failing to fully appreciate the positive impact of prosociality on others may keep people from behaving more prosocially in their daily lives, to the detriment of both their own and others’ well-being.

Undervaluing Prosociality

It may not be accidental that William James (1896/1920) named “the craving to be appreciated” as “the deepest principle in human nature” only after receiving a gift of appreciation that he described as “the first time anyone ever treated me so kindly.” “I now perceive one immense omission in my [Principles of Psychology],” he wrote regarding the importance of appreciation. “I left it out altogether . . . because I had never had it gratified till now” (p. 33).

James does not seem to be unique in failing to recognize the positive impact that appreciation can have on recipients. In one experiment (Kumar & Epley, 2018, Experiment 1), MBA students thought of a person they felt grateful to, but to whom they had not yet expressed their appreciation. The students, whom we refer to as expressers, wrote a gratitude letter to this person and then reported how they expected the recipient would feel upon receiving it: how surprised the recipient would be to receive the letter, how surprised the recipient would be about the content, how negative or positive the recipient would feel, and how awkward the recipient would feel. Expressers willing to do so then provided recipients’ email addresses so the recipients could be contacted to report how they actually felt receiving their letter. Although expressers recognized that the recipients would feel positive, they did not recognize just how positive the recipients would feel: Expressers underestimated how surprised the recipients would be to receive the letter, how surprised the recipients would be by its content, and how positive the recipients would feel, whereas they overestimated how awkward the recipients would feel. Table 1 shows the robustness of these results across an additional published experiment and 17 subsequent replications (see Fig. 1 for overall results; full details are available at OSF: osf.io/7wndj/). Expressing gratitude has a reliably more positive impact on recipients than expressers expect.

Conclusion

How much people genuinely care about others has been debated for centuries. In summarizing the purely selfish viewpoint endorsed by another author, Thomas Jefferson (1854/2011) wrote, “I gather from his other works that he adopts the principle of Hobbes, that justice is founded in contract solely, and does not result from the construction of man.” Jefferson felt differently: “I believe, on the contrary, that it is instinct, and innate, that the moral sense is as much a part of our constitution as that of feeling, seeing, or hearing . . . that every human mind feels pleasure in doing good to another” (p. 39).

Such debates will never be settled by simply observing human behavior because prosociality is not simply produced by automatic “instinct” or “innate” disposition, but rather can be produced by complicated social cognition (Miller, 1999). Jefferson’s belief that people feel “pleasure in doing good to another” is now well supported by empirical evidence. However, the evidence we reviewed here suggests that people may avoid experiencing this pleasure not because they do not want to be good to others, but because they underestimate just how positively others will react to the good being done to them.

Sunday, March 12, 2023

Growth of AI in mental health raises fears of its ability to run wild

Sabrina Moreno
Axios.com
Originally posted 9 MAR 23

Here's how it begins:

The rise of AI in mental health care has providers and researchers increasingly concerned over whether glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.

Why it matters: As the Pew Research Center recently found, there's widespread skepticism over whether using AI to diagnose and treat conditions will complicate a worsening mental health crisis.

  • Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
  • The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.

What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.

  • The technology is being deployed to analyze patient conversations and sift through text messages to make recommendations based on what we tell doctors.
  • It's also predicting opioid addiction risk, detecting mental health disorders like depression and could soon design drugs to treat opioid use disorder.

Driving the news: The fear is now concentrated around whether the technology is beginning to cross a line and make clinical decisions, and what the Food and Drug Administration is doing to prevent safety risks to patients.

  • Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
  • Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.

Saturday, March 4, 2023

Divide and Rule? Why Ethical Proliferation is not so Wrong for Technology Ethics.

Llorca Albareda, J., Rueda, J.
Philos. Technol. 36, 10 (2023).
https://doi.org/10.1007/s13347-023-00609-8

Abstract

Although the map of technology ethics is expanding, the growing subdomains within it may raise misgivings. In a recent and very interesting article, Sætra and Danaher have argued that the current dynamic of sub-specialization is harmful to the ethics of technology. In this commentary, we offer three reasons to diminish their concern about ethical proliferation. We argue first that the problem of demarcation is weakened if we attend to other sub-disciplines of technology ethics not mentioned by these authors. We claim secondly that the logic of sub-specializations is less problematic if one does adopt mixed models (combining internalist and externalist approaches) in applied ethics. We finally reject that clarity and distinction are necessary conditions for defining sub-fields within ethics of technology, defending the porosity and constructive nature of ethical disciplines.

Conclusion

Sætra and Danaher have initiated a necessary discussion about the increasing proliferation of neighboring sub-disciplines in technology ethics. Although we do not share their concern, we believe that this debate should continue in the future. Just as some subfields have recently been consolidated, others may do the same in the coming decades. The possible emergence of novel domain-specific technology ethics (say Virtual Reality Ethics) suggests that future proposals will point to as yet unknown positive and negative aspects of this ethical proliferation. In part, the creation of new sub-disciplines will depend on the increasing social prominence of other emerging and future technologies. The map of technology ethics thus includes uncharted waters and new subdomains to discover. This makes ethics of technology a fascinatingly lively and constantly evolving field of knowledge.

Monday, February 27, 2023

Domestic violence hotline calls will soon be invisible on your family phone plan

Ashley Belanger
ARS Technica
Originally published 17 FEB 23

Today, the Federal Communications Commission proposed rules to implement the Safe Connections Act, which President Joe Biden signed into law last December. Advocates consider the law a landmark move to stop tech abuse. Under the law, mobile service providers are required to help survivors of domestic abuse and sexual violence access resources and maintain critical lines of communication with friends, family, and support organizations.

Under the proposed rules, mobile service providers are required to separate a survivor’s line from a shared or family plan within two business days. Service providers must also “omit records of calls or text messages to certain hotlines from consumer-facing call and text message logs,” so that abusers cannot see when survivors are seeking help. Additionally, the FCC plans to launch a “Lifeline” program, providing emergency communications support for up to six months for survivors who can’t afford to pay for mobile services.

“These proposed rules would help survivors obtain separate service lines from shared accounts that include their abusers, protect the privacy of calls made by survivors to domestic abuse hotlines, and provide support for survivors who suffer from financial hardship through our affordability programs,” the FCC’s announcement said.

The FCC has already consulted with tech associations and domestic violence support organizations in forming the proposed rules, but now the public has a chance to comment. An FCC spokesperson confirmed to Ars that comments are open now. Crystal Justice, the National Domestic Violence Hotline’s chief external affairs officer, told Ars that it’s critical for survivors to submit comments to help inform FCC rules with their experiences of tech abuse.

To express comments, visit this link and fill in “22-238” as the proceeding number. That will auto-populate a field that says “Supporting Survivors of Domestic and Sexual Violence.”

FCC’s spokesperson told Ars that the initial public comment period will be open for 30 days after the rules are published in the federal register, and then a reply comment period will be open for 30 days after the initial comment period ends.

Sunday, February 19, 2023

Organs in exchange for freedom? Bill raises ethical concerns

Steve LeBlanc
Associated Press
Originally published 8 FEB 23

BOSTON (AP) — A proposal to let Massachusetts prisoners donate organs and bone marrow to shave time off their sentence is raising profound ethical and legal questions about putting undue pressure on inmates desperate for freedom.

The bill — which faces a steep climb in the Massachusetts Statehouse — may run afoul of federal law, which bars the sale of human organs or acquiring one for “valuable consideration.”

It also raises questions about whether and how prisons would be able to appropriately care for the health of inmates who go under the knife to give up organs. Critics are calling the idea coercive and dehumanizing even as one of the bill’s sponsors is framing the measure as a response to the over-incarceration of Hispanic and Black people and the need for matching donors in those communities.

“The bill reads like something from a dystopian novel,” said Kevin Ring, president of Families Against Mandatory Minimums, a Washington, D.C.-based criminal justice reform advocacy group. “Promoting organ donation is good. Reducing excessive prison terms is also good. Tying the two together is perverse.”

(cut)

Offering reduced sentences in exchange for organs is not only unethical, but also violates federal law, according to George Annas, director of the Center for Health Law, Ethics & Human Rights at the Boston University School of Public Health. Reducing a prison sentence is the equivalent of a payment, he said.

“You can’t buy an organ. That should end the discussion,” Annas said. “It’s compensation for services. We don’t exploit prisoners enough?”

Democratic state Rep. Carlos Gonzalez, another co-sponsor of the bill, defended the proposal, calling it a voluntary program. He also said he’s open to establishing a policy that would allow inmates to donate organs and bone marrow without the lure of a reduced sentence. There is currently no law against prisoner organ donation in Massachusetts, he said.

“It’s not quid pro quo. We are open to setting policy without incentives,” Gonzalez said, adding that it is “crucial to respect prisoners’ human dignity and agency by respecting their choice to donate bone marrow or an organ.”

Saturday, January 14, 2023

Individuals prefer to harm their own group rather than help an opposing group

Rachel Gershon and Ariel Fridman
PNAS, 119 (49) e2215633119
https://doi.org/10.1073/pnas.2215633119

Abstract

Group-based conflict enacts a severe toll on society, yet the psychological factors governing behavior in group conflicts remain unclear. Past work finds that group members seek to maximize relative differences between their in-group and out-group (“in-group favoritism”) and are driven by a desire to benefit in-groups rather than harm out-groups (the “in-group love” hypothesis). This prior research studies how decision-makers approach trade-offs between two net-positive outcomes for their in-group. However, in the real world, group members often face trade-offs between net-negative options, entailing either losses to their group or gains for the opposition. Anecdotally, under such conditions, individuals may avoid supporting their opponents even if this harms their own group, seemingly inconsistent with “in-group love” or a harm minimizing strategy. Yet, to the best of our knowledge, these circumstances have not been investigated. In six pre-registered studies, we find consistent evidence that individuals prefer to harm their own group rather than provide even minimal support to an opposing group across polarized issues (abortion access, political party, gun rights). Strikingly, in an incentive-compatible experiment, individuals preferred to subtract more than three times as much from their own group rather than support an opposing group, despite believing that their in-group is more effective with funds. We find that identity concerns drive preferences in group decision-making, and individuals believe that supporting an opposing group is less value-compatible than harming their own group. Our results hold valuable insights for the psychology of decision-making in intergroup conflict as well as potential interventions for conflict resolution.

Significance

Understanding the principles guiding decisions in intergroup conflicts is essential to recognizing the psychological barriers to compromise and cooperation. We introduce a novel paradigm for studying group decision-making, demonstrating that individuals are so averse to supporting opposing groups that they prefer equivalent or greater harm to their own group instead. While previous models of group decision-making claim that group members are driven by a desire to benefit their in-group (“in-group love”) rather than harm their out-group, our results cannot be explained by in-group love or by a harm minimizing strategy. Instead, we propose that identity concerns drive this behavior. Our theorizing speaks to research in psychology, political theory, and negotiations by examining how group members navigate trade-offs among competing priorities.

From the Conclusion

We synthesize prior work on support-framing and propose the Identity-Support model, which can parsimoniously explain our findings across win-win and lose-lose scenarios. The model suggests that individuals act in group conflicts to promote their identity, and they do so primarily by providing support to causes they believe in (and avoid supporting causes they oppose; see also SI Appendix, Study S1). Simply put, in win-win contexts, supporting the in-group is more expressive of one’s identity as a group member than harming the opposing group, thereby leading to a preference for in-group support. In lose-lose contexts, supporting the opposing group is more negatively expressive of one’s identity as a group member than harming the in-group, resulting in a preference for in-group harm. Therefore, the principle that individuals make decisions in group conflicts to promote and protect their identity, primarily by allocating their support in ways that most align with their values, offers a single framework that predicts individual behavior in group conflicts in both win-win and lose-lose contexts.

Saturday, October 8, 2022

Preventing an AI-related catastrophe

Benjamin Hilton
80,000 Hours
Originally Published August 25th, 2022

Summary

We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

(cut)

When can we expect transformative AI?

It’s difficult to predict exactly when we will develop AI that we expect to be hugely transformative for society (for better or for worse) — for example, by automating all human work or drastically changing the structure of society. But here we’ll go through a few approaches.

One option is to survey experts. Data from the 2019 survey of 300 AI experts implies that there is a 20% probability of human-level machine intelligence (which would plausibly be transformative in this sense) by 2036, a 50% probability by 2060, and 85% by 2100. There are a lot of reasons to be suspicious of these estimates, but we take it as one data point.

Ajeya Cotra (a researcher at Open Philanthropy) attempted to forecast transformative AI by comparing modern deep learning to the human brain. Deep learning involves using a huge amount of compute to train a model, before that model is able to perform some task. There’s also a relationship between the amount of compute used to train a model and the amount used by the model when it’s run. And — if the scaling hypothesis is true — we should expect the performance of a model to predictably improve as the computational power used increases. So Cotra used a variety of approaches (including, for example, estimating how much compute the human brain uses on a variety of tasks) to estimate how much compute might be needed to train a model that, when run, could carry out the hardest tasks humans can do. She then estimated when using that much compute would be affordable.
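
As a rough illustration of the final step in that approach — asking when the required amount of compute would become affordable — here is a toy sketch in Python. Every number in it is a placeholder assumption chosen for illustration, not a figure from Cotra's report.

# Toy illustration of the "when does the needed compute become affordable?"
# step described above. All numbers are placeholder assumptions: a required
# training run of 1e30 FLOP, a 2022 price of 1e-17 dollars per FLOP, prices
# halving every 2.5 years, and a maximum spend of $1e9 on one training run.

REQUIRED_FLOP = 1e30          # assumed compute needed for a transformative model
PRICE_PER_FLOP_2022 = 1e-17   # assumed 2022 price, dollars per FLOP
HALVING_YEARS = 2.5           # assumed price-halving time
MAX_BUDGET = 1e9              # assumed maximum spend on a single training run

year = 2022
while True:
    price_per_flop = PRICE_PER_FLOP_2022 * 0.5 ** ((year - 2022) / HALVING_YEARS)
    cost = REQUIRED_FLOP * price_per_flop
    if cost <= MAX_BUDGET:
        print(f"Training run first affordable in {year} (cost about ${cost:.2e})")
        break
    year += 1

With these made-up inputs the loop lands in the mid-2050s; the point is only to show how an estimate of required compute plus a price-decline assumption yields a forecast year, which is the shape of the calculation described above.
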

Cotra’s 2022 update on her report’s conclusions estimates that there is a 35% probability of transformative AI by 2036, 50% by 2040, and 60% by 2050 — noting that these guesses are not stable.

Tom Davidson (also a researcher at Open Philanthropy) wrote a report to complement Cotra’s work. He attempted to figure out when we might expect to see transformative AI based only on looking at various types of research that transformative AI might be like (e.g. developing technology that’s the ultimate goal of a STEM field, or proving difficult mathematical conjectures), and how long it’s taken for each of these kinds of research to be completed in the past, given some quantity of research funding and effort.

Davidson’s report estimates that, solely on this information, you’d think that there was an 8% chance of transformative AI by 2036, 13% by 2060, and 20% by 2100. However, Davidson doesn’t consider the actual ways in which AI has progressed since research started in the 1950s, and notes that it seems likely that the amount of effort we put into AI research will increase as AI becomes increasingly relevant to our economy. As a result, Davidson expects these numbers to be underestimates.