Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, November 16, 2023

Minds of machines: The great AI consciousness conundrum

Grace Huckins
MIT Technology Review
Originally published October 16, 2023

Here is an excerpt:

At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.

Not many people dismissed his proposal as ridiculous, Chalmers says: “I mean, I’m sure some people had that reaction, but they weren’t the ones talking to me.” Instead, he spent the next several days in conversation after conversation with AI experts who took the possibilities he’d described very seriously. Some came to Chalmers effervescent with enthusiasm at the concept of conscious machines. Others, though, were horrified at what he had described. If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.

AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences. Fail to identify a conscious AI, and you might unintentionally subjugate, or even torture, a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code. Both mistakes are easy to make. “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”


Here is my take.

There is an ongoing debate about whether artificial intelligence can ever become conscious or have subjective experiences like humans. Some argue AI will inevitably become conscious as it advances, while others think consciousness requires biological qualities that AI lacks.

Philosopher David Chalmers has posed the "hard problem of consciousness": explaining how physical processes in the brain give rise to subjective experience. This problem remains unresolved.

AI systems today show no signs of being conscious or having experiences. But some argue that as AI becomes more sophisticated, we may need to consider whether it could develop some level of consciousness.

Approaches like deep learning and neural networks are fueling major advances in narrow AI, but this type of statistical pattern recognition does not seem sufficient to produce consciousness.
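To make the "statistical pattern recognition" point concrete, here is a minimal sketch, in plain Python with invented toy data, of what training a neural network amounts to: arithmetic that nudges numeric weights to reduce prediction error. Nothing in the loop resembles experience.

```python
# Minimal sketch of "statistical pattern recognition": a single
# artificial neuron fit by gradient descent on hypothetical toy data.
# The whole process is arithmetic on weights; no inner experience
# appears anywhere.
import math

# Toy data (invented for illustration): two input features, binary label.
data = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.1, 0.3], 0), ([0.8, 0.9], 1)]

w, b = [0.0, 0.0], 0.0  # weights and bias, all starting at zero
lr = 0.5                # learning rate

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output to (0, 1)

for _ in range(1000):        # gradient-descent training loop
    for x, y in data:
        err = predict(x) - y  # gradient of log loss with respect to z
        for i in range(len(w)):
            w[i] -= lr * err * x[i]
        b -= lr * err

print([round(predict(x), 2) for x, _ in data])  # approaches [0, 1, 0, 1]
```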

Questions remain about whether artificial consciousness is possible or how we could detect if an AI system were to become conscious. There are also ethical implications regarding the rights of conscious AI.

Overall, there is much speculation but no consensus on whether artificial general intelligence could someday become conscious in the way humans are. The answer awaits theoretical and technological breakthroughs.

Tuesday, January 22, 2019

Proceedings Start Against ‘Sokal Squared’ Hoax Professor

Katherine Mangan
The Chronicle of Higher Education
Originally posted January 7, 2019

Here is an excerpt:

The Oregon university’s institutional review board concluded that Boghossian’s participation in the elaborate hoax had violated Portland State’s ethical guidelines, according to documents Boghossian posted online. The university is considering a further charge that he had falsified data, the documents indicate.

Last month Portland State’s vice president for research and graduate studies, Mark R. McLellan, ordered Boghossian to undergo training on human-subjects research as a condition for getting further studies approved. In addition, McLellan said he had referred the matter to the president and provost because Boghossian’s behavior "raises ethical issues of concern."

Boghossian and his supporters have gone on the offensive with an online press kit that links to emails from Portland State administrators. It also includes a video filmed by a documentary filmmaker that shows Boghossian reading an email that asks him to appear before the institutional review board in October. In the video, Boghossian discusses the implications of potentially being found responsible for professional misconduct. He’s speaking with his co-authors, Helen Pluckrose, a self-described "exile from the humanities" who studies medieval religious writings about women, and James A. Lindsay, an author and mathematician.

The info is here.

Monday, December 4, 2017

Ray Kurzweil on Turing Tests, Brain Extenders, and AI Ethics

Nancy Kaszerman
Wired.com
Originally posted November 13, 2017

Here is an excerpt:

There has been a lot of focus on AI ethics, how to keep the technology safe, and it's kind of a polarized discussion like a lot of discussions nowadays. I've actually talked about both promise and peril for quite a long time. Technology is always going to be a double-edged sword. Fire kept us warm, cooked our food, and burned down our houses. These technologies are much more powerful. It's also a long discussion, but I think we should go through three phases, at least I did, in contemplating this. First is delight at the opportunity to overcome age-old afflictions: poverty, disease, and so on. Then alarm that these technologies can be destructive and cause even existential risks. And finally I think where we need to come out is an appreciation that we have a moral imperative to continue progress in these technologies because, despite the progress we've made—and that's a-whole-nother issue, people think things are getting worse but they're actually getting better—there's still a lot of human suffering to be overcome. It's only continued progress particularly in AI that's going to enable us to continue overcoming poverty and disease and environmental degradation while we attend to the peril.

And there's a good framework for doing that. Forty years ago, there were visionaries who saw both the promise and the peril of biotechnology, basically reprogramming biology away from disease and aging. So they held a conference called the Asilomar Conference at the conference center in Asilomar, and came up with ethical guidelines and strategies—how to keep these technologies safe. Now it's 40 years later. We are getting clinical impact of biotechnology. It's a trickle today, it'll be a flood over the next decade. The number of people who have been harmed either accidentally or intentionally by abuse of biotechnology so far has been zero. It's a good model for how to proceed.

The article is here.

Friday, October 6, 2017

AI Research Is in Desperate Need of an Ethical Watchdog

Sophia Chen
Wired Science
Originally published September 18, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

The article is here.

Monday, September 15, 2014

Episode 15: Ethics and Telepsychology (Part 1)

This episode on ethics and telepsychology addresses the rise of technology in the health care sector. About 21 states mandate that insurance companies cover telehealth services. John is joined by Dr. Marlene Maheu, trainer, author, researcher, and Executive Director of the TeleMental Health Institute, Inc., where she has overseen the delivery of professional training in telemental health to more than 5,000 professionals in 39 countries since 2010. John and Marlene discuss the supporting research for telepsychology and its limitations; practitioner competencies; and reimbursable, evidence-based models for telepsychology.

At the end of this podcast, the listener will be able to:

1. Outline the general research findings on the usefulness of telepsychology;
2. Describe the requirements of competent telepsychology practice; and
3. List at least four reimbursable, evidence-based models for legal and ethical telepractice.

Click here to earn one APA-approved CE credit


Resources for Episode 15



by Marlene Maheu, Myron L. Pulier, Frank H. Wilhelm and Joseph P. McMenamin 

Bibliography from TeleMental Health Institute, Inc.

Marlene Maheu SlideShare

Gros, D. F., Yoder, M., Tuerk, P. W., Lozano, B. E., & Acierno, R. (2011). Exposure therapy for PTSD delivered to veterans via telehealth: Predictors of treatment completion and outcome and comparison to treatment delivered in person. Behavior Therapy, 42, 276-283. doi:10.1016/j.beth.2010.07.005

Harris, E., & Younggren, J. N. (2011). Risk management in the digital world. Professional Psychology: Research and Practice, 42, 412-418. doi:10.1037/a0025139

Tuesday, September 3, 2013

Inclusion of Ethical Issues in Dementia Guidelines: A Thematic Text Analysis

By H. Knüppel, M. Mertz, M. Schmidhuber, G. Neitzke, and D. Strech
PLOS Medicine - Open Access

Ethical issues were inconsistently addressed in national dementia guidelines: some guidelines included most of the issues identified, while others included only a few. Guidelines should address ethical issues, and how to deal with them, both to help the medical profession approach the care of patients with dementia and to inform patients, their relatives, and the general public, all of whom might seek information and advice in national guidelines. Further research is needed to specify how, and in what detail, ethical issues and their respective recommendations can and should be addressed in dementia guidelines.

The entire article is here.

Wednesday, August 7, 2013

Jodi Arias Trial: The Importance of Forensic Psychology Guidelines

By Michael J. Perrotti, PhD
World of Psychology Blogs

I have served as a clinical and forensic neuropsychologist expert witness for over twenty years. It is of utmost importance that a level playing field be created in adversarial proceedings.

Using forensic guidelines as standards for all experts involved in a case is conducive to this.

The Jodi Arias trial reveals apparent omissions of important standards that could influence the outcome of assessment. For example, there was a lack of collateral interviews, an issue the Reference Manual on Scientific Evidence (RMSE) addresses.

In addition, there were other omissions that I believe are important to the outcome of the Jodi Arias trial.

The entire blog post can be found here.

Jodi Arias Trial: Teachable Moments in Forensic Psychology can be found on the Video Resources page of this blog

APA's Forensic Guidelines can be found on the Guides and Guidelines page of this site

Tuesday, June 4, 2013

Scholars call for new ethical guidelines to direct research on social networking

By Jennifer Sereno
University of Wisconsin-Madison News
Originally published January 2013

The unique data collection capabilities of social networking and online gaming websites require new ethical guidance from federal regulators concerning online research involving adolescent subjects, an ethics scholar from the Morgridge Institute for Research and a computer and learning sciences expert from Tufts University argue in the journal Science.

Increasingly, academics are designing and implementing research interventions on social network sites such as Facebook to learn how these interventions may affect user behavior, knowledge, attitudes and psychological health. Online games are being used as research interventions. However, the ability to mine user data (including information about Facebook "friends"), sensitive personal information and behavior raises concerns that deserve closer ethical scrutiny, say Pilar Ossorio and R. Benjamin Shapiro.

Ossorio is a bioethics scholar-in-residence at the Morgridge Institute, a private, nonprofit biomedical research institute on the University of Wisconsin-Madison campus. She also holds joint appointments as a professor of law and bioethics at the University of Wisconsin Law School and the School of Medicine and Public Health. Shapiro is an assistant professor in computer science and education at Tufts, where he is a member of the Center for Engineering Education and Outreach. He previously held appointments in educational research at Morgridge and the Wisconsin Institute for Discovery.

"Given the unprecedented ability of online research using social network sites to identify sensitive personal information concerning the research subject and the subject's online acquaintances, researchers need clarification concerning applicable ethical and regulatory standards," Ossorio says. "Regulators need greater insights into the possible benefits and harms of online social network research, and researchers need to better understand the relevant ethical and regulatory universe so they can design technical strategies for minimizing harm and complying with legal requirements."

For instance, Ossorio says, researchers may be able to design game features that detect player distress and respond by modifying the game environment, and marry those features to data collection technologies that maximally protect users' privacy while still offering useful data to researchers.
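The article gives no implementation details; purely as a hypothetical sketch, such a design might pair a simple in-game distress heuristic with a logger that releases only noise-protected aggregates (a differential-privacy-style technique) rather than raw per-player events. Every name, threshold, and parameter below is invented for illustration.

```python
# Hypothetical sketch (not from the article): a game that eases
# difficulty when a crude distress heuristic fires, and reports only
# a noise-protected aggregate instead of raw per-player logs.
import random

DISTRESS_THRESHOLD = 3  # assumed: consecutive failures that signal distress

class Session:
    def __init__(self):
        self.recent_failures = 0
        self.difficulty = 1.0
        self.distress_events = 0

    def record_outcome(self, success: bool):
        self.recent_failures = 0 if success else self.recent_failures + 1
        if self.recent_failures >= DISTRESS_THRESHOLD:
            self.difficulty *= 0.8     # respond: soften the game environment
            self.distress_events += 1  # count the event; keep no details
            self.recent_failures = 0

def private_count(sessions, epsilon=1.0):
    """Total distress events across players, plus Laplace noise so no
    single player's contribution is identifiable (differential privacy:
    the difference of two Exponential(epsilon) draws is Laplace)."""
    true_total = sum(s.distress_events for s in sessions)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_total + noise

# Usage: simulate a few players, then release only the noised aggregate.
sessions = [Session() for _ in range(50)]
for s in sessions:
    for _ in range(20):
        s.record_outcome(random.random() < 0.6)
print(f"distress events (noised): {private_count(sessions):.1f}")
```

The design choice this illustrates is separation of concerns: the distress response happens locally inside the session, while the researcher-facing output is a single aggregate statistic with calibrated noise, so useful data leaves the game but individual behavior does not.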

Consent for online research is tricky, particularly when it involves minors. Under Shapiro and Ossorio's analysis, current law does not require that researchers obtain parental permission to conduct studies of adolescents on social networking sites. Parental permission is required for younger children, while adolescents and adults provide their own consent. Of course, parents can prohibit their adolescents from any online activity, including research participation, regardless of legal limits on researchers. Parents have the same amount of control over their adolescents' online research participation as they do over any other online activity in which their teens engage.

"Researchers should use the online environment to deliver innovative, informative consent processes that help participants understand the dimensions of the research and the accompanying data collection," Shapiro says. "This is especially important given the general public's ignorance about the ability to collect massive amounts of personal data over the Internet."

If traditional approaches to consent are of limited value for protecting online subjects, Ossorio says, then researchers and regulators should emphasize other aspects of research ethics, such as using all reasonable approaches to minimize research risks. Also, researchers should seek innovative methods for generating transparency around the research enterprise.

Writing in the Policy Forum section of the Jan. 11 edition of Science, Shapiro and Ossorio conclude by emphasizing that the richness of online information should not become the sole domain of commercial marketing interests but should be used to advance understanding of human behavior and inspire positive social outcomes. Elucidating ethical and legal guidelines for design research on social media will create new opportunities for researchers to understand and improve society.

The news release is here.