Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Freedom.

Friday, May 12, 2023

‘Mind-reading’ AI: Japan study sparks ethical debate

David McElhinney
Aljazeera.com
Originally posted 7 Apr 2023

Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.
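For readers curious about the mechanics, the "translation" model in studies like this is typically a simple linear regression from voxel activity to the latent vectors the image generator consumes. Below is a minimal sketch of that step only, with synthetic data standing in for the fMRI recordings; the dimensions and parameter values are invented, not the study's actual setup.

```python
# Illustrative sketch only: synthetic stand-ins for fMRI voxel responses
# and for the latent codes of the viewed images. Assumes the "simple
# model" is a linear (ridge) regression; all numbers here are invented.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 600, 1000, 64

# Fake "brain activity" and the latent vectors of the images being viewed.
true_map = rng.normal(size=(n_voxels, latent_dim)) / np.sqrt(n_voxels)
voxels = rng.normal(size=(n_trials, n_voxels))
latents = voxels @ true_map + 0.1 * rng.normal(size=(n_trials, latent_dim))

X_train, X_test, y_train, y_test = train_test_split(
    voxels, latents, test_size=0.2, random_state=0)

# Learn the voxel -> latent "translation"; one ridge fit over all targets.
decoder = Ridge(alpha=10.0).fit(X_train, y_train)
pred = decoder.predict(X_test)

# In the real pipeline the predicted latents would be handed to the
# pretrained diffusion model to render an image; here we only check that
# brain activity predicts the latent code at all.
corr = np.mean([np.corrcoef(pred[:, d], y_test[:, d])[0, 1]
                for d in range(latent_dim)])
print(f"mean per-dimension correlation: {corr:.2f}")
```

The point of the sketch is that no image generation is learned from the brain data itself: the learned piece is only the mapping into a representation a pretrained generator already understands.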

“We really didn’t expect this kind of result,” Takagi said.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”


Note: If AI systems can decode human thoughts, the technology could infringe upon people's privacy and autonomy. There are concerns that it could be used for invasive surveillance or to manipulate people's thoughts and behavior, as well as questions about how it might be used in legal proceedings and whether it violates human rights.

Tuesday, October 5, 2021

Social Networking and Ethics

Vallor, Shannon
The Stanford Encyclopedia of Philosophy 
(Fall 2021 Edition), Edward N. Zalta (ed.)

Here is an excerpt:

Contemporary Ethical Concerns about Social Networking Services

While early SNS scholarship in the social and natural sciences tended to focus on SNS impact on users’ psychosocial markers of happiness, well-being, psychosocial adjustment, social capital, or feelings of life satisfaction, philosophical concerns about social networking and ethics have generally centered on topics less amenable to empirical measurement (e.g., privacy, identity, friendship, the good life and democratic freedom). More so than ‘social capital’ or feelings of ‘life satisfaction,’ these topics are closely tied to traditional concerns of ethical theory (e.g., virtues, rights, duties, motivations and consequences). These topics are also tightly linked to the novel features and distinctive functionalities of SNS, more so than some other issues of interest in computer and information ethics that relate to more general Internet functionalities (for example, issues of copyright and intellectual property).

Despite the methodological challenges of applying philosophical theory to rapidly shifting empirical patterns of SNS influence, philosophical explorations of the ethics of SNS have continued in recent years to move away from Borgmann and Dreyfus’ transcendental-existential concerns about the Internet, to the empirically-driven space of applied technology ethics. Research in this space explores three interlinked and loosely overlapping kinds of ethical phenomena:
  • direct ethical impacts of social networking activity itself (just or unjust, harmful or beneficial) on participants as well as third parties and institutions;
  • indirect ethical impacts on society of social networking activity, caused by the aggregate behavior of users, platform providers and/or their agents in complex interactions between these and other social actors and forces;
  • structural impacts of SNS on the ethical shape of society, especially those driven by the dominant surveillant and extractivist value orientations that sustain social networking platforms and culture.
Most research in the field, however, remains topic- and domain-driven—exploring a given potential harm or domain-specific ethical dilemma that arises from direct, indirect, or structural effects of SNS, or more often, in combination. Sections 3.1–3.5 outline the most widely discussed of contemporary SNS’ ethical challenges.

Sunday, August 22, 2021

America’s long history of anti-science has dangerously undermined the COVID vaccine

Peter Hotez
The Dallas Morning News
Originally published 15 Aug 21

Here is an excerpt:

America’s full-throated enthusiasm for vaccines lasted until the early 2000s. The 1998 Lancet publication of a paper from Andrew Wakefield and his colleagues, which wrongly asserted that the measles virus in the MMR vaccine replicated in the colons of children to cause pervasive developmental disorder (autism), ushered in a new era of distrust for vaccines.

It also resulted in distrust for the U.S. Health and Human Services agencies promoting vaccinations. The early response from the Centers for Disease Control and Prevention was to dismiss growing American discontent with vaccines as a fringe element, until eventually in the 2010s anti-vaccine sentiment spread across the internet.

The anti-vaccine movement eventually adopted the banner of medical freedom and used it to gain strength, growing in size, internet presence and external funding. Rising out of the American West, anti-vaccine proponents insisted that only parents could make vaccine choices, and they were prepared to resist government requirements for school entry or attendance.

In California, the notion of vaccine choice gained strength in the 2010s, leading to widespread philosophical exemptions to childhood MMR vaccines and other immunizations. Vaccine exemptions reached critical mass, ultimately culminating in a 2014–2015 measles epidemic in Orange County.

The outbreak prompted state government intervention through the introduction of California Senate Bill 277 that eliminated these exemptions and prevented further epidemics, but it also triggered aggressive opposition. Anti-vaccine health freedom groups harassed members of the Legislature and labeled prominent scientists as pharma shills. They touted pseudoscience, claiming that vaccines were toxic, or that natural immunity acquired from the illness was superior and more durable than vaccine-induced immunity.

Health freedom then expanded through newly established anti-vaccine political action committees in Texas and Oklahoma in the Southwest, Oregon in the Pacific Northwest, and Michigan and Ohio in the Midwest, while additional anti-vaccine organizations formed in almost every state.

These groups lobbied state legislatures to promote or protect vaccine exemptions, while working to cloak or obscure classroom or schoolwide disclosures of vaccine exemptions. They also introduced menacing consent forms to portray vaccines as harmful or toxic.

The Texans for Vaccine Choice PAC formed in 2015, helping to accelerate personal belief immunization exemptions to a point where today approximately 72,000 Texas schoolchildren miss vaccines required for school entry and attendance.

Tuesday, May 12, 2020

Freedom in an Age of Algocracy

John Danaher
forthcoming in Oxford Handbook on the Philosophy of Technology
edited by Shannon Vallor

Abstract

There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception of freedom as well as a broader conception of algocracy. Broadening the focus in this way enables us to see how algorithmic governance can be both emancipatory and enslaving, and provides a framework for future development and activism around the creation of this technology.

From the Conclusion:

Finally, I’ve outlined a framework for thinking about the likely impact of algocracy on freedom. Given the complexity of freedom and the complexity of algocracy, I’ve argued that there is unlikely to be a simple global assessment of the freedom-promoting or undermining power of algocracy. This is something that has to be assessed and determined on a case-by-case basis. Nevertheless, there are at least five interesting and relatively novel mechanisms through which algocratic systems can both promote and undermine freedom. We should pay attention to these different mechanisms, but do so in a properly contextualized manner, and not by ignoring the pre-existing mechanisms through which freedom is undermined and promoted.

The book chapter is here.

Friday, December 13, 2019

Conference warned of dangers of facial recognition technology

Colm Keena
The Irish Times
Originally posted 13 Nov 19

Because of new technologies, “we are all monitored and recorded every minute of every day of our lives”, a conference has heard.

Here is an excerpt:

The potential of facial recognition technology to be used by oppressive governments and manipulative corporations is such that some observers have called for it to be banned. The suggestion should be taken seriously, Dr Danaher said.

The technology is “like a fingerprint of your face”, is cheap, and “normalises blanket surveillance”. This makes it “perfect” for oppressive governments and for manipulative corporations.

While the EU’s GDPR laws on the use of data applied here, Dr Danaher said Ireland should also introduce domestic law “to save us from the depredations of facial recognition technology”.

As well as facial recognition technology, he also addressed the conference about “deepfake” technology, which allows for the creation of highly convincing fake video content, and algorithms that assess risk, as other technologies that are creating challenges for the law.

In the US, the use of algorithms to predict a person’s likelihood of re-offending has raised significant concerns.
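To make concrete what such risk-prediction tools do, here is a toy sketch: a statistical model trained on past cases outputs a probability of rearrest, which is then bucketed into the risk bands a court might see. It is not any actual tool's model; the features, coefficients and data below are all invented.

```python
# Toy illustration (not any real tool's model) of a recidivism risk score:
# fit a model on past cases, output a probability, bucket it into bands.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Invented case features: age, number of prior arrests, age at first offence.
X = np.column_stack([
    rng.integers(18, 70, n),
    rng.poisson(2.0, n),
    rng.integers(14, 40, n),
])

# Invented outcome: rearrest within two years, loosely tied to the features.
logits = -2.0 + 0.4 * X[:, 1] - 0.03 * (X[:, 0] - 18)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)

def risk_band(case):
    """Map one case to the low/medium/high band such tools report."""
    p = model.predict_proba([case])[0, 1]
    return ("low" if p < 0.33 else "medium" if p < 0.66 else "high"), p

band, p = risk_band([25, 4, 17])
print(f"predicted probability {p:.2f} -> {band} risk")
```

Much of the legal concern turns on exactly this structure: the score looks objective, but every choice above — which features to use, how to label outcomes, where to draw the band thresholds — encodes a judgment.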

The info is here.

Friday, January 11, 2019

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence

Julia Powles
Medium.com
Originally posted December 7, 2018

Here is an excerpt:

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.
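In empirical audits, the disparity Powles describes is usually reported as error rates broken out by demographic group. A minimal illustration with simulated data (the groups, sizes and accuracies are invented):

```python
# Simulated audit: group_b is underrepresented, and the (pretend)
# classifier is markedly less accurate on it.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

group = rng.choice(["group_a", "group_b"], size=n, p=[0.8, 0.2])
truth = rng.integers(0, 2, n).astype(bool)
accuracy = np.where(group == "group_a", 0.95, 0.80)
pred = np.where(rng.random(n) < accuracy, truth, ~truth)

# Report the error rate per group, the standard form of a bias audit.
for g in ("group_a", "group_b"):
    mask = group == g
    err = np.mean(pred[mask] != truth[mask])
    print(f"{g}: error rate {err:.1%} (n={mask.sum()})")
```

Powles's argument is precisely that closing the gap this audit reveals, without questioning the system's purpose, can simply produce a surveillance tool that works equally well on everyone.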

The info is here.

Monday, January 7, 2019

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.
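To see the contrast concretely, here is a minimal sketch of the kind of suggestion logic the author has in mind: co-occurrence counting over past baskets ("people who bought X also bought Y"). It is purely illustrative; real recommender systems are far more elaborate, and the baskets below are invented.

```python
# Minimal co-occurrence recommender: unlike a paper list, it talks back,
# proposing items based on what has been bought together before.
from collections import Counter
from itertools import combinations

past_baskets = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "eggs"},
]

# Count how often each ordered pair of items appeared in the same basket.
co_counts: Counter = Counter()
for basket in past_baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def suggest(current_basket, k=2):
    """Rank absent items by how often they co-occurred with the cart."""
    scores: Counter = Counter()
    for item in current_basket:
        for (a, b), c in co_counts.items():
            if a == item and b not in current_basket:
                scores[b] += c
    return [item for item, _ in scores.most_common(k)]

print(suggest({"milk"}))  # e.g. ['bread', 'eggs']
```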

The info is here.

Sunday, January 6, 2019

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philosophy and Technology, 1–25 (forthcoming)

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The paper is here.

Sunday, October 14, 2018

The Myth of Freedom

Yuval Noah Harari
The Guardian
Originally posted September 14, 2018

Here is an excerpt:

Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints.

This myth has little to do with what science now teaches us about Homo sapiens and other animals. Humans certainly have a will – but it isn’t free. You cannot decide what desires you have. You don’t decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn’t choose which genes or family to have.

This is not abstract theory. You can witness this easily. Just observe the next thought that pops up in your mind. Where did it come from? Did you freely choose to think it? Obviously not. If you carefully observe your own mind, you come to realise that you have little control of what’s going on there, and you are not choosing freely what to think, what to feel, and what to want.

Though “free will” was always a myth, in previous centuries it was a helpful one. It emboldened people who had to fight against the Inquisition, the divine right of kings, the KGB and the KKK. The myth also carried few costs. In 1776 or 1945 there was relatively little harm in believing that your feelings and choices were the product of some “free will” rather than the result of biochemistry and neurology.

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

The info is here.

Thursday, August 2, 2018

Genocide hoax tests ethics of academic publishing

Reuben Rose-Redwood
The Conversation
Originally posted July 3, 2018

Here is an excerpt:

What exactly "merits exposure and debate" in scholarly journals? As the editor of a scholarly journal myself, I am a strong supporter of academic freedom. But journal editors also have a responsibility to uphold the highest standards of academic quality and the ethical integrity of scholarly publications.

When I looked into the pro-Third World Quarterly petition in more detail, I noticed that over a dozen signatories were themselves editors of scholarly journals. Did they truly believe that "any work—however controversial" should be published in their own journals in the name of academic freedom?

If they had no qualms with publishing a case for colonialism, would they likewise have no ethical concerns about publishing a work advocating a case for genocide?

The genocide hoax

In late October 2017, I sent a hoax proposal for a special issue on "The Costs and Benefits of Genocide: Towards a Balanced Debate" to 13 journal editors who had signed the petition supporting the publication of "The Case for Colonialism."

In it, I mimicked the colonialism article's argument by writing: "There is a longstanding orthodoxy that only emphasizes the negative dimensions of genocide and ethnic cleansing, ignoring the fact that there may also be benefits—however controversial—associated with these political practices, and that, in some cases, the benefits may even outweigh the costs."

As I awaited the journal editors' responses, I wondered whether such an outrageous proposal would garner any support from editors who claimed to support the publication of controversial works in scholarly journals.

The information is here.

Wednesday, December 20, 2017

Americans have always been divided over morality, politics and religion

Andrew Fiala
The Fresno Bee
Originally published December 1, 2017

Our country seems more divided than ever. Recent polls from the Pew Center and the Washington Post make this clear. The Post concludes that seven in 10 Americans say we have “reached a dangerous low point” of divisiveness. A significant majority of Americans think our divisions are as bad as they were during the Vietnam War.

But let’s be honest, we have always been divided. Free people always disagree about morality, politics and religion. We disagree about abortion, euthanasia, gay marriage, drug legalization, pornography, the death penalty and a host of other issues. We also disagree about taxation, inequality, government regulation, race, poverty, immigration, national security, environmental protection, gun control and so on.

Beneath our moral and political disagreements are deep religious differences. Atheists want religious superstitions to die out. Theists think we need God’s guidance. And religious people disagree among themselves about God, morality and politics.

The post is here.

Monday, November 13, 2017

Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life

John Danaher
forthcoming in Science and Engineering Ethics

Abstract

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (i) the literature on technological unemployment and workplace automation; (ii) the antiwork critique — which I argue gives reasons to embrace technological unemployment; and (iii) the philosophical debate about the conditions for meaning in life — which I argue gives reasons for concern.

The article is here.
 

Saturday, October 7, 2017

Trump Administration Rolls Back Birth Control Mandate

Robert Pear, Rebecca R. Ruiz, and Laurie Goodstein
The New York Times
Originally published October 6, 2017

The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people.

The twin actions, by the Department of Health and Human Services and the Justice Department, were meant to carry out a promise issued by President Trump five months ago, when he declared in the Rose Garden that “we will not allow people of faith to be targeted, bullied or silenced anymore.”

Attorney General Jeff Sessions quoted those words in issuing guidance to federal agencies and prosecutors, instructing them to take the position in court that workers, employers and organizations may claim broad exemptions from nondiscrimination laws on the basis of religious objections.

At the same time, the Department of Health and Human Services issued two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions.

More than 55 million women have access to birth control without co-payments because of the contraceptive coverage mandate, according to a study commissioned by the Obama administration. Under the new regulations, hundreds of thousands of women could lose those benefits.

The article is here.

Italics added. And, just when the abortion rate was at pre-1973 levels.

Friday, December 9, 2016

Moral neuroenhancement

Earp, B. D., Douglas, T., & Savulescu, J. (forthcoming). Moral neuroenhancement. In S. Johnson & K. Rommelfanger (eds.),  Routledge Handbook of Neuroethics.  New York: Routledge.

Abstract

In this chapter, we introduce the notion of moral neuroenhancement, offering a novel definition as well as spelling out three conditions under which we expect that such neuroenhancement would be most likely to be permissible (or even desirable). Furthermore, we draw a distinction between first-order moral capacities, which we suggest are less promising targets for neurointervention, and second-order moral capacities, which we suggest are more promising. We conclude by discussing concerns that moral neuroenhancement might restrict freedom or otherwise misfire, and argue that these concerns are not as damning as they may seem at first.

The book chapter is here.

Friday, August 5, 2016

Moral Enhancement and Moral Freedom: A Critical Analysis

By John Danaher
Philosophical Disquisitions
Originally published July 19, 2016

The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?

Proponents of moral neuroenhancement think we should — though they typically focus on much higher stakes scenarios. A popular criticism of their project has emerged. This criticism holds that trying to ensure moral conformity comes at the price of moral freedom. If our brains are prodded, poked and tweaked so that we never do the wrong thing, then we lose the ‘freedom to fall’ — i.e. the freedom to do evil. That would be a great shame. The freedom to do the wrong thing is, in itself, an important human value. We would lose it in the pursuit of greater moral conformity.

Moral Bioenhancement, Freedom and Reason

Ingmar Persson and Julian Savulescu
Neuroethics
First Online: 09 July 2016
DOI: 10.1007/s12152-016-9268-5

Abstract

In this paper we reply to the most important objections to our advocacy of moral enhancement by biomedical means – moral bioenhancement – that John Harris advances in his new book How to Be Good. These objections are to the effect that such moral enhancement undercuts both moral reasoning and freedom. The latter objection is directed more specifically at what we have called the God Machine, a super-duper computer which predicts our decisions and prevents decisions to perpetrate morally atrocious acts. In reply, we argue, first, that effective moral bioenhancement presupposes moral reasoning rather than undermining it and, second, that the God Machine would leave us with extensive freedom and that the restrictions it imposes on that freedom are morally justified by the prevention of harm to victims.

The online article is here.

Monday, February 29, 2016

Iranians launch app to escape morality police

The Observers
Originally posted February 10, 2016

Iranian developers just launched a mobile app called "Gershad", which alerts users if the morality police are nearby.

In the Islamic Republic of Iran, the morality police, a unit of the National Police, are charged with ensuring that Iranian citizens comply with so-called Islamic law. For example, morality officers have to make sure that women wear their veil correctly. If they see a young man and woman walking together, they can stop them and ask if they are married or from the same family. If the morality police suspect that they are an unmarried couple, they can reprimand them.

The new app is meant for young Iranians, especially young women who wear their veil loosely, pushed far back on their heads and showing their hair and face.
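The excerpt does not detail how the app works, but a plausible core mechanism is crowdsourced reports checked against the user's location. A rough sketch of that proximity check follows; the coordinates, radius and report handling are all invented for illustration.

```python
# Hypothetical core of a crowd-alert app: warn when the user is within a
# set radius of a recently reported checkpoint. All values are invented.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical user-submitted reports as (lat, lon) pairs.
reported_checkpoints = [
    (35.6892, 51.3890),
    (35.7000, 51.4200),
]

def nearby_alert(user_lat, user_lon, radius_km=1.0):
    return any(
        haversine_km(user_lat, user_lon, lat, lon) <= radius_km
        for lat, lon in reported_checkpoints
    )

print(nearby_alert(35.6900, 51.3900))  # True: within ~1 km of a report
```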

The article is here.

Monday, October 19, 2015

Who's Sweating the Sexbots?

By Julie Beck
The Atlantic
Originally published September 30, 2015

Here is an excerpt:

Katherine Koster, the communications director of the Sex Workers Outreach Project, says that the comparison shows a misunderstanding of the sex trade. “That power relationship that they're assuming exists within the sex trade may or may not exist,” she says. “Sex workers are repeatedly saying that's not always what it looks like.”

Levy writes that the rise of sexbots will mean the decline of the sex industry, but Richardson is less convinced. She believes the introduction of sex robots will somehow further the exploitation of sex workers.

“It became more and more apparent that women in prostitution were already dehumanized, and this was the same model that they then wanted to put into these machines they’re developing,” Richardson says. “When we encourage a kind of scenario in the real world that encourages that mode of operation, we’re basically saying it’s okay for humans to not recognize other people as human subjects.” She says she plans to reach out to sex-work abolition groups around the world as part of the Campaign Against Sex Robots.

The entire article is here.

Wednesday, October 14, 2015

Generation XXX

The Economist
Originally published September 26, 2015

Here is an excerpt:

Whenever pornography becomes more available, it sparks a moral panic. After the advent of girlie magazines in the 1950s, and X-rated rental films in the 1980s, campaigners claimed that porn would dent women’s status, stoke sexual violence and lead men to abandon the search for a mate in favour of private pleasures. Disquiet about the effects of online pornography is once more rising (see article). Most of it is now free. As commercial producers fight over scarce revenue, their wares are becoming more extreme. Because of smartphones, tablets and laptops, hardcore material can be accessed privately by anyone. The result is that many teenagers today have seen a greater number and variety of sex acts than the most debauched Mughal emperor managed in a lifetime.

The entire article is here.

Tuesday, July 21, 2015

Euthanasia cases more than double in northern Belgium

By Raf Casert
Associated Press
Originally published March 17, 2015

Almost one in 20 people in northern Belgium died using euthanasia in 2013, more than doubling the numbers in six years, a study released Tuesday showed.

The universities of Ghent and Brussels found that since euthanasia was legalized in 2002, the acceptance of ending a life at the patient’s request has greatly increased. While a 2007 survey showed only 1.9 percent of deaths from euthanasia in the region, the figure was 4.6 percent in 2013.

The entire article is here.