Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Rights.

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Originally posted: date unknown

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?

The info is here, and a book promotion too.

Wednesday, January 17, 2024

Trump Is Coming for Obamacare Again

Ronald Brownstein
The Atlantic
Originally posted 10 Jan 24

Donald Trump’s renewed pledge on social media and in campaign rallies to repeal and replace the Affordable Care Act has put him on a collision course with a widening circle of Republican constituencies directly benefiting from the law.

In 2017, when Trump and congressional Republicans tried and failed to repeal the ACA, also known as Obamacare, they faced the core contradiction that many of the law’s principal beneficiaries were people and institutions that favored the GOP. That list included lower-middle-income workers without college degrees, older adults in the final years before retirement, and rural communities.


Here's the gist:
  • Trump's stance: He believes Obamacare is a "catastrophe" and wants to replace it with "MUCH BETTER HEALTHCARE."
  • Challenges: Repealing Obamacare is likely an uphill battle. Its popularity has increased, and even some Republicans benefit from the law.
  • Potential consequences: If Trump succeeds, millions of Americans could lose their health insurance, while others face higher premiums.
  • Political implications: Trump's renewed focus on Obamacare could energize his base but alienate moderate voters.

Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human-level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.

(cut)

Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
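To make the arithmetic in that excerpt explicit, here is a rough reconstruction, assuming (as Chalmers does purely for illustration) that each of the six factors independently has a credence of one-third of being required for consciousness:

$$\Pr(\text{conscious LLM}) \;\le\; \left(1 - \tfrac{1}{3}\right)^{6} \;=\; \left(\tfrac{2}{3}\right)^{6} \;\approx\; 0.088 \;<\; \tfrac{1}{10}$$

As he notes, dependence among the factors pushes the figure somewhat higher, while further unconsidered requirements would push it lower.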


Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Friday, July 28, 2023

Humans, Neanderthals, robots and rights

Mamak, K.
Ethics Inf Technol 24, 33 (2022).

Abstract

Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.

Conclusions

The place of robots in the law universe depends on many things. One is our decision about their moral status, but even if we accept that some robots are equal to humans, this does not mean that they have the same legal status as humans. Law, as a human product, is tailored to a human being who has a body. Embodiment impacts the content of law, and entities with different ontologies are not suited to human law. As discussed here, Neanderthals, who are very close to us from a biological point of view, and human-like robots cannot be counted as humans by law. Doing so would be anthropocentric and harmful to such entities because it could ignore aspects of their lives that are important for them. It is certain that the current law is not ready for human-like robots.


Here is a summary: 

In terms of robot rights, one factor to consider is the nature of robots. Robots are becoming increasingly sophisticated, and some experts believe that they may eventually become as intelligent as humans. If this is the case, then it is possible that robots could deserve the same rights as humans.

Another factor to consider is the relationship between humans and robots. Humans have a long history of using animals, and some people argue that robots are simply another form of animal. If this is the case, then it is possible that robots do not deserve the same rights as humans.
  • The question of robot rights is a complex one, and there is no easy answer.
  • The nature of robots and the relationship between humans and robots are two important factors to consider when thinking about robot rights.
  • It is important to start thinking about robot rights now, before robots become too sophisticated.

Tuesday, July 18, 2023

How AI is learning to read the human mind

Nicola Smith
The Telegraph
Originally posted 23 May 2023

Here is an excerpt:

‘Brain rights’

But he warned that it could also be weaponised and used for military applications or for nefarious purposes to extract information from people.

“We are on the brink of a crisis from the point of view of mental privacy,” he said. “Humans are defined by their thoughts and their mental processes and if you can access them then that should be the sanctuary.”

Prof Yuste has become so concerned about the ethical implications of advanced neurotechnology that he co-founded the NeuroRights Foundation to promote “brain rights” as a new form of human rights.

The group advocates for safeguards to prevent the decoding of a person’s brain activity without consent, for protection of a person’s identity and free will, and for the right to fair access to mental augmentation technology.

They are currently working with the United Nations to study how human rights treaties can be brought up to speed with rapid progress in neurosciences, and raising awareness of the issues in national parliaments.

In August, the Human Rights Council in Geneva will debate whether the issues around mental privacy should be covered by the International Covenant on Civil and Political Rights, one of the most significant human rights treaties in the world.

The gravity of the task was comparable to the development of the atomic bomb, when scientists working on atomic energy warned the UN of the need for regulation and an international control system of nuclear material to prevent the risk of a catastrophic war, said Prof Yuste.

As a result, the International Atomic Energy Agency (IAEA) was created and is now based in Vienna.

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 MAR 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can produce. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But because tools like ChatGPT, which implements GPT-4, are publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place.

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Thursday, December 22, 2022

In the corner of an Australian lab, a brain in a dish is playing a video game - and it’s getting better

Liam Mannix
Sydney Morning Herald
Originally posted 13 NOV 22

Here is an excerpt:

Artificial intelligence controls an ever-increasing slice of our lives. Smart voice assistants hang on our every word. Our phones leverage machine learning to recognise our face. Our social media lives are controlled by algorithms that surface content to keep us hooked.

These advances are powered by a new generation of AIs built to resemble human brains. But none of these AIs are really intelligent, not in the human sense of the word. They can see the superficial pattern without understanding the underlying concept. Siri can read you the weather but she does not really understand that it’s raining. AIs are good at learning by rote, but struggle to extrapolate: even teenage humans need only a few sessions behind the wheel before they can drive, while Google’s self-driving car still isn’t ready after 32 billion kilometres of practice.

A true ‘general artificial intelligence’ remains out of reach - and, some scientists think, impossible.

Is this evidence human brains can do something special computers never will be able to? If so, the DishBrain opens a new path forward. “The only proof we have of a general intelligence system is done with biological neurons,” says Kagan. “Why would we try to mimic what we could harness?”

He imagines a future part-silicon-part-neuron supercomputer, able to combine the raw processing power of silicon with the built-in learning ability of the human brain.

Others are more sceptical. Human intelligence isn’t special, they argue. Thoughts are just electro-chemical reactions spreading across the brain. Ultimately, everything is physics - we just need to work out the maths.

“If I’m building a jet plane, I don’t need to mimic a bird. It’s really about getting to the mathematical foundations of what’s going on,” says Professor Simon Lucey, director of the Australian Institute for Machine Learning.

Why start the DishBrains on Pong? I ask. Because it’s a game with simple rules that make it ideal for training AI. And, grins Kagan, it was one of the first video games ever coded. A nod to the team’s geek passions - which run through the entire project.

“There’s a whole bunch of sci-fi history behind it. The Matrix is an inspiration,” says Chong. “Not that we’re trying to create a Matrix,” he adds quickly. “What are we but just a gooey soup of neurons in our heads, right?”

Maybe. But the Matrix wasn’t meant as inspiration: it’s a cautionary tale. The humans wired into it existed in a simulated reality while machines stole their bioelectricity. They were slaves.

Is it ethical to build a thinking computer and then restrict its reality to a task to be completed? Even if it is a fun task like Pong?

“The real life correlate of that is people have already created slaves that adore them: they are called dogs,” says Oxford University’s Julian Savulescu.

Thousands of years of selective breeding have turned a wild wolf into an animal that enjoys rounding up sheep, that loves its human master unconditionally.

Wednesday, December 16, 2020

If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs

Anand Vaidya
The Conversation
Originally posted 27 Oct 20

Here is an excerpt:

There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.

Data is an android. How do these distinctions play out with respect to him?

The Data dilemma

The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness - he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.

Friday, December 11, 2020

11th Circuit blocks South FL prohibitions on 'conversion therapy' for minors as unconstitutional

Michael Moline
Florida Phoenix
Originally posted 20 Nov 20

Here is an excerpt:

“We understand and appreciate that the therapy is highly controversial. But the First Amendment has no carve-out for controversial speech. We hold that the challenged ordinances violate the First Amendment because they are content-based regulations of speech that cannot survive strict scrutiny,” Grant wrote.

Judge Beverly Martin dissented, pointing to condemnations of the practice by the American Academy of Pediatrics, the American Psychiatric Association, the American Psychological Association, the American Psychoanalytic Association, the American Academy of Child and Adolescent Psychiatry, the American School Counselor Association, the U.S. Department of Health and Human Services, and the World Health Organization.

“Today’s majority opinion puts a stop to municipal efforts to regulate ‘sexual orientation change efforts’ (commonly known as ‘conversion therapy’), which is known to be a harmful therapeutic practice,” Martin wrote.

“The majority invalidates laws enacted to curb these therapeutic practices, despite strong evidence of the harm they cause, as well as the laws’ narrow focus on licensed therapists practicing on patients who are minors. Although I am mindful of the free-speech concerns the majority expresses, I respectfully dissent from the decision to enjoin these laws.”

Matt Staver, founder and chairman of Liberty Counsel, the conservative legal organization that represented two counselors who challenged the ordinance, welcomed the ruling.

“This is a huge victory for counselors and their clients to choose the counsel of their choice free of political censorship from government ideologues. This case is the beginning of the end of similar unconstitutional counseling bans around the country,” he said in a written statement.

Thursday, November 12, 2020

Deinstitutionalization of People with Mental Illness: Causes and Consequences

Daniel Yohanna, MD
Virtual Mentor. 2013;15(10):886-891.

Here is an excerpt:

State hospitals must return to their traditional role of the hospital of last resort. They must function as entry points to the mental health system for most people with severe mental illness who otherwise will wind up in a jail or prison. State hospitals are also necessary for involuntary commitment. As a nation, we are working through a series of tragedies involving weapons in the hands of people with severe mental illness—in Colorado, where James Holmes killed or wounded 70 people, Arizona, where Jared Loughner killed or wounded 19 people, and Connecticut, where Adam Lanza killed 28 including children as young as 6 years old. All are thought to have had severe mental illness at the time of their crimes. After we finish the debate about the availability of guns, particularly to those with mental illness, we will certainly have to address the mental health system and lack of services, especially for those in need of treatment but unwilling or unable to seek it. With proper services, including involuntary commitment, many who have the potential for violence can be treated. Just where will those services be initiated, and what will be needed?

Nearly 30 years ago, Gudeman and Shore published an estimate of the number of people who would need long-term care—defined as secure, supportive, indefinite care in specialized facilities—in Massachusetts. Although a rather small study, it is still instructive today. They estimated that 15 persons out of 100,000 in the general population would need long-term care. Trudel and colleagues confirmed this approximation with a study of the long-term need for care among people with the most severe and persistent mental illness in a semi-rural area in Canada, where they estimated a need of 12.4 beds per 100,000. A consensus of other experts estimates that the total number of state beds required for acute and long-term care would be more like 50 beds per 100,000 in the population. At the peak of availability in 1955, there were 340 beds per 100,000. In 2010, the number of state beds was 43,318 or 14.1 beds per 100,000.
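A quick back-of-the-envelope check of those figures, assuming a 2010 US population of roughly 309 million (an assumption, though one consistent with the article's own 14.1-beds-per-100,000 figure):

$$\frac{50}{100{,}000} \times 309{,}000{,}000 \;\approx\; 154{,}500 \text{ beds implied by the expert consensus} \quad \text{vs.} \quad 43{,}318 \text{ actual state beds in 2010}$$

That is a shortfall of roughly 110,000 beds relative to the 50-per-100,000 estimate.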

Wednesday, September 30, 2020

Christians, Gun Rights, and the American Social Compact

David French
The Dispatch
Originally posted September 2020

Here is an excerpt:

Why would I say that Christians are celebrating Rittenhouse? For one thing, a Christian crowdfunding site has raised more than $450,000 for his legal defense. Christian writers have called him a “good Samaritan” and argued that he’s a “decent, idealistic kid who entered that situation with the desire to do good, and, in fact, did do good.” (Emphasis added.)

Rittenhouse’s case comes on the heels of the Republican decision to showcase Mark and Patricia McCloskey at the Republican National Convention, the St. Louis couple that has been criminally charged for brandishing weapons at Black Lives Matter protesters who were marching outside their home.

The McCloskeys are obviously entitled to a legal defense, and I am not opining on the legal merits of their case (again, there is much we don’t know), but as a gun-owner, I cringed at their actions. They weren’t heroic. They were reckless. Pointing a weapon at another human being is a gravely serious act. It’s inherently dangerous, and if done unlawfully it often triggers in its targets an immediate right of violent (and potentially deadly) self-defense.

At the same time, we’re seeing an increasing number of openly-armed, rifle-toting conservative vigilantes not just aggressively confronting far-left crowds in the streets, but also using their weapons to intimidate lawmakers into canceling a legislative session.

In other words, we are watching gun-owners, sometimes cheered on by Christian conservatives, breaking the social compact. They aren’t exercising their rights responsibly, they’re pushing them to the (sometimes literally) bleeding edge, pouring gasoline on a civic fire, and creating real fear in their fellow citizens.

This is exactly when a healthy conservative Christian community rises up and quite simply says, “No.” With one voice it condemns vigilantism and models civic responsibility.

The information is here.

Thursday, April 2, 2020

Intelligence, Surveillance, and Ethics in a Pandemic

Jessica Davis
JustSecurity.org
Originally posted 31 March 20

Here is an excerpt:

It is imperative that States and their citizens question how much freedom and privacy should be sacrificed to limit the impact of this pandemic. It is also not sufficient to ask simply “if” something is legal; we should also ask whether it should be, and under what circumstances. States should consider the ethics of surveillance and intelligence, specifically whether it is justified, done under the right authority, if it can be done with intentionality and proportionality and as a last resort, and if targets of surveillance can be separated from non-targets to avoid mass surveillance. These considerations, combined with enhanced transparency and sunset clauses on the use of intelligence and surveillance techniques, can allow States to ethically deploy these powerful tools to help stop the spread of the virus.

States are employing intelligence and surveillance techniques to contain the spread of the illness because these methods can help track and identify infected or exposed people and enforce quarantines. States have used cell phone data to track people at risk of infection or transmission and financial data to identify places frequented by at-risk people. Social media intelligence is also ripe for exploitation in terms of identifying social contacts. This intelligence is increasingly being combined with health data, creating a unique (and informative) picture of a person’s life that is undoubtedly useful for virus containment. But how long should States have access to this type of information on their citizens, if at all? Considering natural limits to the collection of granular data on citizens is imperative, both in terms of time and access to this data.

The info is here.

Wednesday, December 25, 2019

Convict Trump: The Constitution is more important than abortion

Paul Miller
The Christian Post
Originally posted 22 Dec 19

Christians should advocate for President Donald J. Trump’s conviction and removal from office by the Senate. While Trump has an excellent record of appointing conservative judges and advancing a prolife agenda, his criminal conduct endangers the Constitution. The Constitution is more important than the prolife cause because without the Constitution, prolife advocacy would be meaningless.

The fact that we live in a democratic republic is what enables us to turn our prolife convictions from private opinion into public advocacy. In other systems of government, the government does not care what its citizens think or believe. Only when the government is forced to take counsel from its citizens through elections, representation, and majoritarian rule do our opinions count.

Our democratic Constitution — adopted to “secure the blessings of liberty” for all Americans — is what guarantees that our voice matters. Without it, we can talk about the evils of abortion until we are blue in the face and it will never affect abortion policy one iota. The Constitution — with its guarantees of free speech, free assembly, the right to petition the government, regular elections, and the peaceful transfer of power — is the only thing that forces the government to listen to us.

Trump’s behavior is a threat to our Constitutional order. The facts behind his impeachment show that he abused a position of public trust for private gain, the definition of corruption and abuse of power. More worryingly, he refused to comply with Congress’s power to investigate his conduct, a fundamental breach of the checks and balances that are the bedrock of our Constitutional order.

The info is here.

Saturday, October 5, 2019

Brain-reading tech is coming. The law is not ready to protect us.

Sigal Samuel
vox.com
Originally posted August 30, 2019

Here is an excerpt:

2. The right to mental privacy

You should have the right to seclude your brain data or to publicly share it.

Ienca emphasized that neurotechnology has huge implications for law enforcement and government surveillance. “If brain-reading devices have the ability to read the content of thoughts,” he said, “in the years to come governments will be interested in using this tech for interrogations and investigations.”

The right to remain silent and the principle against self-incrimination — enshrined in the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

3. The right to mental integrity

You should have the right not to be harmed physically or psychologically by neurotechnology.

BCIs equipped with a “write” function can enable new forms of brainwashing, theoretically enabling all sorts of people to exert control over our minds: religious authorities who want to indoctrinate people, political regimes that want to quash dissent, terrorist groups seeking new recruits.

What’s more, devices like those being built by Facebook and Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth signal, increasing or decreasing the voltage of the current that goes to your brain — thus making you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca said, adding, “A hack like this wouldn’t require that much technological sophistication.”

The info is here.

Tuesday, April 9, 2019

N.J. approves bill giving terminally ill people the right to end their lives

Susan Livio
www.nj.com
Originally posted March 25, 2019

New Jersey is poised to become the eighth state to allow doctors to write a lethal prescription for terminally ill patients who want to end their lives.

The state Assembly voted 41-33 with four abstentions Monday to pass the “Medical Aid in Dying for the Terminally Ill Act.” Minutes later, the state Senate approved the bill 21-16.

Gov. Phil Murphy later issued a statement saying he would sign the measure into law.

“Allowing terminally ill and dying residents the dignity to make end-of-life decisions according to their own consciences is the right thing to do,” the Democratic governor said. “I look forward to signing this legislation into law.”

The measure (A1504) would take effect four months after it is signed.

Susan Boyce, 55, of Rumson, smiled and wept after the final vote.

“I’ve been working on this quite a while,” said Boyce, who has been diagnosed with a terminal autoimmune disease, Alpha-1 antitrypsin deficiency, and needs an oxygen tank to breathe.

The info is here.

Wednesday, January 16, 2019

What Is the Right to Privacy?

Andrei Marmor
(2015) Philosophy & Public Affairs, 43, 1, pp 3-26

The right to privacy is a curious kind of right. Most people think that we have a general right to privacy. But when you look at the kind of issues that lawyers and philosophers label as concerns about privacy, you see widely differing views about the scope of the right and the kind of cases that fall under its purview. Consequently, it has become difficult to articulate the underlying interest that the right to privacy is there to protect—so much so that some philosophers have come to doubt that there is any underlying interest protected by it. According to Judith Thomson, for example, privacy is a cluster of derivative rights, some of them derived from rights to own or use your property, others from the right to your person or your right to decide what to do with your body, and so on. Thomson’s position starts from a sound observation, and I will begin by explaining why. The conclusion I will reach, however, is very different. I will argue that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. I will strive to show that this underlying interest justifies the right to privacy and explains its proper scope, though the scope of the right might be narrower, and fuzzier in its boundaries, than is commonly understood.

The info is here.

Saturday, November 24, 2018

Establishing an AI code of ethics will be harder than people think

Karen Hao
www.technologyreview.com
Originally posted October 21, 2018

Over the past six years, the New York City police department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.

"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them?

Not only is facial recognition imperfect, studies have shown that the leading software is less accurate for dark-skinned individuals and women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.

The info is here.

Wednesday, September 12, 2018

‘My death is not my own’: the limits of legal euthanasia

Henk Blanken
The Guardian
Originally posted August 10, 2018

Here is an excerpt:

Of the 10,000 Dutch patients with dementia who die each year, roughly half of them will have had an advance euthanasia directive. They believed a doctor would “help” them. After all, this was permitted by law, and it was their express wish. Their naive confidence is shared by four out of 10 Dutch adults, who are convinced that a doctor is bound by an advance directive. In fact, doctors are not obliged to do anything. Euthanasia may be legal, but it is not a right.

As doctors have a monopoly on merciful killing, their ethical standard, and not the law, ultimately determines whether a man like Joop can die. An advance directive is just one factor, among many, that a doctor will consider when deciding on a euthanasia case. And even though the law says it’s legal, almost no doctors are willing to perform euthanasia on patients with severe dementia, since such patients are no longer mentally capable of making a “well-considered request” to die.

This is the catch-22. If your dementia is at such an early stage that you are mentally fit enough to decide that you want to die, then it is probably “too early” to want to die. You still have good years left. And yet, by the time your dementia has deteriorated to the point at which you wished (when your mind was intact) to die, you will no longer be allowed to die, as you are not mentally fit to make that decision. It is now “too late” to die.

The info is here.

Friday, May 11, 2018

Samantha’s suffering: why sex machines should have rights too

Victoria Brooks
The Conversation
Originally posted April 5, 2018

Here is the conclusion:

Machines are indeed what we make them. This means we have an opportunity to avoid assumptions and prejudices brought about by the way we project human feelings and desires. But does this ethically entail that robots should be able to consent to or refuse sex, as human beings would?

The innovative philosophers and scientists Frank and Nyholm have found many legal reasons for answering both yes and no (a robot’s lack of human consciousness and legal personhood, and the “harm” principle, for example). Again, we find ourselves seeking to apply a very human law. But feelings of suffering outside of relationships, or identities accepted as the “norm”, are often illegitimised by law.

So a “legal” framework which has its origins in heteronormative desire does not necessarily construct the foundation of consent and sexual rights for robots. Rather, as the renowned post-human thinker Rosi Braidotti argues, we need an ethic, as opposed to a law, which helps us find a practical and sensitive way of deciding, taking into account emergences from cross-species relations. The kindness and empathy we feel toward Samantha may be a good place to begin.

The article is here.

Sunday, April 22, 2018

What is the ethics of ageing?

Christopher Simon Wareham
Journal of Medical Ethics 2018;44:128-132.

Abstract

Applied ethics is home to numerous productive subfields such as procreative ethics, intergenerational ethics and environmental ethics. By contrast, there is far less ethical work on ageing, and there is no boundary work that attempts to set the scope for ‘ageing ethics’ or the ‘ethics of ageing’. Yet ageing is a fundamental aspect of life; arguably even more fundamental and ubiquitous than procreation. To remedy this situation, I examine conceptions of what the ethics of ageing might mean and argue that these conceptions fail to capture the requirements of the desired subfield. The key reasons for this are, first, that they view ageing as something that happens only when one is old, thereby ignoring the fact that ageing is a process to which we are all subject, and second that the ageing person is treated as an object in ethical discourse rather than as its subject. In response to these shortcomings I put forward a better conception, one which places the ageing person at the centre of ethical analysis, has relevance not just for the elderly and provides a rich yet workable scope. While clarifying and justifying the conceptual boundaries of the subfield, the proposed scope pleasingly broadens the ethics of ageing beyond common negative associations with ageing.

The article is here.