Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Humanity.

Tuesday, February 26, 2019

Strengthening Our Science: AGU Launches Ethics and Equity Center

Robyn Bell
EOS.org
Originally published February 14, 2019

In the next century, our species will face a multitude of challenges. A diverse and inclusive community of researchers ready to lead the way is essential to solving these global-scale challenges. While Earth and space science has made many positive contributions to society over the past century, our community has suffered from a lack of diversity and a culture that tolerates unacceptable and divisive conduct. Bias, harassment, and discrimination create a hostile work climate, undermining the entire global scientific enterprise and its ability to benefit humanity.

As we considered how our Centennial can launch the next century of amazing Earth and space science, we focused on working with our community to build diverse, inclusive, and ethical workplaces where all participants are encouraged to develop their full potential. That’s why I’m so proud to announce the launch of the AGU Ethics and Equity Center, a new hub for comprehensive resources and tools designed to support our community across a range of topics linked to ethics and workplace excellence. The Center will provide resources to individual researchers, students, department heads, and institutional leaders. These resources are designed to help share and promote leading practices on issues ranging from building inclusive environments, to scientific publications and data management, to combating harassment, to example codes of conduct. AGU plans to transform our culture in scientific institutions so we can achieve inclusive excellence.

The info is here.

Wednesday, July 4, 2018

Curiosity and What Equality Really Means

Atul Gawande
The New Yorker
Originally published June 2, 2018

Here is an excerpt:

We’ve divided the world into us versus them—an ever-shrinking population of good people against bad ones. But it’s not a dichotomy. People can be doers of good in many circumstances. And they can be doers of bad in others. It’s true of all of us. We are not sufficiently described by the best thing we have ever done, nor are we sufficiently described by the worst thing we have ever done. We are all of it.

Regarding people as having lives of equal worth means recognizing each as having a common core of humanity. Without being open to their humanity, it is impossible to provide good care to people—to insure, for instance, that you’ve given them enough anesthetic before doing a procedure. To see their humanity, you must put yourself in their shoes. That requires a willingness to ask people what it’s like in those shoes. It requires curiosity about others and the world beyond your boarding zone.

We are in a dangerous moment because every kind of curiosity is under attack—scientific curiosity, journalistic curiosity, artistic curiosity, cultural curiosity. This is what happens when the abiding emotions have become anger and fear. Underneath that anger and fear are often legitimate feelings of being ignored and unheard—a sense, for many, that others don’t care what it’s like in their shoes. So why offer curiosity to anyone else?

Once we lose the desire to understand—to be surprised, to listen and bear witness—we lose our humanity. Among the most important capacities that you take with you today is your curiosity. You must guard it, for curiosity is the beginning of empathy. When others say that someone is evil or crazy, or even a hero or an angel, they are usually trying to shut off curiosity. Don’t let them. We are all capable of heroic and of evil things. No one and nothing that you encounter in your life and career will be simply heroic or evil. Virtue is a capacity. It can always be lost or gained. That potential is why all of our lives are of equal worth.

The article is here.

Wednesday, June 20, 2018

How the Enlightenment Ends

Henry A. Kissinger
The Atlantic
Posted in the June 2018 Issue

Here are two excerpts:

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

(cut)

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

The article is here.

Thursday, November 9, 2017

Morality and Machines

Robert Fry
Prospect
Originally published October 23, 2017

Here is an excerpt:

It is axiomatic that robots are more mechanically efficient than humans; equally they are not burdened with a sense of self-preservation, nor is their judgment clouded by fear or hysteria. But it is that very human fallibility that requires the intervention of the defining human characteristic—a moral sense that separates right from wrong—and explains why the ethical implications of the autonomous battlefield are so much more contentious than the physical consequences. Indeed, an open letter in 2015 seeking to separate AI from military application included the signatures of such luminaries as Elon Musk, Steve Wozniak, Stephen Hawking and Noam Chomsky. For the first time, therefore, human agency may be necessary on the battlefield not to take the vital tactical decisions but to weigh the vital moral ones.

So, who will accept these new responsibilities and how will they be prepared for the task? The first point to make is that none of this is an immediate prospect and it may be that AI becomes such a ubiquitous and beneficial feature of other fields of human endeavour that we will no longer fear its application in warfare. It may also be that morality will co-evolve with technology. Either way, the traditional military skills of physical stamina and resilience will be of little use when machines will have an infinite capacity for physical endurance. Nor will the quintessential commander’s skill of judging tactical advantage have much value when cognitive computing will instantaneously integrate sensor information. The key human input will be to make the judgments that link moral responsibility to legal consequence.

The article is here.

Tuesday, October 31, 2017

Who Is Rachael? Blade Runner and Personal Identity

Helen Beebee
iai news
Originally posted October 5, 2017

It’s no coincidence that a lot of philosophers are big fans of science fiction. Philosophers like to think about far-fetched scenarios or ‘thought experiments’, explore how they play out, and think about what light they can shed on how we should think about our own situation. What if you could travel back in time? Would you be able to kill your own grandfather, thereby preventing him from meeting your grandmother, meaning that you would never have been born in the first place? What if we could somehow predict with certainty what people would do? Would that mean that nobody had free will? What if I was really just a brain wired up to a sophisticated computer running virtual reality software? Should it matter to me that the world around me – including other people – is real rather than a VR simulation? And how do I know that it’s not?

Questions such as these routinely get posed in sci-fi books and films, and in a particularly vivid and thought-provoking way. In immersing yourself in an alternative version of reality, and by identifying or sympathising with the characters and seeing things from their point of view, you can often get a much better handle on the question. Philip K. Dick – whose Do Androids Dream of Electric Sheep?, first published in 1968, is the story on which the 1982 film Blade Runner is based – was a master at exploring these kinds of philosophical questions. Often the question itself is left unstated; his characters are generally not much prone to philosophical rumination on their situation. But it’s there in the background nonetheless, waiting for you to find it and to think about what the answer might be.

Some of the questions raised by the original Dick story don’t get any, or much, attention in Blade Runner. Mercerism – the peculiar quasi-religion of the book, which is based on empathy and which turns out to be founded on a lie – doesn’t get a mention in the film. And while, in the film as in the book, the capacity for empathy is what (supposedly) distinguishes humans from androids (or, in the film, replicants; apparently by 1982 ‘android’ was considered too dated a word), in the film we don’t get the suggestion that the purported significance of empathy, through its role in Mercerism, is really just a ploy: a way of making everyone think that androids lack, as it were, the essence of personhood, and hence can be enslaved and bumped off with impunity.

The article is here.

Saturday, September 9, 2017

Will Technology Help Us Transcend the Human Condition?

Michael Hauskeller & Kyle McNease

Transcendence used to be the end of a spiritual quest and endeavour. Not anymore. Today we are more likely to believe that if anything can help us transcend the human condition it is not God or some kind of religious communion, but science and technology. Confidence is high that, if we do things right, and boldly and without fear embrace the new opportunities that technological progress grants us, we will soon be able to accomplish things that no human has ever done, or even imagined doing, before. With luck, we will be unimaginably smart and powerful, and virtually immortal, all thanks to a development that seems unstoppable and that has already surpassed all reasonable expectations.

Once upon a time, not so long ago, we used maps and atlases to find our way around. Occasionally we even had to stop and ask someone not named Siri or Cortana if we were indeed on the correct route. Today, our cars are navigated by satellites that triangulate our location in real time while circling the earth at thousands of miles per hour, and self-driving cars for everyone are just around the corner. Soon we may not even need cars anymore. Why go somewhere if technology can bring the world to us?

Already we are in a position to do most of what we have to or want to do from home: get an education, work, do our shopping, our banking, our communication, all thanks to the internet, which 30 years ago did not exist and is now, to many of us, indispensable. Those who are coming of age today find it difficult to imagine a world without it. Currently, there are over 3.2 billion people connected to the World Wide Web, 2 billion of whom live in developing countries. Most of them connect to the Web via increasingly versatile and powerful mobile devices few people would have thought possible a couple of generations ago. Soon we may be able to dispense even with mobile devices and do all of it in our bio-upgraded heads.

In terms of the technology we are using every day without a second thought, the world has changed dramatically, and it continues to do so. Computation is now nearly ubiquitous; people seem constantly attached to their cellular phones, iPads, and laptops, enthusiastically endorsing their own progressive cyborgization. And connectivity does not stop at the level of human beings: even our household objects and devices are connected to the internet and communicate with each other, using their own secret language and taking care of things largely without the need for human intervention and control. The world we have built for ourselves thrives on a steady diet of zeroes and ones that have now become our co-creators, continuing the world-building in often unexpected ways.

The paper is here.

Friday, August 18, 2017

Psychologists surveyed hundreds of alt-right supporters. The results are unsettling.

Brian Resnick
Vox.com
Originally posted August 15, 2017

Here is an excerpt:

The alt-right scores high on dehumanization measures

One of the starkest, darkest findings in the survey comes from a simple question: How evolved do you think other people are?

Kteily, the co-author on this paper, pioneered this new and disturbing way to measure dehumanization — the tendency to see others as being less than human. He simply shows study participants the following (scientifically inaccurate) image of a human ancestor slowly learning how to stand on two legs and become fully human.

Participants are asked to rate where certain groups fall on this scale from 0 to 100. Zero is not human at all; 100 is fully human.

On average, alt-righters saw other groups as hunched-over proto-humans.

On average, they rated Muslims at a 55.4 (again, out of 100), Democrats at 60.4, black people at 64.7, Mexicans at 67.7, journalists at 58.6, Jews at 73, and feminists at 57. These groups appear as subhumans to those taking the survey. And what about white people? They were scored at a noble 91.8. (You can look through all the data here.)
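As a rough illustration of how such slider responses become the group averages reported above, here is a minimal Python sketch; the ratings in it are invented placeholders for illustration, not the study's actual data.

# A minimal sketch (illustrative only) of aggregating "Ascent of Man"
# slider ratings. The numbers below are invented placeholders, not the
# survey's raw data.

from statistics import mean

# One hypothetical 0-100 rating per participant for each group, where
# 0 means "not human at all" and 100 means "fully human".
ratings = {
    "Muslims": [50, 58, 58],
    "Democrats": [59, 61, 61],
    "journalists": [55, 60, 61],
    "white people": [90, 92, 93],
}

# A group's reported score is simply the mean slider position.
for group, scores in ratings.items():
    print(f"{group}: rated {mean(scores):.1f} out of 100")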

The article is here.

Friday, August 11, 2017

What an artificial intelligence researcher fears about AI

Arend Hintze
TechXplore.com
Originally published July 14, 2017

Here is an excerpt:

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

The article is here.

Wednesday, August 2, 2017

A Primatological Perspective on Evolution and Morality

Sarah F. Brosnan
What can evolution tell us about morality?
http://www.humansandnature.org

Morality is a key feature of humanity, but how did we become a moral species? And is morality a uniquely human phenomenon, or do we see its roots in other species? One of the most fun parts of my research is studying the evolutionary basis of behaviors that we think of as quintessentially human, such as morality, to try to understand where they came from and what purpose they serve. In so doing, we can not only better understand why people behave the way that they do, but we also may be able to develop interventions that promote more beneficial decision-making.

Of course, a “quintessentially human” behavior is not replicated, at least in its entirety, in another species, so how does one study the evolutionary history of such behaviors? To do so, we focus on precursor behaviors that are related to the one in question and provide insight into the evolution of the target behavior. A precursor behavior may look very different from the final instantiation; for instance, birds’ wings appear to have originated as feathers that were used for either insulation or advertisement (i.e., sexual selection) that, through a series of intermediate forms, evolved into feathered wings. The chemical definition may be even more apt; a precursor molecule is one that triggers a reaction, resulting in a chemical that is fundamentally different from the initial chemicals used in the reaction.

How is this related to morality? We would not expect to see human morality in other species, as morality implies the ability to debate ethics and develop group rules and norms, which is not possible in non-verbal species. However, complex traits like morality do not arise de novo; like wings, they evolve from existing traits. Therefore, we can look for potential precursors in other species in order to better understand the evolutionary history of morality.

The information is here.

Sunday, January 29, 2017

Neuroexistentialism: Third-Wave Existentialism

Owen Flanagan and Gregg D. Caruso
In Neuroexistentialism: Meaning, Morals, and Purpose in the Age of Neuroscience edited by Flanagan and Caruso

Here is an excerpt:

The scientific image is also disturbing for other reasons. It maintains, for example, that the mind is the brain (see fn.4), that humans are animals, that how things seem is not how they are, that introspection is a poor instrument for revealing how the mind works, that there is no ghost in the machine, no Cartesian theatre where consciousness comes together, that our sense of self may in part be an illusion, and that the physical universe is the only universe that there is and it is causally closed. Many fear that if this is true, then it is the end of the world as we know it, or knew it under the humanistic regime or image. Neuroexistentialism is one way of expressing whatever anxiety comes from accepting the picture of myself as an animal (the Darwin part) and that my mind is my brain, my mental states are brain states (the neuro- part). Taken together, the message is that humans are 100% animal. One might think that that message was already available in Darwin. What does neuroscience add? It adds evidence, we might say, that Darwin’s idea is true, and that it is, as Daniel Dennett says, “a dangerous idea” (1995). Most people in the West still hold on to the idea that they have a non-physical soul or mind. But as neuroscience advances it becomes increasingly clear that there is no place in the brain for res cogitans to be nor any work for it to do. The universe is causally closed and the mind is the brain.

The book chapter is here.

Note to readers: This book chapter, while an introduction to the entire volume, is excellent scholarship.  There are a number of chapters that will likely appeal to clinical psychologists.

Wednesday, September 21, 2016

Forget ideology, liberal democracy’s newest threats come from technology and bioscience

John Naughton
The Guardian
Originally posted August 28, 2016

Here is an excerpt:

Here Harari ventures into the kind of dystopian territory that Aldous Huxley would recognise. He sees three broad directions.

1. Humans will lose their economic and military usefulness, and the economic system will stop attaching much value to them.

2. The system will still find value in humans collectively but not in unique individuals.

3. The system will, however, find value in some unique individuals, “but these will be a new race of upgraded superhumans rather than the mass of the population”.

By “system”, he means the new kind of society that will evolve as bioscience and information technology progress at their current breakneck pace. As before, this society will be based on a deal between religion and science but this time humanism will be displaced by what Harari calls “dataism” – a belief that the universe consists of data flows, and the value of any entity or phenomenon is determined by its contribution to data processing.

The article is here.

Monday, May 9, 2016

How Animals Think

By Alison Gopnik
The Atlantic
May 2016

Here is an excerpt:

Psychologists often assume that there is a special cognitive ability—a psychological secret sauce—that makes humans different from other animals. The list of candidates is long: tool use, cultural transmission, the ability to imagine the future or to understand other minds, and so on. But every one of these abilities shows up in at least some other species in at least some form. De Waal points out various examples, and there are many more. New Caledonian crows make elaborate tools, shaping branches into pointed, barbed termite-extraction devices. A few Japanese macaques learned to wash sweet potatoes and even to dip them in the sea to make them more salty, and passed that technique on to subsequent generations. Western scrub jays “cache”—they hide food for later use—and studies have shown that they anticipate what they will need in the future, rather than acting on what they need now.

From an evolutionary perspective, it makes sense that these human abilities also appear in other species. After all, the whole point of natural selection is that small variations among existing organisms can eventually give rise to new species. Our hands and hips and those of our primate relatives gradually diverged from the hands and hips of common ancestors. It’s not that we miraculously grew hands and hips and other animals didn’t. So why would we alone possess some distinctive cognitive skill that no other species has in any form?

The article is here.

Sunday, December 20, 2015

What happens when our computers get smarter than we are?

By Nick Bostrom
Ted Talk
Originally published March 2015.

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?


Monday, December 7, 2015

Everyone Else Could Be a Mindless Zombie

By Kurt Gray
Time Magazine
Originally posted November 17, 2015

Here is an excerpt:

Our research reveals that whether something can think or feel is mostly a matter of perception, which can lead to bizarre reversals. Objectively speaking, humans are smarter than cats, and yet people treat their pets like people and the homeless like objects. Objectively speaking, pigs are smarter than baby seals, but people will scream about seal clubbing while eating a BLT.

That minds are perceived spells trouble for political harmony. When people see minds differently in chickens, fetuses, and enemy combatants, it leads to conflicts about vegetarianism, abortion, and torture. Despite facilitating these debates, mind perception can make our moral opponents seem more human and less monstrous. With abortion, both liberals and conservatives agree that killing babies is immoral, and disagree only about whether a fetus is a baby or a mass of mindless cells.

The entire article is here.

Sunday, April 12, 2015

Human, All Too Human: 3-Part Documentary Profiles Nietzsche, Heidegger & Sartre

From Open Culture
Originally published April 8, 2014

Certainly three of the most radical thinkers of the last 150 years, Nietzsche, Heidegger, and Sartre were also three of the most controversial, and at times politically toxic, for their perceived links to totalitarian regimes. In Nietzsche’s case, the connection to Nazism was wholly spurious, concocted after his death by his anti-Semitic sister. Nevertheless, Nietzsche’s philosophy is far from sympathetic to equality, his politics, such as they are, highly undemocratic. The case of Heidegger is much more disturbing—a member of the Nazi party, the author of Being and Time notoriously held fascist views, made all the more clear by the recent publication of his infamous “black notebooks.” And Sartre, author of Being and Nothingness, has long been accused of supporting Stalinism—a charge that may be oversimplified, but is not without some merit.

The three 50-minute videos are here.

Saturday, September 6, 2014

Understanding Heidegger on Technology

By Mark Blitz
The New Atlantis: A Journal of Technology and Society
Originally published in 2014

Here is an excerpt:

Technology as Revealing

Heidegger’s concern with technology is not limited to his writings that are explicitly dedicated to it, and a full appreciation of his views on technology requires some understanding of how the problem of technology fits into his broader philosophical project and phenomenological approach. (Phenomenology, for Heidegger, is a method that tries to let things show themselves in their own way, and not see them in advance through a technical or theoretical lens.) The most important argument in Being and Time that is relevant for Heidegger’s later thinking about technology is that theoretical activities such as the natural sciences depend on views of time and space that narrow the understanding implicit in how we deal with the ordinary world of action and concern. We cannot construct meaningful distance and direction, or understand the opportunities for action, from science’s neutral, mathematical understanding of space and time. Indeed, this detached and “objective” scientific view of the world restricts our everyday understanding. Our ordinary use of things and our “concernful dealings” within the world are pathways to a more fundamental and more truthful understanding of man and being than the sciences provide; science flattens the richness of ordinary concern. By placing science back within the realm of experience from which it originates, and by examining the way our scientific understanding of time, space, and nature derives from our more fundamental experience of the world, Heidegger, together with his teacher Husserl and some of his students such as Jacob Klein and Alexandre Koyré, helped to establish new ways of thinking about the history and philosophy of science.

The entire story is here.

Friday, August 29, 2014

Artificial Wombs Are Coming, but the Controversy Is Already Here

By Zoltan Istvan
MotherBoard
Originally posted August 4, 2014

Of all the transhumanist technologies coming in the near future, one stands out that both fascinates and perplexes people. It's called ectogenesis: raising a fetus outside the human body in an artificial womb.

It has the possibility to change one of the most fundamental acts that most humans experience: the way people go about having children. It also has the possibility to change the way we view the female body and the field of reproductive rights.

Naturally, it's a social and political minefield.

The entire article is here.