Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Future. Show all posts

Wednesday, May 22, 2019

Why Behavioral Scientists Need to Think Harder About the Future

Ed Brandon
www.behavioralscientist.org
Originally published January 17, 2019

Here is an excerpt:

It’s true that any prediction made a century out will almost certainly be wrong. But thinking carefully and creatively about the distant future can sharpen our thinking about the present, even if what we imagine never comes to pass. And if this feels like we’re getting into the realms of (behavioral) science fiction, then that’s a feeling we should lean into. Whether we like it or not, futuristic visions often become shorthand for talking about technical concepts. Public discussions about A.I. safety, or automation in general, rarely manage to avoid at least a passing reference to the Terminator films (to the dismay of leading A.I. researchers). In the behavioral science sphere, plodding Orwell comparisons are now de rigueur whenever “government” and “psychology” appear in the same sentence. If we want to enrich the debate beyond an argument about whether any given intervention is or isn’t like something out of 1984, expanding our repertoire of sci-fi touch points can help.

As the Industrial Revolution picked up steam, accelerating technological progress raised the possibility that even the near future might look very different to the present. In the nineteenth century, writers such as Jules Verne, Mary Shelley, and H. G. Wells started to write about the new worlds that might result. Their books were not dry lists of predictions. Instead, they explored the knock-on effects of new technologies, and how ordinary people might react. Invariably, the most interesting bits of these stories were not the technologies themselves but the social and dramatic possibilities they opened up. In Shelley’s Frankenstein, there is the horror of creating something you do not understand and cannot control; in Wells’s War of the Worlds, peripeteia as humans get dislodged from the top of the civilizational food chain.

The info is here.

Sunday, April 15, 2018

What’s Next for Humanity: Automation, New Morality and a ‘Global Useless Class’

Kimiko de Freytas-Tamura
The New York Times
Originally published March 19, 2018

What will our future look like — not in a century but in a mere two decades?

Terrifying, if you’re to believe Yuval Noah Harari, the Israeli historian and author of “Sapiens” and “Homo Deus,” a pair of audacious books that offer a sweeping history of humankind and a forecast of what lies ahead: an age of algorithms and technology that could see us transformed into “super-humans” with godlike qualities.

In an event organized by The New York Times and How To Academy, Mr. Harari gave his predictions to the Times columnist Thomas L. Friedman. Humans, he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening.” Here are highlights of the interview.

Artificial intelligence and automation will create a ‘global useless class.’

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Mr. Harari said, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater, because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

The article is here.

The video is worth watching.

Please read “Sapiens” and “Homo Deus” by Yuval Harari.

Monday, April 9, 2018

Use Your Brain: Artificial Intelligence Isn't Close to Replacing It

Leonid Bershidsky
Bloomberg.com
Originally posted March 19, 2018

Nectome promises to preserve the brains of terminally ill people in order to turn them into computer simulations -- at some point in the future when such a thing is possible. It's a startup that's easy to mock. Just beyond the mockery, however, lies an important reminder to remain skeptical of modern artificial intelligence technology.

The idea behind Nectome is known to mind uploading enthusiasts (yes, there's an entire culture around the idea, with a number of wealthy foundations backing the research) as "destructive uploading": A brain must be killed to map it. That macabre proposition has resulted in lots of publicity for Nectome, which predictably got lumped together with earlier efforts to deep-freeze millionaires' bodies so they could be revived when technology allows it. Nectome's biggest problem, however, isn't primarily ethical.

The company has developed a way of embalming the brain that keeps all its synapses visible under an electron microscope. That makes it possible to create a map of all of the brain's neuron connections, a "connectome." Nectome's founders believe that map is the most important element of the reconstructed human brain and that preserving it should keep all of a person's memories intact. But even these mind uploading optimists only expect the first 10,000-neuron network to be reconstructed sometime between 2021 and 2024.
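A connectome is, in essence, a graph: neurons as nodes, synapses as directed edges. The toy sketch below (in Python, with entirely hypothetical neuron names and connections) shows the kind of structure such a map reduces to; real connectomes are reconstructed from electron-microscope imagery and run to many billions of edges.

```python
# Toy sketch: a connectome as a directed graph, mapping each neuron to
# the set of neurons it synapses onto. Purely illustrative -- the
# neuron names and connections are invented for the example.

connectome: dict[str, set[str]] = {
    "n1": {"n2", "n3"},  # neuron n1 synapses onto n2 and n3
    "n2": {"n3"},
    "n3": {"n1"},        # a feedback connection
}

# Count neurons (nodes) and synapses (edges) in the map.
total_synapses = sum(len(targets) for targets in connectome.values())
print(f"{len(connectome)} neurons, {total_synapses} synapses")
```

Even at this toy scale, one limitation is visible: the map records connectivity alone, not the synaptic strengths or cell states that a faithful simulation of a mind might also require.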

The information is here.

Thursday, March 1, 2018

Concern for Others Leads to Vicarious Optimism

Andreas Kappes, Nadira S. Faber, Guy Kahane, Julian Savulescu, Molly J. Crockett
Psychological Science 
First Published January 30, 2018

Abstract

An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.

From the Discussion section

Optimism is a self-centered phenomenon in which people underestimate the likelihood of negative future events for themselves compared with others (Weinstein, 1980). Usually, the “other” is defined as a group of average others—an anonymous mass. When past studies asked participants to estimate the likelihood of an event happening to either themselves or the average population, participants did not show a learning bias for the average population (Garrett & Sharot, 2014). These findings are unsurprising given that people typically feel little concern for anonymous groups or anonymous individual strangers (Kogut & Ritov, 2005; Loewenstein et al., 2005). Yet people do care about identifiable others, and we accordingly found that people exhibit an optimistic learning bias for identifiable strangers and, even more markedly, for friends. Our research thereby suggests that optimism in learning is not restricted to oneself. We see not only our own lives through rose-tinted glasses but also the lives of those we care about.
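The learning asymmetry described above is commonly formalized with separate learning rates for good and bad news: beliefs shift further toward evidence that an outcome is less likely than feared. Here is a minimal Python sketch of that idea; it is illustrative only, not the authors' analysis code, and the learning rates and probabilities are assumed numbers.

```python
# Minimal sketch of asymmetric belief updating, one common way to model
# the optimistic learning bias. All numbers here are hypothetical.

def update_belief(estimate, evidence, lr_good=0.6, lr_bad=0.2):
    """Move an estimated risk toward the evidence, updating more
    strongly for good news (risk lower than estimated) than for
    bad news (risk higher than estimated)."""
    error = evidence - estimate
    lr = lr_good if error < 0 else lr_bad  # negative error = good news
    return estimate + lr * error

belief = 0.40  # initial estimate that an unpleasant event will occur
print(round(update_belief(belief, 0.20), 2))  # good news: large shift, to 0.28
print(round(update_belief(belief, 0.60), 2))  # bad news: small shift, to 0.44
```

On this reading, vicarious optimism amounts to applying the same asymmetry when the event concerns someone else, with the gap between the two learning rates growing as concern for that person grows.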

The research is here.

Wednesday, January 17, 2018

‘I want to help humans genetically modify themselves’

Tom Ireland
The Guardian
Originally posted December 24, 2017

Josiah Zayner, 36, recently made headlines by becoming the first person to use the revolutionary gene-editing tool Crispr to try to change their own genes. Part way through a talk on genetic engineering, Zayner pulled out a syringe apparently containing DNA and other chemicals designed to trigger a genetic change in his cells associated with dramatically increased muscle mass. He injected the DIY gene therapy into his left arm, live-streaming the procedure on the internet.

The former Nasa biochemist, based in California, has become a leading figure in the growing “biohacker” movement, which involves loose collectives of scientists, engineers, artists, designers, and activists experimenting with biotechnology outside of conventional institutions and laboratories.

Despite warnings from the US Food and Drug Administration (FDA) that selling gene therapy products without regulatory approval is illegal, Zayner sells kits that allow anyone to get started with basic genetic engineering techniques, and has published a free guide for others who want to take it further and experiment on themselves.

The article is here.

Thursday, January 11, 2018

Is Blended Intelligence the Next Stage of Human Evolution?

Richard Yonck
TED Talk
Published December 8, 2017

What is the future of intelligence? Humanity is still an extremely young species, and yet our prodigious intellects have allowed us to achieve all manner of amazing accomplishments in our relatively short time on this planet, especially during the past couple of centuries. Yet it would be short-sighted of us to assume our species has reached the end of its journey, having become as intelligent as it will ever be. On the contrary, it seems far more likely that if we survive our “infancy,” there is much more time ahead of us than behind us. If that’s the case, then our descendants only a few thousand years from now will probably be very, very different from you and me.


Tuesday, January 2, 2018

Votes for the future

Thomas Wells
Aeon.co
Originally published May 8, 2014

Here is an excerpt:

By contrast, future generations must accept whatever we choose to bequeath them, and they have no way of informing us of their values. In this, they are even more helpless than foreigners, on whom our political decisions about pollution, trade, war and so on are similarly imposed without consent. Disenfranchised as they are, such foreigners can at least petition their own governments to tell ours off, or engage with us directly by writing articles in our newspapers about the justice of their cause. The citizens of the future lack even this recourse.

The asymmetry between past and future is more than unfair. Our ancestors are beyond harm; they cannot know if we disappoint them. Yet the political decisions we make today will do more than just determine the burdens of citizenship for our grandchildren. They also concern existential dangers such as the likelihood of pandemics and environmental collapse. Without a presence in our political system, the plight of future citizens who might suffer or gain from our present political decisions cannot be properly weighed. We need to give them a voice.

How could we do that? After all, they can’t actually speak to us. Yet even if we can’t know what future citizens will actually value and believe in, we can still consider their interests, on the reasonable assumption that they will somewhat resemble our own (everybody needs breathable air, for example). Interests are much easier than wishes, and quite suitable for representation by proxies.

So perhaps we should simply encourage current citizens to take up the Burkean perspective and think of their civic duty in a more extended way when casting votes. Could this work?

The article is here.

Friday, August 25, 2017

A philosopher who studies life changes says our biggest decisions can never be rational

Olivia Goldhill
Quartz.com
Originally published August 13, 2017

At some point, everyone reaches a crossroads in life: Do you decide to take that job and move to a new country, or stay put? Should you become a parent, or continue your life unencumbered by the needs of children?

Instinctively, we try to make these decisions by projecting ourselves into the future, trying to imagine which choice will make us happier. Perhaps we seek counsel or weigh up evidence. We might write out a pro/con list. What we are doing, ultimately, is trying to figure out whether or not we will be better off working for a new boss and living in Morocco, say, or raising three beautiful children.

This is fundamentally impossible, though, says philosopher L.A. Paul at the University of North Carolina at Chapel Hill, a pioneer in the philosophical study of transformative experiences. Certain life choices are so significant that they change who we are. Before undertaking those choices, we are unable to evaluate them from the perspective and values of our future, changed selves. In other words, your present self cannot know whether your future self will enjoy being a parent or not.

The article is here.

Tuesday, May 10, 2016

Where do minds belong?

by Caleb Scharf
Aeon
Originally published March 22, 2016

As a species, we humans are awfully obsessed with the future. We love to speculate about where our evolution is taking us. We try to imagine what our technology will be like decades or centuries from now. And we fantasise about encountering intelligent aliens – generally, ones who are far more advanced than we are. Lately those strands have begun to merge. From the evolution side, a number of futurists are predicting the singularity: a time, coming soon, when computers will become powerful enough to simulate human consciousness, or absorb it entirely. In parallel, some visionaries propose that any intelligent life we encounter in the rest of the Universe is more likely to be machine-based than humanoid meat-bags such as ourselves.

These ruminations offer a potential solution to the long-debated Fermi Paradox: the seeming absence of intelligent alien life swarming around us, despite the fact that such life seems possible. If machine intelligence is the inevitable end-point of both technology and biology, then perhaps the aliens are hyper-evolved machines so off-the-charts advanced, so far removed from familiar biological forms, that we wouldn’t recognise them if we saw them. Similarly, we can imagine that interstellar machine communication would be so optimised and well-encrypted as to be indistinguishable from noise. In this view, the seeming absence of intelligent life in the cosmos might be an illusion brought about by our own inadequacies.

The article is here.

Saturday, March 26, 2016

How our bias toward the future can cloud our moral judgment

By Agnieszka Jaroslawska
The Conversation
Originally published March 7, 2016

Here are two excerpts:

It may seem illogical, but research has confirmed that people have markedly different reactions to misdemeanours that have already happened than to those that are going to happen in the future. We tend to judge future crimes to be more deliberate, less moral, and more deserving of punishment than equivalent transgressions in the past. Technically speaking, we exhibit “temporal asymmetries” in moral judgements.

(cut)

Research suggests that people rely on their emotions when making judgements of fairness and morality. When emotions run high, judgements are more extreme than when reactions are weak.

The article is here.

Wednesday, January 13, 2016

The A.I. Anxiety

by Joel Achenbach
The Washington Post
Originally published December 27, 2015

Here is an excerpt:

But the discussion reflects a broader truth: We live in an age in which machine intelligence has become a part of daily life. Computers fly planes and soon will drive cars. Computer algorithms anticipate our needs and decide which advertisements to show us. Machines create news stories without human intervention. Machines can recognize your face in a crowd.

New technologies — including genetic engineering and nanotechnology — are cascading upon one another and converging. We don’t know how this will play out. But some of the most serious thinkers on Earth worry about potential hazards — and wonder whether we remain fully in control of our inventions.

The article is here.

Editor's Note: What if a form of consciousness emerges from AI? There are many reasons, apart from anthropomorphic bias, to expect a form of consciousness to surface from highly complex, synthetic artificial intelligence. What then? This concern is not addressed in the article.

Thursday, December 17, 2015

Artificial Intelligence Ethics a New Focus at Cambridge University

By Amir Mizroch
Wall Street Journal blog
Originally posted December 3, 2015

A new center to study the implications of artificial intelligence and try to influence its ethical development has been established at the U.K.’s Cambridge University, the latest sign that concerns are rising about AI’s impact on everything from loss of jobs to humanity’s very existence.

The Leverhulme Trust, a non-profit foundation that awards grants for academic research in the U.K., on Thursday announced a grant of £10 million ($15 million) over ten years to the university to establish the Leverhulme Centre for the Future of Intelligence.

The entire blog post is here.

Sunday, November 22, 2015

A Driverless Car Dystopia? Technology and the Lives We Want to Live

By Anthony Painter
RSA
Originally published November 6, 2015

Here is an excerpt:

There needs to be a bigger public debate about the type of society we want, how technology can help us, and what institutions we need to help us all interface with the changes we are likely to see. Could blockchain, bitcoin and digital currencies help us spread new forms of collective ownership and give us more power over the public services we use? How do we find a sweet spot where consumers and workers – and we are both – share equally in the benefits of the ‘sharing economy’? Is a universal basic income a necessary foundation for a world of diverse work arrangements of varying frequency, and of obligations to others such as elderly relatives and our kids? What do we want to be private, and what are we happy to share with companies or the state? Should this be a security conversation or a bigger question of ethics? How should we plan transport, housing, work and services around our needs and the types of lives we want to live in communities that have human worth?

The entire article is here.