Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Progress. Show all posts

Friday, October 13, 2017

Automation on our own terms

Benedict Dellot and Fabian Wallace-Stephens
Medium.com
Originally published September 17, 2017

Here is an excerpt:

There are three main risks of embracing AI and robotics unreservedly:
  1. A rise in economic inequality — To the extent that technology deskills jobs, it will put downward pressure on earnings. If jobs are removed altogether as a result of automation, the result will be greater returns for those who make and deploy the technology, as well as the elite workers left behind in firms. The median OECD country has already seen a decrease in its labour share of income of about 5 percentage points since the early 1990s, with capital’s share swallowing the difference. Another risk here is market concentration. If large firms continue to adopt AI and robotics at a faster rate than small firms, they will gain enormous efficiency advantages and as a result could take excessive share of markets. Automation could lead to oligopolistic markets, where a handful of firms dominate at the expense of others.
  2. A deepening of geographic disparities — Since the computer revolution of the 1980s, cities that specialise in cognitive work have gained a comparative advantage in job creation. In 2014, 5.5 percent of all UK workers operated in new job types that emerged after 1990, but the figure for workers in London was almost double that at 9.8 percent. The ability of cities to attract skilled workers, as well as the diverse nature of their economies, makes them better placed than rural areas to grasp the opportunities of AI and robotics. The most vulnerable locations will be those that are heavily reliant on a single automatable industry, such as parts of the North East that have a large stock of call centre jobs.
  3. An entrenchment of demographic biases — If left untamed, automation could disadvantage some demographic groups. Recall our case study analysis of the retail sector, which suggested that AI and robotics might lead to fewer workers being required in bricks and mortar shops, but more workers being deployed in warehouse operative roles. Given women are more likely to make up the former and men the latter, automation in this case could exacerbate gender pay and job differences. It is also possible that the use of AI in recruitment (e.g. algorithms that screen CVs) could amplify workplace biases and block people from employment based on their age, ethnicity or gender.

Saturday, September 30, 2017

What is New In Psychotherapy & Counseling in the Last 10 Years



Sam Knapp and I will be presenting this unique blend of small group learning, research, and lecture.

It has been estimated that the half-life of knowledge for a professional psychologist is nine years. Thus, professional psychologists need to work assiduously to keep up to date with changes in the field. This continuing education program addresses that need by having participants reflect on the most significant changes in the field in the last 10 years. To facilitate this reflection, the presenter offers his own update on the psychotherapy and counseling literature of the past decade as a springboard for participants to consider their perceptions of the important developments in the field. The program focuses on changes in psychotherapy and counseling, and considers other fields only as they influence psychotherapy or counseling. There will be considerable participant interaction.

Sunday, August 27, 2017

Super-intelligence and eternal life

Transhumanism’s faithful follow it blindly into a future for the elite

Alexander Thomas
The Conversation
First published July 31, 2017

The rapid development of so-called NBIC technologies – nanotechnology, biotechnology, information technology and cognitive science – is giving rise to possibilities that have long been the domain of science fiction. Disease, ageing and even death are all human realities that these technologies seek to end.

They may enable us to enjoy greater “morphological freedom” – we could take on new forms through prosthetics or genetic engineering. Or advance our cognitive capacities. We could use brain-computer interfaces to link us to advanced artificial intelligence (AI).

Nanobots could roam our bloodstream to monitor our health and enhance our emotional propensities for joy, love or other emotions. Advances in one area often raise new possibilities in others, and this “convergence” may bring about radical changes to our world in the near-future.

“Transhumanism” is the idea that humans should transcend their current natural state and limitations through the use of technology – that we should embrace self-directed human evolution. If the history of technological progress can be seen as humankind’s attempt to tame nature to better serve its needs, transhumanism is the logical continuation: the revision of humankind’s nature to better serve its fantasies.

The article is here.

Thursday, August 24, 2017

China's Plan for World Domination in AI Isn't So Crazy After All

Mark Bergen and David Ramli
Bloomberg.com
First published August 14, 2017

Here is an excerpt:

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China’s biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China’s investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: a vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and, most importantly, staunch government support that includes handing over gobs of citizens’ data — something that makes Western officials squirm.

Data is key because that’s how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it’s much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

The article is here.

Sunday, July 30, 2017

Engineering Eden: The quest for eternal life

Kristin Kostick
Baylor College of Medicine
Originally posted June 2, 2017

If you’re like most people, you may associate the phrase “eternal life” with religion: The promise that we can live forever if we just believe in God. You probably don’t associate the phrase with an image of scientists working in a lab, peering at worms through microscopes or mice skittering through boxes. But you should.

The quest for eternal life has only recently begun to step out from behind the pews and into the petri dish.

I recently discussed the increasing feasibility of the transhumanist vision due to continuing advancements in biotech, gene- and cell-therapies. These emerging technologies, however, don’t erase the fact that religion – not science – has always been our salve for confronting death’s inevitability. For believers, religion provides an enduring mechanism (belief and virtue) behind the perpetuity of existence, and shushes our otherwise frantic inability to grasp: How can I, as a person, just end?

The Mormon transhumanist Lincoln Cannon argues that science, rather than religion, offers a tangible solution to this most basic existential dilemma. He points out that it is no longer tenable to believe in eternal life as only available in heaven, requiring the death of our earthly bodies before becoming eternal, celestial beings.

Would a rational person choose to believe in an uncertain, spiritual afterlife over the tangible persistence of one’s own familiar body and the comforting security of relationships we’ve fostered over a lifetime of meaningful interactions?

The article is here.

Saturday, July 8, 2017

The Ethics of CRISPR

Noah Robischon
Fast Company
Originally published on June 20, 2017

On the eve of publishing her new book, Jennifer Doudna, a pioneer in the field of CRISPR-Cas9 biology and genome engineering, spoke with Fast Company about the potential for this new technology to be used for good or evil.

“The worst thing that could happen would be for [CRISPR] technology to be speeding ahead in laboratories,” Doudna tells Fast Company. “Meanwhile, people are unaware of the impact that’s coming down the road.” That’s why Doudna and her colleagues have been raising awareness of the following issues.

DESIGNER HUMANS

Editing sperm cells or eggs—known as germline manipulation—would introduce inheritable genetic changes at inception. This could be used to eliminate genetic diseases, but it could also be a way to ensure that your offspring have blue eyes, say, and a high IQ. As a result, several scientific organizations and the National Institutes of Health have called for a moratorium on such experimentation. But, writes Doudna, “it’s almost certain that germline editing will eventually be safe enough to use in the clinic.”

The article is here.

Monday, May 22, 2017

The morality of technology

Rahul Matthan
Live Mint
Originally published May 3, 2017

Here is an excerpt:

Another example of the two sides of technology is drones—a modern technology that is already being deployed widely, from the delivery of groceries to ensuring that life-saving equipment reaches first responders in high-density urban areas. But for every beneficent use of drone tech, there is an equally dubious use that challenges our ethical boundaries. Foremost among these is the development of AI-powered killer drones—autonomous flying weapons intelligent enough to accurately distinguish between friend and foe and then, autonomously, take the decision to execute a kill.

This duality is inherent in all of tech. But just because technology can be used for evil, that should not, of itself, be a reason not to use it. We need new technology to better ourselves and the world we live in—and we need to be wise about how we apply it so that our use remains consistent with the basic morality inherent in modern society. This implies that each time we make a technological breakthrough we must assess afresh the contexts within which they could present themselves and the uses to which they should (and should not) be put. If required, we must take the trouble to re-draw our moral boundaries, establishing the limits within which they must be constrained.

The article is here.

Tuesday, May 9, 2017

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google’s DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there’s a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”
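Libratus's actual implementation is vastly more sophisticated, but the core update inside counterfactual regret minimization — regret matching — can be sketched in a few lines. The toy below (a self-play loop for rock-paper-scissors rather than poker; every name is invented for illustration) accumulates, for each action, how much better that action would have done than the strategy actually played, then plays in proportion to positive regret. The time-averaged strategies drift toward the game's equilibrium.

```python
ACTIONS = 3  # rock, paper, scissors

def payoff(a, b):
    """Payoff to the player choosing a against an opponent choosing b."""
    if a == b:
        return 0.0
    return 1.0 if (a - b) % 3 == 1 else -1.0

def regret_matching(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(iterations=10000):
    # Seed one player with a bias toward rock so the dynamics are visible
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strat_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [regret_matching(r) for r in regrets]
        for p in (0, 1):
            me, opp = strats[p], strats[1 - p]
            # Expected value of each pure action against the opponent's mix
            vals = [sum(opp[b] * payoff(a, b) for b in range(ACTIONS))
                    for a in range(ACTIONS)]
            realized = sum(me[a] * vals[a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += vals[a] - realized  # regret of not playing a
                strat_sums[p][a] += me[a]
    return [[s / sum(row) for s in row] for row in strat_sums]

avg0, avg1 = train()  # both averages drift toward the uniform equilibrium
```

In rock-paper-scissors the equilibrium is to play each action a third of the time, and the averaged strategies approach that; poker requires the "counterfactual" machinery to apply the same update at every decision point of a game tree with hidden information.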

The article is here.

Tuesday, April 25, 2017

Artificial synapse on a chip will help mobile devices learn like the human brain

Luke Dormehl
Digital Trends
Originally posted April 6, 2017

Brain-inspired deep learning neural networks have been behind many of the biggest breakthroughs in artificial intelligence seen over the past 10 years.

But a new research project from the National Center for Scientific Research (CNRS), the University of Bordeaux, and Norwegian information technology company Evry could take these breakthroughs to the next level — thanks to the creation of an artificial synapse on a chip.

“There are many breakthroughs from software companies that use algorithms based on artificial neural networks for pattern recognition,” Dr. Vincent Garcia, a CNRS research scientist who worked on the project, told Digital Trends. “However, as these algorithms are simulated on standard processors they require a lot of power. Developing artificial neural networks directly on a chip would make this kind of tasks available to everyone, and much more power efficient.”

Synapses in the brain function as the connections between neurons. Learning takes place when these connections are reinforced, and improved when synapses are stimulated. The newly developed electronic devices (called “memristors”) emulate the behavior of these synapses, by way of a variable resistance that depends on the history of electronic excitations they receive.
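As a cartoon of that idea — and only a cartoon, not the researchers' device physics — a memristive synapse can be modeled as a conductance that moves up or down with the polarity of the pulses it has received, saturating at physical limits. All names and constants below are invented for illustration:

```python
class ToySynapse:
    """Toy memristor-like synapse: conductance depends on pulse history."""

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, polarity):
        """Apply one voltage pulse: +1 potentiates, -1 depresses."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))
        return self.g

syn = ToySynapse()
for _ in range(5):   # repeated positive pulses strengthen the synapse
    syn.pulse(+1)
strong = syn.g       # 0.5 + 5 * 0.05 = 0.75
for _ in range(3):   # negative pulses weaken it again
    syn.pulse(-1)
weak = syn.g         # 0.75 - 3 * 0.05 = 0.60
```

The key property the hardware provides is that this state persists in the device itself, so "learning" happens where the signal flows instead of in a separate processor simulating it.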

The article is here.

Friday, April 14, 2017

Ethical Guidelines on Lab-Grown Embryos Beg for Revamping

Karen Weintraub
Scientific American
Originally posted on March 21, 2017

For nearly 40 years scientists have observed their self-imposed ban on doing research on human embryos in the lab beyond the first two weeks after fertilization. Their initial reasoning was somewhat arbitrary: 14 days is when a band of cells known as a primitive streak, which will ultimately give rise to adult tissues, forms in an embryo. It is also roughly the last time a human embryo can divide and create more than one person, and a few days before the nervous system begins to develop. But the so-called 14-day rule has held up all this time partly because scientists could not get an embryo to grow that long outside its mother's body.

Researchers in the U.K. and U.S. recently succeeded for the first time in growing embryos in the lab for nearly two weeks before terminating them, showing that the so-called 14-day rule is no longer a scientific limitation—although it remains a cultural one. Now, a group of Harvard University scientists has published a paper arguing that it is time to reconsider the 14-day rule because of advances in synthetic biology.

The U.S. has no law against growing embryos beyond two weeks—as long as the research is not funded with federal dollars. But most scientific journals will not publish studies that violate the 14-day rule, and the International Society for Stem Cell Research requires its members to agree to the rule in order to qualify for membership.

The article is here.

Monday, April 3, 2017

Can Human Evolution Be Controlled?

William B. Hurlbut
Big Questions Online
Originally published February 17, 2017

Here is an excerpt:

These gene-editing techniques may transform our world as profoundly as many of the greatest scientific discoveries and technological innovations of the past — like electricity, synthetic chemistry, and nuclear physics. CRISPR/Cas9 could provide urgent and uncontroversial progress in biomedical science, agriculture, and environmental ecology. Indeed, the power and depth of operation of these new tools is delivering previously unimagined possibilities for reworking or redeploying natural biological processes — some with startling and disquieting implications. Proposals by serious and well-respected scientists include projects of broad ecological engineering, de-extinction of human ancestral species, a biotechnological “cure” for aging, and guided evolution of the human future.

The questions raised by such projects go beyond issues of individual rights and social responsibilities to considerations of the very source and significance of the natural world, its integrated and interdependent processes, and the way these provide the foundational frame for the physical, psychological, and spiritual meaning of human life.

The article is here.

Thursday, February 2, 2017

Will artificial intelligence help to crack biology?

The Economist
Originally published January 7, 2017

Here is an excerpt:

Another important biological hurdle that AI can help people surmount is complexity. Experimental science progresses by holding steady one variable at a time, an approach that is not always easy when dealing with networks of genes, proteins or other molecules. AI can handle this more easily than human beings.

At BERG Health, the firm’s AI system starts by analysing tissue samples, genomics and other clinical data relevant to a particular disease. It then tries to model from this information the network of protein interactions that underlie that disease. At that point human researchers intervene to test the model’s predictions in a real biological system. One of the potential drugs BERG Health has discovered this way—for topical squamous-cell carcinoma, a form of skin cancer—passed early trials for safety and efficacy, and now awaits full-scale testing. The company says it has others in development.

For all the grand aspirations of the AI folk, though, there are reasons for caution. Dr Mead warns: “I don’t think we are in a state to model even a single cell. The model we have is incomplete.” Actually, that incompleteness applies even to models of single proteins, meaning that science is not yet good at predicting whether a particular modification will make a molecule intended to interact with a given protein a better drug or not. Most known protein structures have been worked out from crystallised versions of the molecule, held tight by networks of chemical bonds. In reality, proteins are flexible, but that is much harder to deal with.

The article is here.

New fertility procedure may lead to 'embryo farming', warn researchers

Ian Sample
The Guardian
Originally posted January 11, 2017

A new lab procedure that could allow fertility clinics to make sperm and eggs from people’s skin may lead to “embryo farming” on a massive scale and drive parents to have only “ideal” future children, researchers warn.

Legal and medical specialists in the US say that while the procedure – known as in vitro gametogenesis (IVG) – has only been demonstrated in mice so far, the field is progressing so fast that the dramatic impact it could have on society must be planned for now.

“We try not to take a position on these issues except to point out that before too long we may well be facing them, and we might do well to start the conversation now,” said Eli Adashi, professor of medical science at Brown University in Rhode Island.

The creation of sperm and eggs from other tissues has become possible through a flurry of recent advances in which scientists have learned first to reprogram adult cells into a younger, more versatile state, and then to grow them into functioning sex cells. In October, scientists in Japan announced for the first time the birth of baby mice from eggs made with their parent’s skin.

The article is here.

Saturday, July 16, 2016

Federal panel approves first test of CRISPR editing in humans

By Laurie McGinley
The Washington Post
Originally posted on June 21, 2016

A National Institutes of Health advisory panel on Tuesday approved the first human use of the gene-editing technology CRISPR, for a study designed to target three types of cancer and funded by tech billionaire Sean Parker’s new cancer institute.

The experiment, proposed by researchers at the University of Pennsylvania, would use CRISPR-Cas9 technology to modify patients’ own T cells to make them more effective in attacking melanoma, multiple myeloma and sarcoma.

The federal Recombinant DNA Advisory Committee approved the Penn proposal unanimously, with one member abstaining. The experiment still must be approved by the Food and Drug Administration, which regulates clinical trials.

The article is here.

Monday, June 20, 2016

Scientists debate effort to build a human genome

By Andrew Joseph
STAT
Originally posted on June 4, 2016

Here is an excerpt:

Church said the core science of assembling a human genome from basic molecular ingredients dates back to at least 2009. And he noted that scientists have been grappling with related ethical questions for more than a decade, since the early days of synthetic biology opened the door to the idea of someone being able to build a pathogen from basic genetic components.

He said that although the project has no intention of spawning actual humans, the project’s leaders would not ignore the “ethical, social, legal” issues that inherently materialize given where the project could lead.

The article is here.

Thursday, June 16, 2016

The Corporate Joust with Morality

By Caroline Kaeb and David Scheffer
Opinio Juris
Originally posted June 6, 2016

Here is the end:

This duel between corporate responsibility and corporate deceit and culpability is no small matter.  The fate of human society and of the earth increasingly falls on the shoulders of corporate executives who either embrace society’s challenges and, if necessary, counterattack for worthy aims or they succumb to dangerous gambits for inflated profits, whatever the impact on society.

The fulcrum of risk management must be forged with sophisticated strategies that propel corporations into the great policy debates of our times in order to promote social responsibility and thus strengthen the long-term viability of corporate operations.  We believe that task must begin in business schools and in corporate boardrooms where decisions that shape the world are made every day.

The article is here.

Monday, May 16, 2016

Inside OpenAI

Cade Metz
Wired.com
Originally posted April 27, 2016

Here is an excerpt:

OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
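Reinforcement learning itself is easy to sketch: an agent acts, observes a reward, and updates its value estimates by trial and error. The toy Q-learning loop below uses a made-up corridor environment — it is not OpenAI's toolkit, and all names are invented for illustration — but it shows the shape of such a system:

```python
import random

class Corridor:
    """Toy environment: agent starts at position 0, reward at position n."""

    def __init__(self, n=5):
        self.n, self.pos = n, 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        """action: 0 = left, 1 = right. Returns (state, reward, done)."""
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        done = self.pos == self.n
        return self.pos, (1.0 if done else 0.0), done

def q_learn(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(env.n + 1)]  # value per (state, action)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Explore randomly sometimes (and on ties), else act greedily
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = env.step(a)
            # Trial-and-error update toward reward plus best future estimate
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn(Corridor())
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(5)]  # learned actions
```

After training, the learned policy moves right in every state, purely from reward feedback. Systems like AlphaGo layer deep networks and search on top of this same act-observe-update loop.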

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

The article is here.

Tuesday, May 10, 2016

Cadaver study casts doubts on how zapping brain may boost mood, relieve pain

By Emily Underwood
Science
Originally posted April 20, 2016

Here is an excerpt:

Buzsáki expects a living person’s skin would shunt even more current away from the brain because it is better hydrated than a cadaver’s scalp. He agrees, however, that low levels of stimulation may have subtle effects on the brain that fall short of triggering neurons to fire. Electrical stimulation might also affect glia, brain cells that provide neurons with nutrients, oxygen, and protection from pathogens, and also can influence the brain’s electrical activity. “Further questions should be asked” about whether 1- to 2-milliamp currents affect those cells, he says.

Buzsáki, who still hopes to use such techniques to enhance memory, is more restrained than some critics. The tDCS field is “a sea of bullshit and bad science—and I say that as someone who has contributed some of the papers that have put gas in the tDCS tank,” says neuroscientist Vincent Walsh of University College London. “It really needs to be put under scrutiny like this.”

The article is here.

Editor's note:

This article represents the importance of science in the treatment of human suffering. No one wants sham interventions.

However, the stimulation interventions may work, and work effectively, under other models of how the brain functions. The brain creates an electromagnetic field that extends beyond the skull. Because a cadaver's brain is no longer active, this finding may be irrelevant if the stimulation acts on that field rather than on the tissue directly. In other words, how these stimulation procedures influence the brain's electromagnetic field may be a better model for explaining improvement.

Therefore, using cadavers to nullify what happens in living people may not be the best standard for evaluating a procedure when researching brain activity. It is a step to consider and may help develop a better working model of what actually happens with tDCS.

By the way, scientists are not exactly certain how lithium or antidepressants work, either.

Saturday, April 2, 2016

Why so many scientists are so ignorant

By Pascal-Emmanuel Gobry
The Week
Originally published March 8, 2016

Here is an excerpt:

Nye fell into the same trap that Neil deGrasse Tyson and Stephen Hawking have been caught up in. Philosophy, these men of science opine, is largely useless, because it can't give us the sort of certain answers that science can, and amounts to little more than speculation.

There's obviously a grain of truth in this. Philosophy does not give us the certainty that math or experimental science can (but even then — as many philosophers would point out — these fields do not give us as much certainty as is sometimes claimed). But that doesn't mean that philosophy is worthless, or that it doesn't have rigor. Indeed, in a sense, philosophy is inescapable. To argue that philosophy is useless is to do philosophy. Moreover, some existential questions simply can't be escaped, and philosophy is one of the best, or at least least bad, ways we've come up with to address those questions.

The article is here.

Thursday, March 31, 2016

'Body Hacking' Movement Rises Ahead Of Moral Answers

Eyder Peralta
NPR
Originally published 10, 2016

Here is an excerpt:

Sometimes, he said, technology moves too fast and outpaces accepted social boundaries — not to mention laws. He argued that was part of the reason why early wearers of Google Glass were called "glassholes."

"It created a social misunderstanding," Salvador said. "You didn't know what was going on."

To Salvador, the boundaries of acceptance are a matter of our social philosophy, an area that he argued was driven by esoteric discourse without tangible moral and ethical recommendations.

The philosophers, he said, are letting us down.

Alva Noë, a philosopher at the University of California, Berkeley and a contributor to NPR's 13.7: Cosmos and Culture blog, has written extensively on what he calls "cyborgian naturalness." He disagreed that the modern philosophers dropped the ball, saying that tackling the matter would involve unpacking two questions:

  1. Is it OK to cut into human bodies for these kinds of experiments?
  2. How much tolerance should society have for artificially enhancing the body?

To the first question, Noë said he found the "body hacking" experimentation on humans "ethically disturbing" and couldn't fathom a doctor or any other scientists conducting these kinds of operations.

The second question was more complicated.

The article is here.