Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Programming.

Wednesday, August 17, 2022

Robots became racist after AI training, always chose Black faces as ‘criminals’

Pranshu Verma
The Washington Post
Originally posted July 16, 2022

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans for tasks such as stocking shelves, delivering goods or even caring for hospital patients. Heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as something of a gold rush. But tech ethicists and researchers are warning that the quick adoption of the new technology could result in unforeseen consequences down the road as the technology becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

Researchers in recent years have documented multiple cases of biased artificial intelligence algorithms. That includes crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit, as well as facial recognition systems having a hard time accurately identifying people of color.

Tuesday, July 10, 2018

The Artificial Intelligence Ethics Committee

Zara Stone
Forbes.com
Originally published June 11, 2018

Here is an excerpt:

Back to the ethics problem: Some sort of bias is sadly inevitable in programming. “We humans all have a bias,” said computer scientist Ehsan Hoque, who leads the Human-Computer Interaction Lab at the University of Rochester. “There’s a study where judges make more favorable decisions after a lunch break. Machines have an inherent bias (as they are built by humans) so we need to empower users in ways to make decisions.”

For instance, Walworth’s way of empowering his choices is by being conscious about what AI algorithms show him. “I recommend you do things that are counterintuitive,” he said. “For instance, read a spectrum of news, everything from Fox to CNN and The New York Times to combat the algorithm that decides what you see.” Take the Cambridge Analytica election scandal as an example: algorithms dictated what you saw, how you saw it, and whether more of the same was shown to you, and Cambridge Analytica manipulated them to sway voters.

The move toward a consciousness of ethical AI is both a top-down and a bottom-up approach. “There’s a rising field of impact investing,” explained Walworth. “Investors and shareholders are demanding something higher than the bottom line, some accountability with the way they spend and invest money.”

The article is here.

Tuesday, April 10, 2018

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Thursday, March 22, 2018

The Ethical Design of Intelligent Robots

Sunidhi Ramesh
The Neuroethics Blog
Originally published February 27, 2018

Here is an excerpt:

In a 2016 study, a team of Georgia Tech scholars formulated a simulation in which 26 volunteers interacted “with a robot in a non-emergency task to experience its behavior and then [chose] whether [or not] to follow the robot’s instructions in an emergency.” To the researchers’ surprise (and unease), in this “emergency” situation (complete with artificial smoke and fire alarms), “all [of the] participants followed the robot in the emergency, despite half observing the same robot perform poorly [making errors by spinning, etc.] in a navigation guidance task just minutes before… even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.” It seems that we not only trust robots, but we also do so almost blindly.

The investigators proceeded to label this tendency as a concerning and alarming display of overtrust of robots—an overtrust that applied even to robots that showed indications of not being trustworthy.

Not convinced? Let’s consider the recent Tesla self-driving car crashes. How, you may ask, could a self-driving car barrel into parked vehicles when the driver is still able to override the autopilot machinery and manually stop the vehicle in seemingly dangerous situations? Yet, these accidents have happened. Numerous times.

The answer may, again, lie in overtrust. “My Tesla knows when to stop,” such a driver may think. Yet, as the car lurches uncomfortably into a position that would push the rest of us to slam on our brakes, a driver in a self-driving car (and an unknowing victim of this overtrust) still has faith in the technology.

“My Tesla knows when to stop.” Until it doesn’t. And it’s too late.

Friday, January 19, 2018

Why banning autonomous killer robots wouldn’t solve anything

Susanne Burri and Michael Robillard
aeon.com
Originally published December 19, 2017

Here is an excerpt:

For another thing, it is naive to assume that we can enjoy the benefits of the recent advances in artificial intelligence (AI) without being exposed to at least some downsides as well. Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.

To put the point more generally, AI technology is tremendously useful, and it already permeates our lives in ways we don’t always notice, and aren’t always able to comprehend fully. Given its pervasive presence, it is shortsighted to think that the technology’s abuse can be prevented if only the further development of autonomous weapons is halted. In fact, it might well take the sophisticated and discriminate autonomous-weapons systems that armies around the world are currently in the process of developing if we are to effectively counter the much cruder autonomous weapons that are quite easily constructed through the reprogramming of seemingly benign AI technology such as the self-driving car.

The article is here.

Thursday, January 4, 2018

Artificial Intelligence Seeks An Ethical Conscience

Tom Simonite
wired.com
Originally published December 7, 2017

Here is an excerpt:

Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.

Hanna Wallach, co-chair of NIPS, cofounder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals and make AI technology better. “If you have a diversity of perspectives and background you might be more likely to check for bias against different groups,” she says—meaning code that calls black people gorillas would be less likely to reach the public. Wallach also points to behavioral research showing that diverse teams consider a broader range of ideas when solving problems.
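
A minimal sketch of what “checking for bias against different groups” can look like in practice is below. It is our illustration, not anything from the article or from Microsoft: a toy audit that compares a classifier’s false-positive rate across demographic groups in a labeled evaluation set, with invented data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# In a real audit these would come from a held-out, demographically
# annotated test set; the values below are invented for illustration.
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_positive_rate_by_group(records):
    """Share of true negatives the model wrongly flags, computed per group."""
    negatives = defaultdict(int)
    false_positives = defaultdict(int)
    for group, true_label, predicted in records:
        if true_label == 0:
            negatives[group] += 1
            if predicted == 1:
                false_positives[group] += 1
    return {group: false_positives[group] / negatives[group] for group in negatives}

print(false_positive_rate_by_group(records))  # group_a ~0.33, group_b ~0.67 here

# A large gap between groups is one warning sign that the model treats
# otherwise similar people differently and needs closer review before release.
```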

Ultimately, AI researchers alone can’t and shouldn’t decide how society puts their ideas to use. “A lot of decisions about the future of this field cannot be made in the disciplines in which it began,” says Terah Lyons, executive director of Partnership on AI, a nonprofit launched last year by tech companies to mull the societal impacts of AI. (The organization held a board meeting on the sidelines of NIPS this week.) She says companies, civic-society groups, citizens, and governments all need to engage with the issue.

The article is here.

Wednesday, December 6, 2017

What the heck is machine learning, and why is it everywhere these days?

Luke Dormehl
Digital Trends
Originally published November 18, 2017

Here is an excerpt:

Which programming languages do machine learners use?

As with the question above, there’s no one answer to this. Machine learning is a big field and, with so much ground to cover, there’s no one language that does absolutely everything.

Due to its simplicity, and the availability of deep learning libraries such as TensorFlow and PyTorch, Python is currently the number one language. If you’re thinking about delving into machine learning for the first time, it’s also one of the most accessible languages — and there are loads of online resources available.

Java is a good option, too, and comes with a great community of its own, while C++ and R are also worth checking out.
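
For readers wondering what that Python-plus-libraries workflow actually looks like, here is a minimal sketch using PyTorch, one of the libraries named above: fitting a one-parameter linear model to toy data with gradient descent. The data, learning rate, and step count are invented for illustration, and real projects wrap this loop in much more machinery.

```python
import torch

# Toy data: y is roughly 3*x plus noise; in real work this would be a dataset.
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * x + 0.1 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)          # a single learnable weight and bias
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()              # clear gradients from the previous step
    loss = loss_fn(model(x), y)        # how far predictions are from targets
    loss.backward()                    # compute gradients
    optimizer.step()                   # nudge the parameters downhill

print(model.weight.item(), model.bias.item())  # should end up roughly 3 and 0
```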

Is machine learning the perfect solution to all our AI problems?

You can probably guess where we’re going with this. No, machine learning isn’t infallible. Algorithms can still be subject to human biases, and the rule of “garbage in, garbage out” holds as true here as it does in any other data-driven field.

There are also questions about transparency, particularly when you’re dealing with the kind of “black boxes” that are an essential part of neural networks.

But as a tool that’s helping to revolutionize technology as we know it, and making AI available to the masses? You bet that it’s a great tool!

The article is here.

Wednesday, October 11, 2017

Moral programming will define the future of autonomous transportation

Josh Althauser
Venture Beat
Originally published September 24, 2017

Here is an excerpt:

First do no harm?

Regardless of public sentiment, driverless cars are coming. Giants like Tesla Motors and Google have already poured billions of dollars into their respective technologies with reasonable success, and Elon Musk has said that we are much closer to a driverless future than most suspect. Robotics software engineers are making strides in self-driving AI at an awe-inspiring (and, for some, alarming) rate.

Beyond the question of whether we want to hand over the wheel to software, there are deeper, more troubling questions that must be asked. The real questions we should be asking as we edge closer to completely autonomous roadways lie in ethically complex areas. Among these areas of concern, one very difficult question stands out: Should we program driverless cars to kill?

At first, the answer seems obvious. No AI should have the ability to choose to kill a human. We can more easily reconcile death that results from a malfunction of some kind — brakes that give out, a failure of the car’s visual monitoring system, or a bug in the AI’s programmatic makeup. However, defining how and when AI can inflict harm isn’t that simple.

The article is here.

Thursday, August 24, 2017

China's Plan for World Domination in AI Isn't So Crazy After All

Mark Bergen and David Ramli
Bloomberg.com
First published August 14, 2017

Here is an excerpt:

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China’s biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China’s investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: A vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and most importantly staunch government support that includes handing over gobs of citizens’ data – something that makes Western officials squirm.

Data is key because that’s how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it’s much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

The article is here.

Monday, April 24, 2017

Scientists Hack a Human Cell and Reprogram it Like a Computer

Sophia Chen
Wired Magazine
Originally published March 27, 2017

Cells are basically tiny computers: They send and receive inputs and output accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.

Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.
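
A rough software analogy (ours, not the researchers’) may help: each recombinase-controlled DNA segment acts like a logic gate whose output depends on which trigger signals are present. The toy Python sketch below models one such two-input gate purely to illustrate the idea of cells obeying logical instructions; it is not a simulation of the actual biochemistry.

```python
def recombinase_gate(signal_a: bool, signal_b: bool) -> bool:
    """Toy model of a two-input cellular logic gate.

    Each input stands for whether a given recombinase has been triggered
    (for example, by a small molecule). The return value stands for whether
    the downstream protein gets produced. Here the gate is wired as
    'A AND NOT B': recombinase A must flip its segment into place, while
    recombinase B, if triggered, excises the gene and silences the output.
    """
    return signal_a and not signal_b

# Enumerate the truth table for this one gate.
for a in (False, True):
    for b in (False, True):
        print(f"A={a!s:5} B={b!s:5} -> protein produced: {recombinase_gate(a, b)}")
```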

The article is here.

Friday, March 10, 2017

A Hippocratic Oath for AI Developers?

Benedict Dellot
RSA.org
Originally posted February 13, 2017

Here is an excerpt:

The largest tech companies – Apple, Amazon, Google, IBM, Microsoft and Facebook – have already committed to creating new standards to guide the development of artificial intelligence. Likewise, a recent EU Parliament investigation recommended the development of an advisory code for robotic engineers, as well as ‘electronic personhood’ for the most sophisticated robots to ensure their behaviour is captured by legal systems.

Other ideas include regulatory ‘sandboxes’ that would give AI developers more freedom to experiment but under the close supervision of the authorities, and ‘software deposits’ for private code that would allow consumer rights organisations and government inspectors the opportunity to audit algorithms behind closed doors. Darpa recently kicked off a new programme called Explainable AI (XAI), which aims to create machine learning systems that can explain the steps they take to arrive at a decision, as well as unpack the strengths and weaknesses of their conclusions.
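
None of these proposals ships with reference code, but the flavor of “explainable” systems can be sketched. One common baseline is to use an inherently interpretable model whose decision steps can be printed and audited. The example below does that with scikit-learn’s decision tree on a standard toy dataset; the library and dataset are our choices for illustration and have no connection to Darpa’s XAI programme.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A small, standard dataset stands in for whatever the system must decide about.
data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# Unlike a deep network, the fitted tree's decision steps can be printed and
# inspected directly, which is one crude form of "explaining" a decision.
print(export_text(model, feature_names=list(data.feature_names)))
```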

There have even been calls to instate a Hippocratic Oath for AI developers. This would have the advantage of going straight to the source of potential issues – the people who write the code – rather than relying on the resources, skills and time of external enforcers. An oath might also help to concentrate the minds of the programming community as a whole in getting to grips with the above dilemmas. Inspiration can be taken from the way the IEEE, a technical professional association in the US, has begun drafting a framework for the ‘ethically aligned design’ of AI.

The article is here.

Thursday, December 8, 2016

Morality in transportation

Jeffrey C. Peters
The Conversation by way of Salon
Originally posted November 19, 2016

A common fantasy for transportation enthusiasts and technology optimists is for self-driving cars and trucks to form the basis of a safe, streamlined, almost choreographed dance. In this dream, every vehicle — and cyclist and pedestrian — proceeds unimpeded on any route, as the rest of the traffic skillfully avoids collisions and even eliminates stop-and-go traffic. It’s a lot like the synchronized traffic chaos in “Rush Hour,” a short movie by Black Sheep Films.

Today, autonomous cars are becoming more common, but safety is still a question. More than 30,000 people die on U.S. roads every year — nearly 100 a day. That’s despite the best efforts of government regulators, car manufacturers and human drivers alike. Early statistics from autonomous driving suggest that widespread automation could drive the death toll down significantly.

There’s a key problem, though: Computers like rules — solid, hard-and-fast instructions to follow. How should we program them to handle difficult situations? The hypotheticals are countless: What if the car has to choose between hitting one cyclist or five pedestrians? What if the car must decide to crash into a wall and kill its occupant, or slam through a group of kindergartners? How do we decide? Who does the deciding?
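
To see why “who does the deciding?” is not a rhetorical question, it helps to look at what the most obvious rule becomes once someone writes it down. The toy sketch below is ours, not the article’s: it picks the maneuver with the lowest expected harm, and every number in it, including how much the occupant’s life counts relative to everyone else’s, is an ethical judgment a programmer would have to encode and defend.

```python
# Hypothetical maneuvers and predicted outcomes. In a real system these
# estimates would come from noisy perception and prediction models.
maneuvers = {
    "stay_in_lane": {"pedestrians_harmed": 5, "occupant_harmed": 0},
    "swerve_left":  {"pedestrians_harmed": 1, "occupant_harmed": 0},
    "hit_the_wall": {"pedestrians_harmed": 0, "occupant_harmed": 1},
}

# This weight is an ethical parameter, not an engineering one: above 1.0 it
# privileges the occupant, below 1.0 it sacrifices the occupant more readily.
OCCUPANT_WEIGHT = 1.0

def harm_score(outcome):
    """Expected casualties, with the occupant weighted by an explicit judgment."""
    return outcome["pedestrians_harmed"] + OCCUPANT_WEIGHT * outcome["occupant_harmed"]

choice = min(maneuvers, key=lambda name: harm_score(maneuvers[name]))
print(choice)  # "swerve_left" with these numbers; "hit_the_wall" if OCCUPANT_WEIGHT < 1.0
```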

The article is here.

Wednesday, November 30, 2016

Can Robots Make Moral Decisions? Should They?

Joelle Renstrom

The Daily Beast
Originally published November 12, 2016

Here is an excerpt:

Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that an algorithm can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?
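
The “choking” described here is easy to reproduce in miniature. The sketch below is a toy of ours, not the Bristol lab’s code: it scores each endangered proxy by urgency and saves the most urgent one, and when two are equally imperiled the score gives no reason to prefer either, so the program stalls until time runs out. Whatever the code does at that point (dither, pick arbitrarily, save neither) is itself a policy someone has to choose.

```python
import time

def choose_rescue(targets, deadline_seconds=1.0):
    """Pick which endangered proxy to save, or None if no decision is reached in time.

    `targets` maps a name to an urgency score (higher means closer to "dying").
    A real robot would be re-estimating these scores continuously.
    """
    start = time.monotonic()
    best = max(targets, key=targets.get)
    tied = [name for name, urgency in targets.items() if urgency == targets[best]]
    while len(tied) > 1:
        # No principled basis to break the tie: this loop stands in for the
        # robot "dithering". After the deadline, both proxies are lost.
        if time.monotonic() - start > deadline_seconds:
            return None
        time.sleep(0.1)  # waiting for new information that never comes
    return best

print(choose_rescue({"H-bot 1": 0.9, "H-bot 2": 0.4}))  # -> H-bot 1
print(choose_rescue({"H-bot 1": 0.9, "H-bot 2": 0.9}))  # -> None (both "die")
```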

The article is here.

Thursday, September 15, 2016

World's First Self-Driving Taxis Debut in Singapore

Annabelle Liang and Dee-Ann Durbin
Associated Press
August 24, 2016

Here is an excerpt:

The service will start small — six cars now, growing to a dozen by the end of the year. The ultimate goal, say nuTonomy officials, is to have a fully self-driving taxi fleet in Singapore by 2018, which will help sharply cut the number of cars on Singapore's congested roads. Eventually, the model could be adopted in cities around the world, nuTonomy says.

For now, the taxis only will run in a 2.5-square-mile business and residential district called "one-north," and pick-ups and drop-offs will be limited to specified locations. And riders must have an invitation from nuTonomy to use the service. The company says dozens have signed up for the launch, and it plans to expand that list to thousands of people within a few months.

The article is here.

Monday, August 29, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

By Jacob Brogan
Future Tense
Originally published August 11, 2016

Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandom notes, the site effectively gamifies the classic trolley problem, folding in a variety of complicated variations along the way. You’ll have to decide whether a vehicle should choose between its passengers and people in an intersection. Other scenarios will present two differently composed groups of pedestrians—say, a handful of female doctors or a collection of besuited men—and ask which an empty car should slam into. Further complications—including the presence of animals and details about whether the pedestrians have the right of way—sometimes further muddle the question.
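
Under the hood, an exercise like Moral Machine amounts to showing people structured scenarios and tallying which features sway their choices. The sketch below shows one hypothetical way such responses could be represented and aggregated in Python; the field names and values are invented and are not MIT’s actual schema.

```python
from collections import Counter

# Each scenario pairs two groups; a respondent picks which group the car spares.
# The records below are invented purely to illustrate the data layout.
responses = [
    {"option_a": {"count": 1, "kind": "passenger"},
     "option_b": {"count": 2, "kind": "pedestrian", "jaywalking": True},
     "spared": "option_a"},
    {"option_a": {"count": 1, "kind": "passenger"},
     "option_b": {"count": 2, "kind": "pedestrian", "jaywalking": False},
     "spared": "option_b"},
]

# Aggregate one feature: how often do respondents spare the larger group?
spared_larger = Counter()
for r in responses:
    a, b = r["option_a"]["count"], r["option_b"]["count"]
    larger = "option_a" if a > b else "option_b"
    spared_larger[r["spared"] == larger] += 1

print(spared_larger)  # an even split between sparing and not sparing the larger group here
```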

Thursday, July 21, 2016

Frankenstein’s paperclips

The Economist
Originally posted June 25, 2016

Here is an excerpt:

AI researchers point to several technical reasons why fear of AI is overblown, at least in its current form. First, intelligence is not the same as sentience or consciousness, says Mr Ng, though all three concepts are commonly elided. The idea that machines will “one day wake up and change their minds about what they will do” is just not realistic, says Francesca Rossi, who works on the ethics of AI at IBM. Second, an “intelligence explosion” is considered unlikely, because it would require an AI to make each version of itself in less time than the previous version as its intelligence grows. Yet most computing problems, even much simpler ones than designing an AI, take much longer as you scale them up.

Third, although machines can learn from their past experiences or environments, they are not learning all the time.
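
The second objection, that most computing problems take much longer as you scale them up, is easy to illustrate. The sketch below times a deliberately naive matrix multiplication at growing sizes; the algorithm and sizes are our choices, the point is only that doubling the problem multiplies the work by far more than two, and exact timings will vary by machine.

```python
import random
import time

def matmul(a, b):
    """Naive triple-loop matrix multiplication (deliberately unoptimized)."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

for n in (50, 100, 200):
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    matmul(a, b)
    # Doubling n multiplies the work by roughly eight, not two.
    print(n, round(time.perf_counter() - start, 3), "seconds")
```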

The article is here.

Monday, May 16, 2016

Inside OpenAI

Cade Metz
Wired.com
Originally posted April 27, 2016

Here is an excerpt:

OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
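
The toolkit described here became OpenAI Gym, whose core abstraction is an environment that an agent repeatedly observes and acts in. Below is a minimal sketch of that loop with a purely random agent; it assumes the classic Gym interface (reset() returning an observation, step() returning a 4-tuple), which later versions of the library changed, so treat it as a sketch rather than copy-paste code.

```python
import gym  # the toolkit discussed in the article, released as OpenAI Gym

# CartPole is one of the simple control tasks bundled with the toolkit.
env = gym.make("CartPole-v1")

# A "random agent": reinforcement learning would replace this with a policy
# that improves from the reward signal instead of ignoring it.
for episode in range(3):
    observation = env.reset()          # classic API; newer versions return (obs, info)
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()                   # pick an arbitrary action
        observation, reward, done, info = env.step(action)   # classic 4-tuple API
        total_reward += reward
    print(f"episode {episode}: total reward {total_reward}")
```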

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

The article is here.

Friday, November 13, 2015

Why Self-Driving Cars Must Be Programmed to Kill

Emerging Technology From the arXiv
MIT Technology Review
Originally published October 22, 2015

Here is an excerpt:

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

Bonnefon and co are seeking to find a way through this ethical dilemma by gauging public opinion. Their idea is that the public is much more likely to go along with a scenario that aligns with their own views.

The entire article is here.

Thursday, October 1, 2015

Ethics Won't Be A Big Problem For Driverless Cars

By Adam Ozimek
Forbes Magazine
Originally posted September 13, 2015

Skeptics of driverless cars have a variety of criticisms, from technical to demand-based, but perhaps the most curious is the supposed ethical trolley problem it creates. While the question of how driverless cars will behave in ethical situations is interesting and will ultimately have to be answered by programmers, critics greatly exaggerate its importance. In addition, they assume that driverless cars have to be perfect rather than just better.

(cut)

Patrick Lin asks “Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child?” But seriously, how often do drivers actually make this decision? Accidents that provide this choice seem pretty rare. And if I am wrong and we’re actually living in a world rife with trolley problems for drivers, it seems likely that bad human driving and poor foresight create many of them. Having driverless cars that don’t get distracted, don’t speed dangerously, and can see 360 degrees will make it less likely that split-second life-and-death choices need to be made.

The entire article is here.

Monday, August 31, 2015

The Moral Code

By Nayef Al-Rodhan
Foreign Affairs
Originally published August 12, 2015

Here is an excerpt:

Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will be deployed in more complex situations that require spontaneous choices. The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.
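
In software terms (our gloss, not Wallach and Allen’s), operational morality is close to a lookup table: the designer enumerates the situations in advance and hard-codes a response for each, and anything outside that list is simply undefined. The toy sketch below illustrates the distinction; the situation names and responses are invented.

```python
# Operational morality: every situation and its response anticipated and
# precoded by the designer ahead of time.
PRECODED_RESPONSES = {
    "adult_combatant_detected": "request_human_authorization",
    "civilian_detected": "hold_fire",
    "child_detected": "hold_fire",
}

def operational_response(situation: str) -> str:
    """Return the designer's precoded response, or fail outside the anticipated set."""
    try:
        return PRECODED_RESPONSES[situation]
    except KeyError:
        # Functional morality would require the system to reason about a novel
        # case like this; a purely operational system has no answer at all.
        return "no_precoded_response"

print(operational_response("civilian_detected"))     # -> hold_fire
print(operational_response("ambiguous_silhouette"))  # -> no_precoded_response
```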

The entire article is here.