Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Fake News.

Monday, March 18, 2019

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Forbes.com
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest AI we have to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chat, scale advice for people at risk of suicide, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts to social media.

But I argue these malicious applications are already possible without this AI. There exist other public models which can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because it (a) sets a bad precedent for open research, (b) keeps companies from improving their services, (c) unnecessarily hypes these results, and (d) may trigger unnecessary fears about AI in the general public.

The info is here.

Sunday, November 18, 2018

Bornstein claims Trump dictated the glowing health letter

Alex Marquardt and Lawrence Crook
CNN.com
Originally posted May 2, 2018

When Dr. Harold Bornstein described in hyperbolic prose then-candidate Donald Trump's health in 2015, the language he used was eerily similar to the style preferred by his patient.

It turns out the patient himself wrote it, according to Bornstein.

"He dictated that whole letter. I didn't write that letter," Bornstein told CNN on Tuesday. "I just made it up as I went along."

The admission is an about-face from his answer more than two years ago, when the letter was released, and answers one of the lingering questions about the last presidential election. The letter thrust the eccentric Bornstein, with his shoulder-length hair and round eyeglasses, into public view.

"His physical strength and stamina are extraordinary," he crowed in the letter, which was released by Trump's campaign in December 2015. "If elected, Mr. Trump, I can state unequivocally, will be the healthiest individual ever elected to the presidency."

The missive didn't offer much medical evidence for those claims beyond citing a blood pressure of 110/65, described by Bornstein as "astonishingly excellent." It claimed Trump had lost 15 pounds over the preceding year. And it described his cardiovascular health as "excellent."

The info is here.

Thursday, October 4, 2018

7 Short-Term AI Ethics Questions

Orlando Torres
www.towardsdatascience.com
Originally posted April 4, 2018

Here is an excerpt:

2. Transparency of Algorithms

Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized is the fact that some algorithms are obscure even to their creators. Deep learning is a rapidly growing technique in machine learning that makes very good predictions but is not really able to explain why it made any particular prediction.

For example, some algorithms have been used to fire teachers without being able to give them an explanation of why the model indicated they should be fired.

How can we balance the need for more accurate algorithms with the need for transparency toward the people affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe’s new General Data Protection Regulation may do? If it’s true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?

3. Supremacy of Algorithms

A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?

For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.

The info is here.

Thursday, April 19, 2018

Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality

Sandra Upson
Wired.com
Originally posted February 16, 2018

Here is an excerpt:

But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?

A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target: fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He then asked humans to guess whether the reviews were real or fake, and sure enough, the humans were often fooled.

The information is here.

Monday, April 2, 2018

The Grim Conclusions of the Largest-Ever Study of Fake News

Robinson Meyer
The Atlantic
Originally posted March 8, 2018

Here is an excerpt:

“It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”

The study has already prompted alarm from social scientists. “We must redesign our information ecosystem for the 21st century,” write a group of 16 political scientists and legal scholars in an essay also published Thursday in Science. They call for a new drive of interdisciplinary research “to reduce the spread of fake news and to address the underlying pathologies it has revealed.”

“How can we create a news ecosystem … that values and promotes truth?” they ask.

The new study suggests that it will not be easy. Though Vosoughi and his colleagues focused only on Twitter—the study was conducted using exclusive data that the company made available to MIT—their work has implications for Facebook, YouTube, and every major social network. Any platform that regularly amplifies engaging or provocative content runs the risk of amplifying fake news along with it.

The article is here.

Tuesday, March 13, 2018

Cognitive Ability and Vulnerability to Fake News

David Z. Hambrick and Madeline Marquardt
Scientific American
Originally posted on February 6, 2018

“Fake news” is Donald Trump’s favorite catchphrase. Since the election, it has appeared in some 180 tweets by the President, decrying everything from accusations of sexual assault against him to the Russian collusion investigation to reports that he watches up to eight hours of television a day. Trump may just use “fake news” as a rhetorical device to discredit stories he doesn’t like, but there is evidence that real fake news is a serious problem. As one alarming example, an analysis by the internet media company Buzzfeed revealed that during the final three months of the 2016 U.S. presidential campaign, the 20 most popular false election stories generated around 1.3 million more Facebook engagements—shares, reactions, and comments—than did the 20 most popular legitimate stories. The most popular fake story was “Pope Francis Shocks World, Endorses Donald Trump for President.”

Fake news can distort people’s beliefs even after being debunked. For example, repeated over and over, a story such as the one about the Pope endorsing Trump can create a glow around a political candidate that persists long after the story is exposed as fake. A study recently published in the journal Intelligence suggests that some people may have an especially difficult time rejecting misinformation.

The article is here.

Friday, December 15, 2017

The Vortex

Oliver Burkeman
The Guardian
Originally posted November 30, 2017

Here is an excerpt:

I realise you don’t need me to tell you that something has gone badly wrong with how we discuss controversial topics online. Fake news is rampant; facts don’t seem to change the minds of those in thrall to falsehood; confirmation bias drives people to seek out only the information that bolsters their views, while dismissing whatever challenges them. (In the final three months of the 2016 presidential election campaign, according to one analysis by Buzzfeed, the top 20 fake stories were shared more online than the top 20 real ones: to a terrifying extent, news is now more fake than not.) Yet, to be honest, I’d always assumed that the problem rested solely on the shoulders of other, stupider, nastier people. If you’re not the kind of person who makes death threats, or uses misogynistic slurs, or thinks Hillary Clinton’s campaign manager ran a child sex ring from a Washington pizzeria – if you’re a basically decent and undeluded sort, in other words – it’s easy to assume you’re doing nothing wrong.

But this, I am reluctantly beginning to understand, is self-flattery. One important feature of being trapped in the Vortex, it turns out, is the way it looks like everyone else is trapped in the Vortex, enslaved by their anger and delusions, obsessed with point-scoring and insult-hurling instead of with establishing the facts – whereas you’re just speaking truth to power. Yet in reality, when it comes to the divisive, depressing, energy-sapping nightmare that is modern online political debate, it’s like the old line about road congestion: you’re not “stuck in traffic”. You are the traffic.

The article is here.