Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Hacking.

Wednesday, April 17, 2019

Warnings of a Dark Side to A.I. in Health Care

Cade Metz and Craig S. Smith
The New York Times
Originally published March 21, 2019

Here is an excerpt:

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
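The attack the article describes can be made concrete with a toy sketch. The example below is my illustration, not anything from the article or the cited research: it runs the well-known fast-gradient-sign method against a tiny hand-built logistic classifier, nudging each input feature step by step in the direction that raises the model's loss. All weights, inputs, and function names are invented.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_step(w, b, x, y_true, eps):
    """One fast-gradient-sign step: move each feature by +/-eps in the
    direction that increases the loss for the true label y_true."""
    p = predict(w, b, x)
    # Gradient of binary cross-entropy w.r.t. the input is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

def attack(w, b, x, y_true, eps=0.25, steps=8):
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm_step(w, b, x_adv, y_true, eps)
    return x_adv

# Toy "model" and a benign input it confidently assigns to class 0.
w = np.array([2.0, -1.0, 0.5])
b = -0.25
x = np.array([-1.0, 1.0, -0.5])

x_adv = attack(w, b, x, y_true=0.0)
print(f"clean: {predict(w, b, x):.3f}  adversarial: {predict(w, b, x_adv):.3f}")
```

In a real image classifier the same idea spreads imperceptibly small changes across millions of pixels; the numbers here are exaggerated so the effect is visible in three dimensions.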

The info is here.

Wednesday, December 19, 2018

Hackers are not main cause of health data breaches

Lisa Rapaport
Reuters News
Originally posted November 19, 2018

Most health information data breaches in the U.S. in recent years haven’t been the work of hackers but instead have been due to mistakes or security lapses inside healthcare organizations, a new study suggests.

Another 25 percent of cases involved employee errors like mailing or emailing records to the wrong person, sending unencrypted data, taking records home or forwarding data to personal accounts or devices.

“More than half of breaches were triggered by internal negligence and thus are to some extent preventable,” said study coauthor Ge Bai of the Johns Hopkins Carey Business School in Washington, D.C.

The info is here.

Sunday, November 4, 2018

When Tech Knows You Better Than You Know Yourself

Nicholas Thompson
www.wired.com
Originally published October 4, 2018

Here is an excerpt:

Hacking a Human

NT: Explain what it means to hack a human being and why what can be done now is different from what could be done 100 years ago.

YNH: To hack a human being is to understand what's happening inside you on the level of the body, of the brain, of the mind, so that you can predict what people will do. You can understand how they feel and you can, of course, once you understand and predict, you can usually also manipulate and control and even replace. And of course it can't be done perfectly and it was possible to do it to some extent also a century ago. But the difference in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us, they will never be perfect. There is no such thing as understanding perfectly everything or predicting everything. You don't need perfect, you just need to be better than the average human being.

If you have an hour, please watch the video.

Sunday, October 14, 2018

The Myth of Freedom

Yuval Noah Harari
The Guardian
Originally posted September 14, 2018

Here is an excerpt:

Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints.

This myth has little to do with what science now teaches us about Homo sapiens and other animals. Humans certainly have a will – but it isn’t free. You cannot decide what desires you have. You don’t decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn’t choose which genes or family to have.

This is not abstract theory. You can witness this easily. Just observe the next thought that pops up in your mind. Where did it come from? Did you freely choose to think it? Obviously not. If you carefully observe your own mind, you come to realise that you have little control of what’s going on there, and you are not choosing freely what to think, what to feel, and what to want.

Though “free will” was always a myth, in previous centuries it was a helpful one. It emboldened people who had to fight against the Inquisition, the divine right of kings, the KGB and the KKK. The myth also carried few costs. In 1776 or 1945 there was relatively little harm in believing that your feelings and choices were the product of some “free will” rather than the result of biochemistry and neurology.

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

The info is here.

Sunday, May 27, 2018

The Ethics of Neuroscience - A Different Lens

New technologies are allowing us to have control over the human brain like never before. As we push the possibilities we must ask ourselves, what is neuroscience today and how far is too far?

The world’s best neurosurgeons can now provide treatments for conditions that were previously untreatable, such as Parkinson’s disease and clinical depression. Many patients are cured, while others develop side effects such as erratic behaviour and changes in their personality.

Not only do we have greater understanding of clinical psychology, forensic psychology and criminal psychology, we also have more control. Professional athletes and gamers are now using this technology – some of it untested – to improve performance. However, with these amazing possibilities come great ethical concerns.

This manipulation of the brain has far-reaching effects, impacting the law, marketing, health industries and beyond. We need to investigate the capabilities of neuroscience and ask the ethical questions that will determine how far we can push the science of mind and behaviour.

Thursday, April 12, 2018

The Tech Industry’s War on Kids

Richard Freed
Medium.com
Originally published March 12, 2018

Here is an excerpt:

Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”

Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”

While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.

The article is here.

Tuesday, April 3, 2018

AI Has a Hallucination Problem That's Proving Tough to Fix

Tom Simonite
wired.com
Originally posted March 9, 2018

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

That could be a big problem for products dependent on machine learning, particularly for vision, such as self-driving cars. Leading researchers are trying to develop defenses against such attacks—but that’s proving to be a challenge.

Case in point: In January, a leading machine-learning conference announced that it had selected 11 new papers to be presented in April that propose ways to defend or detect such adversarial attacks. Just three days later, first-year MIT grad student Anish Athalye threw up a webpage claiming to have “broken” seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye. He worked on the project with Nicholas Carlini and David Wagner, a grad student and professor, respectively, at Berkeley.

That project has led to some academic back-and-forth over certain details of the trio’s claims. But there’s little dispute about one message of the findings: It’s not clear how to protect the deep neural networks fueling innovations in consumer gadgets and automated driving from sabotage by hallucination. “All these systems are vulnerable,” says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who has pondered machine learning security for about a decade, and wasn’t involved in the study. “The machine learning community is lacking a methodological approach to evaluate security.”
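The cat-and-mouse dynamic can be sketched in miniature. The example below is my illustration, not anything from Athalye's work: a "defense" that rounds inputs to a coarse grid erases a timid small-step gradient attack, but an attacker who simply takes steps the rounding cannot undo gets through anyway. All weights, inputs, and numbers are invented.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return sigmoid(np.dot(w, x) + b)

def quantize(x, grid=0.5):
    # The "defense": snap every feature to a coarse grid, hoping to
    # erase small adversarial perturbations before the model sees them.
    return np.round(x / grid) * grid

def attack(w, b, x, y_true, eps, steps):
    # Gradient-sign attack on the underlying model, simply ignoring the
    # non-differentiable quantization step (the adaptive attacker's trick).
    x_adv = x.copy()
    for _ in range(steps):
        p = predict(w, b, x_adv)
        x_adv = x_adv + eps * np.sign((p - y_true) * w)
    return x_adv

w = np.array([2.0, -1.0, 0.5])
b = -0.25
x = np.array([-1.0, 1.0, -0.5])  # classified as class 0

# A timid attack (steps smaller than the grid) is erased by rounding...
timid = quantize(attack(w, b, x, y_true=0.0, eps=0.2, steps=1))
# ...but an attack whose steps land exactly on the grid passes through.
adaptive = quantize(attack(w, b, x, y_true=0.0, eps=0.5, steps=4))

print(predict(w, b, timid), predict(w, b, adaptive))
```

This mirrors the pattern the researchers describe: a defense that only blocks the attacks its designers anticipated offers little protection against an attacker who adapts.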

The article is here.

Wednesday, March 7, 2018

The Squishy Ethics of Sex With Robots

Adam Rogers
Wired.com
Originally published February 2, 2018

Here is an excerpt:

Most of the world is ready to accept algorithm-enabled, internet-connected, virtual-reality-optimized sex machines with open arms (arms! I said arms!). The technology is evolving fast, which means two inbound waves of problems. Privacy and security, sure, but even solving those won’t answer two very hard questions: Can a robot consent to having sex with you? Can you consent to sex with it?

One thing that is unquestionable: There is a market. Either through licensing the teledildonics patent or risking lawsuits, several companies have tried to build sex technology that takes advantage of Bluetooth and the internet. “Remote connectivity allows people on opposite ends of the world to control each other’s dildo or sleeve device,” says Maxine Lynn, a patent attorney who writes the blog Unzipped: Sex, Tech, and the Law. “Then there’s also bidirectional control, which is going to be huge in the future. That’s when one sex toy controls the other sex toy and vice versa.”

Vibease, for example, makes a wearable that pulsates in time to synchronized digital books or a partner controlling an app. We-Vibe makes vibrators that a partner can control or set to preset patterns. And so on.

The article is here.

Thursday, September 28, 2017

What’s Wrong With Voyeurism?

David Boonin
What's Wrong?
Originally posted August 31, 2017

The publication last year of The Voyeur’s Motel, Gay Talese’s controversial account of a Denver area motel owner who purportedly spent several decades secretly observing the intimate lives of his customers, raised a number of difficult ethical questions.  Here I want to focus on just one: does the peeping Tom who is never discovered harm his victims?

The peeping Tom profiled in Talese’s book certainly doesn’t think so.  In an excerpt that appeared in the New Yorker in advance of the book’s publication, Talese reports that Gerald Foos, the proprietor in question, repeatedly insisted that his behavior was “harmless” on the grounds that his “guests were unaware of it.”  Talese himself does not contradict the subject of his account on this point, and Foos’s assertion seems to be grounded in a widely accepted piece of conventional wisdom, one that often takes the form of the adage that “what you don’t know can’t hurt you”.  But there’s a problem with this view of harm, and thus a problem with the view that voyeurism, when done successfully, is a harmless vice.

The blog post is here.

Wednesday, September 20, 2017

Companies should treat cybersecurity as a matter of ethics

Thomas Lee
The San Francisco Chronicle
Originally posted September 2, 2017

Here is an excerpt:

An ethical code will force companies to rethink how they approach research and development. Instead of making stuff first and then worrying about data security later, companies will start from the premise that they need to protect consumer privacy before they start designing new products and services, Harkins said.

There is precedent for this. Many professional organizations like the American Medical Association and American Bar Association require members to follow a code of ethics. For example, doctors must pledge above all else not to harm a patient.

A code of ethics for cybersecurity will no doubt slow the pace of innovation, said Maurice Schweitzer, a professor of operations, information and decisions at the University of Pennsylvania’s Wharton School.

Ultimately, though, following such a code could boost companies’ reputations, Schweitzer said. Given the increasing number and severity of hacks, consumers will pay a premium for companies dedicated to security and privacy from the get-go, he said.

In any case, what’s wrong with taking a pause so we can catch our breath? The ethical quandaries technology poses to mankind are only going to get more complex as we increasingly outsource our lives to thinking machines.

That’s why a code of ethics is so important. Technology may come and go, but right and wrong never changes.

The article is here.

Monday, July 17, 2017

The ethics of brain implants and ‘brainjacking’

Chelsey Ballarte
Geek Wire
Originally published June 29, 2017

Here is an excerpt:

Fetz and the report’s other authors say we should regard advancements in machine learning and artificial intelligence with the same measure of caution we use when we consider accountability for self-driving cars and privacy for smartphones.

Fetz recalled the time security researchers proved they could hack into a Jeep Cherokee over the internet and disable it as it drove on the freeway. He said that in the world of prosthetics, a hacker could conceivably take over someone’s arm.

“The hack could override the signals,” he said. It could even override a veto, and that’s the danger. The strategy to head off that scenario would have to be to make sure the system can’t be influenced from the outside.

Study co-author John Donoghue, a director of the Wyss Center for Bio and Neuroengineering in Geneva, said these are just a few things we would have to think about if these mechanisms became the norm.

“We must carefully consider the consequences of living alongside semi-intelligent, brain-controlled machines, and we should be ready with mechanisms to ensure their safe and ethical use,” he said in a news release.

Donoghue said that as technology advances, we need to be ready to think about how our current laws would apply. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field,” he said.

The article is here.

Monday, April 24, 2017

Scientists Hack a Human Cell and Reprogram it Like a Computer

Sophia Chen
Wired Magazine
Originally published March 27, 2017

Cells are basically tiny computers: They send and receive inputs and output accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.

Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.
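As a loose software analogy (my illustration, not the paper's actual method), you can model a DNA strand as a list of elements and recombinases as operations that invert or excise the segment between their target sites. In the sketch below the output gene is expressed only when a forward-facing promoter precedes it with no terminator in between, which requires both recombinases to fire: an AND gate. All names and the layout are invented for illustration.

```python
# Model: a strand is a list of (element_name, forward_orientation) pairs.

def invert(strand, start, end):
    """Invert the segment between indices start..end (inclusive),
    reversing its order and flipping each element's orientation."""
    segment = [(name, not fwd) for name, fwd in reversed(strand[start:end + 1])]
    return strand[:start] + segment + strand[end + 1:]

def excise(strand, start, end):
    """Cut out the segment between indices start..end (inclusive)."""
    return strand[:start] + strand[end + 1:]

def expressed(strand):
    """The output gene is expressed iff a forward promoter precedes the
    forward gene with no terminator in between."""
    promoter_at = None
    for name, fwd in strand:
        if name == "promoter" and fwd:
            promoter_at = True
        elif name == "terminator":
            promoter_at = None
        elif name == "gene" and fwd and promoter_at is not None:
            return True
    return False

# Start state: the promoter is inverted, and a terminator blocks the gene.
dna = [("promoter", False), ("terminator", True), ("gene", True)]

# Recombinase A inverts the promoter; recombinase B excises the terminator.
after_a = invert(dna, 0, 0)
after_ab = excise(after_a, 1, 1)

print(expressed(dna), expressed(after_a), expressed(after_ab))
```

Real recombinases recognize specific attachment-site sequences rather than list indices, but the logic is the same: the strand's final layout, and hence the cell's output, depends on which combination of recombinases was triggered.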

The article is here.

Monday, March 27, 2017

Healthcare Data Breaches Up 40% Since 2015

Alexandria Wilson Pecci
MedPage Today
Originally posted February 26, 2017

Here is an excerpt:

Broken down by industry, hacking was the most common data breach source for the healthcare sector, according to data provided to HealthLeaders Media by the Identity Theft Resource Center. Physical theft was the biggest breach category for healthcare in 2015 and 2014.

Insider theft and employee error/negligence tied for the second most common data breach sources in 2016 in the health industry. In addition, insider theft was a bigger problem in the healthcare sector than in other industries, and has been for the past five years.

Insider theft is alleged to have been at play in the Jackson Health System incident. Former employee Evelina Sophia Reid was charged in a fourteen-count indictment with conspiracy to commit access device fraud, possessing fifteen or more unauthorized access devices, aggravated identity theft, and computer fraud, the Department of Justice said. Prosecutors say that her co-conspirators used the stolen information to file fraudulent tax returns in the patients' names.

The article is here.

Monday, January 11, 2016

Cyber security: Attack of the health hackers

Kara Scannell and Gina Chon
FT.com
Originally published December 21, 2015

Here is an excerpt:

Hackers accessed over 100m health records — 100 times more than ever before — last year. Eight of the 10 largest hacks into any type of healthcare provider happened this year, according to the US Department of Health and Human Services.

Insurers scrambled to hire cyber security companies to scrub their systems. Premera Blue Cross, CareFirst BlueCross BlueShield, and Excellus Health Plan announced breaches affecting at least 22m individuals in total since March, including hacks that stretched back more than a year. Investigators told the FT that they believe some of the hacks are related and trace back to China.

The insurers face multiple investigations from state insurance regulators and attorneys-general and some could face fines for failing to comply with state data privacy laws, while federal law enforcement agencies are investigating who is behind the hacks.

The article is here.

Thursday, September 10, 2015

A tale of vigilante justice

Adulterers, hackers, and the Ashley Madison affair

By Russell Blackford
The Conversation
Originally published on August 23, 2015

Here is an excerpt:

Whatever you think about adulterous liaisons – even if you regard them as outrageous, destructive, morally wicked breaches of trust – this sort of vigilante justice is unacceptable. When vigilantes set out to punish sinners or wrongdoers, the results can be perverse, disproportionate, sometimes extreme and often irreversible. Even the supposed victims of wrongdoers may end up worse off.

It is difficult enough to judge the wisdom of revealing an adulterous affair to an affected individual when the facts are fairly clear and the consequences are possibly manageable. Indiscriminately letting loose this kind of data, affecting millions of personal situations, is atrociously arrogant and callous.

I’m sure that customers signed up to Ashley Madison for a wide range of reasons. Some may have done little or nothing wrong, even by conventional standards of sexual morality, but will now be held up for public shaming. Some may have been sufficiently interested in a phenomenon such as Ashley Madison to want to research it from the inside. Many may simply have been curious.

Others may have toyed with the idea of an affair, but not in a serious way – they may have been driven by their curiosity and other emotions to browse the site, but gone no further. Some may have been in open relationships of one kind or another: but even so, they could be embarrassed, shamed and otherwise harmed by revelations about their memberships.

The entire article is here.

Tuesday, April 7, 2015

Premera Blue Cross Breach May Have Exposed 11 Million Customers' Medical And Financial Data

By Kate Vinton
Forbes
Originally published March 17, 2015

Medical and financial data belonging to as many as 11 million Premera Blue Cross customers may have been exposed in a breach discovered on the same day as the Anthem breach, the health insurance company announced Tuesday.

Premera discovered the breach on January 29, 2015. Working with both Mandiant and the FBI to investigate the attack, the company discovered that the initial attack occurred on May 5, 2014. Premera Blue Cross and Premera Blue Cross Blue Shield of Alaska were both impacted, in addition to affiliate brands Vivacity and Connexion Insurance Solutions. Additionally, other Blue Cross Blue Shield customers in Washington and Alaska may have been affected by the breach.

The entire article is here.

Tuesday, July 29, 2014

Millions of electronic medical records breached

New U.S. government data shows that 32 million residents have been affected since 2009.

By Ronald Campbell and Deborah Schoch
The Orange County Register
Published: July 7, 2014

Thieves, hackers and careless workers have breached the medical privacy of nearly 32 million Americans, including 4.6 million Californians, since 2009.

Those numbers, taken from new U.S. Health & Human Services Department data, underscore a vulnerability of electronic health records.

These records are more detailed than most consumer credit or banking files and could open the door to widespread identity theft, fraud, or worse.

The entire article is here.

Friday, September 16, 2011

New data spill shows risk of online health records


By Jordan Robertson
AP Technology Writer

Until recently, medical files belonging to nearly 300,000 Californians sat unsecured on the Internet for the entire world to see.

There were insurance forms, Social Security numbers and doctors' notes. Among the files were summaries that spelled out, in painstaking detail, a trucker's crushed fingers, a maintenance worker's broken ribs and one man's bout with sexual dysfunction.

At a time of mounting computer hacking threats, the incident offers an alarming glimpse at privacy risks as the nation moves steadily into an era in which every American's sensitive medical information will be digitized.

Electronic records can lower costs, cut bureaucracy and ultimately save lives. The government is offering bonuses to early adopters and threatening penalties and cuts in payments to medical providers who refuse to change.

But there are not-so-hidden costs with modernization.

"When things go wrong, they can really go wrong," says Beth Givens, director of the nonprofit Privacy Rights Clearinghouse, which tracks data breaches. "Even the most well-designed systems are not safe. ... This case is a good example of how the human element is the weakest link."

Southern California Medical-Legal Consultants, which represents doctors and hospitals seeking payment from patients receiving workers' compensation, put the records on a website that it believed only employees could use, owner Joel Hecht says.

The personal data was discovered by Aaron Titus, a researcher with Identity Finder who then alerted Hecht's firm and The Associated Press. He found it through Internet searches, a common tactic for finding private information posted on unsecured sites.

The data were "available to anyone in the world with half a brain and access to Google," Titus says.

Titus says Hecht's company failed to use two basic techniques that could have protected the data — requiring a password and instructing search engines not to index the pages. He called the breach "likely a case of felony stupidity."

One of the patients affected was Paul Thompson, who learned of the breach from Titus.

The Sugarloaf, Calif., electrician blew out his shoulder four years ago on a job wiring up a multiplex movie theater. His insurance company denied his claim, which led to a protracted dispute. He eventually settled.

Thompson says his injury has been a "long, painful road."

Unable to afford surgery in the U.S. to fix his torn rotator cuff, he paid a medical tourism company that was supposed to schedule a cheaper procedure in Costa Rica. The company went bankrupt, however, and Thompson said he lost nearly $7,300.

To have his personal information exposed on top of that was a final indignity.

"I'm totally disgusted about everything," he said, calling the breach "another kick in the stomach."

Thompson is worried that hackers may have spotted his information online and tagged him for future financial scams. He contacted his bank and set up a fraud alert with the credit reporting agencies.

He says the prospect of all health records going electronic — which federal law mandates should happen by 2014 — "scares the living hell out of me."

When mistakes occur, the fallout can be more severe than the typical breach of email addresses or credit card numbers.

The rest of the story can be read here.

Friday, July 22, 2011

Survey: 90% of companies say they've been hacked



By Jaikumar Vijayan
Computerworld

If it sometimes appears that just about every company is getting hacked these days, that's because they are.

In a recent survey of 583 U.S. companies conducted by Ponemon Research on behalf of Juniper Networks, 90% of the respondents said their organizations' computers had been breached at least once by hackers over the past 12 months.

Nearly 60% reported two or more breaches over the past year. More than 50% said they had little confidence of being able to stave off further attacks over the next 12 months.

Those numbers are significantly higher than findings in similar surveys, and they suggest that a growing number of enterprises are losing the battle to keep malicious intruders out of their networks.

"We expected a majority to say they had experienced a breach," said Johnnie Konstantas, director of product marketing at Juniper, a Sunnyvale, Calif.-based networking company. "But to have 90% saying they had experienced at least one breach, and more than 50% saying they had experienced two or more, is mind-blowing." Those findings suggest "that a breach has become almost a statistical certainty" these days, she said.

The organizations that participated in the Ponemon survey represented a wide cross-section of both the private and public sectors, ranging from small organizations with fewer than 500 employees to enterprises with workforces of more than 75,000. The online survey was conducted over a five-day period earlier this month.

Roughly half of the respondents blamed resource constraints for their security woes, while about the same number cited network complexity as the primary challenge to implementing security controls.

The Ponemon survey comes at a time of growing concern about the ability of companies to fend off sophisticated cyberattacks. Over the past several months, hackers have broken into numerous supposedly secure organizations, such as security vendor RSA, Lockheed Martin, Oak Ridge National Laboratory and the International Monetary Fund.

Many of the attacks have involved the use of sophisticated malware and social engineering techniques designed to evade easy detection by conventional security tools.

The attacks have highlighted what analysts say is a growing need for enterprises to implement controls for the quick detection and containment of security breaches. Instead of focusing only on protecting against attacks, companies need to prepare for what comes after a targeted breach.

The survey results suggest that some organizations have begun moving in that direction. About 32% of the respondents said their primary security focus was on preventing attacks, but about 16% claimed the primary focus of their security efforts was on quick detection of and response to security incidents. About one out of four respondents said their focus was on aligning security controls with industry best practices.