Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Regulation.

Wednesday, February 9, 2022

How FDA Failures Contributed to the Opioid Crisis

Andrew Kolodny, MD
AMA J Ethics. 2020;22(8):E743-750. 
doi: 10.1001/amajethics.2020.743.

Abstract

Over the past 25 years, pharmaceutical companies deceptively promoted opioid use in ways that were often neither safe nor effective, contributing to unprecedented increases in prescribing, opioid use disorder, and deaths by overdose. This article explores regulatory mistakes made by the US Food and Drug Administration (FDA) in approving and labeling new analgesics. By understanding and correcting these mistakes, future public health crises caused by improper pharmaceutical marketing might be prevented.

Introduction

In the United States, opioid use disorder (OUD) and opioid overdose were once rare. But over the past 25 years, the number of Americans suffering from OUD increased exponentially and in parallel with an unprecedented increase in opioid prescribing. Today, OUD is common, especially in patients with chronic pain treated with opioid analgesics, and opioid overdose is the leading cause of accidental death.

(cut)

Oversight Recommendations

While fewer clinicians are initiating long-term opioids, overprescribing is still a problem. According to a recently published report, more than 2.9 million people initiated opioid use in December 2017. The FDA’s continued approval of new opioids exacerbates this problem. Each time a branded opioid hits the market, the company, eager for return on its investment, is given an incentive and, in essence, a license to promote aggressive prescribing. The FDA’s continued approval of new opioids pits the financial interests of drug companies against city, state, and federal efforts to discourage initiation of long-term opioids.

To finally end the opioid crisis, the FDA must enforce the Food, Drug, and Cosmetic Act, and it must act on recommendations from the NAS for an overhaul of its opioid approval and removal policies. The broad indication on opioid labels must be narrowed, and an explicit warning against long-term use and high-dose prescribing should be added. The label should reinforce, rather than contradict, guidance from the CDC, the Department of Veterans Affairs, the Agency for Healthcare Research and Quality, and other public health agencies that are calling for more cautious prescribing.

Saturday, July 3, 2021

Binding moral values gain importance in the presence of close others


Yudkin, D.A., Gantman, A.P., Hofmann, W. et al. 
Nat Commun 12, 2718 (2021). 
https://doi.org/10.1038/s41467-021-22566-6

Abstract

A key function of morality is to regulate social behavior. Research suggests moral values may be divided into two types: binding values, which govern behavior in groups, and individualizing values, which promote personal rights and freedoms. Because people tend to mentally activate concepts in situations in which they may prove useful, the importance they afford moral values may vary according to whom they are with in the moment. In particular, because binding values help regulate communal behavior, people may afford these values more importance when in the presence of close (versus distant) others. Five studies test and support this hypothesis. First, we use a custom smartphone application to repeatedly record participants’ (n = 1166) current social context and the importance they afforded moral values. Results show people rate moral values as more important when in the presence of close others, and this effect is stronger for binding than individualizing values—an effect that replicates in a large preregistered online sample (n = 2016). A lab study (n = 390) and two preregistered online experiments (n = 580 and n = 752) provide convergent evidence that people afford binding, but not individualizing, values more importance when in the real or imagined presence of close others. Our results suggest people selectively activate different moral values according to the demands of the situation, and show how the mere presence of others can affect moral thinking.

From the Discussion

Our findings converge with work highlighting the practical contexts where binding values are pitted against individualizing ones. Research on the psychology of whistleblowing, for example, suggests that the decision over whether to report unethical behavior in one’s own organization reflects a tradeoff between loyalty (to one’s community) and fairness (to society in general). Other research has found that increasing or decreasing people’s “psychological distance” from a situation affects the degree to which they apply binding versus individualizing principles. For example, research shows that prompting people to take a detached (versus immersed) perspective on their own actions renders them more likely to apply impartial principles in punishing close others for moral transgressions. By contrast, inducing feelings of empathy toward others (which could be construed as increasing feelings of psychological closeness) increases people’s likelihood of showing favoritism toward them in violation of general fairness norms. Our work highlights a psychological process that might help to explain these patterns of behavior: people are more prone to act according to binding values when they are with close others precisely because that relational context activates those values in the mind.

Wednesday, June 23, 2021

Experimental Regulations for AI: Sandboxes for Morals and Mores

Ranchordas, Sofia
Morals and Machines (vol.1, 2021)
Available at SSRN: 

Abstract

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.

(cut)

In conclusion, AI regulatory sandboxes are not the answer to more innovation in AI. They are part of the path to a more forward-looking approach to the interaction between law and technology. This new approach will most certainly be welcomed with reluctance in years to come as it disrupts existing dogmas pertaining to the way in which we conceive the principle of legal certainty and the reactive—rather than anticipatory—nature of law. However, traditional law and regulation were designed with human agents and enigmas in mind. Many of the problems generated by AI (discrimination, power asymmetries, and manipulation) are still human but their scale and potential for harms (and benefits) have long ceased to be. It is thus time to rethink our fundamental approach to regulation and refocus on the new regulatory subject before us.

Sunday, March 21, 2021

Who Should Stop Unethical A.I.?

Matthew Hutson
The New Yorker
Originally published 15 Feb 21

Here is an excerpt:

Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the “possible long-range effects of applying knowledge gained in the research,” lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.

A few years ago, a number of A.I.-research organizations began to develop systems for addressing ethical impact. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (sigchi) is, by virtue of its focus, already committed to thinking about the role that technology plays in people’s lives; in 2016, it launched a small working group that grew into a research-ethics committee. The committee offers to review papers submitted to sigchi conferences, at the request of program chairs. In 2019, it received ten inquiries, mostly addressing research methods: How much should crowd-workers be paid? Is it O.K. to use data sets that are released when Web sites are hacked? By the next year, though, it was hearing from researchers with broader concerns. “Increasingly, we do see, especially in the A.I. space, more and more questions of, Should this kind of research even be a thing?” Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, told me.

Shilton explained that questions about possible impacts tend to fall into one of four categories. First, she said, “there are the kinds of A.I. that could easily be weaponized against populations”—facial recognition, location tracking, surveillance, and so on. Second, there are technologies, such as Speech2Face, that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, there is automated-weapons research. And fourth, there are tools “to create alternate sets of reality”—fake news, voices, or images.

Tuesday, December 29, 2020

Internal Google document reveals campaign against EU lawmakers

Javier Espinoza
ft.com
Originally published 28 Oct 20

Here is an excerpt:

The leak of the internal document lays bare the tactics that big tech companies employ behind the scenes to manipulate public discourse and influence lawmakers. The presentation is watermarked as “privileged and need-to-know” and “confidential and proprietary”.

The revelations are set to create new tensions between the EU and Google, which are already engaged in tough discussions about how the internet should be regulated. They are also likely to trigger further debate within Brussels, where regulators hold divergent positions on the possibility of breaking up big tech companies.

Margrethe Vestager, the EU’s executive vice-president in charge of competition and digital policy, on Tuesday argued to MEPs that structural separation of big tech is not “the right thing to do”. However, in a recent interview with the FT, Mr Breton accused such companies of being “too big to care”, and suggested that they should be broken up in extreme circumstances.

Among the other tactics outlined in the report were objectives to “undermine the idea DSA has no cost to Europeans” and “show how the DSA limits the potential of the internet . . . just as people need it the most”.

The campaign document also shows that Google will seek out “more allies” in its fight to influence the regulation debate in Brussels, including enlisting the help of Europe-based platforms such as Booking.com.

Booking.com told the FT: “We have no intention of co-operating with Google on upcoming EU platform regulation. Our interests are diametrically opposed.”


Monday, July 13, 2020

Amazon Halts Police Use Of Its Facial Recognition Technology

Bobby Allyn
www.npr.org
Originally posted 10 June 20

Amazon announced on Wednesday a one-year moratorium on police use of its facial-recognition technology, yielding to pressure from police-reform advocates and civil rights groups.

It is unclear how many law enforcement agencies in the U.S. deploy Amazon's artificial intelligence tool, but an official with the Washington County Sheriff's Office in Oregon confirmed that it will be suspending its use of Amazon's facial recognition technology.

Researchers have long criticized the technology for producing inaccurate results for people with darker skin. Studies have also shown that the technology can be biased against women and younger people.

IBM said earlier this week that it would quit the facial-recognition business altogether. In a letter to Congress, chief executive Arvind Krishna condemned software that is used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."

And Microsoft President Brad Smith told The Washington Post during a livestream Thursday morning that his company has not been selling its technology to law enforcement. Smith said he has no plans to do so until there is a national law.

The info is here.

Monday, April 20, 2020

Europe plans to strictly regulate high-risk AI technology

Nicholas Wallace
sciencemag.org
Originally published 19 Feb 20

Here is an excerpt:

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The info is here.

Thursday, March 5, 2020

Ethical concerns with online direct-to-consumer pharmaceutical companies

Curtis H, Milner J
Journal of Medical Ethics 
2020;46:168-171.

Abstract

In recent years, online direct-to-consumer pharmaceutical companies have been created as an alternative method for individuals to get prescription medications. While these companies have noble aims to provide easier, more cost-effective access to medication, the fact that these companies both issue prescriptions (via entirely online medical reviews that can have no direct contact between physician and patient) as well as distribute and ship medications creates multiple ethical concerns. This paper aims to explore two in particular. First, this model creates conflicts of interest for the physicians hired by these companies to write prescriptions. Second, the lack of direct contact from physicians may be harmful to prospective patients. After analysing these issues, this paper argues that there ought to be further consideration for regulation and oversight for these companies.

The info is here.

Friday, February 21, 2020

Why Google thinks we need to regulate AI

Sundar Pichai
ft.com
Originally posted 19 Jan 20

Here are two excerpts:

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

(cut)

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
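As a concrete illustration of what "testing AI decisions for fairness" can mean in practice, the sketch below computes a demographic parity gap, the difference in positive-decision rates between groups, on made-up data. It is a generic example, not Google's actual tools or open-source code.

```python
# A minimal, generic sketch of one way to "test AI decisions for fairness":
# compare the model's positive-decision rate across groups (demographic parity).
# This is an illustration with made-up data, not Google's tooling or process.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """decisions: 0/1 model outputs; groups: parallel list of group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]        # hypothetical approval decisions
    groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(rates)          # {'a': 0.75, 'b': 0.25}
    print(round(gap, 2))  # 0.5 -- a gap this large would flag the model for review
```

In practice such a check would run on real decision logs and alongside other metrics, but even this simple comparison shows how a fairness review can be made routine rather than ad hoc.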

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.


Monday, January 20, 2020

What Is Prudent Governance of Human Genome Editing?

Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.

Abstract

CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which is unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.

Here is an excerpt:

Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is at its beginning stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse.
Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genome (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regards to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.

The info is here.

Friday, January 10, 2020

The Complicated Ethics of Genetic Engineering

Brit McCandless Farmer
cbsnews.com
Originally posted 8 Dec 19

Here is an excerpt:

A 2017 survey at the University of Wisconsin-Madison asked 1,600 members of the general public about their attitudes toward gene editing. The results showed 65 percent of respondents think gene editing is acceptable for therapeutic purposes. But when it comes to whether scientists should use technology for genetic enhancement, only 26 percent agreed.

Going forward, Church thinks genetic engineering needs government oversight. He is also concerned about reversibility—he does not want to create anything in his lab that cannot be reversed if it creates unintended consequences.

"A lot of the technology we develop, we try to make them reversible, containable," Church said. "So the risks are that some people get excited, so excited that they ignore well-articulated risks."

Back in his Harvard lab, Church's colleagues showed Pelley their work on "mini brains," tiny dots with millions of cells each. The cells, which come from a patient, can be grown into many types of organ tissue in a matter of days, making it possible for drugs to be tested on that patient's unique genome. Church aims to use genetic engineering to reverse aging and grow human organs for transplant.

Pelley said he was struck by the speed with which medical advancements are coming.

The info is here.

Monday, January 6, 2020

Pa. prison psychologist loses license after 3 ‘preventable and foreseeable’ suicides

Samantha Melamed
inquirer.com
Originally posted 4 Dec 19

Nearly a decade after a 1½-year stretch during which three prisoners at State Correctional Institution Cresson died by suicide and 17 others attempted it, the Pennsylvania Board of Psychology has revoked the license of the psychologist then in charge at the now-shuttered prison in Cambria County and imposed $17,233 in investigation costs.

An order filed Tuesday said the suicides were foreseeable and preventable and castigated the psychologist, James Harrington, for abdicating his ethical responsibility to intervene when mentally ill prisoners were kept in inhumane conditions — including solitary confinement — and were prevented from leaving their cells for treatment.

Harrington still holds an administrative position with the Department of Corrections, with an annual salary of $107,052.

The info is here.

Saturday, January 4, 2020

Robots in Finance Could Wipe Out Some of Its Highest-Paying Jobs

Lananh Nguyen
Bloomberg.com
Originally posted 6 Dec 19

Robots have replaced thousands of routine jobs on Wall Street. Now, they’re coming for higher-ups.

That’s the contention of Marcos Lopez de Prado, a Cornell University professor and the former head of machine learning at AQR Capital Management LLC, who testified in Washington on Friday about the impact of artificial intelligence on capital markets and jobs. The use of algorithms in electronic markets has automated the jobs of tens of thousands of execution traders worldwide, and it’s also displaced people who model prices and risk or build investment portfolios, he said.

“Financial machine learning creates a number of challenges for the 6.14 million people employed in the finance and insurance industry, many of whom will lose their jobs -- not necessarily because they are replaced by machines, but because they are not trained to work alongside algorithms,” Lopez de Prado told the U.S. House Committee on Financial Services.

During the almost two-hour hearing, lawmakers asked experts about racial and gender bias in AI, competition for highly skilled technology workers, and the challenges of regulating increasingly complex, data-driven financial markets.

The info is here.

Friday, January 3, 2020

Robotics researchers have a duty to prevent autonomous weapons

Christoffer Heckman
theconversation.com
Originally posted 4 Dec 19

Here is an excerpt:

As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, the ability for a computer to identify objects in an image: in 2010, the state of the art was successful only about half of the time, and it was stuck there for years. Today, though, the best algorithms as shown in published papers are now at 86% accuracy. That advance alone allows autonomous robots to understand what they are seeing through the camera lenses. It also shows the rapid pace of progress over the past decade due to developments in AI.

This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.
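As a rough illustration of the capability described above, the sketch below runs an off-the-shelf pretrained image classifier over a single camera frame and returns the most likely object labels. The model choice and the file path are assumptions made for the example, not details from the article.

```python
# A rough sketch of the object-identification capability described above, using an
# off-the-shelf pretrained classifier. The model choice and the image path are
# assumptions made for illustration, not details from the article.
import torch
from torchvision import models
from PIL import Image

def identify_objects(image_path: str, top_k: int = 3):
    weights = models.ResNet50_Weights.DEFAULT            # pretrained ImageNet weights
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()                     # matching preprocessing pipeline
    batch = preprocess(Image.open(image_path)).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    top = torch.topk(probs, k=top_k)
    labels = weights.meta["categories"]
    return [(labels[i], float(p)) for p, i in zip(top.values, top.indices)]

# Hypothetical usage on a single frame captured by a robot's camera:
# print(identify_objects("frame_from_camera.jpg"))
```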

But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions related to privacy and security have been fundamentally altered. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.

The info is here.

Tuesday, December 24, 2019

DNA genealogical databases are a gold mine for police, but with few rules and little transparency

Paige St. John
The LA Times
Originally posted 24 Nov 19

Here is an excerpt:

But law enforcement has plunged into this new world with little to no rules or oversight, intense secrecy and by forming unusual alliances with private companies that collect the DNA, often from people interested not in helping close cold cases but learning their ethnic origins and ancestry.

A Times investigation found:
  • There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
  • When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
  • California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.
There are growing concerns that the race to use genealogical databases will have serious consequences, from its inherent erosion of privacy to the implications of broadened police power.

In California, an innocent twin was thrown in jail. In Georgia, a mother was deceived into incriminating her son. In Texas, police met search guidelines by classifying a case as sexual assault but after an arrest only filed charges of burglary. And in the county that started the DNA race with the arrest of the Golden State killer suspect, prosecutors have persuaded a judge to treat unsuspecting genetic contributors as “confidential informants” and seal searches so consumers are not scared away from adding their own DNA to the forensic stockpile.

Monday, December 23, 2019

Will The Future of Work Be Ethical?

Greg Epstein
Interview at TechCrunch.com
Originally posted 28 Nov 19

Here is an excerpt:

AI and climate: in a sense, you’ve already dealt with this new field people are calling the ethics of technology. When you hear that term, what comes to mind?

As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers.

About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good.

What are your biggest concerns about AI? What do you think needs to be addressed in order for us to feel more comfortable as a society with increased use of AI?

That’s not an easy answer; it’s something our society is going to be grappling with for years. From what I’ve learned at this conference, from what I’ve read and tried to understand, it’s a multidimensional solution. You’re going to need computer programmers to learn the technical skills to make their algorithms less biased. You’re going to need companies to hire those people and say, “This is our goal; we want to create an algorithm that’s fair and can do good.” You’re going to need the general society to ask for that standard. That’s my generation’s job, too. WikiLeaks, a couple of years ago, sparked the conversation about personal privacy and I think there’s going to be more sparks.

The info is here.

Thursday, December 19, 2019

Where AI and ethics meet

Stephen Fleischresser
Cosmos Magazine
Originally posted 18 Nov 19

Here is an excerpt:

His first argument concerns common aims and fiduciary duties, the duties in which trusted professionals, such as doctors, place others’ interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients and Mittelstadt argues that it is a “defining quality of a profession for its practitioners to be part of a ‘moral community’ with common aims, values and training”.

For the field of AI research, however, the same cannot be said. “AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts,” Mittelstadt writes. “The fundamental aims of developers, users and affected parties do not necessarily align.”

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

“AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests,” he writes. In AI research, “public interests are not granted primacy over commercial interests”.

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, “AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks”.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism which provides fuller and more satisfactory ethical guidance.

The info is here.

Monday, October 28, 2019

The Ethics of Contentious Hard Forks in Blockchain Networks With Fixed Features

Tae Wan Kim and Ariel Zetlin-Jones
Front. Blockchain, 28 August 2019
https://doi.org/10.3389/fbloc.2019.00009

An advantage of blockchain protocols is that a decentralized community of users may each update and maintain a public ledger without the need for a trusted third party. Such modifications introduce important economic and ethical considerations that we believe have not been considered among the community of blockchain developers. We clarify the problem and provide one implementable ethical framework that such developers could use to determine which aspects should be immutable and which should not.

(cut)

3. A Normative Framework for Blockchain Design With Fixed Features

Which features of a blockchain protocol should or should not be alterable? To answer this question, we need a normative framework. Our framework is twofold: the substantive and the procedural. The substantive consists of two ethical principles: the generalization principle and the utility-enhancement principle. The procedural has three principles: publicity, revision and appeals, and regulation. All the principles are necessary conditions. The procedural principles help to collectively examine whether any application of the two substantive principles is reasonable. The set of the five principles as a whole is in line with the broadly Kantian deontological approach to justice and democracy (Kant, 1785). In particular, we are partly indebted to Daniels and Sabin’s (2002) procedural approach to fair allocations of limited resources. Yet, our framework is different from theirs in several ways: the particular context we deal with is different, we replace the controversial “relevance” condition with our own representation of the Kantian generalization principle, and we add the utility-maximization principle. Although we do not offer a fully fledged normative analysis of the given issue, we propose a possible normative framework for cryptocurrency communities.

Friday, October 25, 2019

Beyond Crypto — Blockchain Ethics

Jessie Smith
hackernoon.com
Originally posted February 4, 2019

Here is an excerpt:

At its roots, blockchain is an entirely decentralized, non-governed transactional system. It is run through many nodes that, all together, result in a blockchain network. Each network contains a ledger. This ledger acts as the source of truth; it stores all of the transactions that have ever happened on the network. Similar to how a bank will store a user’s withdrawal and deposit transactions, a blockchain ledger will store every transaction that has occurred on a network. The ledger is publicly available to all of the nodes in the network.

Bitcoin miners can run their own nodes (computer hardware) in hopes of obtaining a bitcoin through the combination of processing power and a little bit of luck. The difference between a bank’s ledger and a blockchain ledger is that a bank can make changes to their ledger at any point in time, since they hold all of the power. A blockchain ledger on the other hand doesn’t belong to any central entity. It is accessible and owned by every node in the network, and is entirely immutable.

Without a central governing entity over a network, every transaction needs to be verified by a majority of the nodes. Transactions can include transferring cryptocurrency between two people, reversing old transactions, spending coins, and even blocking miners from using their own nodes. For example, if someone wanted to transfer their bitcoins to someone else, they would need their transaction to be verified by at least half of all the nodes in a network.
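The mechanics in this excerpt can be sketched in a few lines: every node keeps its own copy of an append-only, hash-chained ledger, and a transaction is recorded only when a majority of nodes verify it. The sketch below is a toy illustration of those ideas, not Bitcoin's actual protocol; the verification rule and all names are invented for the example.

```python
# A toy sketch (not any production blockchain) of the ideas in the excerpt above:
# every node holds a full copy of an append-only, hash-chained ledger, and a
# transaction is accepted only when a majority of nodes verify it.
import hashlib
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    ledger: list = field(default_factory=list)  # each node keeps a full copy of the ledger

    def verify(self, tx: dict) -> bool:
        # Toy rule: a transfer is valid if the amount is positive and both parties
        # are named. Real networks check signatures, balances, and proof-of-work.
        return tx.get("amount", 0) > 0 and "from" in tx and "to" in tx

    def append(self, tx: dict) -> None:
        prev = self.ledger[-1]["hash"] if self.ledger else "genesis"
        digest = hashlib.sha256(f"{prev}{sorted(tx.items())}".encode()).hexdigest()
        self.ledger.append({"tx": tx, "prev": prev, "hash": digest})  # hash chaining makes tampering detectable

def broadcast(nodes: list, tx: dict) -> bool:
    """Accept the transaction only if more than half of the nodes verify it."""
    votes = sum(node.verify(tx) for node in nodes)
    if votes * 2 > len(nodes):
        for node in nodes:
            node.append(tx)          # every node records the same transaction
        return True
    return False

if __name__ == "__main__":
    network = [Node(f"node-{i}") for i in range(5)]
    print(broadcast(network, {"from": "alice", "to": "bob", "amount": 3}))   # True: accepted
    print(broadcast(network, {"from": "alice", "to": "bob", "amount": -1}))  # False: rejected
    print(len(network[0].ledger))    # 1 -- all nodes share the same single entry
```

Because each entry embeds the hash of the previous one, rewriting an old transaction on a single node breaks its chain and leaves it out of step with the rest of the network, which is the sense in which the shared ledger is immutable.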

The info is here.

Saturday, October 12, 2019

Lolita understood that some sex is transactional. So did I

Tamara MacLeod
aeon.co
Originally published September 11, 2019

Here is an excerpt:

However, I think that it is the middle-class consciousness of liberal feminism that excluded sex work from its platform. After all, wealthier women didn’t need to do sex work as such; they operated within the state-sanctioned transactional boundaries of marriage. The dissatisfaction of the 20th-century housewife was codified as a struggle for liberty and independence as an addition to subsidised material existence, making a feminist discourse on work less about what one has to do, and more about what one wants to do. A distinction within women’s work emerged: if you don’t enjoy having sex with your husband, it’s just a problem with the marriage. If you don’t enjoy sex with a client, it’s because you can’t consent to your own exploitation. It is a binary view of sex and consent, work and not-work, when the reality is somewhat murkier. It is a stubborn blindness to the complexity of human relations, and maybe of human psychology itself, descending from the viscera-obsessed, radical absolutisms of Andrea Dworkin.

The housewife who married for money and then fakes orgasms, the single mother who has sex with a man she doesn’t really like because he’s offering her some respite: where are the delineations between consent and exploitation, sex and duty? The first time I traded sex for material gain, I had some choices, but they were limited. I chose to be exploited by the man with the resources I needed, choosing his house over homelessness. Lolita was a child, and she was exploited, but she was also conscious of the function of her body in a patriarchal economy. Philosophically speaking, most of us do indeed consent to our own exploitation.

The info is here.