Daniel J. Buehrer
Here is an excerpt:
Allowing machines to modify their own model of the world and of themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world. By this definition, if the programs, neural networks, and Bayesian networks are put into read-only hardware, the machines will not be conscious, since they cannot learn. We would not have to feel guilty about recycling such sims or robots (e.g., driverless cars) by melting them in incinerators or throwing them into acid baths, since they are only machines. However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.
Unsupervised hierarchical adversarially learned inference has already been shown to perform much better than human-handcrafted features. The feedback mechanism tries to minimize the Jensen-Shannon divergence between the many levels of a generative adversarial network and the corresponding inference network, which can correspond to a stack of part-of levels of a fuzzy class calculus IS-A hierarchy.
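To make the quantity being minimized concrete: the Jensen-Shannon divergence is the symmetrized, smoothed Kullback-Leibler divergence of two distributions against their mixture. The stdlib sketch below computes it for discrete distributions; it is illustrative only and is not the hierarchical training procedure the excerpt refers to.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q), in bits, for discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: average KL of p and q against their mixture m."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(js_divergence(p, p))  # 0.0 — identical distributions
print(js_divergence(p, q))  # 0.5 bits — partial overlap
```

Unlike plain KL divergence, this quantity is symmetric and always finite, which is why it appears as the implicit objective of the original GAN formulation.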
From the viewpoint of humans, a sim should probably have an objective function for its reinforcement learning that allows it to become an excellent mathematician and scientist in order to “carry forth an ever-advancing civilization”. But such a conscious superintelligence “should” probably also make use of parameters to try to emulate the well-recognized “virtues” such as empathy, friendship, generosity, humility, justice, love, mercy, responsibility, respect, truthfulness, trustworthiness, etc.
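One way to read "make use of parameters" is as a scalarized objective: a task-competence term plus weighted virtue terms. The sketch below is a hypothetical illustration of that reading; the virtue scores, weights, and function names are placeholders of my own, not quantities defined in the excerpt.

```python
# Hypothetical sketch: a scalarized reward mixing task competence with
# weighted "virtue" terms. All names and weights are illustrative
# assumptions, not part of the original proposal.

def combined_reward(task_score, virtue_scores, virtue_weights):
    """Task competence plus a weighted sum of virtue scores (all in [0, 1])."""
    virtue_term = sum(virtue_weights[v] * s for v, s in virtue_scores.items())
    return task_score + virtue_term

virtue_weights = {"empathy": 0.3, "truthfulness": 0.5, "humility": 0.2}
virtue_scores = {"empathy": 0.8, "truthfulness": 0.9, "humility": 0.6}
print(combined_reward(0.7, virtue_scores, virtue_weights))
```

The weights encode how strongly each virtue trades off against raw competence; choosing them well is precisely the hard alignment problem the excerpt gestures at.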