If we consider our minds to be networks of signals, then we can say that it is better that the signals be more efficient and contain fewer errors.
This might be a good definition of a sound ethical position—to reduce signal error and increase signal efficiency.
In many ways, the two are the same. When we reduce signal error, we increase the efficiency of the entire system.
Thus, for any one system, if such a thing exists, the best ethical position would be to reduce signal error while increasing signal efficiency. That one system might stand for one human being.
But what if there are two or more systems that interact with each other?
In one sense we might say they are the “same” system, especially if interaction is imperative. In another sense, we can treat them as different systems.
If they are seen as the “same,” then reducing error and increasing efficiency will benefit the whole system (of two or more).
If they are seen as separate and not the same, there are two possibilities: the separate systems within the whole may decide to lie or cheat, or they may decide not to.
If none of the separate systems within the network ever lies or cheats, efficiency will be increased and error will be reduced.
If one or more of the separate systems within the network decides to lie or cheat, efficiency will decrease and errors will multiply.
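The claim that even a few lying nodes multiply error across the whole network can be illustrated with a toy simulation. This is only a sketch under simple assumptions I am supplying myself: a signal relayed through a chain of nodes, each adding small non-conscious noise, with lying nodes (a hypothetical `lie_rate` parameter) occasionally injecting a large deliberate distortion.

```python
import random

random.seed(42)

def relay(signal, nodes, lie_rate=0.0, noise=0.01):
    """Pass a numeric signal through a chain of nodes.

    Every node adds small non-conscious noise; with probability
    lie_rate a node also injects a large deliberate distortion
    (a "lie"). Returns the final value and the accumulated error.
    """
    value = signal
    for _ in range(nodes):
        value += random.gauss(0, noise)            # unavoidable error
        if random.random() < lie_rate:
            value += random.choice([-1, 1]) * 0.5  # deliberate distortion
    return value, abs(value - signal)

# Average error over many trials for an honest vs. a partly lying network.
honest_err = sum(relay(1.0, 10)[1] for _ in range(1000)) / 1000
lying_err = sum(relay(1.0, 10, lie_rate=0.2)[1] for _ in range(1000)) / 1000
print(f"mean error, honest network: {honest_err:.3f}")
print(f"mean error, lying network:  {lying_err:.3f}")
```

In this sketch the honest network's error stays near the floor set by non-conscious noise, while even a 20% lie rate inflates the error by more than an order of magnitude, which is the intuition behind the paragraph above.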
The separate systems can be understood to be people while the large network can be understood to be human groups. Lying and cheating or refraining from lying or cheating must be conscious acts.
Errors that just happen non-consciously (misspeaking, mishearing, misunderstanding, data mistakes, etc.) are not moral errors unless they could be or could have been avoided by a reliable method.
No network without lying or cheating has ever been achieved by large numbers of human beings. Even very small groups, as few as two people, are rarely able to achieve an ideal ethical state of no lying and no cheating. And even if they do get pretty good at that, it is very difficult for even just two people to remove non-conscious errors from their interactions.
FIML practice can greatly reduce non-conscious error between partners while at the same time providing a robust basis for increased moral awareness and increased understanding that both partners are benefiting greatly from the honesty (or ethical practice) of both of them.
My honesty with you greatly improves my understanding of and honesty within my own network and also gives me much better information about your network. And the same is true for you. Together we form an autocatalytic set that continually upgrades our mutual network and individual systems.
Clarity, honesty, and efficiency in interpersonal communication are satisfying in themselves; they also improve efficiency between partners as they upgrade the self-awareness of each.
One partner could lie and cheat while doing FIML practice, but since FIML is fairly involved and somewhat difficult to learn, it is likely that most partners will do their best by each other and that most individuals will come to realize that honesty benefits them much more than lying.
I think it is fair to conclude that the best ethical or moral position to take is one that increases the efficiency of signalling (talking, doing, etc.) while also reducing signalling error. The problem with doing that is that people can and will lie and cheat, and we do not (yet) have a reliable way to tell when they are doing so.
A good way to tell if someone is being honest would be an accurate lie detector, but even that may not be efficient or work well with the dynamics of real-time human communication.
Thus some other technique is needed. FIML can be that technique and I know of no other one that works as well. Thus a sound ethical position in today’s world would be having the aim of reducing signal error while increasing signal efficiency through the practice of FIML.
Without FIML, interpersonal communication is at least an order of magnitude cruder and thus much less efficient. FIML is not perfect, but it is much better than what we ordinarily do. If you can increase resolution and detail at will within any system, it will improve that system. If you can do that with interpersonal communication, it will improve all aspects of that system.
first posted SEPTEMBER 26, 2014
UPDATE: Notice that the fear people have about AI destroying the world is based on its learning how to deceive us. How to lie to us. When I introduced this idea to my partner this morning, she very convincingly argued that DARPA already has a much more powerful AI that is able to control the GPT programs we are now seeing and that our overlords will use the excuse that AI has gone rogue to further enslave us. That went right onto my Bayesian probability pie-chart as a big slice. ABN