Disagreements between AIs

Consensus formation between humans is achieved by making arguments supported by facts, observations, and, in some cases, emotions (though one could argue the latter make for poor arguments). A disagreement can end in one of three ways:

  1. no party prevails: neither individual changes his/her belief state; no one is convinced
  2. one party prevails: one individual changes his/her opinion and acquires new beliefs
  3. both parties prevail: both individuals abandon their beliefs in the creation of a new idea

How do AIs move past these kinds of disagreements?

In hierarchical groups, this is relatively easy. The machine (or human) in the position of authority holds sway.

In parallel (non-hierarchical) groups of three or more machines, consensus can be reached by voting.
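
A minimal sketch of how these two resolution schemes might look in code, assuming each agent reports a discrete belief as a string (the agent names and belief labels here are hypothetical, purely for illustration):

```python
from collections import Counter

def resolve_hierarchical(beliefs: dict[str, str], authority: str) -> str:
    """Hierarchical resolution: the agent in the position of authority holds sway."""
    return beliefs[authority]

def resolve_by_vote(beliefs: dict[str, str]) -> str:
    """Flat resolution: in a group of three or more agents, the majority belief wins.
    (Ties are broken arbitrarily here; a real system would need a tie-breaking rule.)"""
    counts = Counter(beliefs.values())
    winner, _ = counts.most_common(1)[0]
    return winner

# Hypothetical example: three agents disagree about an observed event.
beliefs = {
    "agent_a": "light was red",
    "agent_b": "light was green",
    "agent_c": "light was red",
}
print(resolve_hierarchical(beliefs, authority="agent_b"))  # -> "light was green"
print(resolve_by_vote(beliefs))                            # -> "light was red"
```

Note that both schemes only ever produce outcome 2: some party prevails and the others are overruled.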

However, outcomes 1 and 3 above do not seem to be well represented by either scheme. The example I can think of is two eyewitnesses disagreeing in court over a course of events. Both witnesses are convinced of what they saw or heard, which, given the imperfection of the senses, may well be the case. Since the consequence of this disagreement matters (that is, it affects the sentence of a third party, the defendant), there must be a way to decide whose version of the story is closer to the truth. We employ an “impartial” jury for this task. So perhaps a similar adjudication system is necessary to merge competing AI belief states.
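
As a toy illustration of what such an adjudicator might do, the sketch below weighs each competing claim by a reliability score attached to its source. The scoring scheme, the claim text, and the weights are all assumptions made up for this example; a real adjudication system would need calibrated uncertainty and independent evidence rather than self-reported confidence.

```python
def adjudicate(claims: list[tuple[str, float]]) -> str:
    """Toy adjudicator: each (claim, weight) pair carries a reliability estimate
    for its source. The claim with the highest total weight is accepted."""
    totals: dict[str, float] = {}
    for claim, weight in claims:
        totals[claim] = totals.get(claim, 0.0) + weight
    return max(totals, key=totals.get)

# Two confident but conflicting "eyewitness" AIs, plus a weaker corroborating report.
claims = [
    ("the car ran the red light", 0.6),
    ("the light was still green", 0.7),
    ("the car ran the red light", 0.3),
]
print(adjudicate(claims))  # -> "the car ran the red light" (total 0.9 vs 0.7)
```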
