In collaborative decision-making, a common protocol consists of participants publishing their preferences asynchronously and solving for maximum utility after all participants have published their preferences. This approach, used in online event schedule polling (e.g. dudle), is widely considered both simple and useful. However, it also incentivizes lying about one's own preferences (i.e. pretending to have preferences different from one's own).
In asynchronous event schedule polling, early participants can exploit anchoring effects: by underreporting their own availability, they narrow the set of choices that later participants consider acceptable. Conversely, later participants have greater leverage than earlier ones in deciding between a limited set of outcomes.
As asynchronous polling rewards both underreporting one's own availability and postponing one's report, it resembles a multiplayer version of the game of chicken: Underreporting and postponing may provoke an outcome an individual participant prefers, but can also lead to a catastrophic one – if several people underreport their availability in incompatible ways, no decision is possible.
Asynchronous polling also promotes structural abuse: While nominally cooperating in finding a common solution, dishonest actors discriminate against those unable to employ computational resources for a game-theoretic approach.
Even without ill will, busy people who underreport their availability to ensure a conflict-free personal schedule externalize the cost of their personal over-commitment. Busy people therefore have more power to influence scheduling decisions even in contexts foreign to them. Purely through protocological control, asynchronous event schedule polling thus favors a work ethic.
An alternative approach employs a self-enforcing protocol – a system of rules that make cheating impossible. At least one protocol for this is widely known: Sharing a cake fairly can be done by one person dividing it and the other choosing a piece first. Bruce Schneier provides another example:
Here’s a self-enforcing protocol for determining property tax: the homeowner decides the value of the property and calculates the resultant tax, and the government can either accept the tax or buy the home for that price. Sounds unrealistic, but the Greek government implemented exactly that system for the taxation of antiquities. It was the easiest way to motivate people to accurately report the value of antiquities.
As an alternative to asynchronous polling, Katharin Tai suggested an iterative process: In each round, all participants individually commit to a hidden value they have not chosen in earlier rounds, then reveal their choice once all other participants have committed to a value as well. Among the values reported by all participants in any round, participants solve for maximum utility, treating values from earlier rounds as more desirable. If no value has been reported by all participants, the next round begins.
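For digital settings, the commit-then-reveal step of each round could be realized with salted hashes instead of pieces of paper. The following sketch is an illustration under that assumption, not part of the protocol as described; all names are hypothetical:

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Commit to a value; returns (digest, salt). Only the digest is published."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, value: str) -> bool:
    """Check a revealed (salt, value) pair against the published digest."""
    return hashlib.sha256((salt + value).encode()).hexdigest() == digest

# One round: every participant publishes a digest, then all reveal.
choices = {"A": "Wednesday", "B": "Friday"}
commitments = {name: commit(v) for name, v in choices.items()}
revealed = {name: (salt, choices[name])
            for name, (digest, salt) in commitments.items()}
assert all(verify(commitments[n][0], s, v) for n, (s, v) in revealed.items())
```

Because the digest binds each participant to a value before any reveal, no participant can adapt their choice to what others reported in the same round.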
In the following example, two participants A and B used iterative preference reporting to schedule their next appointment. As a commitment device, the participants used two pieces of paper on which they wrote their choices in secret, each showing their own piece only after both had committed to a value.
In the first round, both participants committed to and published their highest-ranking value; a deadlock of participants waiting for others to report first (e.g. due to anxiety) cannot occur.
After three rounds, the participants agreed to meet on Wednesday. A decision was guaranteed once participant B committed to a value that participant A had already published. Generally, in iterative preference reporting with two participants, one participant choosing an option that the other chose in a previous round yields either a collaborative decision or two options of equal utility.
In contrast to asynchronous polling, dishonest preference reporting yields no benefit under iterative preference reporting. Notably, it may lead to an outcome that dishonest participants know to be worse than one they (privately) prefer. A participant who suggests an outcome acceptable to all other participants yet different from the actual outcome thus indicates their own dishonesty.
In the following example, two participants C and D collectively chose which ice cream flavor to buy. When C was unhappy with the outcome (agreeing to buy Banana-flavored ice cream), she explained that she liked Caramel more than Banana, but had not reported this during the process as D would have won.
C explained that, after reporting her highest-ranking value, she had reported values she thought D would not choose. C pursued this strategy to hide that only one value (Chocolate) was acceptable to her. While waiting for D to choose Chocolate in a later round, C enlarged the number of values for which approval from D would immediately have yielded a collaborative decision.
As iterative preference reporting resembles informal bartering processes more closely than asynchronous polling does, it cannot guarantee termination. To prevent malicious actors from wasting others' time, participants may agree, before the first round, on limiting the granularity and scope of acceptable values (e.g. a day in the next week).
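Such a pre-agreed scope is easy to check mechanically. The sketch below assumes the example scope from above – a day in the next week, encoded as ISO dates – purely for illustration:

```python
from datetime import date, timedelta

def valid_values(today: date) -> set[str]:
    """Acceptable values: ISO dates of the seven days following `today`.
    The one-week window is an illustrative assumption."""
    return {(today + timedelta(days=i)).isoformat() for i in range(1, 8)}

def is_valid(value: str, today: date) -> bool:
    """Reject any committed value outside the agreed scope."""
    return value in valid_values(today)
```

A value failing this check counts as a refusal to commit to a valid value, which the failure modes below interpret.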
The number of rounds does not indicate any difficulty in finding a collaborative decision. Participants who complain about the number of rounds are therefore trying to pressure other participants into supporting the complainant's preferences. Like participants refusing to commit to or publish a valid value, they must be assumed not to want a collaborative decision.
If a collaborative decision is not possible, iterative preference reporting will not yield a decision, and its failure mode indicates the reason. For instance, a participant who refuses to commit to or publish a value in the first round might prefer a collaborative decision among a different set of options.
| Failure mode | Interpretation |
| --- | --- |
| A participant refuses to commit to or publish a valid value in the first round. | The participant does not want a collaborative decision under the circumstances. |
| A participant refuses to commit to or publish a valid value in the second round. | The participant does not want a collaborative decision and presents a Hobson's choice. |
| A participant refuses to commit to or publish a valid value after the second round. | The participant does not want a decision that yields a value different from the ones he or she has already reported. |
| Participants find several solutions with equal utility. | Due to the voting paradox, no single solution can be found. |
There are legitimate reasons why collaborative decision-making might not be appropriate, e.g. if a decision does not need unanimous consent or if intensities of preference differ greatly between participants. Whenever a person in charge (e.g. the host of a venue) ultimately decides, asynchronous polling still exhibits some of the problems outlined above, but pretending to want a collaborative decision only wastes time.