This presentation gave me a chance to reflect on the interplay of algorithms and public participation, and it became even clearer to me that applications like DistrictBuilder exemplify how information science can improve policy and politics.
Redistricting in Mexico is particularly interesting, since it relies heavily on facially neutral geo-demographic criteria and optimization algorithms -- a different sort of contribution from information science. Thus, I found it especially interesting to consider the interplay between algorithmic approaches to problem solving and "wisdom of the crowd" approaches -- above all for problems in the public sphere.
It's clear that complex optimization algorithms are an advance in redistricting in Mexico, and have an important role in public policy. However, they also have a number of limitations:
Algorithmic optimization solutions often depend on a choice of (theoretically arbitrary) "starting values" from which the search for a solution begins
Quality algorithmic solutions typically rely on accurate input data
Many optimization algorithms embed particular criteria or particular constraints into the algorithm itself
Even where optimization algorithms are nominally agnostic about the criteria used for the goal, some criteria are more tractable than others -- and some are more tractable for particular algorithms
In many cases, when an algorithm yields a solution, we don't know exactly (or even approximately, in any formal sense) how good that solution is.
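The first and last limitations above can be illustrated with a toy sketch (the function and numbers here are hypothetical, chosen only to make the point): a simple hill-climbing search on a function with more than one local optimum will return different "solutions" depending on where it starts, and nothing in its output tells us how far either solution is from the true optimum.

```python
def f(x):
    # A toy objective with two local minima (near x ~ -1.47 and x ~ 1.37).
    # Stand-in for a districting score we want to minimize.
    return x**4 - 4 * x**2 + x

def hill_climb(start, step=0.01, iters=10_000):
    # Greedy local search: move to whichever neighbor lowers f, stop
    # when no neighboring point is better. A stand-in for the kind of
    # local refinement many optimization heuristics perform.
    x = start
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break  # local optimum reached -- but is it the global one?
        x = best
    return x

# Two different (arbitrary) starting values yield two different answers:
left = hill_climb(-2.0)   # converges near x ~ -1.47
right = hill_climb(2.0)   # converges near x ~ 1.37 -- a worse solution
```

Here `f(left)` is substantially lower than `f(right)`, yet a user who only ran the search from the second starting point would have no internal signal that a better solution exists elsewhere.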
I argue that explicitly incorporating a human element is important for algorithmic solutions in the public sphere. In particular:
Use open documentation and open (non-patented or openly licensed) methods to enable external replication of algorithms
Use open source to enable external verification of the implementation of particular algorithms
Incorporate public input to improve the data (especially data describing local communities and circumstances) used in algorithm-driven policies
Incorporate crowd-sourced solutions as candidate "starting values" for further algorithmic refinement
Subject algorithmic output to crowd-sourced public review to verify the quality of the solutions produced
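To make the "crowd-sourced starting values" idea concrete, here is a minimal sketch, with entirely hypothetical populations and plans: a citizen-submitted assignment of units to districts is used as the starting point for a swap-based local refinement that reduces population imbalance.

```python
import itertools

# Hypothetical example: six geographic units with these populations,
# assigned to two districts (labels 0 and 1).
pops = [10, 20, 30, 40, 50, 60]

def imbalance(plan):
    """Absolute population difference between the two districts."""
    d0 = sum(p for p, d in zip(pops, plan) if d == 0)
    d1 = sum(p for p, d in zip(pops, plan) if d == 1)
    return abs(d0 - d1)

def local_refine(plan):
    # Greedy refinement: swap district labels between pairs of units
    # whenever the swap reduces imbalance; stop at a local optimum.
    plan = list(plan)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(plan)), 2):
            if plan[i] != plan[j]:
                before = imbalance(plan)
                plan[i], plan[j] = plan[j], plan[i]
                if imbalance(plan) < before:
                    improved = True
                else:
                    plan[i], plan[j] = plan[j], plan[i]  # revert the swap
    return tuple(plan)

# A (hypothetical) crowd-submitted plan serves as the starting value:
crowd_plan = (0, 1, 1, 1, 0, 0)      # imbalance: 30
refined = local_refine(crowd_plan)   # the algorithm improves on it
```

The refined plan is never worse than the crowd submission, and the submission anchors the search in a region that human participants -- drawing on local knowledge the algorithm lacks -- considered reasonable.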