Closs Enuf 4 Govment Work

Random processes should not be used for redistricting. Given identical census data, the same software run on any compatible computer should yield identical results. That way, all parties can verify that the results are not just something cooked up in a smoke-filled room. If the results from different computers aren’t precisely identical, careful analysis will reveal the cause of the discrepancy. This is better than an independent redistricting commission. A commissioner is likely to know that San Francisco is liberal and Orange County is conservative, and even dissecting the commissioner’s brain will not reveal whether that knowledge, or some other knowledge, improperly biased a decision.

There has always been a trade-off in computing between accuracy and speed. Double precision takes longer to calculate than single precision. Consider finding which of two points is closer to a center. If the distances to the two points are almost identical, greater precision is required to resolve which is closer. With 64-bit integers, the squares of the distances can be compared instead of the distances themselves, avoiding the square roots. Since I am using a borrowed six-year-old home computer, approximations and shortcuts will keep the running time reasonable. There is always the danger that not being precise enough will result in an incorrect comparison. A way to save the situation is to measure how close the two distances are. If they are too close to call, the calculation can be redone with higher precision and without so many approximations. Hopefully this will rarely happen. Still, measuring how close they are takes extra time, whether or not a recalculation follows. Until I actually run some software, I won’t know how much time any of this needs.
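As a rough illustration of that fast-path/slow-path idea, here is a minimal C sketch. It is not the SandBoxWalls code; the point layout, the guard band value, and the plain-double fallback are all assumptions made just for the example.

    #include <stdint.h>
    #include <stdlib.h>   /* llabs */

    /* Sketch only: each point keeps a rounded 64-bit integer projection for
     * the fast comparison and its original coordinates for the redo.  The
     * struct layout and the guard band are assumptions, not SandBoxWalls. */
    typedef struct {
        int64_t ix, iy;   /* rounded, projected coordinates (fast path)       */
        double  x,  y;    /* original coordinates (higher-precision fallback) */
    } point;

    static int64_t dist_sq_int(const point *p, const point *c)
    {
        int64_t dx = p->ix - c->ix;   /* coordinates assumed small enough    */
        int64_t dy = p->iy - c->iy;   /* that dx*dx + dy*dy cannot overflow  */
        return dx * dx + dy * dy;     /* exact in integers; no square root   */
    }

    static double dist_sq_dbl(const point *p, const point *c)
    {
        double dx = p->x - c->x;
        double dy = p->y - c->y;
        return dx * dx + dy * dy;
    }

    /* Returns 0 if a is closer to the center, 1 if b is. */
    int closer(const point *a, const point *b, const point *center)
    {
        const int64_t guard = 4;      /* "too close to call" threshold (a guess) */
        int64_t da = dist_sq_int(a, center);
        int64_t db = dist_sq_int(b, center);

        if (llabs(da - db) > guard)   /* clear winner from the cheap comparison */
            return da < db ? 0 : 1;

        /* Too close: redo the comparison from the original coordinates,
         * without the rounding and projection shortcuts. */
        return dist_sq_dbl(a, center) < dist_sq_dbl(b, center) ? 0 : 1;
    }

The guard band is the time trade-off in miniature: too narrow and an incorrect comparison can slip through; too wide and the slow path runs more often than it needs to.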

There is no cure for the worst-case scenario. More cosine terms could always be used, and more decimal places of pi could always be added. I didn’t like the idea of a hidden MSB bit, but IEEE 754 is crucial for the portability of software among computers. Still, there comes a point where the results should be accepted if they were fairly generated. In gambling lotteries, we analyze the fairness of the selection process, not the precision of the results. In gerrymandering, the end results justify any shady means.
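For anyone who has not stared at that hidden bit, here is a small, self-contained C illustration (not part of SandBoxWalls, and assuming the usual 64-bit IEEE 754 double): it unpacks the fields of a double to show that only 52 fraction bits are stored, with the leading 1 of the significand implied.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A 64-bit double stores sign (1 bit), exponent (11 bits), and only 52
     * fraction bits; for normal numbers the leading 1 is implied, not stored. */
    int main(void)
    {
        double d = 6.5;                    /* 6.5 = 1.101 (binary) * 2^2 */
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);    /* reinterpret the bytes safely */

        uint64_t sign     = bits >> 63;
        uint64_t exponent = (bits >> 52) & 0x7FF;        /* biased by 1023 */
        uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;   /* 52 stored bits */

        printf("sign=%llu exponent=%llu (unbiased %lld) fraction=0x%013llx\n",
               (unsigned long long)sign,
               (unsigned long long)exponent,
               (long long)exponent - 1023,
               (unsigned long long)fraction);

        /* For 6.5 the significand is 1.101; only the .101 part is stored,
         * so the fraction field prints as a000000000000. */
        return 0;
    }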

It is time for me to prepare for the incoming data. No, not the census block data; instead, my W-2s and 1099s must be collected. My taxes are unpredictable, as I haven’t yet figured out what Congress has done to us taxpayers this time. I will have to do my taxes before I can attempt to apply SandBoxWalls to Congress.
 

Some Census data will be released next month. I doubt I will be ready in time.


One Comment

  1. Posted May 14, 2011 at 8:39 am

    There _are_ some places I definitely don’t want randomized processes, like in airplanes and voting, but I’ve found them to be quite effective at redistricting. I view redistricting as an optimization process, with far too many variables. We may be able to define the qualities of a ‘best’ district mapping, but we might not be able to reliably find it. Also, the definition of ‘best’ does not necessarily imply a method of getting there. Many methods might be tried, and the one which yields the measurably best result is the winner. As an optimization problem, the method doesn’t matter. So, I have random starting conditions and randomization throughout my implementation in hopes of getting out of local minima. And after some CPU-months of churning I have pretty good results for most of the states.
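(For readers curious what the commenter is describing, here is a bare-bones C sketch of random-restart local search. The plan layout, the stub scoring function, and the loop counts are placeholders invented for the example, not the commenter’s implementation.)

    #include <float.h>
    #include <stdlib.h>

    #define BLOCKS    1000   /* hypothetical number of census blocks */
    #define DISTRICTS 10     /* hypothetical number of districts     */

    typedef struct { int district[BLOCKS]; } plan;

    /* Stand-in objective: population imbalance only, with every block given
     * population 1.  A real score would also weigh compactness, contiguity,
     * and so on.  Lower is better. */
    static double score(const plan *p)
    {
        int count[DISTRICTS] = {0};
        double bad = 0.0;
        for (int i = 0; i < BLOCKS; i++)
            count[p->district[i]]++;
        for (int d = 0; d < DISTRICTS; d++)
            bad += abs(count[d] - BLOCKS / DISTRICTS);
        return bad;
    }

    static void randomize(plan *p)               /* random starting conditions */
    {
        for (int i = 0; i < BLOCKS; i++)
            p->district[i] = rand() % DISTRICTS;
    }

    static void perturb(plan *p)                 /* randomization throughout */
    {
        p->district[rand() % BLOCKS] = rand() % DISTRICTS;
    }

    int main(void)
    {
        plan best, current, trial;
        double best_score = DBL_MAX;

        for (int restart = 0; restart < 100; restart++) {
            randomize(&current);
            double cur_score = score(&current);

            for (int step = 0; step < 10000; step++) {
                trial = current;
                perturb(&trial);
                double s = score(&trial);
                if (s < cur_score) {             /* keep only improvements */
                    current = trial;
                    cur_score = s;
                }
            }
            if (cur_score < best_score) {        /* measurably best run wins */
                best = current;
                best_score = cur_score;
            }
        }
        (void)best;                              /* best now holds the winner */
        return 0;
    }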
