Bridging the Partisan Divide

This article focuses on the methods and transparency behind this site's electoral projections. If you have ever questioned the impartiality of our mathematical methods, read on. We are able to separate fact from opinion.

The ultimate purpose of these projections is to predict what will happen in the future based on publicly available information. If personal opinion is injected into a mathematical model, the model quickly degenerates into a personal opinion in disguise. The central problem arises when a mathematical facade is used to promote one's own agenda despite the stated intent; that kind of deception is wrong.

My opinion may differ from yours, and you may think I'm biased; that is fine. But my electoral projections are not a matter of opinion: you may question the methodology, yet the result follows mechanically from it and cannot be disputed independently of it.

We provide a level of transparency about our calculations beyond that of any other projection site. Our methodology should supply the details necessary to duplicate our results, but here is an additional resource. Using KyPlot (Download KyPlot 2 B15) and MathCad, I have compiled sample calculation worksheets that validate our approach against polling data from Iowa. The image below shows the output of a Local Regression fit in KyPlot using our parameters. Notice that the end points (in the thick black rectangles) exactly match our results.

Obama Iowa Table

Our polling graph for Iowa, taken on October 4th; observe that our projection matches the one presented in the KyPlot screenshot above.

Iowa Oct 4

The KyPlot file used to create this table is available for download; screenshots taken directly from KyPlot are available for McCain and Obama.

The MathCad worksheet produces an identical result but provides a more thorough walkthrough of the mathematics behind Local Regression. The files for both approaches are available for download: the KyPlot file offers a quicker, easier way to validate the numbers, while the MathCad worksheet offers a more in-depth analysis.
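The Local Regression (LOESS) technique behind these worksheets can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the exact KyPlot or MathCad algorithm: I am assuming the bandwidth counts the nearest polls used in each local window, that each local fit is a weighted polynomial of the given degree, and that standard tricube weights apply.

```python
import numpy as np

def loess_point(x0, x, y, bandwidth, degree):
    """Evaluate a locally weighted polynomial fit at x0.

    bandwidth: number of nearest observations in the local window
               (an assumption about how the parameter is defined).
    degree:    polynomial degree of each local fit.
    """
    # distance from the evaluation point to every observation
    d = np.abs(x - x0)
    # keep only the `bandwidth` nearest points
    idx = np.argsort(d)[:bandwidth]
    dmax = d[idx].max()
    # tricube weights, the standard LOESS weighting scheme
    w = (1.0 - (d[idx] / dmax) ** 3) ** 3
    # np.polyfit minimizes sum((w_i * residual_i)^2), so pass sqrt(w)
    coeffs = np.polyfit(x[idx], y[idx], deg=degree, w=np.sqrt(w))
    return np.polyval(coeffs, x0)
```

Evaluating `loess_point` at the date of the most recent poll yields the trendline endpoint, which is the kind of value the thick black rectangles highlight in the KyPlot screenshot above.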

There should now be absolutely no question about the accuracy of our projection results; if only other sites provided such transparency.

RealClearPolitics provides polling projections based on simple averages. Their results are very easy to check, but the method by which they collate data is questionable: no publicly available document explicitly lists their criteria for including a poll in an average. It took me just thirty seconds of perusing their state tables to find an inconsistency, and comparing the Virginia, Minnesota, and Michigan pages highlights it. On the Minnesota page, four polls are included in the average, and the first excluded poll (showing Obama with a two-point lead) has an end date of 9/17. The Virginia page includes five polls, with the first exclusion ending on 9/25. Michigan deviates further still, with eight polls included and the first excluded poll ending on 9/21. Judging by these three states there is no discernible pattern; I'm not saying one doesn't exist, but I have no idea what it is. Until this information is published, the quality of RealClearPolitics' averages should be questioned.

Another site includes an excellent methodology page, but neglects to provide the bandwidth or degree of its Local Regression method. The bandwidth determines the tightness of fit: a very low bandwidth bends the trendline toward the most recent data, while a high bandwidth approximates a more gradual trend over an older subset of polling data. The bandwidth could therefore be altered on a state-by-state basis to tailor the result to a specific agenda. The degree has a negligible effect on the result, but it is vitally important for duplicating one. Our site uses a bandwidth of 15 and a degree of 3 for all states, with some exceptions.

That site also computes win probability from 10,000 simulations using a mathematically valid Monte Carlo method. The problem stems from the relatively small number of simulations. In my experience, 10,000 simulations provide reasonable accuracy when applied to 51 events, but by no means an authoritative result: the output itself is random, though it stays within a reasonable window of accuracy. I may be wrong about the convergence of the simulation, and 10,000 may be enough, but the site has never directly addressed the question. To eliminate this issue entirely, our projections use the Cumulative Distribution Function to arrive at an exact result.

A third site provides the service most similar to our own but, like the others, does not disclose its bandwidth or degree. It offers very little transparency about its methods, as its FAQ page is still under construction. Overall, it suffers from nearly the same shortcomings as the sites above.
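The contrast between a Monte Carlo estimate and an exact distribution-based answer can be demonstrated directly. The electoral-vote counts and per-state win probabilities below are made-up illustrative numbers, not real projections, and the exact method shown (convolving per-state electoral-vote distributions) is one standard way to realize the CDF idea described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy map: per-state electoral votes and the candidate's
# win probability in each state (illustrative numbers only).
ev = np.array([55, 29, 38, 20, 10, 16, 11, 6, 13, 9])
p  = np.array([0.95, 0.60, 0.20, 0.55, 0.85, 0.50, 0.40, 0.30, 0.52, 0.48])
needed = ev.sum() // 2 + 1  # majority threshold for this toy map

def mc_win_prob(n_sims):
    """Monte Carlo estimate: simulate independent state outcomes."""
    wins = rng.random((n_sims, len(p))) < p       # state-by-state outcomes
    totals = wins.astype(np.int64) @ ev           # electoral votes per sim
    return (totals >= needed).mean()

def exact_win_prob():
    """Exact probability via the distribution of total electoral votes."""
    # dist[k] = probability of holding exactly k electoral votes so far
    dist = np.zeros(ev.sum() + 1)
    dist[0] = 1.0
    for votes, prob in zip(ev, p):
        new = dist * (1.0 - prob)                           # state lost
        new[votes:] += dist[:len(dist) - votes] * prob      # state won
        dist = new
    return dist[needed:].sum()
```

At N = 10,000 the Monte Carlo estimate carries a standard error of roughly sqrt(p(1-p)/N), about half a percentage point when the race is close; that is the "reasonable window of accuracy" described above, and the convolution removes that randomness entirely.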

As a final comment: I do not necessarily believe that any of these sites manipulate their results to achieve a certain end. I am simply stating that, given the lack of transparency, it is impossible to know.

