Prediction models suffered from narrow data, faulty algorithms and human foibles

Discussion in 'Politics' started by Jerkstore, Nov 15, 2016.

  1. http://home.ubernerdz.com/index.php/2016/11/15/bad-election-day-forecasts-deal-blow-to-data-science/

    It seems the Wall Street Journal has learned to write headlines from the tabloid news networks, which shall here remain unnamed. You picture some evil giant called Big Data getting deservedly pummeled by a handsome young hero. To quote a favorite phrase of the president-elect … “Wrong”.

    A couple of reasons why:

    1) A fool with a tool is still a fool. This election was unprecedented for negative rhetoric. Real issues facing the American people were brushed aside in favor of each candidate explaining why the other was unsuited for the job. Issue-based algorithms were at a disadvantage. The best approach for issue-based predictive analysis would have been to step back and admit that a fresh look was required. Of course some analysts did just that, but the majority were committed to a particular software package and were unable or unwilling to make radical modifications.

    2) A few forecasters had better results. The USC/LA Times Daybreak poll and the IBD/TIPP presidential tracking poll both got it right. Obviously they were doing something different from the large group that got it wrong. But was it better? It is tempting to evaluate predictors by how well a single prediction performed. Tempting and wrong. Compare solving a puzzle to making a prediction. A puzzle has a solution, and if you find it you got it right. A prediction (like many decisions in life) carries a probability rather than a clear solution. A forecaster could say that a particular candidate has an 80% chance of winning. Even if that probability is accurate, there is still a one-in-five chance that the underdog wins. If you only ever see the one unhappy result, it is easy to condemn the predictor and whatever technique was used. Too simple, and unsound. The sketch below makes this concrete.
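    To see why, here is a minimal Python sketch with made-up numbers. It simulates a forecaster whose 80% calls are exactly calibrated, counts how often the underdog still wins, and scores the forecasts with a Brier score (squared error between stated probability and outcome), which is how you judge a forecaster over many predictions rather than one:

        import random

        random.seed(2016)

        def brier(prob, outcome):
            # Squared error between the forecast probability and what happened (1 = favorite won).
            return (prob - outcome) ** 2

        # 1,000 hypothetical races in which the favorite truly has an 80% chance of winning.
        outcomes = [1 if random.random() < 0.80 else 0 for _ in range(1000)]

        upsets = outcomes.count(0)
        honest = sum(brier(0.80, o) for o in outcomes) / len(outcomes)
        cocky = sum(brier(0.98, o) for o in outcomes) / len(outcomes)

        print(f"Underdog won {upsets} of 1000 races")               # roughly 200, as the 80% forecast implies
        print(f"Honest 80% forecaster, mean Brier: {honest:.3f}")   # about 0.16 (lower is better)
        print(f"Overconfident 98% forecaster:      {cocky:.3f}")    # about 0.19, worse despite sounding surer

    Notice that the honest forecaster “misses” about 200 races yet still scores better than the overconfident one. That is exactly why a single miss tells you almost nothing about the quality of the method.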

    The WSJ’s title was clearly attention-seeking. I guess that sells newspapers. The article itself was fine and did eventually make a solid point, albeit not the one alluded to in the headline. The polling questions being asked around the country seemed to give misleading results. How do you phrase a question so as to get more accurate answers? That is a puzzle for the psychologists to solve, not the data scientists.
     
  2. achilles28

    What a load of horseshit.

    Anyone who followed this election knows the polling methodology routinely oversampled women, minorities, Democrats, and Hispanics while undersampling Republicans, white men, and independent voters.
     
  3. Greenie

    Is Nate Silver out of a job now? He got Brexit wrong and said Clinton had a 70% chance of winning. I guess that's better than the HuffPo model, which put Clinton at 98%. ROFL
     
  4. jem

    Nate's website is owned by ESPN. He had to slant toward Hillary... but if you read between the lines the last week, he was letting us know Trump could win.

    The left got so pissed at him for breaking ranks that they tried to take him down. He called them out as idiots in an almost epic Twitter fight.

    I hope he sticks around. I find his writing useful if run through the proper filters to remove the forced leftist bias.
     
  5. achilles28

    Ya, where's Tony Stark?

    That idiot posted Nate Silver predictions for six months straight before Trump crushed Hillary. What a joke.
     
  6. achilles28

    WikiLeaks confirmed this.

    Democrats were caught red-handed telling pollsters to undersample Republican-leaning demographic groups and oversample Democratic-leaning ones.
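    For what it's worth, the mechanics being alleged are simple arithmetic. Here is a minimal Python sketch with entirely hypothetical numbers: if a sample over-represents one party's voters relative to the electorate, the raw topline shifts, and weighting each group back to its electorate share is what corrects it.

        # Hypothetical support for a candidate within each group (made-up numbers).
        support = {"dem": 0.90, "rep": 0.10, "ind": 0.45}

        # Assumed electorate shares vs. a sample that over-represents Democrats.
        electorate = {"dem": 0.33, "rep": 0.33, "ind": 0.34}
        sample = {"dem": 0.40, "rep": 0.25, "ind": 0.35}

        raw = sum(sample[g] * support[g] for g in support)
        weighted = sum(electorate[g] * support[g] for g in support)

        print(f"Raw topline from skewed sample: {raw:.1%}")      # about 54%
        print(f"Weighted to electorate shares:  {weighted:.1%}") # about 48%

    A seven-point skew in sample composition moves the topline by about six points here. Whether any given pollster's weights were honest is exactly the dispute.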