Trumpers should read this

Discussion in 'Politics' started by Cuddles, Feb 16, 2021.

  1. Tony Stark

    Yet Obama leads Trump by 16 million when it comes to job creation.


     
    Last edited: Feb 17, 2021
    #41     Feb 17, 2021
  2. Pollsters were off by 600 to 700 basis points, even worse than in 2016, and Biden benefited from roughly 300 basis points of net voter suppression caused by that polling bias, which alone resulted in Trump unfairly losing the election. I’m not going to look up the university study on this subject again because it is not my main point.

    My main point is this: the federal debt, regardless of who or what is responsible for creating it, is now on Biden’s plate. Biden’s only chance to navigate this treacherous situation is to enlist the cooperation of all businesses and consumers. Maintaining high business and consumer confidence is key, and the way to maintain it is for the Biden administration to lead for all of us and not be too partisan. In fact, it would be wise for Biden to pressure the Democrat Party to stop the persecution of Trump, as the confidence and cooperation of a large number of Trump-supporting businesses and consumers may hang in the balance.

    A country’s debt level above 100% of GDP has historically been an area of concern, with increasingly severe economic problems manifesting as debt rises further beyond that threshold. The last reported US government debt-to-GDP estimate was 127.2%. While the US dollar is considered a reserve currency, at least for now, that status will not make us impervious to monetary debasement. Should the US effectively lose reserve status, even if it is not lost officially, inflation in the US will soar.

     
    #42     Feb 17, 2021
  3. I don’t recall the world shutting down during Obama’s terms. Do you?
     
    #43     Feb 17, 2021
  4. I see USA Today impeached itself here: a mismatched basis of comparison. Put them next to CNN for misleading reporting.

    Edit: Obama’s job numbers reflected recovery numbers, which are easier to obtain than real employment growth once the economy nears full employment, as it did under Trump, making Trump’s performance all the more impressive. Then the virus hit.

    Pretty disingenuous to compare post-virus, world-shutdown job numbers to numbers that did not include that extraordinary event.

    Creating a business friendly environment leads to higher employment, which in turn leads to more confident consumers, which leads to increased consumer spending, which leads to more business hiring, spending, and increases in tax revenues.

    The reverse can also be true: Higher taxes and regulations lead to businesses cutting back on spending and hiring, leading to lower consumer confidence, leading to lower spending and business confidence, leading to layoffs, and so on.

    Are you grasping any of this?
     
    Last edited: Feb 17, 2021
    #44     Feb 17, 2021
  5. Tony Stark


    (images attached)
     
    #45     Feb 17, 2021
  6. Tony Stark


    (image attached)
     
    #46     Feb 17, 2021
  7. Tony Stark


    Yes, Trump left a $3 trillion+ deficit on Biden’s plate (after Obama left him only a $600 billion deficit).

    Like Clinton and Obama, Biden will lower the deficit rather than raise it, as Bush and Trump did.
     
    #47     Feb 17, 2021
  8. Tony Stark


    No, just two pandemics and Bush’s Great Recession, and he still managed to create 12 million jobs and cut the deficit in half.

    Obama’s official numbers would be higher if the country hadn’t been losing 500,000 jobs a month the minute he took office.
     
    #48     Feb 17, 2021
  9. Tony Stark

    Those are official federal government numbers, homie.
     
    #49     Feb 17, 2021
  10. OK, you asked for it. Attached below is a university article on the unreliability of polls and a related article from Scientific American:

    HOW DID THE POLLS GET IT SO WRONG...AGAIN?

    It may well be days, if not weeks, before the winner of the 2020 presidential race is decided, but one clear lesson from Tuesday night’s election results is that pollsters were wrong again. It’s a “giant mystery” why, says Nick Beauchamp, assistant professor of political science at Northeastern.

    There are a number of possible explanations, Beauchamp says. One is that specific polls undercounted the extent to which certain demographics—such as Hispanic voters in specific states—shifted toward President Trump.

    [​IMG]
    Nick Beauchamp, assistant professor of political science. Photo Matthew Modoono/Northeastern University

    Another is that, just as in 2016, polls undercounted hard-to-reach voters who tend to be less educated and more conservative. Beauchamp is less convinced that “shy” Trump voters deliberately misrepresented their intentions to pollsters.

    “Whatever the cause, it has huge implications not just for polling, but for the outcome of the presidential and Senate elections,” Beauchamp says. “If the polls have been this wrong for months, since they have been fairly stable for months, that means that campaign resources may have also been misallocated.”

    Beauchamp pointed to a tweet by political pollster Josh Jordan, which showed just how much Trump over-performed the FiveThirtyEight averages in nine swing states.

    In Ohio, for example, he ran seven points better. In Wisconsin, it was eight points.

    “Trump over-performed relative to the polls in these states by a median of 6 points,” says Beauchamp. “That’s a shockingly large error, though in other states it may have been smaller.”
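    The arithmetic behind that quoted median is simple to sketch. The state figures below are hypothetical approximations for illustration only, not the actual data from Jordan’s tweet:

    ```python
    # Over-performance = final margin minus polling-average margin (in points);
    # the quoted statistic is the median of that error across swing states.
    # All numbers below are hypothetical approximations for illustration.
    from statistics import median

    # state -> (final Trump margin, FiveThirtyEight average margin)
    margins = {
        "Ohio":      (8.0, 1.0),    # ran ~7 points better than the average
        "Wisconsin": (-0.6, -8.6),  # ran ~8 points better
        "Iowa":      (8.2, 1.2),    # ran ~7 points better
    }

    over_performance = {
        state: final - polled for state, (final, polled) in margins.items()
    }
    print(median(over_performance.values()))  # 7.0
    ```

    The median is used rather than the mean so a single outlier state cannot dominate the summary of the error.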

    This year’s polling errors, Beauchamp said, were “enormous” even compared to 2016, when polls failed to predict Trump’s defeat of Democratic nominee Hillary Clinton.

    Indeed, just days before the presidential contest, FiveThirtyEight founder Nate Silver predicted that Biden was slightly favored to win Florida and its 29 Electoral College votes. The race was called on election night with Trump comfortably ahead.

    “I think Nate Silver and the other pollsters are saying ‘Well that’s just within the bounds of systematic polling,’ but it seems awfully large to me for just that,” Beauchamp says.

    Now, election watchers and media leaders are questioning the value of polling overall. Washington Post media columnist Margaret Sullivan wrote that “we should never again put as much stock in public opinion polls, and those who interpret them, as we’ve grown accustomed to doing.”

    “Polling,” she wrote, “seems to be irrevocably broken, or at least our understanding of how seriously to take it is.”

    Polling misses aren’t unique to the United States, Beauchamp points out. Polls also failed to predict the 2015 elections in the United Kingdom, as well as the UK’s 2016 “Brexit” vote to exit the European Union.

    In those cases, as with the 2016 US election, Beauchamp said, pollsters “made these mistakes, which are relatively small, but in the same direction and with fairly significant effects.”

    To avoid a repeat of 2016 and 2020 in the United States, Beauchamp says, pollsters should shift their tactics—and perhaps attach different weights to factors they’re trying to measure, such as social distrust or propensity to be a non-voter.

    “Hopefully they’re going to start modeling all of that information as a way to better capture these issues in voters,” Beauchamp says.

    https://news.northeastern.edu/2020/...ction-even-after-accounting-for-2016s-errors/




    Why Polls Were Mostly Wrong

    Princeton’s Sam Wang had to eat his words (and a cricket) in 2016. He talks about the impacts of the pandemic and QAnon on public-opinion tallies in 2020


    In the weeks leading up to the November 2016 election, polls across the country predicted an easy sweep for Democratic nominee Hillary Clinton. From Vanuatu to Timbuktu, everyone knows what happened. Media outlets and pollsters took the heat for failing to project a victory for Donald Trump. The polls were ultimately right about the popular vote. But they missed the mark in key swing states that tilted the Electoral College toward Trump.

    This time, prognosticators made assurances that such mistakes were so 2016. But as votes were tabulated on November 3, nervous viewers and pollsters began to experience a sense of déjà vu. Once again, more ballots were ticking toward President Trump than the polls had projected. Though the voter surveys ultimately pointed in the wrong direction for only two states—North Carolina and Florida, both of which had signaled a win for Joe Biden—they incorrectly gauged just how much of the overall vote would go to Trump in both red and blue states. In states where polls had favored Biden, the vote margin went to Trump by a median of 2.6 additional percentage points. And in Republican states, Trump did even better than the polls had indicated—by a whopping 6.4 points.

    Four years ago, Sam Wang, a neuroscience professor at Princeton University and co-founder of the blog Princeton Election Consortium, which analyzes election polling, called the race for Clinton. He was so confident that he made a bet to eat an insect if Trump won more than 240 electoral votes—and ended up downing a cricket live on CNN. Wang is coy about any plans for arthropod consumption in 2020, but his predictions were again optimistic: he pegged Biden at 342 electoral votes and projected that the Democrats would have 53 Senate seats and a 4.6 percent gain in the House of Representatives.


    Scientific American recently spoke with Wang about what may have gone wrong with the polls this time around—and what bugs remain to be sorted out.

    [An edited transcript of the interview follows.]

    How did the polling errors for the 2020 election compare with those we saw in the 2016 contest?

    Broadly, there was a polling error of about 2.5 percentage points across the board in close states and blue states for the presidential race. This was similar in size to the polling error in 2016, but it mattered less this time because the race wasn’t as close.

    The main thing that has changed since 2016 is not the polling but the political situation. I would say that worrying about polling is, in some sense, worrying about the 2016 problem. And the 2020 problem is ensuring there is a full and fair count and ensuring a smooth transition.


    Still, there were significant errors. What may have driven some of those discrepancies?

    The big polling errors in red states are the easiest to explain because there’s a precedent: in states that are historically not very close for the presidency, the winning candidate usually overperforms. It’s long been known turnout is lower in states that aren’t competitive for the presidency because of our weird Electoral College mechanism. That effect—the winner’s bonus—might be enhanced in very red states by the pandemic. If you’re in a very red state, and you’re a Democratic voter who knows your vote doesn’t affect the outcome of the presidential race, you might be slightly less motivated to turn out during a pandemic.

    That’s one kind of polling error that I don’t think we need to be concerned about. But the error we probably should be concerned about is this 2.5-percentage-point error in close states. That error happened in swing states but also in Democratic-trending states. For people who watch politics closely, the expectation was that we had a couple of roads we could have gone down [on election night]. Some states count and report votes on election night, and other states take days to report. The polls beforehand pointed toward the possibility of North Carolina and Florida coming out for Biden. That would have effectively ended the presidential race right there. But the races were close enough that there was also the possibility that things would continue. In the end, that’s what happened: we were watching more counting happen in Pennsylvania, Michigan, Wisconsin, Arizona and Nevada.


    How did polling on the presidential race compare with the errors we saw with Senate races this year?

    The Senate errors were a bigger deal. There were seven Senate races where the polling showed the races within three points in either direction. Roughly speaking, that meant a range of outcomes for between 49 and 56 Democratic seats. A small polling miss had a pretty consequential outcome because every percentage point missed would lead to, on average, another Senate seat going one way or the other. Missing a few points in the presidential race was not a big deal this year, but missing by a few points in Senate races mattered.


    What would more accurate polling have meant for the Senate races?

    The real reason polling matters is to help people determine where to put their energy. If we had a more accurate view of where the races were going to end up, it would have suggested political activists put more energy into the Georgia and North Carolina Senate races.

    And it’s a weird error that the Senate polls were off by more than the presidential polls. One possible explanation would be that voters were paying less attention to Senate races than presidential races and therefore were unaware of their own preference. Very few Americans lack awareness of whether they prefer Trump or Biden. But maybe more people would be unaware of their own mental processes for say, [Republican incumbent] Thom Tillis versus [Democratic challenger] Cal Cunningham [in North Carolina’s Senate race]. Because American politics have been extremely polarized for the past 25 years, people tend to [end up] voting [a] straight ticket for their own party.

    Considering that most of the polls overestimated Biden’s lead, is it possible pollsters were simply not adequately reaching Trump supporters by phone?

    David Shor, a data analyst [who was formerly head of political data science at the company Civis Analytics], recently pointed out the possibility that people who respond to polls are not a representative sample. They're pretty weird in the sense that they’re willing to pick up the phone and stay on the phone with a pollster. He gave evidence that people are more likely to pick up the phone if they’re Democrats, more likely to pick up under the conditions of a pandemic and more likely to pick up the phone if they score high in the domain of social trust. It’s fascinating. The idea is that poll respondents score higher on social trust than the general population, and because of that, they’re not a representative sample of the population. That could be skewing the results.


    This is also related to the idea that states with more QAnon followers experienced more inaccurate polling. The QAnon belief system is certainly correlated with lower social trust. And those might be people who are simply not going to pick up the phone. If you believe in a monstrous conspiracy of sex abuse involving one of the major political parties of the U.S., then you might be paranoid. One could not rule out the possibility that paranoid people would also be disinclined to answer opinion polls.

    In Florida’s Miami-Dade County, we saw a surprising surge of Hispanic voters turning out for Trump. How might the polls have failed to take into account members of that demographic backing Trump?

    Pollsters know Hispanic voters to be a difficult-to-reach demographic. In addition, Hispanics are also not a monolithic population. If you look at some of the exit polling, it looks like Hispanics were more favorable to Trump than they were to Clinton four years ago. It’s certainly possible Hispanic support was missed by pollsters this time around.

    Given that the presidential polls have been off for the past two elections, how much attention should people pay to polls?

    I think polling is critically important because it is a way by which we can measure public sentiment more rigorously than any other method. Polling plays a critical role in our society. One thing we shouldn’t do is convert polling data into probabilities. That obscures the fact that polls can be a few points off. And it’s better to leave the reported data in units of opinion [as a percentage favoring a candidate] rather than try to convert it to a probability.


    It’s best not to force too much meaning out of a poll. If a race looks like it’s within three or four points in either direction, we should simply say it's a close race and not force the data to say something they can’t. I think pollsters will take this inaccuracy and try to do better. But at some level, we should stop expecting too much out of the polling data.
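    A toy calculation illustrates Wang’s point about converting margins into probabilities. Assuming polling error is normally distributed (the 2.5-point standard deviation below is an assumption borrowed from the error sizes discussed above, not a real forecasting model), even a modest lead turns into a lopsided-looking win probability:

    ```python
    # Sketch of why margin -> probability conversion obscures polling error.
    # Assumes the true margin ~ Normal(polled lead, error sd); both parameters
    # are illustrative assumptions, not a real forecasting model.
    from math import erf, sqrt

    def win_probability(lead_pts: float, error_sd_pts: float) -> float:
        """P(true margin > 0) under a normal model of polling error."""
        return 0.5 * (1 + erf(lead_pts / (error_sd_pts * sqrt(2))))

    # A 3-point polled lead with a 2.5-point historical polling error:
    p = win_probability(3.0, 2.5)
    print(f"{p:.0%}")  # prints "88%"; looks decisive for a race well inside the error
    ```

    Reporting the 3-point margin itself, flagged as a close race, conveys the uncertainty that the 88% figure hides.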

    ABOUT THE AUTHOR(S)
    Gloria Dickie
    Gloria Dickie is a freelance journalist who writes on science and the environment. Her work appears in the Guardian, National Geographic, Science News, Wired Magazine and Public Radio International.
    https://www.scientificamerican.com/article/why-polls-were-mostly-wrong/



     
    #50     Feb 17, 2021