Summary

  • News media, pundits, political campaigns and voters look to polling results as a guide to understanding how elections will play out. Candidates gain momentum, adjust messages and make spending decisions based on polls; voters may base decisions to contribute to a campaign based on polls; and journalists cover the polls as much as the candidates themselves. Who is behind the polling? What happens when the polls get it wrong? Are the people participating in the polls really representative of the voting public? The FPC hosts a briefing with author and media critic Dr. W. Joseph Campbell, a professor of communication at American University who discusses the evolution of election polling, the challenges that pollsters face and the effects of polls in the run up to the November elections.

THE WASHINGTON FOREIGN PRESS CENTER, WASHINGTON, D.C. 

MODERATOR:  So good morning, everyone.  My name is Doris Robinson, and I am a media relations officer at the Washington Foreign Press Center.  I am also the moderator for today’s on-the-record briefing.  Today’s briefing is on “Gauging Public Opinion: Polling in an Election Year.”  Our briefer today is Dr. Joseph Campbell.  He is a tenured full professor in American University’s School of Communication, Communication Studies Program.  He joined the AU faculty in 1997 after some 20 years as a professional journalist.  Assignments in his award-winning journalism career took him across North America to Europe, West Africa, and parts of Asia.  Campbell is the author of seven solo-authored books, including most recently, Lost in a Gallup: Polling Failure in U.S. Presidential Elections.  The book addresses prominent cases in which opinion polls misfired in elections from 1936 to 2016 and makes the point that polling failures often correlate with journalists’ failures. 

Okay.  And now for the ground rules.  This briefing is on the record, and the views of today’s briefers do not represent the views of the U.S. Government.  Dr. Campbell will give opening remarks, and then we will open it up for questions.  Please remain muted until I call on you for questions.  And with that, I will turn it over to Dr. Campbell. 

MR CAMPBELL:  Great, Doris.  Thank you very much, and thanks to everyone joining this briefing today.  I’m going to show a brief PowerPoint presentation and then we’ll open it up to Q&A.  So let’s get going with the PowerPoint. 

(A PowerPoint presentation is shown.) 

MR CAMPBELL:  So polling in an election year, and I offer some context, takeaways, and a few suggestions about reporting on polls in an election year.  And we’ll consider a few but certainly not all the cases of polling failure in U.S. presidential elections, and then we’ll discuss a few reminders and takeaways about election polls.  And I’ll also offer a few suggestions about what to look for and where to turn for resources about covering elections and covering polls, and then we’ll go to, as they say, a Q&A. 

The briefing is – as Doris mentioned, my book, my most recent book is Lost in a Gallup: Polling Failure in U.S. Presidential Elections, and the briefing today is drawn in part on that most recently published book.  And in particular, we’re going to take a look at four cases of polling failure, and we’ll do this very briefly, and those are cases from 1948, 1980, 2012, and of course, 2016. 

So let’s start with 1948, and we’ll go in chronological order.  1948 was the year of an epic polling failure, the so-called “Dewey Defeats Truman” failure, which takes its name from the famous Chicago Tribune headline that appeared the day after the election proclaiming that Thomas E. Dewey had defeated incumbent President Harry Truman.  Pollsters had forecast just such a victory – that Dewey was going to win easily – and all pundits and most journalists believed that as well.   

And so did Thomas Dewey.  He ran what can be called a glide-path campaign, in the sense that he took no risks and made no attempt to shake up the race.  He took what he thought was going to be a glide path to the presidency.  In fact, during the campaign he told a close aide, “When you’re ahead, don’t talk.” 

Of course, Harry Truman won the election in 1948 by 4.5 percentage points.  And the shock in 1948 was probably deeper than that in 2016, because just about everyone thought that Harry Truman had no chance of winning re-election, and certainly not by the margin that he did, 4.5 points.  In the aftermath, journalists criticized themselves for having relied too heavily on polls, saying that they had, in effect, delegated their legwork – an important function – to the pollsters. 

So what went wrong in 1948 and are there any enduring lessons from that election?  One of those lessons, and it’s a lesson that pollsters still have trouble remembering and applying, is that it’s important to continue the polling right up to Election Day, to as close to Election Day as you possibly can.  And don’t end polling, as they did in 1948, weeks in advance, weeks before the election. 

Another takeaway from that election was that overconfidence is a reality, and many, many Republican voters decided there was no need to go to the polls that year because Thomas Dewey was a shoo-in, that he was going to win easily; there was no need for them to go and vote.  And voter turnout in 1948 dropped off dramatically from previous years, and part of that was due to overconfidence by the Republicans. 

And another factor in 1948 was that undecided voters broke heavily toward the end of the campaign for Harry Truman as well as supporters of a third-party candidate, a Progressive Party candidate named Henry Wallace.  And his support dwindled dramatically as the election approached, and many of those supporters of Henry Wallace, the Progressive Party candidate, went to Harry Truman.   

So by not polling to the end of the election, pollsters did themselves no favor. 

Let’s jump ahead to 1980, another case of polling failure.  And by then, by 1980, major news organizations, including The New York Times, CBS News, and others, had themselves gotten into polling.  They were conducting election-time surveys because it was recognized as a newsworthy undertaking. 

And the polls in 1980 suggested an eyelash-close election between the incumbent, President Jimmy Carter, and his Republican rival, Ronald Reagan.  The New York Times-CBS poll at the end had a one-point lead for Ronald Reagan.  So too did the Gallup Organization poll.  They thought it was going to be really very close, almost too close to call. 

And in the end, Ronald Reagan won in a landslide, by nearly 10 percentage points – a landslide that no pollster had foreseen.  This outcome in 1980 was so shocking that pollsters quarreled openly among themselves about who was to blame and why they did not detect the landslide in the making.  And the article to my left indicates that that squabbling spilled over into the public realm. 

So what went wrong in 1980?  Again, there was evidence that pollsters failed to conduct their polls up to and very close to Election Day.  Some of them ended on the Friday or Thursday before the election.  That’s four or five days before the voting, and that was a major misstep.   

Also, there were a number of developments toward the end of the 1980 campaign that tended to favor Ronald Reagan.  The only debate between Carter and Reagan in 1980 came during the last week of the campaign, and in the debate Ronald Reagan clearly outperformed the president.  That was enough to reassure many voters that Ronald Reagan, a former actor, was going to be up for the job of president. 

Also, there were developments on the weekend before the election, including movement in Iran toward what looked like the release of U.S. diplomatic personnel who had been held hostage there for more than a year.  That movement ended without any release of the hostages, but it reminded people of the greatest foreign policy failing of Jimmy Carter, and that no doubt contributed to the landslide proportions of Reagan’s victory. 

And a final factor in 1980 was the likelihood of a “shy Reagan” vote.  By that I mean people did not tell pollsters whom they really favored, because Ronald Reagan, a former actor, did not have the best of reputations during the campaign.  Many people, for socially desirable reasons, said they had not made up their minds, that they were undecided, or that they weren’t going to vote, when in fact they planned to vote for Reagan all along.  So there may have been – may have been – a shy Reagan dimension to the vote in 1980. 

Jumping ahead several years to 2012: this was Barack Obama’s re-election campaign against Mitt Romney, and the most venerable, most recognized, most internationally renowned polling organization, the Gallup Organization, consistently throughout the 2012 election had Romney ahead of President Obama.  At the very end of the campaign, Gallup said the lead was very narrow but Romney was ahead by one point.  They consistently overestimated Romney’s support, and Obama won that election by nearly four percentage points.  It was an embarrassing turn of events for Gallup, a venerable polling organization that began in the mid-1930s.  And after the 2012 election, the Gallup Organization has never polled an election again.  They left election polling. 

Also in 2012, the poll-based data journalist Nate Silver and his FiveThirtyEight prediction model called the outcome in all 50 states correctly, thereby sealing his reputation as an election oracle of sorts.  It confirmed the rise of data journalism, in which you could take polling data and other information, run it through a model, and make predictions about the outcome of the election.  And as I say, Nate Silver in 2012 got all 50 states correct, an impressive feat. 

And then in 2016, Nate Silver’s model failed, as did many other poll-based prediction models, and this was the election that was not supposed to happen, that of Donald Trump.  Trump won the Electoral College but lost the popular vote to Hillary Clinton.  Where the forecasts of Nate Silver and others went awry was that they relied heavily on state polls in places such as Wisconsin, Minnesota, Michigan, and Pennsylvania.  And in three of those states – Wisconsin, Michigan, and Pennsylvania – the polls indicated that Hillary Clinton was well ahead.  In fact, Wisconsin had Clinton ahead by six percentage points on the eve of the election, and Trump won all those states very narrowly, by less than one percentage point.  Those victories, together with the other states he won, were enough for Trump to win an Electoral College victory even though, as I say, he lost the popular vote. 

So what went wrong in 2016?  Well, these state polls went wrong, especially in Wisconsin, Michigan, and Pennsylvania, and to a lesser extent North Carolina, Florida, and Ohio.  They predicted Hillary Clinton would win, particularly in Wisconsin, Pennsylvania, and Michigan.  And they apparently did not continue polling right to the end of the campaign – another example of this polling failure.  You would think that pollsters would have learned by now the importance of continuing to poll right to the end of the campaign. 

Also, it was pretty clear that many voters in the swing states, those key states – Michigan, Wisconsin, Pennsylvania – and elsewhere swung to Donald Trump late in the campaign.  In the closing days of the campaign, he picked up more undecided votes than Hillary Clinton did.  And also, the third-party candidate, the Libertarian Party candidate, lost a lot of votes in the last weeks of the election, and many of those votes went to Donald Trump. 

And there may have been – although many polling analysts dispute this – but there may have been a shy Trump vote in 2016.  In other words, people, for socially desirable reasons, would not tell pollsters that they were supporting Donald Trump, even when privately they were.  And the shy Trump phenomenon has been identified by some people as an explanation for the surprise victory in 2016 of Donald Trump.  As I say, though, most polling analysts have not found compelling evidence of a shy Trump phenomenon, but it may be there and it may re-emerge in 2020. 

So what do we take away from this brief rundown of polling failure in presidential elections?  One takeaway is that it is very rare for a presidential election campaign not to produce a polling controversy of one kind or another.  Another is that there is more than one type of polling failure – there are multiple ways in which polls can go bad, multiple ways in which polls can fail.  As we’ve seen in this brief PowerPoint, there have been epic polling failures such as that in 1948, the “Dewey Defeats Truman” election.  Another type is the landslide that pollsters do not detect in advance, that is unforeseen to pollsters. 

We’ve seen a case, the Gallup Organization in 2012, in which a venerable pollster gets it badly wrong and, in Gallup’s case, leaves election polling a few years afterward.  And we’ve seen how erroneous, misleading state polls can upset expectations about the national outcome. 

And I think another takeaway is that it is important for pollsters to continue to take their surveys up until the very end of the campaign, as late as they possibly can, and this is a lesson that pollsters seem to have had trouble putting into effect because we have numerous cases in which developments toward the end of the campaign made a big difference. 

And as Doris mentioned at the outset, my argument is that polling failure often correlates with journalistic failure.  And what do I mean by that?  It means simply that polls set the narrative.  They help establish the conventional wisdom about the competitiveness of a presidential campaign – who’s ahead, who’s likely behind, how competitive the race is.  That interpretation tends to be derived from polling data, and journalists invariably take their lead from the pollsters in terms of the dominant narrative about a campaign.  So when the dominant narrative about a campaign is wrong, journalists wind up looking embarrassed.  Polling failure can equate to journalistic failure. 

And what to look for in the 80-some days that remain before Election Day 2020 in this country?  Polls are more numerous than ever – there is no doubt about that – and we’ll probably see more polls than ever before.  These polls are being done by an astonishing variety of methodologies.  They’re not all done by operator-assisted telephone calls using random-digit dialing.  That used to be the gold standard for polling, but telephone polling is on the way out.  Many pollsters still use some telephone polling, but they combine it with other methods and modes, including internet panels, in which people are recruited online to participate in polls from time to time.  There’s also robocalling, and other attempts to gauge social media for indications of what people are thinking and how elections might turn out.   

So there’s a great deal of churn in the field of opinion research these days, and election polling is caught up in that too.  There is no single standard by which polls are conducted these days; there’s an impressive variety, and a lot of churn in the field reflected in the methodologies and the experimentation with those methodologies. 

I think it’s also important to keep in mind that the summertime polls, the polls that we’re seeing now, are not necessarily predictive.  These are not predictive instruments.  They’re not prophecies.  They’re not telling us now, in the second half of August, what’s going to happen in early November.  If the history of election polling tells us anything, it’s that big polling leads in the summer can dissipate by fall, and we’ve seen this time and again.  Just ask Michael Dukakis.  He had a double-digit lead over George H.W. Bush in the summer of 1988, in July and August.  And Dukakis, the Democratic candidate from Massachusetts, wound up losing that election by seven or eight percentage points.  That is one example among many of summertime polling leads that dissipated by fall.  So it’s important to keep that in mind: late-summer polls are not prophecies. 

And related to that: we can expect surprises.  It’s almost unheard of not to have a late-in-the-campaign surprise.  In the year 2000, 20 years ago, George W. Bush was ahead of Al Gore – narrowly, but ahead – in most of the polls as the campaign drew to a close.  Three or four days before the election, a young television reporter in Maine uncovered the news that Bush had a drunken-driving arrest on his record.  In fact, he had pleaded guilty to driving under the influence of alcohol many years earlier, an arrest and conviction that Bush had not disclosed.  And the disclosure of that information just days before the election had the effect, I’m pretty sure, of shifting some votes from Bush to Al Gore, the Democratic candidate – enough so that the election ended almost in a tie and was decided some five weeks later by a Supreme Court ruling on the outcome in Florida.  So we can expect surprises late in the campaign, and they happen almost predictably.   

And polls are not always wrong.  In fact, pollsters work hard to get it right; there is no incentive for them to get it wrong.  But polls have been wrong often enough, and they have a checkered history, as we’ve seen in this PowerPoint presentation – often enough that we ought to treat them with caution, a bit warily, and not embrace them as the oracles that some people think they are. 

And where to turn in covering the 2020 campaign?  The resources are many; here are a few of my recommendations.  I start with RealClearPolitics.com.  For my money, there is no better, more evenhanded aggregation and politics site online.  I think it’s outstanding.  It’s full of polling data, commentary, and analysis, and they do a great job of updating the site regularly, at least twice a day in terms of content.  Real Clear Politics is a great place to go. 

Nate Silver’s FiveThirtyEight.com is another great site, and he really gets into the weeds on polling.  His site has evaluations of pollsters – grades from A to F, which is the grading scale in American education.  More than 400 polling organizations have been graded by Silver and his colleagues at FiveThirtyEight.com.  So I think that’s a very useful resource to keep in mind as well. 

Nate Cohn of The New York Times does very interesting work on a day-in, day-out basis about polling.  He’s very thoughtful and looks to explain polling in ways that are almost unprecedented, or novel.  Four years ago, in 2016, he recruited four pollsters to take a look at polling data from the state of Florida.  He asked each of the four pollsters to review, analyze, and weight – which means statistically adjust – the data, and then come up with a winner.  And each of the four pollsters came up with a different interpretation of these Florida data.  In fact, the range ran from Trump ahead by one point to Clinton ahead by three.  So there was a four-point range among the four pollsters. 

I cite that example as a case in which Nate Cohn has very imaginatively used polling data to show his audience how difficult it is to conduct polls these days, and how polling can be open to interpretation depending on the way the data are weighted – statistically adjusted.  It’s an intriguing story that he did, and he does a lot of good work. 

I also would suggest keeping an eye on the Pew Research Center.  They do some interesting polling work.  They tend to be pretty pro-poll, if you will, but in many ways their research is, I think, at the cutting edge of this churn in the polling industry that I referred to earlier.  So the Pew Research Center is a good resource to keep in mind as well. 

And AAPOR is also a good source.  Now what, you might ask, is AAPOR?  AAPOR is the American Association for Public Opinion Research.  It’s a fairly large organization of pollsters, academics, and government workers who are involved in polling, and they have resources available for the media at their website.  And I think they will even offer to put you in touch with a polling expert if you get in touch with AAPOR at the URL that’s shown here. 

And, of course, you can always get in touch with me.  My email address: wjc@american.edu.  Thanks very much for your attention.  I look forward to your questions. 

MODERATOR:  Thank you, Dr. Campbell.  So with that, we will open for questions.  If you have a question, please hit the raised hand icon, and we will call on you.  And please be sure that your name and your media outlet are showing so that we can call on you and see it in your profile.  So with that, we will start our first question.  And – and just a reminder to hit the raised hand icon.  And I will start with Alex.  Alex, please go ahead with your question, and please state your name and your media outlet. 

Alex, can you hear us?  Okay.  So it looks like Alex is having some technical difficulties, so we will go to Zhaoyin Feng.  Zhaoyin, go ahead with your question. 

QUESTION:  Thank you.  Thank you, Doris.  I hope you can hear me.  I also — 

MODERATOR:  Yes. 

QUESTION:  — typed my question in the chat box.  So thank you so much, professor, for your impressive presentation.  I have several questions.  Since last election’s polling failures, have pollsters adopted new methods to avoid making the same mistakes?  And what are the criteria for a journalist to watch out for more trustworthy polls?  Do you expect the shy Trump voters this time around, four years in his presidency?  Thank you. 

MR CAMPBELL:  Great questions.  Thank you.  As for shy Trump, I have given that an awful lot of thought, just lately.  And it could be that even now, four years into his presidency, as you say, or almost four, that there could be people out there who are not going to tell pollsters or even openly say that they’re going to be supporting Trump.  It’s – as I say, for socially desirable reasons, that may not be the thing to do.  But I suspect that enthusiasm among Trump supporters is probably greater than the enthusiasm among Joe Biden supporters.  Again, that’s hard to pin down sometimes, but I don’t know.  I should think that there’s the potential, if not strong potential, for a shy Trump vote in 2020.  And if that’s the case, then it’s really going to be a difficult election to forecast, a difficult election to predict. 

And as to your questions about the most trustworthy pollsters, I would really encourage you to turn to FiveThirtyEight.com and their evaluations of pollsters.  And it’s an impressive, lengthy list, and with criteria laid out, including the accuracy of the polls over time.  And as I said in my presentation, Nate Silver ranks them from A to F.  And you can see in there that the high-quality pollsters are rated A, A-.  And it’s – I think that’s a useful guide.  It’s not necessarily infallible, but I would encourage you to take a look at that. 

And as for the pollsters and what they’ve been doing, the recognition that state polls went off the rails in key places in 2016 was identified fairly quickly after the election.  But it’s not clear as to just how, whether, and to what extent state-level pollsters have made adjustments in their methodologies, in their procedures.  And there is a debate going on as to whether adjusting the data for education levels – in other words, weighting by education – is really the key to getting a better result in these states.  Some state-level pollsters say yes, you have to weight for education; and others say no, if you weight for education the results don’t change that much anyway.   

So at this fundamental level, there’s still dispute among pollsters as to whether weighting for education levels – i.e., whether a respondent has a college degree or not – really makes a difference in the overall results of a state poll.  And the reason that’s so important is that Trump supporters tend heavily to be people who do not have a college degree, while Democratic supporters, at least in 2016, were heavily those who did have a college degree or more.  By not factoring that into the equation, it is said, some pollsters – particularly at the state level – missed an important criterion.  As I say, not all pollsters agree; not all pollsters say that weighting for education is the answer.  So even there, a fundamental dispute is going on that hasn’t yet been resolved.   
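[The weighting-by-education adjustment described above can be sketched as a simple post-stratification calculation.  This is an illustrative sketch only, not any pollster’s actual procedure; every number below is hypothetical.]

```python
# Illustrative post-stratification weighting by education.
# All shares and support figures here are hypothetical.

def education_weights(sample_share, population_share):
    """Weight for each education group = population share / sample share."""
    return {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical raw sample in which college graduates are over-represented.
sample_share = {"college": 0.50, "no_college": 0.50}
population_share = {"college": 0.35, "no_college": 0.65}

# Hypothetical unweighted support for one candidate within each group.
support = {"college": 0.42, "no_college": 0.58}

weights = education_weights(sample_share, population_share)

# Unweighted estimate: average support across the raw sample.
unweighted_support = sum(sample_share[g] * support[g] for g in support)

# Weighted estimate: reweight each group to its population share first.
weighted_support = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(round(unweighted_support, 3))  # → 0.5
print(round(weighted_support, 3))    # → 0.524
```

In this toy example, correcting for the over-sampled college graduates shifts the candidate’s estimated support by more than two points – which is why the weighting dispute Dr. Campbell describes can matter to a state poll’s bottom line.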

And maybe the 2020 election will resolve it.  But another cause for concern at the state level is that it takes money to do a poll, money to do a good pre-election survey.  Resources for state-level pollsters tend to be hard to come by these days – not impossible, but it’s a harder row to hoe in terms of getting the money and resources to conduct high-quality polls.  And that’s another factor: we just don’t necessarily see a lot of resources being made available to pollsters in key states, so there’s no assurance that the polls they produce are going to be reasonably accurate and close to the outcome.   

And we only know that when polls are taken closer to the election.  A few days before the election, we get a better sense of whether the poll is on target or not, whether it’s a prophecy or not.  Anything before that tends to be subject to considerable change.  And in fact, there’s a lot of pollsters who say polls are snapshots; they’re not predictions.  And again, there’s dispute about how extensively one should believe that, but nonetheless, it’s a refrain that you hear from pollsters, that they’re snapshots; polls are not predictive models, are not predictive instruments.  But I think the closer you get to the election, the more predictive value they do have.   

MODERATOR:  Thank you.  Our next question goes to Pearl Matibe.  Pearl, did you have a question?  

QUESTION:  Can you hear me?  

MODERATOR:  Yes.  

QUESTION:  Okay.  Thank you very much for the information and all of that data.  My question to you is:  Right now, with the U.S.’s image diminishing in the world, internationally, so everyone looking in, what may be one, two, three things could everybody else learn from these pollsters, given the fact that they’ve always missed predicting who would be the ultimate winner in the polls?  Thanks.  

MR CAMPBELL:  Thank you.  Let me just emphasize that pollsters don’t always get it wrong.  They get it wrong often enough for us to be wary about the results, but it’s not the case that polls inevitably are going to be in error.  I just think we should treat them with caution, with a degree of skepticism, and realizing some of the dynamics that influence the reliability and veracity of polls.  

And the question of the U.S. image abroad is a really fascinating one.  There is some polling data out there suggesting that, indeed, during the Trump administration there has been a drop-off in recognition of – or appreciation or confidence in, I guess – the United States abroad.  And it’s a sharp drop-off, according to some polls, from the Obama administration.  But if you take a longer look at these polls, you can see that they tend to show that same kind of fluctuation.  If a Democratic president is in office, recognition or support or confidence in the U.S. tends to be higher, and if a Republican is in office it tends to be lower.  And you can go back several presidencies and see this pattern, back at least to the Clinton presidency in the 1990s.  There was a lot of confidence in the U.S. back then, after the Cold War had ended.  And then once George W. Bush took over, that confidence dropped off. 

Now, of course, the wars in Afghanistan and Iraq certainly contributed to that drop-off in confidence abroad in the United States, but it picked up pretty dramatically after Obama took office, and during his eight years it was fairly high.  And then once Trump took over, it plunged again.  So there’s an interesting dynamic there: confidence in the U.S. abroad, the U.S. image abroad, tends to be closely linked to who’s in the White House.  And these polling data are increasingly expansive.  I think it’s the Pew Research Center that has done them, but I could be wrong about that.  It’s not just Western Europe; it has expanded to other parts of the world.  And I think that same dynamic of ebb and flow depending on who’s in the White House is an intriguing one to keep in mind, and maybe that’s a non-election polling story that could be pursued.  

MODERATOR:  Thank you.  We have some questions in our chat box.  And the first question is from Joyce Karam from The National.  And she asks:  What do you make of the first day of the DNC and attempts to rebuild the Obama coalition?  Is that necessary for the Dems to win in November?   

MR CAMPBELL:  Interesting question.  I think my first instinctive response would be yes, that it is important to restore that coalition for the Dems to win in 2020.  But whether that can be accomplished is still up in the air.  I mean, Joe Biden is running a very strange campaign from the basement, essentially, of his house in Delaware.  And of course, the pandemic is an explanation for doing so, but it is a strange looking campaign, and he hasn’t ventured out very much from the basement, nor has he given too many news conferences.  And he hasn’t had much of a give and take with journalists, which is what we expect during a presidential campaign.  We expect the candidates to be open and accessible and inclined to answer questions.  And so far, Biden has run a campaign that kind of reminds me in some respects of the glide-path campaign that Thomas E. Dewey ran in 1948 – not shake things up, just kind of see if we can run out the string here.   

And that’s a concern.  And that’s a concern.  And whether that – how visible that’s going to become and whether that’s going to become an issue is up in the air.  But – and maybe the next nights of the Democratic Convention will sort of paper over that, that concern, and offer evidence of a restoration of the Obama coalition.  But it is a – it’s a different time – no two elections are ever the same in this country.  There’s no carbon copy, if you will, of a presidential election.  They all have their own dynamics; they all have their own ebbs and flows and issues and concerns and unexpected developments.  So I think we can expect something different from that, and not just a reprise of the Obama coalition.   

And it was an effective coalition that he put together, that’s for sure.  Whether there’s the same enthusiasm for Joe Biden remains to be seen, I think.  And we might get an indication in the next few nights of the Democratic Convention.  I think last night was pretty much the scripted and non-dramatic event, so it’s still too early to say.  But thank you for your question.   

MODERATOR:  Thank you.  Our next question – I do see a raised hand from Peter Winkler.  Peter, go ahead with your question.  

QUESTION:  Thank you, everybody.  Thank you, professor, for doing this.  I also wrote my question in the chat.  I want to talk about the spread between likely and registered voters.  It seems that – I’m sorry – it seems that in 2016 a lot of people finally decided to stay at home.  How big is the confidence that likely voters really are going to vote?  And is that something that has to be weighted too?   

MR CAMPBELL:  It’s a great question, and thank you very much.  Thanks for joining us.  I spent a number of years in Switzerland with the Associated Press in Geneva, and the NZZ was always the go-to paper, even though I couldn’t read much German.  (Laughter.) 

Anyway, to your question, to your good question, the likely voter conundrum is one that pollsters have long struggled with:  How do you determine, of all the respondents to a poll, who is going to vote?  Who actually is going to go ahead and cast a ballot on Election Day or in advance of Election Day in early voting?  And pollsters have come up with, over the years, a number of different measures called screens, likely voter screens, in which they try to essentially weed out from the respondents those who are not going to vote and those who are. 

So that differentiation is very difficult.  It’s more art than science.  It’s certainly more art than science.  And no pollster has really ever come up yet with an answer to that question: How do we – how do we know for sure whether this person who’s responding to our poll is actually going to go ahead and vote on Election Day?   

The Gallup Organization tried for close to 50 years and had a model, a likely voter model that it revised and reworked and altered, and it still didn’t work terribly well.  One of the reasons that its poll in 2012 misfired was that the likely voter screen was too tight, and it screened out voters who actually went ahead and voted, and it threw off Gallup’s numbers in 2012.  And it’s said that there are as many likely voter models as there are pollsters in this country.  Now, that may be a tad exaggerated, but I don’t think it’s much exaggerated because each pollster has its own way of determining who to weed out, of who to say, yes, this is more likely than not – this voter is more likely than not to go ahead and cast a ballot on Election Day or before. 

But figuring that out – figuring that out remains one of those great unresolved issues, a great unresolved conundrum of election polling in this country, and I think overseas as well.  That’s a great question, and there is a difference between likely voters and registered voters, and if you look closely at some of the polls that are being conducted these days, many of them are of registered voters.  It may be a little too early to try to weed them out and say, okay, who is a most likely voter here?  But as we get closer to the election, we’ll see more polls with likely voters included in their samples rather than just registered voters. 

MODERATOR:  Thank you.  And our next question goes to Olivia Zhang. 

QUESTION:  Hi, can you hear me? 

MR CAMPBELL:  We can. 

QUESTION:  Well, thank you very much, Professor Campbell.  I have two questions.  Firstly, you were saying the lead in late summer could be disappearing, but the question is how likely that would disappear, specifically in terms of Biden’s lead.  What are the factors that may make it disappear?  Is it, like, Trump’s more measured tone on the pandemic, or more tough rhetoric on China or anti-immigration, or maybe some misstep on Biden’s side? 

And also, secondly, we talked about shy Trump voters, and last night on DNC there was a segment on Republicans for Biden.  So I’m wondering, like, how about moderate Republicans who may be shy Biden supporters?  Do you think that would be, like, a big number to be considered?  Thanks. 

MR CAMPBELL:  Thanks.  Great questions.  And as for the last part of your question, on the Republicans for Biden, I don’t think that’s going to be a very large contingent.  I don’t think that’s an important segment of the Republican base.  It’s certainly not part of Trump’s base.  These are – these are politicians in some cases who have been – were in office but are out of office.  John Kasich, the former governor of Ohio, for example, who ran against Trump in 2016 and has very little regard for the President.  And there are – there are Republicans like that, but I just don’t think that they represent a major part of Trump’s base, they probably never did, and they didn’t vote for him heavily in 2016. 

So, I don’t know, they’re out there but they’re not – I wouldn’t say that they’re going to be a major force in this election.  Could be wrong.  Could be wrong.  But I doubt if Republicans for Biden are going to be numerous.  If they are – if they are numerous, then we’re probably looking at a landslide election, and for Joe Biden.  I just don’t – I just don’t think this election is going to turn out to be a landslide.  So the polls that show a 10-point lead for Joe Biden nowadays, or a 12-point lead for Biden, are probably going to narrow – are probably going to narrow considerably in the weeks before the election.   

In fact, we see some narrowing already.  CNN the other day came out with a poll showing that Biden had a four-point lead over Trump.  At the end of July, the respected Emerson College poll had a four-point lead for Biden over Trump, a similar advantage to the poll that they reported at the end of June.  And the Rasmussen poll, which is done, I believe, with mostly IVR or robocalling, with some internet panels, the Rasmussen poll has – the most recent one I saw had Biden ahead by three percentage points.  So we have polls out there that are showing a tight race, and I suspect the race is probably tighter than a 10 percent – 10 percentage point lead.   

Rasmussen, by the way, I think was the one that called the 2016 popular vote closest of anyone.  I think it came in at two percentage points for Hillary Clinton, and she won by 2.1 percentage points.  So, again, these are – these numbers can be – especially at the margins, are very, very elusive, but nonetheless I think that a tightening rather than an explosion of Biden’s lead is more likely.   

And you referred to Biden’s candidacy, and I still think he has something to prove.  He’s got to show the American people that he’s up for the job.  I mean, if he wins and takes office, he’ll be the oldest president on Inauguration Day to take office.  And I think he’s got to show that he’s indeed ready for the presidency, up for the job.  He’s been – he’s run for president two or three other times, and now this is his chance.  But he is pretty old.  And Trump is in his 70s as well, but Biden – Biden is not coming off as a real vigorous campaigner, at least not yet. 

MODERATOR:  Thank you for that.  And just a reminder to our journalists to hit the raised-hand icon and we will call on you for questions.   

We did receive a few more questions.  One of those questions is: “Is COVID-19 affecting any polls this year?”  

MR CAMPBELL:  Thanks.  That’s a great question, and I think the COVID-19 pandemic has certainly scrambled politics in this country to an unprecedented extent.  There’s no doubt about that.  And that’s stating a truism, I believe.   

There is an intriguing, unexpected, and minor effect of the pandemic on polling.  And not to put too fine a point on this, but there is some evidence that people having to stay home on lockdown or in quarantine are more inclined to – since they’re home and want to do something different, they’re more inclined to pick up the phone and answer a pollster’s inquiry than they would otherwise, because most people in this country don’t pick up their phone.  Many of them don’t even have landlines anymore.  But they don’t pick up their phone because it’s going to be telemarketing or going to be some sort of spammer or something they don’t even want to mess around with.  And so polling has suffered because of that phenomenon, because people just don’t want to pick up the phone. 

Given the pandemic and given the fact that people are home more often nowadays or for longer periods than in the past, than in the recent past, there is some evidence that people are responding to pollsters’ phone calls a little more regularly, a little more routinely.  Now, as I say, not to put too fine a point on this, but that is an unintended consequence of the COVID-19 pandemic. 

I think what a real and perhaps more serious component is going to be – is just whether people are going to be getting to the polls to vote in person or whether – or whether there’s going to be a mail-in ballot of some extent that will really challenge if not compromise the United States Postal Service, and that’s a contemporary issue.  Whether it proves to be a reality or not, it’s really too early to say.  There are some people who say that, no, the Postal Service can handle it, they really can, just as long as the ballots are applied for and received and sent in in advance.  And there are a couple of states in the country that have – that say you can apply for an absentee ballot on the Friday before the election – the Friday before the election, so that’s, what, four or five days before the election.  There is no way that an application can be submitted, processed, the absentee ballot sent to the voter who requested it, and then completed and sent back by Election Day.  It’s just not possible.  So I think there’s got to be some realistic approaches that some states should take in terms of giving people time and the Postal Service time to receive, process, and turn around these ballots.   

I saw a question on here about the – whether it’s a likely event that Trump would refuse to leave office if he loses.  I just don’t see that happening.  I just – I can’t see that he would stick around after a clear defeat to Joe Biden.  I mean, if it’s a close race and we’re unsure, who knows?  But I just don’t think that in the end he’s going to say, “No, I’m going to stick around,” and in effect pull off a bloodless coup.  I mean, it’s just – he’s an unpredictable guy, but I just can’t see that happening.  And we have had cases in which elections have been unresolved for weeks.  The election 20 years ago in 2000 went on for 37 days after Election Day until it was determined by the Supreme Court in a 5 to 4 vote that George W. Bush was the winner of the state of Florida and therefore of the presidency.   

So the country has gone through those ordeals in the past, not that they’re pleasant or not that they should be looked forward to, but nonetheless these are – there is some history of waiting and going past Election Day to know who has won the election.  But as for Trump to barricade himself in the Oval Office, I don’t know, I have a hard time seeing that.  I really do.  Good question, though.  

MODERATOR:  And we do have a follow-up question from Pearl Matibe.  Pearl, go ahead with your question, and after Pearl we will go to you, Alex.   

Pearl, go ahead with your question.  She may have some technical difficulties so we will go to Alex.  Alex, go ahead with your question, please.   

QUESTION:  Can you hear me? 

MR CAMPBELL:  I think that’s Pearl there.  

QUESTION:  Yes.  Thank you very much, professor.  I think my question is a three-part question.  I want to pick up on the issue that you mentioned regarding Biden perhaps not being quite yet clear.   

Do you think that it is only because he hasn’t shown himself as a frontrunner, or are there any polls out there where you might think – and I think this runs into my second part of the question – is how many days out, because we’re already well past the 100-day mark before November 3rd.  Are there polls that become truer closer to November 3rd at all?   

And is there any polling data out there that you may have that may show the – how likely phone banks turn out the vote given the fact that voter turnout and the numbers that turn out can decide an election?  So do we have any polling data on phone banks and how successful those are?  Thanks.  

MR CAMPBELL:  Thank you.  I think to your second question, the second part of your question, the closer to Election Day the better in terms of polls as predictive instruments.  I think that the closer we get, the better they perform.  And another measure is that the closer we get, how are they performing in aggregate?  And that’s where RealClearPolitics.com will average them all together and say, okay, this is the average of all of these polls, and this is who’s ahead.  And I think the closer we get to elections, as I say, the better those polls are going to be, the more reliable, the more confidence that we should have in them.   

I think that “turn out the vote” efforts are commendable.  I just don’t see that they necessarily always make a difference.  And the reason I say this, in 2016 Hillary Clinton supposedly had a great turnout machine, and that was going to be enough to push her past 270 electoral votes and beyond.  And she may have had a great turnout operation, ground operation, but it didn’t win her the election.   

And so it’s hard to know for sure just how effective these “get out the vote” efforts are.  I mean, they’re essential.  Turning out to vote is the key, isn’t it?  I mean – and in this country, in a presidential election, we might find only 55 percent of eligible adult voters actually go and vote, even though it’s been made easier these years with early voting and voting by mail and other measures to facilitate getting to the polls.  We still have about – what, 55 percent, maybe a little higher, maybe a little lower – and that’s not a very impressive number. 

And so it really does boil down as to who is motivated enough to come out and vote.  Now, whether that motivation comes from “turn out the vote” operations, I think that’s partly it, but I think also it’s the voters themselves seeing a vested interest in the outcome and then they go ahead and – go ahead and vote.   

And your Biden question.  Biden is – he’s still a mystery to many people.  And is he a progressive candidate?  Is he a moderate candidate?  I mean, he’s tried to cast himself as both, and that uncertainty is yet to be resolved in the minds of many voters.   

And his promise to raise taxes is a concern to many voters, too.  Walter Mondale, the Democratic candidate in 1984 when Ronald Reagan ran for re-election, said openly that he was going to raise taxes, and Mondale lost in a landslide to Ronald Reagan in 1984.  I’m not saying that Biden’s pledge to raise taxes is going to make a difference in terms of a landslide for Donald Trump – I just don’t see that happening either.  But nonetheless, the history of U.S. presidential elections suggests that promises to do what Biden says he’s going to do, i.e. raise taxes, is not always a recipe for success.   

And you’re right, we’re less than 100 days – I think we’re less than 90 days, probably in the 80s somewhere, and it’s getting closer, but we still have a lot that’s going to play out.  And we can have important developments playing out right at the end.  I remember in 2016 when James Comey announced – what, eight or nine days before the election – that he had reopened the investigation into Hillary Clinton’s private e-mail server that she had used as secretary of state or related to that topic.  And that late-breaking development may have been enough to tip the election to Donald Trump.   

So I think that especially in tight elections, late developments make a major difference and we should – we don’t know what they are, but we can expect them to happen.  

MODERATOR:  Thank you for that.  And it looks like Alex has his hand raised, so we will go to Alex.  Alex, can you go ahead with your question?   

QUESTION:  Can you hear me yet?  Hello, can everyone hear me? 

MODERATOR:  Alex, your audio appears to be breaking up and we can’t hear you.  

QUESTION:  Hello?  

MODERATOR:  Hello.  Go ahead with your question, please.   

QUESTION:  Can you hear me?  

MODERATOR:  Yes, we can.  Can you say your name and your outlet, please?   

QUESTION:  Yeah, of course.  Hello, everybody.  I’m Sandra Muller.  I’m working for La Lettre de l’Audiovisuel in France.  I have like maybe two or three questions for you.  The first one is that you spoke about resources, and I remember we had another meeting with another professor.  Again, I think, about resources for polls.  My question is because of the COVID, I am not sure that poll institutes will much more — 

QUESTION:  Hello, can you hear me?   

QUESTION:  Yeah, and you?  

MODERATOR:  Alex, we will call on you next.  

QUESTION:  Okay, yes. 

QUESTION:  Yeah, can you hear me? 

MODERATOR:  Alex, can you mute your phone?  We will call on you next.  

QUESTION:  Oh my God, sorry.  Okay.  I think I have some trouble with my technics, sorry.   

MR CAMPBELL:  Go ahead, Sandra.  

QUESTION:  Yes.  Do you hear me?  Can you hear me now?  

MODERATOR:  Yes, we can hear you.  

QUESTION:  Okay, sorry.  So, so sorry, I am not that familiar with Zoom.  So my first question is that there is COVID consequences?  And you spoke about resources for poll institutes.  And do you think that the COVID will infect – will affect the resources for poll institutes?  I mean, they will have less money, so if they have less money maybe the polls won’t be like reliable at all, and it can be worse than the previous years?   

The second question I have, like, is I am French, so in French we have two things.  The thing is electoral silence.  We are not supposed to speak about election two days before voting.  It’s not the case in America, and most of the time it’s exactly the time of – where you have like hacking, like especially maybe Russian hacking – it’s almost two days before.  It was the case in France with WikiLeaks, and we have a lot of the information for our – on our – about our president, that’s Macron.  And it happens here in America, and you mentioned that with Hillary Clinton a couple of days before there was a lot of WikiLeaks information. Do you think the silence – maybe the silence, electoral silence – can be a method for America to be protected?   

And my – yes, my third question is like how do you think we can keep protected by hackers in this period?  I know it’s not exactly your specialty, but I would like to have your point of view about the hacking during the election and the result on the polls.  Thank you.  Sorry, I hope it’s okay now.   

MR CAMPBELL:  Yeah, thanks, Sandra.  Very good questions.  I love the one about electoral silence a couple days or so before the election.  It’s an intriguing phenomenon.  Other countries – some other countries have that requirement as well.  I think in this country it would run headlong into the First Amendment, and there’s just no way that polling organizations or news organizations would be inclined to accept a news blackout about polls two or so days before the election.  I mean, that’s when the interest probably peaks, and so it would be very difficult for anybody to say, no, we’re not going to talk about polls, we’re not going to have any poll results.   

And as I say, there are First Amendment implications here, too, that this really can’t – the word I’m thinking of is “stifle” – but you can’t muzzle the press.  You can’t muzzle news organizations.  You can’t muzzle the pollsters in advance because you don’t want their – the decision making to be influenced by late-breaking polls.  And so it’s just – it’s intriguing.  I find it very fascinating, particularly the French model, but I just don’t think it’s possible in the United States.    

And the accuracy issue about the election, it’s intriguing as well because in the United States a presidential election is not a national election.  It’s 52 or more state or local elections.  In other words, all 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and other possessions have their own elections.  They’re run by the state and even in some cases at the local level, so it’s really hard to have a whole national apparatus.  In fact, there is no national apparatus that oversees polling and the voting in this country.  And some states have their own requirements and their own peculiarities.  In Oregon, I believe, it’s all mail-in votes.  All balloting is done by mail, and it has been that way for a number of years.  In other states that’s not the way it’s done.  It’s typically go to the polls.  And in fact, that’s one of the more intriguing national institutions in the country, to go to the polls on Election Day and line up and get ready to vote and cast your ballot.   

Now, whether that’s going to be influenced greatly by the COVID-19 pandemic, it probably will.  It probably will.  There will be more people voting by mail, voting by absentee ballot, than ever before, and that’s what the whole brouhaha is about – the Postal Service and whether the Postal Service can handle that volume, that expected increase in volume.  It probably can.  It probably can.  And in fact, attention is being paid to it now sufficiently in advance to make sure that it goes fairly smoothly.  I’m not going to say it’s going to go perfectly, but it’s likely to be less of an issue than it appears to this day.  Now, of course, I could be wrong again, but nonetheless I think it’s something that we’re going to be focusing on and are focused on, if not resolving, in advance of the election. 

And the resources for state-level polls, I think that’s a concern that goes beyond the COVID-19 pandemic that – just whether there are resources available to state polls.  In other words, are the universities or the polling organizations or the news organizations – do they have – are they in a position to want to devote that kind of – those kinds of resources to polling and to make sure that they’re as good as possible?  Because there’s a lot of slipshod polling being done out there.  I mean, you take a look at Nate Silver’s ranking of 400-some pollsters, and there’s several that get an F because they don’t poll very well or accurately, or they make stuff up.  Now, this is not many pollsters.  This is a very, very small number comparatively, but it’s not easy to do, and it’s easy to do poorly, and that’s why the resource issue is one that pollsters and polling organizations and people who pay attention to this stuff are very concerned about how states in the 2020 election in key places – again, Michigan, Wisconsin, Pennsylvania, Ohio, North Carolina, Florida, Iowa maybe, even Georgia, Arizona for sure – how are they going to perform?  And so it’s a real conundrum.  It’s a real conundrum.  Polling costs a lot of money to do well, and not all organizations who purport to poll have the resources to do it well.   

And then on top of everything else, I should just throw this in.  Is there – there’s a phenomenon that pollsters refer to as “herding” – herding, h-e-r-d-i-n-g.  And that’s an ethically suspect phenomenon in which polling numbers toward the end of an election come closer and closer to the same results.  It’s kind of like putting your thumb on the scale, and pollsters are disregarding their own data but looking at the data of others and trying to kind of hew towards the – to the average, to what everyone else is reporting.  It’s a hard – a hard phenomenon to prove, but pollsters and polling experts believe it happens, and it may be another phenomenon to keep in mind in 2020: herding.   

MODERATOR:  Thank you so much, Dr. Campbell.  We are just about out of time, and we would like to thank you for taking the time to brief with us today.  We would like to thank our media for participating.  We hope to have our transcript available and posted on the DC/FPC website later today, and with that, our briefing is now concluded.  Thank you, everyone.  

MR CAMPBELL:  Thank you, and thank you, Doris, and thank you, Stephanie, and thank you for all the participants, too.  Great questions, and it was enjoyable to speak with you today.  Thanks a lot.  

MODERATOR:  Thank you.  
