Polls get a lot of hype, but what do they really tell us? A Q&A with polling pro Christian Grose

Christian Grose, a seasoned USC pollster and political scientist, offers a deep dive into the world of public opinion polling.

March 07, 2024 By Nina Raffio

Another Super Tuesday has come and gone, leaving a fresh tidal wave of polling data in its wake. From news reports to social media, this torrent of numbers and charts is inescapable. But how much weight should we give them, and what do they really tell us?

For answers, we turn to Christian Grose, professor of political science and public policy at the USC Dornsife College of Letters, Arts and Sciences and academic director of the USC Schwarzenegger Institute at the USC Price School of Public Policy. Grose, a longtime USC pollster and political expert, has conducted numerous polls in California and beyond, including partnerships with Long Beach, Los Angeles County and other regions.

How accurate are polls, really? And what are the limitations of using them to predict election outcomes?

Grose: It’s important for consumers of polls to remember that polls are simply snapshots in time, and that the results can change as undecided voters make up their minds and as campaigns persuade voters.

What are some common misconceptions about polls, and how can we better interpret them?

Grose: One of the most important things I wish people understood is the margin of error. I often see reporting on polls where a candidate has a “lead” of only 1 or 2 percentage points. But this really means the first- and second-place candidates are tied. Polls can tell us about the electorate, but because we poll only a representative sample of it, very close polling margins suggest statistical ties.
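To make this concrete, here is a minimal sketch of the rule of thumb Grose describes: a reported lead smaller than the poll's margin of error should be read as a statistical tie. The function name and numbers are illustrative, not from the article; note that the margin of error on the *gap* between two candidates is actually somewhat larger than the reported per-candidate margin, so this simple check is, if anything, generous to the "leader."

```python
def is_statistical_tie(lead_pts: float, moe_pts: float) -> bool:
    """Treat a reported lead as a statistical tie when it falls
    within the poll's margin of error (both in percentage points)."""
    return abs(lead_pts) <= moe_pts

# A 2-point "lead" in a poll with a +/-3-point margin of error
# is a statistical tie: the candidates are effectively even.
print(is_statistical_tie(2.0, 3.0))   # a tie
print(is_statistical_tie(8.0, 3.0))   # a real lead
```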

Another misconception among those unfamiliar with polling and statistics is that you can’t measure the electorate’s attitudes and preferences with a sample of only a few hundred or even 1,000 voters. If the poll is conducted scientifically, with a high-quality sampling strategy, proper statistical weighting techniques and a good likely voter model, then statistically we can measure an electorate’s views with just a few hundred respondents. In fact, for the overall electorate’s preferences, larger samples reach diminishing returns as you add more respondents. More respondents are always better, since they increase the precision of the estimate. But with appropriate scientific techniques, a sample of just several hundred people is enough to measure the preferences of the electorate.

What is important about larger samples (say, 2,000 respondents versus 400 respondents) is that subgroups are better measured. A smaller sample that is still accurate for the overall electorate may mismeasure harder-to-reach groups like Black voters, Latino voters, rural non-college-educated white voters, or younger voters. So larger samples — or even oversampling of some hard-to-reach groups and then adjusting these oversampled groups down via weighting — will tell us more about different slices of the electorate.

However, a poll of 300 to 1,000 people is sufficient to measure the overall electorate’s preferences. The smaller the sample, the greater the margin of error and the less confident we can be in the results; the larger the sample, the more confident we are.
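The diminishing returns Grose describes fall straight out of the standard worst-case margin-of-error formula for a simple random sample. This is a simplification: real polls use weighting and likely-voter models that widen these figures somewhat, and the sample sizes below are just illustrative.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error (as a fraction) for a simple
    random sample of size n; p=0.5 maximizes the error."""
    return z * math.sqrt(p * (1 - p) / n)

for n in [300, 400, 1000, 2000, 5000]:
    print(f"n={n:>5}: +/- {margin_of_error(n) * 100:.1f} points")
```

Going from 400 to 1,000 respondents tightens the margin by almost 2 points, but going from 2,000 to 5,000 buys less than 1 point — which is why a few hundred to a thousand respondents can suffice for the overall electorate.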

With so much data in the media and online, how can people tell a good poll from a bad one?

Grose: A strong poll is scientific and seeks to objectively measure the preferences of the electorate. It also seeks to make sure it is representative of the electorate. For polls attempting to predict the potential outcome of an election, it is important the poll has a good likely voter model, which means it asks those who are likely to participate in the upcoming election, and it also will use information about who participates in an election. A good poll also has a careful sampling strategy and a sensible weighting strategy.

There are at least two types of problematic polls. First, there are fake “polls” that are not representative. These are any kind of “poll” where you can participate and cast multiple votes, or where the sample is not at all reflective of the overall electorate. Twitter/X polls and other surveys where the people taking them do not reflect the overall populace fall into this category.

A second type is one produced internally by campaigns. These polls are usually pretty good, but the public rarely sees them unless the results favor the candidate sponsoring them. Several private public opinion polls might be conducted over the course of a candidate’s campaign, but only the one that happens to show the candidate leading is released to the media or the public. The poll isn’t necessarily wrong or unscientific, but public release of internal candidate or campaign polls will be selectively biased toward polls that help the candidate. This is why nonpartisan and university polls can be so important for an objective measure of what is happening. Further, polling averages drawn from high-quality pollsters are excellent ways to get a good snapshot of the electorate.

The poll that matters most is the one in which voters actually cast ballots. Polls help us do research, help us understand the electorate and give us a snapshot in time. But the only poll that really matters is the one on Election Day.