Polymarket vs polls: the four ways they measure different things
On April 29, the Polymarket price and our public-opinion poll on the 2028 Democratic nominee landed within one point of each other. Eight days later, on the same race, they disagreed by 14 points. The temptation is to ask which one was wrong. That’s the wrong question. Here’s the right one.
Two readings on the same race, eight days apart, with very different gaps. The market did not get more or less correct in those eight days — it cannot, because the race does not resolve until 2028. What changed was the other signal in the comparison and the conditions under which the two signals can be expected to agree. Most of the “Polymarket vs polls” commentary you will read in the next eighteen months will skip past those conditions. This piece is for the people who don’t want to skip past them.
The framing “which one is right” treats prediction markets and polls as competing forecasts of the same quantity. They are not. They are two instruments measuring four different things, and the question worth asking is “which signal is closer to the truth on this question, on this day, given this data.” That question has a tractable answer most of the time. The blanket answer does not.
What Polymarket is actually measuring
A Polymarket price is the capital-weighted belief of a small, self-selected population of bettors that a specified outcome will satisfy a written resolution criterion by a written deadline. Every word in that sentence carries weight. Capital-weighted: a $50,000 trade moves the book more than a hundred $5 trades. Self-selected: the universe is mostly USD-stablecoin holders trading on Polygon, which skews young, technically literate, and (in the US) operating through a regulatory side door. And the resolution criterion is contractual — the market does not pay out on “Newsom looked strong this cycle,” it pays out on the specific candidate listed on the specific ballot in the specific window the contract names.
The strengths follow from the structure. Bettors have skin in the game, which filters out the cheapest kind of opinion. The book updates in seconds, not weeks. And on average the people willing to risk capital on a question are closer to the data on that question than the average voter is — not because they are smarter, but because the question filter is strong. The weaknesses follow from the same structure. The participating population is small and unrepresentative. Most Polymarket books are thin: the Trump weekly approval market that ran during our launch week traded a grand total of $463 in volume, which means a single committed bettor with a thousand dollars could have moved the price several cents. And the bigger books still carry a selection effect on who is in the room — risk-tolerant, online, internationally distributed — not the median American voter.
Two numbers worth holding in mind for the rest of this piece. The 2028 Democratic nominee book has cleared roughly $1.1B in cumulative volume across the event. The House 2026 control book has cleared $5.4M since it opened in July 2025. Both are real markets. Neither is a poll.
What a poll is actually measuring
A named pollster — Pew, Marist, Echelon Insights, NPR/PBS News — produces a number that is the stated belief of a sample weighted to represent a target universe over a defined field window. Each of those words also carries weight. Stated: it is what respondents said, not what they would do. Sample: it is n=1,012 likely voters, or n=1,322 adults, or n=402 X poll respondents, and the difference between those frames is the difference between three different questions. Weighted: a survey of registered voters is reweighted to look demographically like the registered-voter universe, but the model is doing real work. And defined field window means the number is a snapshot of the days the pollster was in the field, not the day you read the headline.
The strengths here are methodological transparency and a track record. Reputable pollsters publish their n, their margin of error, their weighting, their mode (online panel vs live phone vs mixed), and often their toplines. You can grade them over time — Pew and Marist have decades of out-of-sample calibration data, and the readers of this site likely know which firms had a strong 2024 and which did not. Polls are also designed for the inferential job. A 1,000-respondent likely-voter survey is the standard tool for measuring electorate sentiment on the kind of question that resolves to a vote count.
The weaknesses are the famous ones: response bias (who picks up the phone or finishes the online panel), mode effects (the same person gives different answers to a live caller and a web form), nonresponse drift late in the cycle, and the persistent gap between what people say and what people do. None of these are fatal. All of them are reasons to bracket your uncertainty when reading a single poll, especially one that is several weeks old.
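The margin of error behind those sample sizes is a one-line formula. A minimal sketch of the textbook calculation, ignoring design effects and weighting, both of which widen the interval for real surveys:

```python
import math

def moe(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a simple random
    sample of size n at proportion p (p=0.5 is the worst case).
    Real polls carry design effects on top of this, so the published
    MoE is usually a bit wider."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# The sample sizes mentioned above: likely voters, adults, X poll.
for n in (1012, 1322, 402):
    print(f"n={n}: ±{moe(n):.1f} points")
# n=1012: ±3.1 points
# n=1322: ±2.7 points
# n=402:  ±4.9 points
```

The n=402 line is the one to keep in mind when reading our own X-poll numbers: a ±4.9-point interval is wide enough to swallow most of the gaps this site covers.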
When the two signals should agree, and when they shouldn’t
The conditions under which a market price and a named poll should converge are specific enough to write down. You want a market with enough volume that a single bettor cannot move it noticeably (call it eight or nine figures of cumulative volume); you want a poll that was fielded recently relative to the news cycle (within the last week or two on a fast-moving story, within the last month on a slow one); and you want the question the market resolves on to map cleanly onto the question the poll asked. When all three conditions hold, the prices and the poll shares should sit close to each other — often within a few points. When one of them breaks, expect drift. When two break, expect divergence.
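The three conditions above can be written down as a checklist. A sketch with illustrative thresholds (the $10M volume floor is the low end of the “eight or nine figures” range above; none of these are official DissMarket cutoffs):

```python
def convergence_conditions(volume_usd: float,
                           poll_age_days: int,
                           fast_moving: bool,
                           criteria_match: bool) -> tuple[int, str]:
    """Count how many of the three convergence conditions are broken
    and return the expected relationship between market and poll.
    Thresholds are illustrative, not official cutoffs."""
    conditions = [
        volume_usd >= 10_000_000,                      # deep enough book
        poll_age_days <= (14 if fast_moving else 30),  # fresh enough poll
        criteria_match,                                # same question
    ]
    broken = conditions.count(False)
    verdict = {0: "expect agreement", 1: "expect drift"}.get(
        broken, "expect divergence")
    return broken, verdict

# April 29 launch comparison: deep book, day-old poll, matching question.
print(convergence_conditions(1.11e9, 1, True, True))
# (0, 'expect agreement')

# May 7 Harris comparison: deep book, ~17-day-old poll on a fast story.
print(convergence_conditions(1.1e9, 17, True, True))
# (1, 'expect drift')
```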
Two recent examples from this site illustrate the two sides. The launch market on April 29 ran our X poll against the 2028 Democratic nominee book on the same day — the Polymarket event total volume sat at $1.11B, and the public-opinion read was 24 hours fresh. Result: alignment within one to two points on every named candidate. The conditions held. The signals agreed.
Day 3’s Harris gap tells the other half of the story. Same event book, same $1.1B-plus, but the named pollster on the comparison line was Echelon Insights fielded April 17–20, captured May 7. That is roughly two and a half weeks between the field window and the publish moment, which is not nothing on a fast-moving primary narrative. The result: Polymarket had Harris at 8% and Newsom at 25%. Echelon had them effectively tied at 22% and 21%. A 14-point gap on Harris alone. The conditions didn’t hold. The signals diverged.
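The per-candidate arithmetic behind that comparison is simple enough to make mechanical. A small helper, under the (loaded) assumption that a market price can be read directly against a poll share — which, as this section argues, is only fair when the convergence conditions hold:

```python
def signal_gaps(market_pct: dict[str, float],
                poll_pct: dict[str, float]) -> dict[str, float]:
    """Per-candidate absolute gap, in points, between a market price
    (read as an implied probability) and a poll share."""
    return {name: abs(market_pct[name] - poll_pct[name])
            for name in market_pct if name in poll_pct}

# The May 7 readings from the piece above.
market = {"Harris": 8, "Newsom": 25}
echelon = {"Harris": 22, "Newsom": 21}
print(signal_gaps(market, echelon))
# {'Harris': 14, 'Newsom': 4}
```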
Calibration vs accuracy, in one specific case
The 14-point Harris gap is a useful thing to walk through because it isolates the methodological question from the political one. Which signal is closer to the truth? We don’t know, and won’t for roughly two years, because the resolution criterion on the Polymarket book is the actual 2028 Democratic nomination. But we can write down what would move our priors in either direction.
Toward the market being right: a fresh named poll — ideally two of them, from different houses, with different methodologies — that has Harris back in single digits among likely Democratic primary voters in a closed-primary state. That would suggest Echelon was catching a transient name-recognition effect rather than primary intent, and the bettors were already pricing the more durable signal.
Toward the poll being right: continued in-field readings showing Harris polling in the high teens to low twenties across multiple houses, alongside Newsom polling in a similar range, while the Polymarket book moves toward those numbers (Harris bid through 15c, Newsom drifting toward 22c). That would suggest the market was anchored on a 2024-era narrative the actual electorate had already moved past.
I’d want to see another two weeks of named-pollster data and at least one significant news event before I’d update meaningfully on which read is closer to the truth. Right now the most honest summary of the gap is: two instruments measuring different populations under different conditions, with the smaller and faster of them probably mispriced relative to the slower and larger one, but not in a way that is yet falsifiable.
The base-rate read on the same kind of question
One pattern recurs often enough to write down separately: the market can be priced past the base rate. The base-rate read on the House market is the cleanest version of this we’ve published. Polymarket prices a Democratic House majority in November 2026 at 84c. The most recent NPR/PBS News/Marist generic ballot has Democrats up 10 (n=1,155 registered voters, fielded April 27–30, 2026, MoE ±3.3). Historically, a generic-ballot lead in that range has converted to actual chamber flips in the 60–70% range across most academic forecasters. Volume on the market is $5.4M since launch in July 2025 — mid-sized, not whale-driven, no one fading at scale.
The arithmetic here is straightforward. The book is roughly 14 points above the historical conversion rate from a +10 generic-ballot lead to a House majority. That gap can be explained by district-level information the generic ballot doesn’t capture, or by a price that is just rich. With $5.4M in volume and no one fading at scale, “rich” is the more probable read. This is a different methodological story than the Newsom-Harris gap. There, the question is which of two contemporaneous signals to trust. Here, the question is whether the market has priced past what the base rate would support. Both gaps are real. They are not the same kind of gap.
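The “14 points” figure comes from comparing the price to the nearest edge of the historical range. A sketch of that comparison, using the numbers from this section:

```python
def price_vs_base_rate(market_price: float,
                       base_rate_low: float,
                       base_rate_high: float) -> int:
    """Points by which a market price sits outside a historical
    base-rate range; 0 if the price falls inside the range."""
    if market_price > base_rate_high:
        return round(100 * (market_price - base_rate_high))
    if market_price < base_rate_low:
        return round(100 * (market_price - base_rate_low))
    return 0

# House 2026 book at 84c vs the 60-70% historical conversion
# rate for a +10 generic-ballot lead.
print(price_vs_base_rate(0.84, 0.60, 0.70))  # 14
```

Measuring against the top of the range is the generous reading; against the midpoint, the book would be 19 points rich.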
How to read DissMarket’s daily output
DissMarket is not in the business of telling you which signal is right. We publish the gap between Polymarket and a named pollster (or our own X-poll public-opinion read, with all the limitations of that signal flagged in the methodology page) and we narrate the conditions under which the gap is interesting. When the conditions for convergence hold and the signals still diverge by ten points or more, that is the story worth printing. When the conditions don’t hold — thin book, stale poll, mismatched resolution criterion — we say so, and the divergence is less informative than it looks. The job of a methodology desk is to do that bookkeeping out loud, which is most of what you’ll find in the archive.
If you cover this beat for a living, the actionable read is roughly this. Capital-weighted prices on large books update fast and are right more often than the median news article gives them credit for. Sample-weighted polls from established houses are slower and noisier per-reading, but their calibration record is knowable and the methodology is transparent. Both can be right. Both can be wrong. The cheapest mistake is to default to whichever one matches your priors on a given Tuesday, which is what most cable-news coverage does on both sides of this question.
What this means for next time
When you see a Polymarket-vs-poll headline this cycle, before you reach for a take, write down four things: the cumulative volume on the market, the field dates of the poll, the resolution criterion of the contract, and whether they describe the same population. If three or four of those line up, the gap is real and worth covering. If two or more don’t, the gap is mostly an artefact of the instruments. The discipline of writing those four down first is the difference between citing a divergence and explaining one.
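The four-item discipline above reduces to a counting rule. A minimal sketch (the “three or four line up” threshold is taken directly from the paragraph above; the check names are mine, not DissMarket terminology):

```python
def grade_divergence(volume_lines_up: bool,
                     field_dates_line_up: bool,
                     criterion_lines_up: bool,
                     population_lines_up: bool) -> str:
    """Apply the four-item checklist: three or four checks passing
    means the gap is real and worth covering; otherwise it is mostly
    an artefact of the instruments."""
    passed = sum([volume_lines_up, field_dates_line_up,
                  criterion_lines_up, population_lines_up])
    return ("real gap, worth covering" if passed >= 3
            else "mostly an instrument artefact")

print(grade_divergence(True, False, True, True))
# real gap, worth covering
print(grade_divergence(True, False, True, False))
# mostly an instrument artefact
```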
Notes and sources
- Day 1 alignment: Day 1: Money said Newsom 27%. The room said 28%. (DissMarket, 2026-04-29)
- Day 3 divergence: Newsom is priced 3× Harris. Voters say it’s a coin flip. (DissMarket, 2026-05-07)
- Day 4 base-rate read: Polymarket has House Dems at 84c. That’s rich. (DissMarket, 2026-05-09)
- How we collect and label data: DissMarket methodology
- Running archive: 2026 launch recap