Insights · Public opinion · By Mira Espinoza · Polling desk

What 402 X voters got right about the 2028 Dem primary

A 402-respondent X poll landed within one or two points of a $1.1B Polymarket book on every named candidate. That is either a finding or a coincidence, and the difference between the two is a question about who was in the room — not a question about the headline numbers.

The cleanest way into this is to take the alignment seriously without taking it at face value. On April 29, our 24-hour X poll on the 2028 Democratic nominee returned Newsom 28%, AOC 10%, Harris 9%, and a residual “someone else” bucket at 53%. Polymarket’s prices on the same field on the same day were Newsom 27%, AOC 8.6%, Harris 7.5%, and an implied residual near 57%. Two instruments, four numbers each, every named candidate inside a couple of points. The temptation is to read that as a finding about the 2028 primary. The more useful reading is that it is a finding about two specific rooms — the room that voted in our poll and the room that priced the book — and that those two rooms have more demographic overlap than either has with the eventual Democratic primary electorate. The piece below is the demographic post-mortem of the launch market, and it is also a piece about what we can and cannot do with the signal we collected.
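For readers who want to check the arithmetic, the residual buckets fall straight out of the published toplines. The numbers below are the ones quoted above; nothing else is assumed:

```python
# Day 1 toplines, in percent, as published above.
poll = {"Newsom": 28.0, "AOC": 10.0, "Harris": 9.0}
market = {"Newsom": 27.0, "AOC": 8.6, "Harris": 7.5}

# "Someone else" is just what's left after the named candidates.
poll_residual = 100 - sum(poll.values())      # 53.0
market_residual = 100 - sum(market.values())  # ~56.9 -- the "near 57"

# Per-candidate gaps between the two instruments.
deltas = {name: round(poll[name] - market[name], 1) for name in poll}
# Every named candidate lands within a couple of points:
# {'Newsom': 1.0, 'AOC': 1.4, 'Harris': 1.5}
```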

Who was actually in the room

The honest answer is: we don’t know, in any precise way, and that is the first sentence of any methodology footer we’ll write for this kind of poll. X polls do not capture age, location, party identification, vote propensity, or anything else a reputable survey house would weight on. What we have is a vote count and an impression count, and from those two numbers we can describe the shape of the room only by inference. We had 99,000 impressions and 402 votes, which is a 0.4% participation rate — low enough that the people who voted were aggressively self-selected from the people who saw the question. The selection effect on a launch-week X poll is roughly: people who already follow the @DissMarket account or one of the accounts that retweeted us, who were on X during the 24-hour field window, who care about 2028 primary speculation in May 2026, and who treat clicking a poll option as a small social gesture rather than a chore. That is not a demographic profile, but it is a shape.


The shape is recognisable from the broader X-political-data subculture. DissMarket’s followers in the first week skewed political-data-Twitter — younger than the median voter, college-educated more often than not, news-engaged at a level several standard deviations above the public, mostly US-based with a coastal lean, and predominantly Democratic-leaning independents who follow the topic for the same reason they follow any other discrete numerical contest. That is a description, not a measurement, and I would not put weights on it. But it is the description that’s consistent with what we observed: a 24-hour poll on a 2028 primary question that returned a four-option distribution with no obvious brigading and a residual bucket that is internally coherent. People who don’t care about 2028 in May 2026 don’t click on this kind of poll. People who do are not a random sample of the country.

The other relevant room is Polymarket’s. The 2028 Democratic nominee book has cleared roughly $1.1B in cumulative volume across the event, which is enormous by prediction-market standards, but the participating population is still small and still self-selected. The bettors who price the book are stablecoin holders trading on Polygon, which screens hard for crypto fluency, and that screen produces a population that is also young, also technically literate, also overrepresented on the coasts and underrepresented everywhere else. Both rooms — the X poll room and the Polymarket bettor room — share more demographically with each other than either does with the typical 2028 Democratic primary voter, who will be older, less online, and (in most states) less ideologically engaged with the named field this far out.

Why the alignment is more boring than it looks — and more interesting than that

The cleanest finding from Day 1, allowing myself the one wry observation this piece is going to get, isn’t that polls and markets agree. It’s that “X polls” and “Polymarket bettors” are roughly the same room with two different microphones. Two pools of politically engaged, online, mostly-young respondents are reading the same news cycle in real time, applying the same filters to the same candidate names, and returning numbers within the noise floor of each other. That should not be surprising. If anything, surprising would have been a structural disagreement, and the absence of structural disagreement on a launch poll is a clean negative result.

It is also more interesting than that, in a narrow sense. Both pools are tracking the news cycle on the same beat, and the alignment confirms that — for the politically engaged Twitter-adjacent public — the named field is being read the same way by people willing to vote a poll and people willing to risk capital. If the two rooms had disagreed sharply on Newsom or Harris, the disagreement would have meant something specific: that the question wording or the resolution criterion was producing different cognitive frames, or that one room had information the other didn’t. The alignment rules that out for the launch day. It tells us the two instruments are picking up the same signal in the same room. That’s a useful baseline to have before we go looking for the divergences that the rest of the cycle will produce. Marcus has the longer version of this on what each instrument is structurally measuring; this piece is about who is doing the measuring.

The residual gap, and what it might mean

The only number in the table that moved more than the X poll’s noise floor was the residual. Money said “someone else” was a 57% probability. The politically engaged Twitter-adjacent public said 53%. Four points isn’t much — it’s right at the edge of where we can say anything at all, given that an n=402 unweighted poll carries a 95% sampling band of roughly ±5 points at these shares, which swallows most of the gap — but it’s the only entry on the table that isn’t obviously a rounding error. There are two plausible reads on what that four-point spread is doing.
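A quick sketch of that sampling band, under the generous assumption that the 402 votes were a simple random sample — which a self-selected X poll is not, so treat this as a floor on the uncertainty rather than a real margin of error:

```python
import math

def moe_points(p, n, z=1.96):
    """95% margin of error, in percentage points, for a simple-random-sample share."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# The 53% residual bucket on n=402:
print(round(moe_points(0.53, 402), 1))  # ~4.9 points -- wider than the 4-point gap
```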

Read one: the politically engaged young online cohort is slightly more confident in the named field than the bettor pool is. They overweight name recognition, they spend more time inside the daily news cycle, and they treat the three names that are currently legible — Newsom, AOC, Harris — as a stronger account of the eventual field than the bettors do. The voters are saying, in effect, that the answer is probably already on the ballot. Read two: the bettor pool has slightly more field-optionality priced in. They are holding marginally more capital against the chance that someone not currently on the radar — a dark-horse governor, a senator who hasn’t declared, a candidate whose 2026 results reshape the field — emerges by the resolution window. Both reads are defensible. Neither is provable from the data we have. The honest summary of the four-point gap is that it is the kind of residual you should hold lightly and watch over time, rather than the kind of residual you should mine for a thesis on Day 1.

What we’d need to do better

A verified panel with demographic capture would let us answer questions the X poll cannot. The Phase 2 plan in our methodology is exactly that: a recruited panel where we capture age, region, party identification, and recalled 2024 vote, weight to a target frame, and report subsample breaks. With that infrastructure, the 28% Newsom number from Day 1 stops being a single point and starts being five or six: Newsom among under-30s versus Newsom among over-50s, Newsom in the Northeast versus Newsom in the Midwest, Newsom among self-identified Democrats versus Newsom among independents who plan to vote in the primary. The decomposition is the interesting analytical object. The topline is mostly the headline you put on top of the decomposition.
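A minimal sketch of what that weighting step looks like, with a hypothetical age frame. The categories, targets, and toy responses below are illustrative — they are not DissMarket’s actual panel or frame:

```python
from collections import Counter

# Toy panel: four respondents with demographic capture (hypothetical data).
respondents = [
    {"age": "18-29", "pick": "Newsom"},
    {"age": "18-29", "pick": "AOC"},
    {"age": "18-29", "pick": "Newsom"},
    {"age": "50+",   "pick": "Harris"},
]
# Hypothetical target frame: the age mix of the electorate we want to match.
target = {"18-29": 0.30, "50+": 0.70}

# Cell weights: target share divided by observed share.
counts = Counter(r["age"] for r in respondents)
n = len(respondents)
weights = {cell: target[cell] / (counts[cell] / n) for cell in target}

def weighted_share(pick):
    """Weighted topline for one candidate."""
    num = sum(weights[r["age"]] for r in respondents if r["pick"] == pick)
    den = sum(weights[r["age"]] for r in respondents)
    return num / den
```

Unweighted, Newsom leads this toy panel 50/25/25; reweighting to the older frame flips the topline toward Harris. That flip is the whole point of the decomposition: the topline moves with the frame, and only the subsample breaks tell you why.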

The methodological benchmark here is the named pollsters who already do this work — Pew on adults, YouGov and Echelon Insights on likely voters, the slow but stubbornly calibrated houses we read because their weighting choices are documented and their track record is gradable. Echelon’s April 17–20 likely-voter poll (n=1,012, MoE ±3.5) put Harris at 22% and Newsom at 21%. Our X poll, fielded a week and a half later, had Newsom at 28% and Harris at 9%. The Newsom delta is inside what you’d expect from two different rooms reading the same week of news; the Harris delta is not, and that is the kind of subsample-level question we can’t resolve from an X poll alone. Either we’re underestimating Harris because our room skews away from her natural constituency, or the likely-voter cut is overestimating her because name recognition is doing work that won’t hold up under primary intent. We can’t say which without demographics, and that uncertainty is the structural limit of the unweighted launch poll.
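One way to see the asymmetry the paragraph above is pointing at: a two-proportion z-score, computed as if both samples were simple random draws from the same population. Neither is, so this is not a significance test — it is only a way to scale the two gaps against sampling noise:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Unpooled two-proportion z-score: the gap in units of combined sampling error."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# X poll (n=402) vs Echelon likely-voter poll (n=1,012):
z_newsom = two_prop_z(0.28, 402, 0.21, 1012)  # ~2.7
z_harris = two_prop_z(0.09, 402, 0.22, 1012)  # ~-6.7
# Relative to sampling noise, the Harris gap is more than twice the Newsom
# gap -- which is the "inside / not inside" distinction in the text above.
```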

How to read Day 1, two weeks later

Two weeks of additional data has not aged Day 1’s alignment finding badly. On Day 3 we ran the same Polymarket book against the Echelon likely-voter poll and got a much larger Harris gap (8% on the market, 22% on Echelon, a 14-point spread); on Newsom, the three readings — Polymarket 25–27%, Echelon 21%, our X poll 28% — sit inside the bands of error you’d expect from three instruments measuring three different populations on three different field windows. The Newsom number is the one that has held. The Harris number is the one that is doing the most work in the divergence story, and it is also the number that most depends on which room you ask. Day-by-day comparison is finally interesting now that we have four markets to look back on, and the cleanest pattern across them is that the residual gaps move more than the named-candidate gaps do — which is what you’d expect if both rooms agreed on who’s legible and disagreed slightly on how much of the field hasn’t shown up yet.

None of this should make anyone read the 402-respondent number as a verified electorate snapshot. It is, at best, a high-engagement sentiment reading from a self-selected slice of the politically engaged public, taken on a single day with no demographic capture. What it is good for is being one of several readings on the same question, where the disagreements between readings are diagnostically more useful than any one reading by itself. That is also the bet of the launch recap and the model the rest of the cycle will run on: triangulate, narrate the residuals, name the rooms, and resist the impulse to crown a winner among instruments that aren’t measuring the same population.

What would change my mind

I’d update on this if either of two things happens. First, if our next X poll on the same field returns a Newsom number outside the 24–32% band while a named likely-voter poll fielded in the same week still has him near 21%, the alignment story breaks and the room-shape question gets sharper. Second, if Harris’s Polymarket price drifts up toward the named-poll readings (15c and rising) while our X poll continues to put her below 12%, then I’d start to believe our room is systematically under-counting her support and the Day 1 alignment was partly a Newsom-driven coincidence rather than a structural pattern. Either signal would be informative. Neither has shown up yet.

Methodology footer

DissMarket X poll fielded 2026-04-28 to 2026-04-29, 24-hour close, four options, n=402 votes, 99K impressions, 28 replies, 12 RTs, 29 likes. Unweighted. No demographic capture. Polymarket implied probabilities captured 2026-04-29, event total volume $1.11B. Echelon Insights numbers from publicly released topline, fielded April 17–20, n=1,012 likely Democratic primary voters, MoE ±3.5. Full data and the comparison table on Day 1; broader signal-vs-signal framing in the methodology piece. Selection effects discussed above are inferential, not measured.