Which Party is on Track to Win the House in 2026?
The Smoke Filled Room's polling average for the 2026 Generic Congressional Ballot
The Latest
The polling drought is over, with J.L. Partners (D+7), RMG Research (tied), Quantus Insights (tied), and Cygnal (D+0.3) all releasing new polls. These are clear signs of improvement for Republicans, as the Democrats’ lead shrinks to 2.5%. - Saturday, June 14 at 8:00pm.
Seat Projection
Okay, polling numbers are one thing, but how many seats is each party on track to win? It’s far too early to build a realistic model, but applying a uniform swing based on the current average yields the following seat counts.
Democrats start with a structural advantage: a uniform swing from the 2024 results implies they could win the House even while losing the popular vote by 1%.
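A uniform-swing projection is simple to sketch: shift every district’s previous margin by the same national swing and count who crosses zero. The district margins below are invented placeholders for illustration, not actual 2024 results:

```python
# Uniform-swing seat projection (illustrative sketch).
# Margins are Democratic minus Republican, in percentage points.
# These values are made-up placeholders, not real 2024 district results.
district_margins_2024 = [-12.0, -4.5, -0.8, 0.3, 2.1, 6.7, 15.0]

def project_seats(margins, swing):
    """Apply the same national swing to every district and count D/R wins."""
    dem = sum(1 for m in margins if m + swing > 0)
    return dem, len(margins) - dem

dem_seats, rep_seats = project_seats(district_margins_2024, swing=2.5)
```

With a D+2.5 swing, any district the Democrats lost by less than 2.5 points in the toy data flips their way, which is the whole mechanic behind the projection above.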
Average Methodology
In a Nutshell
All publicly available polls are weighted according to their recency, quality, sample size, and population type, and are adjusted for their house effects. The core of this methodology is inspired by polling averages done by organizations such as (the now defunct) FiveThirtyEight, RacetotheWH, and others. The aim of this average is to remove as much bias as possible and deliver an objective insight into what the polls can tell us about the state of the 2026 Congressional elections.
In the average visualization, the shaded area represents the mean absolute error of generic ballot polling going back to 1996. This is not the full range of potential error, merely the average deviation that could be expected based on historical accuracy.
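As a sketch of how such a band is constructed (the historical error values here are invented placeholders, not the actual 1996–2024 figures):

```python
# Mean absolute error of past generic-ballot polling averages.
# The error values below are placeholders, not the real historical figures.
historical_errors = [2.1, -1.4, 0.9, -3.2, 1.8]   # average minus result, in points

mae = sum(abs(e) for e in historical_errors) / len(historical_errors)

current_avg = 2.5                                  # current Democratic lead, in points
band = (current_avg - mae, current_avg + mae)      # shaded region around the average
```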
How Polls are Weighted
All polls included in the average are weighted using the following formula:
∆tₚ represents the weight given to the poll based on its recency. More recent polls are given higher weight in the average. The equation for calculating this recency weight changes based on the phase of the campaign, represented by the subscript “p.”
The time weight for the rest phase of campaigning is defined by the following equation:
\(\Delta t_{r} = 1 - \frac{d_{m}}{30}\)
dₘ represents the time elapsed since the median field date of the given poll. 30 days is the “time window” for this phase, meaning only polls fielded within the last 30 days are included in the average.
The rest phase of the campaign cycle runs from the day after the 2024 election until February 1st of 2026.
Starting on February 1st, the 30-day time window shrinks at a constant rate until it stops at 14 days on March 1st.
The period after this transition is called the sluggish phase and the time weight is then defined by this equation:
\(\Delta t_{s} = 1 - \frac{d_{m}}{14}\)
This weight is maintained until Labor Day.
On Labor Day, the official campaign phase begins, and the time weight is defined by this equation:
\(\Delta t_{c} = 1 - \frac{d_{m}}{\frac{d_{e}}{10} + 7}\)
The variable dₑ represents the days remaining until the election. Dividing it by ten and adding 7 (the size of the time window on election day) gives the constantly shrinking time window for this final phase.
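The three phases above can be sketched as one function. The phase boundary dates come from the text; Labor Day (September 7) and Election Day (November 3, 2026) are assumed calendar dates:

```python
from datetime import date

# Phase-dependent time window and time weight, per the methodology above.
# Labor Day (Sept 7) and Election Day (Nov 3, 2026) are assumed dates.
def time_window(today):
    """Time-window size in days for a given date, by campaign phase."""
    feb1, mar1 = date(2026, 2, 1), date(2026, 3, 1)
    labor_day, election = date(2026, 9, 7), date(2026, 11, 3)
    if today < feb1:                          # rest phase: fixed 30-day window
        return 30.0
    if today < mar1:                          # linear shrink from 30 to 14 days
        frac = (today - feb1).days / (mar1 - feb1).days
        return 30.0 - 16.0 * frac
    if today < labor_day:                     # sluggish phase: fixed 14-day window
        return 14.0
    d_e = (election - today).days             # campaign phase: shrinking window
    return d_e / 10 + 7                       # equals 7 on election day itself

def time_weight(d_m, today):
    """d_m: days since the poll's median field date."""
    window = time_window(today)
    return max(0.0, 1.0 - d_m / window)       # polls outside the window get 0
```

Note how the window shrinks from roughly 13 days on Labor Day down to 7 on election day, matching the dₑ/10 + 7 formula.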
α represents the quality weight given to the pollster. Pollster quality is determined by the past performance and accuracy of pollsters. For this average I use the updated 2025 grading system developed by Nate Silver of the Silver Bulletin.
A weight of 1 is given to pollsters with a grade of A+, and each step down in grade is given .05 less weight. So grade A pollsters receive a .95 weighting, grade A- a .90 weighting, and so on.
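That ladder is easy to write down directly. The placement of slash grades (A/B just below A-, and so on) follows the methodological notes later in the article; treating each slash grade as a full .05 step is my assumption:

```python
# Grade-to-weight ladder: A+ maps to 1.0, each step down is worth .05 less.
# Slash grades slot just below the corresponding minus grade, per the notes;
# counting them as full .05 steps is an assumption, not stated in the article.
GRADE_LADDER = ["A+", "A", "A-", "A/B", "B+", "B", "B-", "B/C",
                "C+", "C", "C-", "C/D", "D+", "D", "D-"]

def quality_weight(grade):
    if grade not in GRADE_LADDER:     # ungraded pollsters are treated as B+
        grade = "B+"
    return 1.0 - 0.05 * GRADE_LADDER.index(grade)
```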
s represents the sample size weight of the poll. This weight is determined by dividing the sample size of the poll by 2,000. Sample sizes that exceed 2,000 respondents are automatically given a weight of 1.
This is a purely editorial decision based on the statistical principle that the marginal value of a larger sample diminishes as the sample grows. A poll with a sample size of 2,000 is much more valuable than one of 500, but a sample of 3,000 is barely more valuable than one of 2,000. For practical reasons the cap was set at 2,000, because very few polls ever exceed this sample size.
ρ represents the population weight given to the poll. This weight is determined by whether the sample population falls into one of three categories: likely voters, registered voters, or all adults. Likely voter polls are given a weight of 1, because that is the universe of voters whose responses correlate most highly with the actual results of the election. Registered voter polls are given a weight of .9, and all-adult polls a weight of .5.
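The sample-size and population weights just described are simple enough to state directly in code:

```python
def sample_weight(n):
    """Sample-size weight: n / 2000, capped at 1.0 for samples above 2,000."""
    return min(n / 2000, 1.0)

# Population weights: likely voters, registered voters, all adults.
POPULATION_WEIGHT = {"lv": 1.0, "rv": 0.9, "a": 0.5}
```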
rᵢ represents the initial, unadjusted polling result. Polls are sourced from a public database created by Mary Radcliffe (formerly a 538 researcher) and others.
h represents the individual “house effect” for the pollster. A house effect is the percentage amount by which the pollster consistently tends to overrepresent one party or the other.
This is calculated by finding the average difference between the pollster’s results and the average of other polls for various races. The specific house effects used in this average were calculated by Eli McKown-Dawson and Nate Silver.
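Putting the pieces together, one plausible reading is that the four weights multiply into a single poll weight, and the house effect is subtracted from the raw result before averaging. The article does not spell out the combining formula at this point, so both of those choices are assumptions:

```python
# Hypothetical combination of the weights and the house-effect adjustment.
# Multiplying the four weights and subtracting h from r_i are assumptions,
# not the article's stated formula.
def poll_weight(time_w, quality_w, sample_w, population_w):
    return time_w * quality_w * sample_w * population_w

def polling_average(polls):
    """polls: list of (weight, raw_result, house_effect) tuples."""
    total = sum(w for w, _, _ in polls)
    return sum(w * (r - h) for w, r, h in polls) / total
```

A pollster with a D+1 house effect, for example, would have 1 point shifted off its reported Democratic margin before it enters the average.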
Additional Methodological Notes
Several clarifications are needed for cases where editorial judgment calls had to be made.
Pollsters with limited history (such as Yale Youth Poll and Quantus Insights) cannot be adequately graded or given accurate house effects. For this reason, all “ungraded” pollsters are weighted as if they were B+ pollsters, and they are given neutral house effects.
If a pollster has conducted more than one poll within the current time window, only their most recent poll is included.
If a pollster releases multiple polls at one time, preference is given to likely voter polls over registered voter polls, and to registered voter polls over all-adult polls.
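The two selection rules above can be sketched together, assuming each poll record carries a median field date and a population type (the field names here are hypothetical):

```python
from datetime import date

# Pick one poll per pollster: most recent median field date wins; ties are
# broken by population preference (likely voters over registered over adults).
POP_RANK = {"lv": 0, "rv": 1, "a": 2}

def select_poll(polls):
    """polls: dicts with (hypothetical) 'median_date' and 'population' keys."""
    return min(polls, key=lambda p: (-p["median_date"].toordinal(),
                                     POP_RANK[p["population"]]))
```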
Grades such as A/B and B/C are placed one tier below A- and B- respectively. The same logic applies to other such intermediate grades.