How we select our sources, how we build consensus from them, and how we measure our results. Published because a premium product earns its price through verifiable methodology — not claims.
Most fantasy draft tools compete on volume: more data, more projections, more expert voices, more features. The implicit promise is that if you consume enough information, the right answer will emerge.
We believe the opposite. The right answer doesn't come from consuming more — it comes from synthesizing better.
This page explains how we select our sources, how we build consensus from them, and how we measure our results. We publish this not because transparency is fashionable, but because a premium product earns its price through verifiable methodology — not claims.
Every ranking in the Gambitry Draft Kit is built from four expert sources, selected to represent distinct methodological approaches to player evaluation. We do not chase volume — we curate for diversity of perspective. Each source earns its place because it brings something the others don't.
The "wisdom of crowds" baseline. This source aggregates rankings from dozens of individual analysts into a single consensus view, smoothing out individual bias and identifying where the broader expert community has converged. It serves as our anchor — the gravitational center around which our other sources orbit.
We draw from two major fantasy hosting platforms, each with distinct ranking methodologies shaped by their proprietary data — including ADP trends from millions of real drafts, platform-specific scoring models, and editorial teams with institutional depth. These sources capture what the market is actually doing, not just what analysts are predicting.
A legacy sports journalism source with deep editorial staff and a methodology grounded in traditional scouting and reporting, not algorithm-driven projections. This source provides the independent editorial counterweight to platform-driven and crowd-sourced rankings — a perspective shaped by journalism rather than data science.
Research on expert consensus demonstrates diminishing marginal accuracy beyond a modest number of methodologically diverse sources. Four sources drawn from three distinct methodological traditions — aggregation, platform intelligence, and editorial journalism — provide genuine diversity of signal without the noise that comes from piling on dozens of undifferentiated voices.
Our consensus-building process is designed around one principle: let the sources speak equally, and let agreement reveal signal.
We pull full positional rankings from each source at the same point in the draft cycle. Because sources use different ranking scales, scoring assumptions, and player pools, we normalize each ranking set to a common positional framework before any synthesis begins. This ensures we're comparing like with like — not mixing a source's 12-team PPR rankings with another's 10-team standard rankings.
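To make the normalization step concrete, here is a minimal sketch in Python. It assumes each source's positional ranking arrives as a simple player-to-rank mapping; the function, scale, and player names are illustrative, not our production pipeline:

```python
def normalize_ranks(rankings):
    """Map one source's positional ranks (1 = best) onto a common
    0-100 percentile scale, so sources that rank different numbers
    of players become directly comparable."""
    pool_size = len(rankings)
    span = max(pool_size - 1, 1)  # guard against a one-player pool
    return {
        player: 100 * (pool_size - rank) / span
        for player, rank in rankings.items()
    }

# Two hypothetical sources with different player-pool depths.
source_a = normalize_ranks({"RB One": 1, "RB Two": 2, "RB Three": 3})
source_b = normalize_ranks({"RB One": 1, "RB Three": 2, "RB Two": 3, "RB Four": 4})
# Both now live on the same 0-100 scale, regardless of original depth.
```

Once every source is on a shared scale, "3rd of 40 players" and "3rd of 60 players" stop pretending to mean the same thing.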
Each source receives equal weight in our consensus model. This is a deliberate choice, not a default. We considered weighting sources by historical accuracy, recency, or methodology type — but equal weighting enforces a critical discipline: it prevents us from over-indexing on any single analytical lens, and it ensures that our consensus captures genuine diversity of perspective.
The result is a consensus ranking for every relevant player at every position, for each scoring format we support. Where sources agree, we have high-confidence rankings. Where they diverge, we have useful information about uncertainty — which we surface in the Draft Kit through our value indicators.
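A sketch of how equal weighting and the divergence signal fit together, using toy data and assuming each source's scores have already been normalized to a common 0-100 scale (names and numbers are illustrative):

```python
from statistics import mean, pstdev

def build_consensus(sources):
    """Equal-weight consensus: average each player's normalized score
    across all sources. The spread (standard deviation) is kept as an
    uncertainty signal, high wherever sources diverge."""
    players = set().union(*sources)
    consensus = {}
    for player in players:
        scores = [s[player] for s in sources if player in s]
        consensus[player] = (mean(scores), pstdev(scores))
    return consensus

# Four hypothetical sources scoring two players on a 0-100 scale.
sources = [
    {"RB One": 95, "WR One": 80},
    {"RB One": 93, "WR One": 60},
    {"RB One": 96, "WR One": 85},
    {"RB One": 94, "WR One": 55},
]
consensus = build_consensus(sources)
# RB One: tight agreement (high confidence).
# WR One: wide spread (genuine uncertainty worth surfacing).
```

Equal weighting keeps the model honest: no source can dominate the average, so a wide spread always reflects real disagreement among the four perspectives rather than one loud voice.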
Scoring format changes player value. In football, a running back's rank in PPR leagues can differ materially from their rank in standard scoring. We build separate consensus rankings for each supported format by pulling format-specific data from each source where available, and adjusting where a source provides only a single ranking set. The Draft Kit you receive is calibrated to your league's scoring system — not a one-size-fits-all list.
Our methodology is defined as much by what we exclude as by what we include. Three intentional constraints shape the product:
The Draft Kit is a consensus instrument, not a personal opinion column. We do not override the consensus model with our own player evaluations. Our editorial judgment is exercised in source selection and methodology design — not in moving a player up or down because we have a hunch. You're getting a disciplined synthesis of expert opinion, not one more person's opinion dressed up as consensus.
Adding a fifth, sixth, or twentieth source would not meaningfully improve our rankings. It would add noise, dilute the methodological diversity we've curated, and make it harder to identify where real signal lives. Limiting ourselves to four sources from three distinct traditions is a design choice, not a limitation.
The Draft Kit is a pre-draft strategic instrument, not a live news feed. We publish updated editions at key points during draft season — pre-season, post-major-events, and a final pre-draft edition — but we do not attempt to react to every beat reporter's tweet. For real-time news, there are excellent tools that serve that purpose. Our product serves a different one: giving you a considered, stable strategic foundation for your draft.
We build consensus rankings for the most widely played scoring formats. Each format receives its own fully calibrated ranking set — not a generic list with a footnote.
Three distinct views of player value, each reflecting how your scoring format changes what matters.
Same methodology, same craft. Baseball kit launches March 2026.
If your league uses a format we support, the Draft Kit you receive reflects that format's specific value calculations from the ground up.
We believe a premium product must earn its premium with results — not just aesthetics. Here is our commitment to accountability:
After each season concludes, we will publish a transparent accuracy analysis: how our consensus rankings performed against actual player outcomes, where the methodology succeeded, and where it fell short. We will report the numbers honestly, including the misses.
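One standard way to score rankings against outcomes is Spearman rank correlation, which rewards getting the order right rather than nailing exact point totals. A minimal sketch (the helper and sample data are illustrative, assuming the same set of players with untied integer ranks):

```python
def spearman(projected, actual):
    """Spearman rank correlation between pre-season and end-of-season
    ranks: 1.0 is a perfect ordering, 0 is no relationship, and -1.0
    is perfectly backwards. Assumes identical players, no tied ranks."""
    n = len(projected)
    d_squared = sum((projected[p] - actual[p]) ** 2 for p in projected)
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Toy example: consensus got two of three players in the right order.
projected = {"RB One": 1, "RB Two": 2, "RB Three": 3}
actual = {"RB One": 1, "RB Two": 3, "RB Three": 2}
score = spearman(projected, actual)  # 0.5
```

A single season's coefficient is noisy for exactly the reasons described below; the value is in tracking it across seasons.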
One season's results are a data point. Multiple seasons are a track record. We are building for the long term: each year's accuracy report adds to a cumulative record that either validates our methodology or tells us what needs to change. We will publish this record openly.
If post-season analysis reveals that a source consistently underperforms or that our consensus method can be improved, we will adjust. Our methodology is a living system, not a fixed formula. We will document significant changes transparently so returning customers understand how the product evolves.
We will never claim to be "the most accurate rankings in fantasy sports." Accuracy in fantasy is noisy — injuries, breakouts, and randomness make single-season accuracy a blunt instrument. What we will claim, and prove over time, is that our curated consensus methodology produces reliably strong outcomes season after season. Consistency, not perfection, is the standard.
The Draft Kit is not a data dump. It is not an aggregation of everything available. It is an intentional synthesis built from carefully selected sources, disciplined methodology, and a belief that your time is too valuable for noise.
We built this product because we believe the most intentional preparation is the smartest preparation. Every decision in our methodology — from the sources we select to the weights we assign and the formats we support — reflects that conviction. The result is a draft instrument you can trust: not because we told you to, but because you know how it's made.
The 2026 Football Draft Kit. Consensus rankings from four expert sources, calibrated to your scoring format, delivered in Excel and Google Sheets.
See the 2026 Draft Kit →