HobbyQuantHQ

How every score on this site is computed.

Open methodology is the whole product. If we publish a number, we publish the formula. If we don't have the data, we don't pretend.

Last refresh: May 13, 2026 · Catalog: 24,001 pops

Rarity Index — 0 to 100

Deterministic score derived from three real fields on every pop. Same inputs always produce the same output. No randomness, no external data, no time decay.

  • vaulted: +30
  • exclusive (convention): +35 (SDCC, NYCC, ECCC, FunKon, WonderCon, C2E2, D23…)
  • exclusive (shared retailer): +20 (Target, Walmart, Hot Topic, GameStop, Amazon…)
  • exclusive (other): +10
  • variant (prototype): +40
  • variant (chase): +30
  • variant (glow / holographic): +20
  • variant (flocked / metallic / diamond): +15
  • variant (blacklight / glow): +12
  • variant (transparent / chrome / gold): +10
  • variant (silver / pearl): +8
  • variant (sepia / tie-dye / scented): +5

score = clamp(sum, 0, 100)
Distribution across the live 24,001-pop catalog:
  • 70-100 (Grail tier): 2
  • 50-69 (Highly rare): 187
  • 30-49 (Above average): 2,344
  • 10-29 (Some scarcity): 4,805
  • 1-9 (Light signal): 61

Honest limitations: Our source dataset has incomplete vault flags. Only 8 pops are currently marked vaulted, which is far below the true number. As the data layer improves, more pops will rise into the higher bands. The formula stays fixed; the inputs sharpen.

Category Heat — 0 to 100

Heat measures how much weight a category carries in the catalog. Three real signals, all from the pops table:

size_score = log(1 + total) / log(1 + max_total) × 40
exclusive_score = exclusive_ratio × 30
vault_score = vault_ratio × 30
heat = clamp(size_score + exclusive_score + vault_score, 0, 100)
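The three-signal formula above translates directly to code. This is a minimal sketch; the parameter names mirror the formula, and the ratios are assumed to be fractions in [0, 1]:

```python
import math

def category_heat(total: int, max_total: int,
                  exclusive_ratio: float, vault_ratio: float) -> float:
    """Category Heat, 0-100, from catalog size plus exclusive/vault ratios."""
    # log1p compresses size so giant categories don't drown small ones
    size_score = math.log1p(total) / math.log1p(max_total) * 40
    exclusive_score = exclusive_ratio * 30
    vault_score = vault_ratio * 30
    return max(0.0, min(100.0, size_score + exclusive_score + vault_score))
```

The log makes the size component sub-linear: if the largest category holds 3,000 pops, a 100-pop category still earns about 23 of the 40 size points rather than the ~1.3 a linear scale would give it.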

Why log? So a 100-pop category isn't crushed by a 3,000-pop one, but the giant categories still register as bigger. Why size + exclusive + vault? Because those are the only three real demand proxies we have until sales data arrives.

Honest limitations: The vault component is suppressed right now because vault flags are sparse in the source dataset. Heat scores currently cluster between 30 and 50. When the vault data improves, the spread widens.

Trend Score — 0 to 100

A momentum proxy combining the two above with a recency component. Until sales data arrives this is the closest we get to "is this pop on a run."

0.40 × rarity_index
+ 0.40 × category_heat
+ 0.20 × recency
recency: current year → 100, then −10/yr, floored at 0. Unknown year → 50.

Honest limitations: In the current dataset, no pop carries a confirmed release year, so recency contributes a neutral 50 across the board. Trend Score still works, but the recency lever sits idle.

Card Rarity — 0 to 100 (DBZ vertical)

Trading cards are different from Funko Pops in one important way: rarity is printed on the card. We don't have to infer it. So the card formula reads the tier and adds vintage, foil, and ban-list bonuses on top.

Common (C) +0
Uncommon (U / UC) +5
Rare (R) +20
Leader (L) +30
Promo +30
Super Rare (SR) +35
Tournament Promo (TPR) +35
Special Rare (SPR) +45
Ultra Rare (UR) +45
Super Special Rare (SSR) +50
Secret Rare (SCR) +55
Champion Card +70
+ vintage premium (Score 1999-2003: +15, Panini 2014-2017: +10, Bandai: +0)
+ foil / alt-art +10
+ banned (current ban list) +10
+ limited (restricted list) +5
score = clamp(sum, 0, 100)
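Since the tier is printed on the card, the card formula is a straight lookup plus bonuses. A minimal sketch, with tier abbreviations and era keys chosen for illustration rather than taken from the production schema:

```python
# Tier and bonus values from the published table; key names are illustrative.
TIER_BONUS = {
    "C": 0, "U": 5, "UC": 5, "R": 20, "L": 30, "Promo": 30,
    "SR": 35, "TPR": 35, "SPR": 45, "UR": 45, "SSR": 50,
    "SCR": 55, "Champion": 70,
}
VINTAGE_BONUS = {"score_1999_2003": 15, "panini_2014_2017": 10, "bandai": 0}

def card_rarity(tier: str, era: str, foil_or_alt_art: bool = False,
                banned: bool = False, limited: bool = False) -> int:
    """Card Rarity, 0-100: printed tier + vintage + foil + ban-list bonuses."""
    score = TIER_BONUS.get(tier, 0) + VINTAGE_BONUS.get(era, 0)
    if foil_or_alt_art:
        score += 10
    if banned:
        score += 10
    if limited:
        score += 5
    return max(0, min(100, score))
```

A banned foil Secret Rare from the Score era scores 55 + 15 + 10 + 10 = 90, which is the kind of card that populates the Grail tier below.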

Distribution across the live 6,408-card DBZ catalog (108 sets):

  • 70-100 (Grail tier): 14
  • 50-69 (Highly rare): 100
  • 30-49 (Above average): 1,809
  • 10-29 (Some scarcity): 820
  • 1-9 (Light signal): 1,075

Honest limitations: The DBZ catalog is a hand-curated v1 of 6,408 representative cards, not the full catalog. Full-set ingestion from Bandai TCG+ is the next step. The scoring formula will stay; the input set widens.

Prices — sourced, dated, labeled

When a card or pop has a market price displayed, here's exactly where it came from and what it means.

  • Source: TCGplayer (via the public TCGCSV mirror)
  • Captured: daily snapshots into our price_snapshots table
  • Sub-types: Normal / Foil priced separately when both exist
  • Fields shown: market, low, high, plus 30-day min / avg / max
  • Sparkline window: 30 days (configurable per request)
  • Change %: (last_market − first_market) / first_market × 100

What "market price" actually is. TCGplayer's market price is a weighted average of recent listing prices, not a record of final sales. It moves with current asking prices — quickly when supply or demand shifts, slowly when the market is quiet. For most use cases ("is this card going up?", "how much do I list mine for?") it's the right signal.

Honest limitations: listing prices can lag — or front-run — actual completed sales, especially during hype spikes. eBay Marketplace Insights (final sale prices) is a separate data feed we're working to add as a complement. Until then we deliberately call this "market price", not "sold price."

What we do not publish yet

The following metrics are easy to fake and easy to spot. We will not ship them until the underlying data is real.

  • Final sale prices (sold listings)

    TCGplayer gives us listing-market prices — what people are asking right now. eBay Marketplace Insights gives final sale prices and is on our partner queue. Until then we label what we show as "market price," not "sold price."

  • Value projection ($1m / $3m / $6m)

    Goes live once we have ~90 days of price history per product. We already capture daily snapshots; the historical archive replay (Feb 2024 → today) is queued to backfill 15 months in one batch.

  • Undervalued / overvalued alerts at scale

    The analytics engine is built (see backend/analytics/price_signals.py). It activates per product once that product has a 30-day window of snapshots. Activates progressively as the catalog price coverage grows.

  • Portfolio P&L

    Needs per-user collection tracking + user-submitted cost basis. Not a priority until the marketplace half of the product is live.

Data sources

  • Catalog seed: kennymkchan/funko-pop-data on GitHub — a community-maintained dataset of 24,001 pops. Cleaned and enriched during ingestion.
  • Enrichment: regex-based extraction of variant (Chase, Glow, Prototype…), exclusive (SDCC, Target, Hot Topic…), and category from product names and taxonomy tags.
  • Score computation: the formulas above are committed to backend/analytics/ and re-runnable via scripts/backfill_rarity.py and scripts/cache_category_heat.py.