| METRIC | CAT | DOG | OTHER |
| --- | --- | --- | --- |
| Performance % | 145% | 89% | 102% |
| Δ (Delta) Last Hour | +127 | +23 | +89 |
| Baseline Rate /hr | 42 | 38 | 15 |
See the About section for full details and formulas.
How it works: We track every pump.fun token launch in real-time. When the 24-hour launch rate exceeds the lifetime baseline by ≥10% (with at least 50 samples), we declare a 'pump.fun season' indicating elevated activity.
This detects periods of heightened pump.fun activity compared to historical norms.
(Please be patient. The statistics will become more accurate as the server collects more data.)
TL;DR: We track every new Solana meme token launched on pump.fun in real time, classify it as Cat 🐱, Dog 🐶, or Other, and declare "seasons" when one type significantly exceeds its historical baseline. Think of it as a real-time cultural trend detector for meme token launches.
Real-time WebSocket updates deliver stats every minute. Multi-timeframe analysis from 15 minutes to 1 month shows exactly when patterns are emerging. No refresh needed, no lag, just live data streaming to your dashboard ⚡
Powered by ML models running 24/7 at catvsdog.live — scanning, classifying, never sleeping.
Meme tokens often move in waves. One week cat-themed tokens dominate, the next it's dogs. Is it random noise or are there detectable patterns? When cat token launches spike 50% above baseline while dog launches drop, that's a measurable shift in what creators are building and traders are buying.
Whether these patterns are meaningful or just statistical noise is up to interpretation. What matters is that we can measure and visualize them in real time.
Token trends can shift rapidly — today's popular theme becomes tomorrow's forgotten trend. A single viral post can trigger a wave of similar launches 🌊. This dashboard tracks those shifts as they happen, showing you emerging patterns before they become obvious. Watch the data unfold and draw your own conclusions.
Each stat card displays the actual number of Cat and Dog tokens that have been detected in that time period. These are real counts, not estimates or projections. Behind the scenes, I also calculate expected counts based on historical rates, which I use to determine trends and seasonal shifts.
- actual.cat: actual number of cat tokens in the time window
- actual.dog: actual number of dog tokens in the time window
- expected.cat: expected cat count based on the historical baseline
- expected.dog: expected dog count based on the historical baseline
Each time window includes trend indicators (Trending Up 🚀, Trending Down 📉, or Stable 🔄) that show how Cat and Dog tokens are performing against their expected baselines.
ratio = actual / expected
if ratio ≥ 1.1: Trending Up 🚀
if ratio ≤ 0.9: Trending Down 📉
otherwise: Stable 🔄
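The threshold rule above can be sketched as a small function. This is a minimal illustration of the published rule; the function name and the zero-baseline fallback are assumptions, not taken from the project's code.

```python
# Illustrative sketch of the trend rule (thresholds from the docs: 1.1 / 0.9).
def trend_indicator(actual: int, expected: float) -> str:
    """Classify a time window by its actual/expected ratio."""
    if expected <= 0:
        return "Stable 🔄"  # assumption: no baseline yet → treat as stable
    ratio = actual / expected
    if ratio >= 1.1:
        return "Trending Up 🚀"
    if ratio <= 0.9:
        return "Trending Down 📉"
    return "Stable 🔄"

print(trend_indicator(40, 30))  # ratio ≈ 1.33 → Trending Up 🚀
```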
The Season Details panel shows three key performance metrics for each category.
Performance % indicates how each category is performing relative to its historical baseline, where 100% means exactly as expected. Values above 100% indicate above‑average activity, while values below 100% indicate below‑average activity.
Δ (Delta) Last Hour shows the difference between the actual count in the past hour and the expected count based on the baseline rate. Positive values mean more tokens than expected, negative values mean fewer.
Baseline Rate /hr represents the historical average tokens per hour for each category, calculated from all data collected since the system started.
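The three metrics fit together as below. A minimal sketch, assuming an hourly window; the function and field names are illustrative, not the dashboard's actual identifiers.

```python
# Sketch of the Season Details metrics: performance %, Δ last hour, baseline rate.
def season_details(actual_last_hour: int, baseline_rate_per_hr: float,
                   active_hours: float = 1.0) -> dict:
    expected = baseline_rate_per_hr * active_hours
    return {
        # 100% = exactly as expected; >100% = above-average activity
        "performance_pct": 100.0 * actual_last_hour / expected if expected else 0.0,
        # positive Δ = more tokens than the baseline predicts
        "delta_last_hour": actual_last_hour - expected,
        "baseline_rate_hr": baseline_rate_per_hr,
    }

# Example: 63 cat tokens in the last hour against a 42/hr baseline → 150%, Δ +21.
print(season_details(63, 42.0))
```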
The season is computed from the last complete 24 hours only. For each category k ∈ {cat, dog, other} we compare the actual count against its session-aware expectation using a quasi-Poisson standardisation (which accounts for over-dispersion):

- Window: [end − 24 h, end), where end is the start of the current 5-minute bucket.
- Expectation: μ_k = rate_k × activeHours (active hours are clipped to uptime).
- Delta: Δ_k = actual_k − μ_k.
- Dispersion: φ_k is estimated from the last 48 h via Pearson residuals.
- Score: z_k = Δ_k / √(φ_k · μ_k).
- The category with the highest z_k above 1.0 becomes the season; otherwise the season is neutral.
Intensity is the 24 h relative lift of the winning category, shown as a percentage bar:

intensity = clamp(Δ_win / μ_win, 0, 1)
Methodology follows standard count‑data practice (quasi‑Poisson variance scaling and over‑dispersion handling). Reference: Cameron, A.C. & Trivedi, P.K. (2013) Regression Analysis of Count Data, 2nd edn, Cambridge University Press.
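The steps above can be sketched end to end. This is an illustrative re-implementation under assumed data shapes (per-category dicts, hourly buckets for the dispersion estimate); names like `detect_season` and `estimate_dispersion` are hypothetical.

```python
import math

def estimate_dispersion(hourly_counts, hourly_means):
    """φ ≈ mean squared Pearson residual, (y − μ)² / μ, floored at 1."""
    resid_sq = [(y - m) ** 2 / m for y, m in zip(hourly_counts, hourly_means) if m > 0]
    return max(1.0, sum(resid_sq) / len(resid_sq)) if resid_sq else 1.0

def detect_season(actual, rates, active_hours, phi):
    """actual / rates / phi: dicts keyed by 'cat', 'dog', 'other'."""
    z = {}
    for k in actual:
        mu = rates[k] * active_hours          # μ_k = rate_k × activeHours
        delta = actual[k] - mu                # Δ_k
        z[k] = delta / math.sqrt(phi[k] * mu) if mu > 0 else 0.0
    winner = max(z, key=z.get)
    if z[winner] <= 1.0:
        return "neutral", 0.0                 # no category clears z > 1.0
    mu_win = rates[winner] * active_hours
    intensity = min(max((actual[winner] - mu_win) / mu_win, 0.0), 1.0)
    return winner, intensity

# 1200 cat launches in 24 h vs an expected 42/hr × 24 = 1008 → cat season, ~19% lift.
print(detect_season({"cat": 1200, "dog": 900, "other": 350},
                    {"cat": 42.0, "dog": 38.0, "other": 15.0},
                    24.0, {"cat": 1.2, "dog": 1.1, "other": 1.0}))
```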
Once per hour at :02, we forecast the next 24 complete hours. For each category we sum the 24-step additive Holt–Winters forecasts (with a damped trend, per Gardner & McKenzie) and compare them to a baseline expectation scaled by predicted uptime for each hour of the day.
- Forecast total: Σ ŷ_k,h · u_h, where u_h ∈ [0, 1] are per-hour uptime fractions.
- Baseline: μf_k = rate_k × Σ u_h.
- Score: ẑ_k = (Σ ŷ_k,h · u_h − μf_k) / √(φ_k · μf_k).
- The category with ẑ_k ≥ 1.0 becomes the predicted season.
The forecast appears as a badge on the season banner and updates hourly. A "static" daily forecast
is also computed once at 00:02 UTC
for consistent day‑ahead planning.
Note: multi‑step forecast uncertainty is approximated via the quasi‑Poisson denominator only; it is an operational signal, not investment advice.
Every hour at two minutes past, we run an additive Holt–Winters model (with damped trend) over the past 7 days of hourly counts to predict the next hour's token volume for each category. This lets you see which type is likely to spike before it happens.
data = last 168 hours of actual hourly counts (complete hours only)
model = HoltWinters(data, α=0.3, β=0.1, γ=0.3, φ=0.95, period=24)
μ = model.forecast(1) (next hour's point estimate)
φ = estimateDispersion(category, 48h) (quasi-Poisson dispersion)
PI = [qpois(0.025, μ), qpois(0.975, μ)] if φ ≈ 1, else NB quantiles
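For readers who want to see the update equations, here is a compact pure-Python additive Holt–Winters with a damped trend. Parameter names mirror the pseudocode (α level, β trend, γ seasonality, φ damping); the initialisation scheme is a simplifying assumption of this sketch, not the project's actual model.

```python
# Illustrative additive Holt–Winters with damped trend, one-step-ahead forecast.
def holt_winters_forecast(y, alpha=0.3, beta=0.1, gamma=0.3, phi=0.95, period=24):
    """One-step forecast for hourly counts y (len(y) should cover ≥ 2 cycles)."""
    # Crude initialisation (assumption): first-cycle mean as level, zero trend,
    # first-cycle deviations from that mean as the seasonal pattern.
    level = sum(y[:period]) / period
    trend = 0.0
    season = [y[i] - level for i in range(period)]
    for t, obs in enumerate(y):
        s = season[t % period]
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
        season[t % period] = gamma * (obs - level) + (1 - gamma) * s
    # Damped one-step forecast: l_t + φ·b_t + seasonal term; clamp counts at 0.
    return max(0.0, level + phi * trend + season[len(y) % period])
```

On a perfectly periodic series the forecast reproduces the next value of the pattern; on real launch counts the damping (φ < 1) keeps long trends from being extrapolated indefinitely.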
Prediction intervals use count-appropriate distributions: Poisson when φ≈1 (no overdispersion), or Negative Binomial when φ>1 (overdispersed). This properly reflects the uncertainty structure of count data, avoiding artificially wide intervals for lower-volume categories.
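A sketch of the interval rule, with a pure-Python Poisson quantile (a real system would use a stats library, e.g. scipy.stats.poisson.ppf). Note one simplification: for the overdispersed branch the docs call for Negative Binomial quantiles; this sketch substitutes a normal approximation with variance φ·μ, which is my assumption, not the project's method.

```python
import math

def qpois(p: float, mu: float) -> int:
    """Smallest k with P(X ≤ k) ≥ p for X ~ Poisson(mu), by CDF accumulation."""
    k, pmf, cdf = 0, math.exp(-mu), math.exp(-mu)
    while cdf < p:
        k += 1
        pmf *= mu / k       # recurrence: pmf(k) = pmf(k-1) · μ / k
        cdf += pmf
    return k

def prediction_interval(mu: float, phi: float):
    """95% PI: Poisson when φ≈1; otherwise a normal approximation with
    variance φ·μ stands in for the NB quantiles used in production."""
    if phi <= 1.05:
        return qpois(0.025, mu), qpois(0.975, mu)
    sd = math.sqrt(phi * mu)
    return max(0, round(mu - 1.96 * sd)), round(mu + 1.96 * sd)

print(prediction_interval(10.0, 1.0))  # → (4, 17)
```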
Every classified token tracks which launchpad it came from. The Pump.fun Activity Monitor shows real-time launch activity compared to its own historical baseline, displaying the lifetime average launch rate and how the last 24 hours deviates from that baseline.
baseline_rate = total_pump_tokens / total_active_hours
rate_24h = tokens_last_24h / active_hours_last_24h
lift% = ((rate_24h - baseline_rate) / baseline_rate) × 100
Season: declared when lift ≥ 10% with n ≥ 50 samples
The big number shows total lifetime launches from pump.fun. When the 24-hour launch rate exceeds the lifetime baseline by ≥10% (with at least 50 samples), we declare a "pump.fun season" indicating elevated activity. The highlighted card and intensity percentage show the size of that lift above normal activity levels.
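The lift and season declaration reduce to a few lines. A minimal sketch of the formulas above; the function name and the use of the 24 h token count as the sample size are assumptions.

```python
# Illustrative pump.fun season check: ≥10% lift over baseline with n ≥ 50 samples.
def pumpfun_season(total_tokens, total_active_hours,
                   tokens_24h, active_hours_24h, min_samples=50):
    baseline_rate = total_tokens / total_active_hours
    rate_24h = tokens_24h / active_hours_24h
    lift_pct = (rate_24h - baseline_rate) / baseline_rate * 100
    is_season = lift_pct >= 10 and tokens_24h >= min_samples
    return lift_pct, is_season

# 1320 launches in 24 active hours vs a lifetime rate of 50/hr → +10% lift.
print(pumpfun_season(50_000, 1_000, 1_320, 24))
```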
Forecast data is first fetched when the page loads, then kept fresh automatically in the background — new predictions flow directly into the dashboard without you having to reload the page.
Lifetime baselines are recalculated every minute using all predictions so far. We compute each category's total count divided by the total active hours elapsed.
catRate = total cat tokens / total active hours elapsed
dogRate = total dog tokens / total active hours elapsed
otherRate = total other tokens / total active hours elapsed
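The three rates above are a single division per category. A tiny sketch with illustrative names:

```python
# Lifetime baseline rates, recomputed each minute from running totals.
def lifetime_rates(totals: dict, active_hours: float) -> dict:
    """totals: lifetime counts per category, e.g. {'cat': 10_080, ...}."""
    return {k: count / active_hours for k, count in totals.items()}

print(lifetime_rates({"cat": 10_080, "dog": 9_120, "other": 3_600}, 240.0))
# → {'cat': 42.0, 'dog': 38.0, 'other': 15.0}
```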
The Season Banner at the top of the page displays the current season with a visual emoji indicator and themed background. The banner updates every minute. The "Season forecast (next 24 h)" badge updates hourly at :02.