CONVEX

Repricing Gauge

The Repricing Gauge answers: “Is NOW the right time to trade this scenario, and how urgent is it?” It produces a 0–100 urgency score with confidence intervals for each scenario × instrument pair.

1. Signal Lead Detection

Cross-correlation analysis between the evidence series (interpolated daily scenario probabilities) and the price series at lags 0–30 days. Two windows are used:

  • Long window (120d): structural lead relationship
  • Short window (20d): recent lead dynamics

Bootstrap confidence intervals (200 resamples) are computed on the lead estimate. Stationarity is checked via a variance-ratio test; non-stationary series are first-differenced before correlation.

A collapsing lead warning fires when the short-window lead drops below 50% of the long-window lead, indicating the market is catching up to the evidence signal.
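The lead detection and collapsing-lead rule above can be sketched as follows. This is a simplified illustration, not the product's implementation: `detect_lead`, the demo data, and the `max_lag` choice for the short window are all assumptions, and the series are presumed already differenced where needed.

```python
import numpy as np

def detect_lead(evidence, price, max_lag=30):
    """Lead (in days) of the evidence series over the price series,
    chosen as the lag 0..max_lag with the strongest cross-correlation.
    Assumes non-stationary inputs were first-differenced already."""
    corrs = []
    for lag in range(max_lag + 1):
        e = evidence[:-lag] if lag else evidence
        corrs.append(np.corrcoef(e, price[lag:])[0, 1])
    return int(np.argmax(np.abs(corrs)))

# Demo: white noise where evidence leads price by exactly 5 days.
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
evidence, price = x[5:], x[:-5]

long_lead = detect_lead(evidence[-120:], price[-120:])              # structural window
short_lead = detect_lead(evidence[-20:], price[-20:], max_lag=10)   # recent window
collapsing = short_lead < 0.5 * long_lead                           # warning rule above
```

In production the bootstrap CI would wrap `detect_lead` with resampled series; that is omitted here for brevity.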

2. Repricing Gap

The gap is the difference between the model-implied fair value and the current market price, built from:

  • Blended probability: model_prob × confidence + market_implied_prob × (1 − confidence), where confidence starts at 0.5 and is adjusted by the accuracy tracker over time.
  • Logistic repricing curve: maps probability to expected price move using S-shaped transform with configurable steepness and midpoint.
  • Sensitivity estimation: Bayesian-shrunk median of historical precedent price moves toward unconditional asset volatility (prior weight = 6).
  • Confidence intervals: derived from sensitivity and probability ranges.
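A minimal sketch of the blend and the logistic repricing curve. The `steepness` and `midpoint` defaults and the `gross_gap` composition are illustrative assumptions, not the product's calibration:

```python
import math

def blended_prob(model_prob, market_prob, confidence=0.5):
    """Blend of model and market-implied probabilities; confidence
    starts at 0.5 and is later tuned by the accuracy tracker."""
    return model_prob * confidence + market_prob * (1 - confidence)

def repricing_curve(prob, steepness=8.0, midpoint=0.5):
    """S-shaped (logistic) map from probability to the fraction of the
    full expected move priced in. Parameter values are hypothetical."""
    return 1.0 / (1.0 + math.exp(-steepness * (prob - midpoint)))

def gross_gap(model_prob, market_prob, sensitivity, confidence=0.5):
    """Expected move at the blended probability minus the move the
    market already prices, scaled by precedent-based sensitivity."""
    p = blended_prob(model_prob, market_prob, confidence)
    return sensitivity * (repricing_curve(p) - repricing_curve(market_prob))
```

For example, a model probability of 0.70 against a market-implied 0.40 blends to 0.55 at the starting confidence, yielding a positive gross gap.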

3. Carry Cost Adjustment

Net gap = gross gap − carry drag. Carry drag is the annualized carry cost scaled to the expected lead time and weighted by lead confidence. Instruments with high carry (e.g., VXX at 30%/yr) require larger gaps to justify holding.

Break-even days = the number of days the position can be held before carry erodes the gap entirely. Missing carry profiles are flagged.
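The carry adjustment reduces to two small formulas. The scaling below (carry pro-rated over the lead time, weighted by lead confidence) is a plausible reading of the description, not the exact implementation:

```python
def net_gap(gross_gap, annual_carry, lead_days, lead_confidence):
    """Gross gap minus carry drag: annualized carry scaled to the
    expected lead time and weighted by lead confidence."""
    carry_drag = annual_carry * (lead_days / 365.0) * lead_confidence
    return gross_gap - carry_drag

def break_even_days(gross_gap, annual_carry):
    """Days before carry alone erodes the gross gap entirely."""
    if annual_carry <= 0:
        return float("inf")
    return gross_gap * 365.0 / annual_carry
```

Under these formulas, a 5% gap in an instrument with 30%/yr carry (the VXX example above) breaks even in roughly 61 days.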

4. Repricing Velocity

How fast has this type of gap historically closed? Uses mechanism-specific precedent trajectories when available, otherwise falls back to asset-class × heat-level defaults. Outputs:

  • Half-life (days to 50% gap closure) with confidence interval
  • Overshoot estimate and revert time
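A half-life converts directly into an expected closure trajectory. The exponential-decay form below is an assumption for illustration; the product prefers precedent trajectories when they exist:

```python
def gap_remaining(t_days, half_life_days):
    """Fraction of the repricing gap left after t days, assuming
    exponential closure at the estimated half-life."""
    return 0.5 ** (t_days / half_life_days)
```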

5. Catalyst Impact

Upcoming data releases from FRED that could accelerate or reverse the repricing. For each watched metric, the gauge estimates the probability change per ±1σ surprise using a calibrated log-odds sensitivity (0.30 per z-score), and produces three scenario outcomes (supports, neutral, contradicts) with gap-closure estimates.
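The log-odds update is standard: shift the scenario probability in log-odds space by the sensitivity times the surprise z-score, then map back. A minimal sketch:

```python
import math

def prob_after_surprise(prob, z, sensitivity=0.30):
    """Shift the scenario probability in log-odds space by the
    calibrated sensitivity (0.30 per z-score of data surprise)."""
    log_odds = math.log(prob / (1.0 - prob)) + sensitivity * z
    return 1.0 / (1.0 + math.exp(-log_odds))
```

A +1σ supporting surprise moves a 50% scenario to roughly 57%.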

6. Urgency Score

Combines all components into a single 0–100 score using conservative estimates (lower end of confidence intervals):

raw = gapZ × velocityFactor × leadFactor × collapsePenalty × 100
urgency = 100 × logistic(0.03 × (raw − 50))
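The two formulas above can be sketched directly; the rescaling of the logistic output to 0–100 is assumed, since the score is described as 0–100:

```python
import math

def urgency_score(gap_z, velocity_factor, lead_factor, collapse_penalty):
    """Combine component factors (conservative, CI-lower-bound values)
    into a 0-100 urgency score via a squashing logistic."""
    raw = gap_z * velocity_factor * lead_factor * collapse_penalty * 100.0
    return 100.0 / (1.0 + math.exp(-0.03 * (raw - 50.0)))
```

By construction, raw = 50 maps to an urgency of exactly 50, and the 0.03 slope keeps the score responsive without saturating near the extremes.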

Monte Carlo CI via 500 resamples from component confidence intervals. Reliability is classified as:

  • Actionable: high lead + medium/high gap + positive net gap + not collapsing
  • Indicative: mixed signals but net positive gap
  • Speculative: low data quality, negative net gap, or failed components

7. Accuracy Tracker

Retrospective evaluation of gauge signals 30 days after issuance. Measures:

  • Direction correctness by urgency band (80–100, 60–80, 40–60, 0–40)
  • Median gap closure percentage

When the 80–100 urgency band has 10+ evaluated outcomes, the tracker recommends model confidence adjustments: +5pp if direction correct >70%, −5pp if <50%, bounded to [0.30, 0.85].
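The confidence-adjustment rule can be stated as a few lines of code. The function name and argument shape are illustrative; the thresholds and bounds come from the rule above:

```python
def adjusted_confidence(confidence, n_outcomes, hit_rate):
    """±5pp confidence nudge for the 80-100 urgency band once 10+
    outcomes are evaluated, clamped to [0.30, 0.85]."""
    if n_outcomes < 10:
        return confidence          # not enough history to recalibrate
    if hit_rate > 0.70:
        confidence += 0.05
    elif hit_rate < 0.50:
        confidence -= 0.05
    return min(0.85, max(0.30, confidence))
```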

Known Limitations

  • Signal lead detection requires 30+ daily probability observations; new scenarios start with degraded estimates.
  • Carry cost profiles are seeded from representative instruments; exotic or newly listed instruments use defaults.
  • Velocity defaults assume asset-class × heat-level typicality; black-swan events may close much faster.
  • The accuracy tracker needs 30+ days of gauge history to produce meaningful calibration.
  • Monte Carlo CI assumes uniform sampling within component ranges; fat tails are not modelled.