Volume 2: Organizational Intelligence Platforms

Pattern 14: Predictive Time Windows

Intent

Determine the optimal time horizon for making predictions, balancing the trade-off between early warning (longer lead time for intervention) and prediction accuracy (shorter windows are more accurate), while accounting for natural decision points and intervention feasibility.

Also Known As

  • Prediction Horizon
  • Forecasting Window
  • Lead Time Optimization
  • Time-to-Event Prediction
  • Temporal Prediction Boundaries

Problem

When should we make predictions? How far ahead should we predict?

The Martinez family is showing decline:

  • Today: Engagement score 65, dropping fast
  • Question: When will they withdraw?

Possible prediction windows:

7 days ahead:
  • Accuracy: 95% (very confident)
  • Lead time: 1 week to intervene
  • Problem: Too short! No time to arrange a meeting, payment plan, etc.

30 days ahead:
  • Accuracy: 87% (good)
  • Lead time: 1 month to intervene
  • Sweet spot: Time to act, still accurate

90 days ahead:
  • Accuracy: 62% (mediocre)
  • Lead time: 3 months to intervene
  • Problem: Too uncertain, too many false alarms

The dilemma:
  • Predict too far ahead → inaccurate, false alarms
  • Predict too close → accurate, but no time to act
  • Need to find the optimal balance

Additional considerations:
  • Natural decision points: semester ends, enrollment cycles
  • Intervention feasibility: some actions need 2 weeks, others 6 weeks
  • Behavior stability: patterns are stable over weeks, not days
  • Seasonal effects: summer vs. semester timing matters

Context

When this pattern applies:

  • Time-to-event prediction (withdrawal, payment default)
  • Interventions require lead time
  • Accuracy degrades with longer horizons
  • Natural cycles exist (semesters, quarters, months)
  • Want to optimize prediction timing

When this pattern may not be needed:

  • Instantaneous decisions (fraud detection)
  • No lead time needed for action
  • Time horizon fixed by business rules
  • Prediction accuracy doesn't vary with time

Forces

Competing concerns:

1. Lead Time vs Accuracy
  • Longer horizon = more time to act
  • But less accurate, more uncertainty
  • Balance: Find the sweet spot where both are acceptable

2. Early Warning vs False Alarms
  • Early predictions catch problems sooner
  • But generate false positives (crying wolf)
  • Balance: Tune the threshold based on intervention cost

3. Point Prediction vs Time Range
  • "Will withdraw in 30 days" - specific
  • "Will withdraw in the next 2-8 weeks" - realistic
  • Balance: Provide ranges with confidence bounds

4. Fixed vs Adaptive Windows
  • Fixed 30-day window - simple
  • Adaptive based on certainty - optimal but complex
  • Balance: Start fixed, graduate to adaptive

5. Business Cycle vs Predictive Power
  • Align with semesters, quarters (natural)
  • But may not match the optimal prediction window
  • Balance: Primary window for action, secondary for prediction
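Force 3 can be made concrete: widen a point estimate by the measured timing error (the avg_days_error that window_accuracy_tracking records under Structure) to produce the range users should actually see. The function name and numbers here are illustrative, not part of the platform.

```python
def event_date_range(expected_days, timing_error_days):
    """Turn a point estimate into a (min, max) days-until-event range."""
    return (max(0, expected_days - timing_error_days),
            expected_days + timing_error_days)

# "Withdrawal in ~30 days" with a measured +/- 9-day timing error
print(event_date_range(30, 9))  # (21, 39) -> "likely in 3-6 weeks"
```

The same range, phrased with its confidence bound, is what the strategic window should surface instead of a single date.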

Solution

Establish multiple prediction windows optimized for different purposes:

Tactical Window (7-14 days):
  • Purpose: Immediate action items
  • Accuracy: High (85-95%)
  • Use: Daily/weekly task generation
  • Example: "7 families need contact this week"

Strategic Window (30-45 days):
  • Purpose: Main intervention planning
  • Accuracy: Good (75-90%)
  • Use: Monthly planning, resource allocation
  • Example: "12 families at risk next month, need payment plans and meetings"

Planning Window (60-90 days):
  • Purpose: Capacity planning, trend analysis
  • Accuracy: Moderate (60-75%)
  • Use: Budget forecasting, staffing
  • Example: "Expect 8-12 withdrawals next quarter"

Framework for selecting window:

Optimal Window = f(
  Intervention_Lead_Time_Required,
  Prediction_Accuracy_Curve,
  Business_Cycle_Constraints,
  False_Alarm_Tolerance,
  Behavior_Stability_Period
)
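The framework above can be sketched as a simple selection rule: from an empirically measured accuracy-vs-horizon curve, take the longest window that still clears both the lead time the intended interventions need and a minimum accuracy bar. The function name and the accuracy numbers below are illustrative assumptions, not measurements.

```python
def select_window(accuracy_by_days, min_lead_days, min_accuracy):
    """Longest horizon meeting both constraints, or None if none qualifies."""
    candidates = [
        days for days, acc in accuracy_by_days.items()
        if days >= min_lead_days and acc >= min_accuracy
    ]
    return max(candidates) if candidates else None

# Illustrative accuracy curve (accuracy degrades with horizon)
accuracy_curve = {7: 0.95, 14: 0.92, 30: 0.87, 45: 0.82, 60: 0.74, 90: 0.62}

# A payment-plan intervention needs ~3 weeks of lead time; require 80% accuracy
print(select_window(accuracy_curve, min_lead_days=21, min_accuracy=0.80))  # 45
```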

Structure

Time Window Configuration Tables

-- Define prediction windows
CREATE TABLE prediction_windows (
  window_id INT PRIMARY KEY IDENTITY(1,1),

  window_name VARCHAR(100) NOT NULL,
  window_type VARCHAR(50),  -- 'tactical', 'strategic', 'planning'

  -- Time horizons
  days_ahead_min INT NOT NULL,
  days_ahead_max INT NOT NULL,

  -- Expected performance
  target_accuracy DECIMAL(5,2),
  acceptable_false_positive_rate DECIMAL(5,2),

  -- When to use
  use_case NVARCHAR(500),
  recommended_actions NVARCHAR(1000),

  -- Active period
  active BIT DEFAULT 1,
  created_date DATETIME2 DEFAULT GETDATE()
);

-- Store predictions with time windows
ALTER TABLE ml_predictions
ADD prediction_window_days INT,          -- How far ahead
    predicted_event_date DATE,           -- When the event is expected
    prediction_window_type VARCHAR(50);  -- 'tactical', 'strategic', 'planning'

-- Track accuracy by time window
CREATE TABLE window_accuracy_tracking (
  tracking_id INT PRIMARY KEY IDENTITY(1,1),
  window_id INT NOT NULL,

  -- Time period
  evaluation_date DATE NOT NULL,

  -- Predictions made in this window
  predictions_made INT,
  outcomes_known INT,
  correct_predictions INT,

  -- Accuracy metrics
  accuracy DECIMAL(5,2),
  precision_score DECIMAL(5,2),
  recall DECIMAL(5,2),
  false_positive_rate DECIMAL(5,2),

  -- Timing accuracy
  avg_days_error DECIMAL(5,2),  -- How far off was timing?

  CONSTRAINT FK_tracking_window FOREIGN KEY (window_id)
    REFERENCES prediction_windows(window_id)
);
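The metrics window_accuracy_tracking stores can all be derived from a per-window confusion matrix. A minimal Python sketch (the counts are made up for illustration):

```python
def window_metrics(tp, fp, tn, fn):
    """Accuracy/precision/recall/FPR as percentages, as stored per window."""
    total = tp + fp + tn + fn
    return {
        "accuracy": 100.0 * (tp + tn) / total,
        "precision": 100.0 * tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": 100.0 * tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": 100.0 * fp / (fp + tn) if (fp + tn) else 0.0,
    }

# 100 strategic-window predictions: 18 true alarms, 6 false alarms,
# 70 correct all-clears, 6 missed withdrawals
print(window_metrics(tp=18, fp=6, tn=70, fn=6))
```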

Implementation

Time Window Optimizer

class TimeWindowOptimizer {
  constructor(db, mlModel) {
    this.db = db;
    this.mlModel = mlModel;  // Trained model exposing predict(features)

    // Define standard windows
    this.windows = {
      tactical: { min: 7, max: 14, name: 'Tactical (1-2 weeks)' },
      strategic: { min: 30, max: 45, name: 'Strategic (30-45 days)' },
      planning: { min: 60, max: 90, name: 'Planning (60-90 days)' }
    };
  }

  async predictAcrossWindows(familyId) {
    const predictions = {};

    for (const [type, window] of Object.entries(this.windows)) {
      const midpoint = (window.min + window.max) / 2;

      // Make prediction for this window
      const prediction = await this.predictForWindow(familyId, midpoint);

      predictions[type] = {
        window_days: midpoint,
        window_range: `${window.min}-${window.max} days`,
        ...prediction
      };
    }

    return predictions;
  }

  async predictForWindow(familyId, daysAhead) {
    // Get current-state features (extractFeatures is assumed to be
    // provided by the platform's feature-extraction layer)
    const features = await this.extractFeatures(familyId);

    // Add time-based adjustments
    const timeAdjustedFeatures = this.applyTimeDecay(features, daysAhead);

    // Make prediction
    const mlPrediction = await this.mlModel.predict(timeAdjustedFeatures);

    // Calculate confidence based on time window
    const confidence = this.calculateTimeWindowConfidence(
      mlPrediction.probability,
      daysAhead
    );

    // Estimate event date
    const predictedEventDate = this.estimateEventDate(
      mlPrediction.probability,
      daysAhead
    );

    return {
      probability: mlPrediction.probability,
      confidence: confidence,
      predicted_event_date: predictedEventDate,
      days_until_event: this.daysUntilEvent(mlPrediction.probability, daysAhead)
    };
  }

  applyTimeDecay(features, daysAhead) {
    // Adjust features based on expected changes over time
    // Example: engagement scores tend to continue their trajectory

    const velocity = features.engagement_velocity || 0;  // Points per month
    const dailyChange = velocity / 30;
    const projectedChange = dailyChange * daysAhead;

    return {
      ...features,
      engagement_score: Math.max(0, Math.min(100, 
        features.engagement_score + projectedChange
      )),
      // Uncertainty increases with time
      feature_uncertainty: daysAhead / 90  // 0 at 0 days, 1 at 90 days
    };
  }

  calculateTimeWindowConfidence(probability, daysAhead) {
    // Confidence decreases with longer time horizons

    // Base confidence from model
    const modelConfidence = Math.abs(probability - 0.5) * 2 * 100;

    // Time penalty: lose 0.5% confidence per day
    const timePenalty = daysAhead * 0.5;

    // Adjusted confidence
    const confidence = Math.max(0, modelConfidence - timePenalty);

    return confidence;
  }

  estimateEventDate(probability, daysAhead) {
    // Convert probability to expected days until event
    // High probability = sooner, low probability = later

    const baseDate = new Date();

    if (probability < 0.3) {
      // Low risk - unlikely to happen in this window
      return null;
    }

    // Scale within window: high prob = early, low prob = late
    // 0.9 probability → ~14% into window
    // 0.5 probability → ~71% into window
    const windowPosition = 1 - ((probability - 0.3) / 0.7);  // 0 to 1
    const daysUntil = Math.round(daysAhead * windowPosition);

    baseDate.setDate(baseDate.getDate() + daysUntil);
    return baseDate;
  }

  daysUntilEvent(probability, windowDays) {
    if (probability < 0.3) return null;

    const windowPosition = 1 - ((probability - 0.3) / 0.7);
    return Math.round(windowDays * windowPosition);
  }

  async evaluateWindowAccuracy(windowType, evaluationPeriodDays = 90) {
    const window = this.windows[windowType];
    if (!window) throw new Error(`Unknown window type: ${windowType}`);

    const midpoint = (window.min + window.max) / 2;

    // Get predictions made N days ago
    const cutoffDate = new Date();
    cutoffDate.setDate(cutoffDate.getDate() - midpoint);

    const predictions = await this.db.query(`
      SELECT 
        p.prediction_id,
        p.family_id,
        p.predicted_probability,
        p.prediction_window_days,
        p.prediction_date,
        p.predicted_event_date,
        f.enrollment_status,
        f.withdrawal_date
      FROM ml_predictions p
      JOIN families f ON p.family_id = f.family_id
      WHERE p.prediction_window_type = ?
        AND p.prediction_date <= ?
        AND p.prediction_date >= DATEADD(DAY, -?, ?)
    `, [windowType, cutoffDate, evaluationPeriodDays, cutoffDate]);

    let correct = 0;
    let timingErrors = [];

    for (const pred of predictions) {
      const actuallyWithdrew = pred.enrollment_status === 'withdrawn';
      const predictedWithdrawal = pred.predicted_probability > 0.5;

      if (actuallyWithdrew === predictedWithdrawal) {
        correct++;

        // Calculate timing error if both predicted and actual
        if (actuallyWithdrew && pred.predicted_event_date && pred.withdrawal_date) {
          const predicted = new Date(pred.predicted_event_date);
          const actual = new Date(pred.withdrawal_date);
          const daysDiff = Math.abs((actual - predicted) / (1000 * 60 * 60 * 24));
          timingErrors.push(daysDiff);
        }
      }
    }

    const accuracy = predictions.length > 0 ? (correct / predictions.length) * 100 : 0;
    const avgTimingError = timingErrors.length > 0
      ? timingErrors.reduce((sum, err) => sum + err, 0) / timingErrors.length
      : 0;

    return {
      window_type: windowType,
      window_name: window.name,
      evaluation_period_days: evaluationPeriodDays,
      predictions_evaluated: predictions.length,
      accuracy: accuracy,
      avg_timing_error_days: Math.round(avgTimingError),
      recommendation: this.getWindowRecommendation(accuracy, avgTimingError)
    };
  }

  getWindowRecommendation(accuracy, timingError) {
    if (accuracy >= 80 && timingError <= 7) {
      return 'Excellent - use for automated actions';
    } else if (accuracy >= 70 && timingError <= 14) {
      return 'Good - use for planning with human review';
    } else if (accuracy >= 60) {
      return 'Fair - use for trends only';
    } else {
      return 'Poor - do not use for decisions';
    }
  }
}

module.exports = TimeWindowOptimizer;
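The decay in calculateTimeWindowConfidence is easy to tabulate. This Python restatement (the 0.5-points-per-day penalty mirrors the JS and is itself a tunable assumption) shows how a fixed model output loses confidence across the three window midpoints:

```python
def time_window_confidence(probability, days_ahead, penalty_per_day=0.5):
    """Mirror of calculateTimeWindowConfidence: confidence minus a time penalty."""
    model_confidence = abs(probability - 0.5) * 2 * 100  # 0..100 scale
    return max(0.0, model_confidence - days_ahead * penalty_per_day)

# Same 0.75 model probability, evaluated at roughly the tactical,
# strategic, and planning window midpoints
for days in (10, 37, 75):
    print(days, time_window_confidence(0.75, days))  # 45.0, 31.5, 12.5
```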

Seasonal Adjustment

class SeasonalAdjuster {
  constructor(db) {
    this.db = db;
  }

  async adjustForSeasonality(prediction, currentDate) {
    // Homeschool co-op example: behavior changes by season

    const month = currentDate.getMonth() + 1;  // 1-12

    // Identify current season
    const season = this.identifySeason(month);

    // Get historical seasonal patterns
    const seasonalFactors = await this.getSeasonalFactors(season);

    // Adjust prediction
    const adjustedProbability = prediction.probability * seasonalFactors.withdrawal_multiplier;

    return {
      ...prediction,
      original_probability: prediction.probability,
      adjusted_probability: Math.min(1.0, adjustedProbability),
      seasonal_adjustment: seasonalFactors.withdrawal_multiplier,
      season: season,
      adjustment_reason: seasonalFactors.reason
    };
  }

  identifySeason(month) {
    // Homeschool co-op seasons
    if (month >= 6 && month <= 8) return 'summer';  // June-August
    if (month >= 9 && month <= 12) return 'fall_semester';  // Sept-Dec
    if (month >= 1 && month <= 5) return 'spring_semester';  // Jan-May
  }

  async getSeasonalFactors(season) {
    // Historical withdrawal rates by season
    const historicalRates = await this.db.query(`
      SELECT 
        COUNT(*) as total_families,
        SUM(CASE WHEN withdrawal_date IS NOT NULL THEN 1 ELSE 0 END) as withdrawals,
        SUM(CASE WHEN withdrawal_date IS NOT NULL THEN 1 ELSE 0 END) * 100.0 / COUNT(*) as withdrawal_rate
      FROM families
      WHERE 
        (? = 'summer' AND MONTH(observation_date) BETWEEN 6 AND 8)
        OR (? = 'fall_semester' AND MONTH(observation_date) BETWEEN 9 AND 12)
        OR (? = 'spring_semester' AND MONTH(observation_date) BETWEEN 1 AND 5)
    `, [season, season, season]);

    const baselineRate = await this.db.query(`
      SELECT AVG(withdrawal_rate) as baseline
      FROM seasonal_baseline
    `);

    const seasonalRate = historicalRates[0].withdrawal_rate;
    const baseline = baselineRate[0].baseline;

    const multiplier = seasonalRate / baseline;

    const reasons = {
      summer: 'Summer break - families evaluate before fall',
      fall_semester: 'New semester - commitment period',
      spring_semester: 'Mid-year - stable period'
    };

    return {
      withdrawal_multiplier: multiplier,
      reason: reasons[season],
      historical_rate: seasonalRate,
      baseline_rate: baseline
    };
  }
}
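Stripped of its queries, adjustForSeasonality reduces to one multiply and a cap. A sketch with made-up rates:

```python
def seasonally_adjust(probability, seasonal_rate, baseline_rate):
    """Scale a model probability by the season's historical multiplier, capped at 1.0."""
    multiplier = seasonal_rate / baseline_rate
    return min(1.0, probability * multiplier)

# Summer historically shows 12% withdrawals against an 8% baseline (1.5x)
print(round(seasonally_adjust(0.60, 0.12, 0.08), 2))  # 0.9
```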

Usage Example

const optimizer = new TimeWindowOptimizer(db, mlModel);  // db and model wired up elsewhere

// Predict across all time windows
const predictions = await optimizer.predictAcrossWindows(187);

console.log(`
Multi-Window Predictions for Family 187:

TACTICAL (1-2 weeks ahead):
  Withdrawal Probability: ${(predictions.tactical.probability * 100).toFixed(1)}%
  Confidence: ${predictions.tactical.confidence.toFixed(1)}/100
  Expected Event Date: ${predictions.tactical.predicted_event_date || 'N/A'}
  Days Until Event: ${predictions.tactical.days_until_event || 'N/A'}

STRATEGIC (30-45 days ahead):
  Withdrawal Probability: ${(predictions.strategic.probability * 100).toFixed(1)}%
  Confidence: ${predictions.strategic.confidence.toFixed(1)}/100
  Expected Event Date: ${predictions.strategic.predicted_event_date}
  Days Until Event: ${predictions.strategic.days_until_event}

PLANNING (60-90 days ahead):
  Withdrawal Probability: ${(predictions.planning.probability * 100).toFixed(1)}%
  Confidence: ${predictions.planning.confidence.toFixed(1)}/100
  Expected Event Date: ${predictions.planning.predicted_event_date}
  Days Until Event: ${predictions.planning.days_until_event}
`);

// Evaluate window accuracy
const accuracy = await optimizer.evaluateWindowAccuracy('strategic', 90);

console.log(`
Strategic Window Performance (last 90 days):
  Accuracy: ${accuracy.accuracy.toFixed(1)}%
  Timing Error: ${accuracy.avg_timing_error_days} days
  Recommendation: ${accuracy.recommendation}
`);

// Example output (probabilities are illustrative model outputs; confidence
// and timing follow the formulas above, assuming a 2024-12-27 run date):
//
// Multi-Window Predictions for Family 187:
//
// TACTICAL (1-2 weeks ahead):
//   Withdrawal Probability: 89.2%
//   Confidence: 73.2/100
//   Expected Event Date: 2024-12-29
//   Days Until Event: 2
//
// STRATEGIC (30-45 days ahead):
//   Withdrawal Probability: 87.4%
//   Confidence: 56.1/100
//   Expected Event Date: 2025-01-03
//   Days Until Event: 7
//
// PLANNING (60-90 days ahead):
//   Withdrawal Probability: 82.1%
//   Confidence: 26.7/100
//   Expected Event Date: 2025-01-15
//   Days Until Event: 19

Variations

By Prediction Horizon

Ultra-Short (1-7 days):
  • Very accurate (90-95%)
  • Limited action time
  • Use: Daily task lists, urgent interventions

Short (7-30 days):
  • Accurate (80-90%)
  • Good action time
  • Use: Standard operations

Medium (30-90 days):
  • Moderate accuracy (65-80%)
  • Planning horizon
  • Use: Resource allocation, budgeting

Long (90+ days):
  • Low accuracy (50-65%)
  • Strategic planning
  • Use: Trend analysis only

By Domain

Homeschool Co-op:
  • Natural cycle: Semesters (16 weeks)
  • Optimal window: 30-45 days (one payment period ahead)
  • Seasonal adjustments: Summer vs. semester

SaaS:
  • Natural cycle: Billing period (monthly/annual)
  • Optimal window: 60-90 days (pre-renewal)
  • Seasonal adjustments: Quarter-end, fiscal year

Property Management:
  • Natural cycle: Lease terms (12 months)
  • Optimal window: 90-120 days (pre-renewal)
  • Seasonal adjustments: Moving seasons

By Use Case

Intervention Planning:
  • Window: 30-45 days
  • Priority: Balance accuracy and lead time
  • Accept: Some false positives

Capacity Planning:
  • Window: 60-90 days
  • Priority: Trend accuracy
  • Accept: Lower individual accuracy

Emergency Response:
  • Window: 7-14 days
  • Priority: Maximum accuracy
  • Accept: Limited action time

Consequences

Benefits

1. Actionable predictions. A 30-day window gives time to intervene effectively.

2. Accuracy transparency. Users know 30-day predictions are 87% accurate, 90-day only 62%.

3. Multiple planning horizons. Tactical, strategic, and planning windows serve different needs.

4. Timing optimization. Find the sweet spot between accuracy and lead time.

5. Seasonal awareness. Adjust for natural cycles, holidays, and enrollment periods.

6. Performance monitoring. Track accuracy by window, tune over time.

Costs

1. Multiple predictions required. Must predict across 3+ windows, which means more computation.

2. Complexity. Users see multiple probabilities, which can be confusing.

3. Calibration maintenance. Each window needs separate calibration.

4. Seasonal data needed. Requires multi-year history to detect patterns.

5. Timing uncertainty. "Will happen in 20-40 days" is less precise than "30 days."

Sample Code

Optimize window empirically:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def find_optimal_window(X, days_to_event):
    """
    Empirically determine the best prediction window.

    days_to_event: days until the event for each row
    (<= 0 or NaN where no event occurred).
    """
    windows = [7, 14, 30, 45, 60, 90]
    results = []

    for window in windows:
        # Label: did the event occur within this window?
        y_window = ((days_to_event > 0) & (days_to_event <= window)).astype(int)

        if y_window.sum() < 50:
            continue  # Too few positives to evaluate reliably

        # Train a model for this window
        X_train, X_test, y_train, y_test = train_test_split(
            X, y_window, test_size=0.2, stratify=y_window, random_state=42
        )

        model = RandomForestClassifier(random_state=42)
        model.fit(X_train, y_train)

        # Evaluate
        y_pred_proba = model.predict_proba(X_test)[:, 1]
        auc = roc_auc_score(y_test, y_pred_proba)

        results.append({
            'window_days': window,
            'auc': auc,
            'positives': int(y_window.sum())
        })

        print(f"Window {window} days: AUC = {auc:.3f} (n_pos={y_window.sum()})")

    return pd.DataFrame(results)
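One hedged way to pick from find_optimal_window's output: weight each window's AUC by a lead-time utility with diminishing returns past the lead time interventions actually need, so that between two windows with similar discrimination, the longer horizon wins. The utility shape, the 21-day requirement, and the (window, AUC) pairs are all illustrative assumptions.

```python
import math

def window_utility(auc, window_days, needed_lead_days=21):
    # Saturating utility: near 0 for windows far shorter than the needed
    # lead time, approaching 1 once the window comfortably exceeds it
    lead_utility = 1 - math.exp(-window_days / needed_lead_days)
    return auc * lead_utility

results = [(7, 0.93), (14, 0.90), (30, 0.86), (60, 0.76), (90, 0.64)]
best = max(results, key=lambda r: window_utility(r[1], r[0]))
print(best)  # (60, 0.76): the longer horizon wins despite its lower AUC
```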

Known Uses

Homeschool Co-op Intelligence Platform:
  • Tactical: 7-14 days (92% accurate)
  • Strategic: 30-45 days (87% accurate)
  • Planning: 60-90 days (71% accurate)
  • Uses the strategic window for interventions

SaaS Churn Prediction:
  • Standard: 60-90 days pre-renewal
  • Gives time for expansion/success efforts
  • Typical accuracy: 80-85%

Healthcare Readmission Prediction:
  • Window: 30 days post-discharge
  • FDA-cleared algorithms
  • Accuracy: 70-80%

Weather Forecasting:
  • 1-day: 95% accurate
  • 7-day: 80% accurate
  • 14-day: 60% accurate
  • The classic accuracy-vs-horizon tradeoff

Related Patterns

Requires:
  • Pattern 11: Historical Pattern Matching - time-to-event in historical data
  • Pattern 12: Risk Stratification Models - predictions across windows
  • Pattern 13: Confidence Scoring - confidence degrades with time

Enhances:
  • Pattern 15: Intervention Recommendation Engine - timing affects interventions
  • Pattern 22: Progressive Escalation Sequences - window determines escalation
  • Pattern 26: Feedback Loop Implementation - validate window accuracy

Enabled by this pattern:
  • Optimal intervention timing
  • Multi-horizon planning
  • Accuracy-vs-lead-time optimization

References

On Forecasting:
  • Hyndman, Rob J., and George Athanasopoulos. Forecasting: Principles and Practice, 3rd Edition. OTexts, 2021. https://otexts.com/fpp3/ (free online textbook, comprehensive)
  • Armstrong, J. Scott, ed. Principles of Forecasting: A Handbook for Researchers and Practitioners. Springer, 2001.
  • Makridakis, Spyros, et al. "The M4 Competition: 100,000 time series and 61 forecasting methods." International Journal of Forecasting 36(1), 2020. (forecasting competition results)

On Prediction Degradation:
  • Silver, Nate. The Signal and the Noise: Why So Many Predictions Fail—but Some Don't. Penguin, 2012. (accessible discussion of prediction limits)
  • Taleb, Nassim Nicholas. The Black Swan. Random House, 2007. (unpredictable events)

On Healthcare Prediction Horizons:
  • Kansagara, Devan, et al. "Risk Prediction Models for Hospital Readmission: A Systematic Review." JAMA 306(15), 2011: 1688-1698. (30-day window standard)
  • "Hospital Readmissions Reduction Program." CMS. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program

On Time Series Analysis:
  • Box, George E.P., et al. Time Series Analysis: Forecasting and Control, 5th Edition. Wiley, 2015. (classic ARIMA text)
  • "Time Series Forecasting." TensorFlow Tutorials. https://www.tensorflow.org/tutorials/structured_data/time_series (neural-network time series)

On Implementation:
  • Prophet (Facebook): https://facebook.github.io/prophet/ (forecasting at scale)
  • statsmodels: https://www.statsmodels.org/stable/tsa.html (Python time-series library)
  • forecast (R): https://pkg.robjhyndman.com/forecast/ (R forecasting package by Hyndman)

Related Patterns in This Trilogy:
  • Pattern 12 (Risk Stratification): predictions need horizon specification
  • Pattern 13 (Confidence Scoring): confidence degrades over time
  • Pattern 15 (Intervention Recommendation): optimal intervention timing
  • Pattern 19 (Causal Inference): causal relationships over time