Integrated Cost and Schedule Risk: Building a 3D Confidence Model

Most project leaders hear two different risk stories.

One story is about time: Are we likely to finish by the target date? The other is about money: Are we likely to stay inside the approved budget? Those conversations often happen in separate meetings, with separate charts, and sometimes with separate assumptions. That is exactly where trouble starts.

In real projects, schedule risk and cost risk are rarely independent. If a key work package takes longer, labor costs rise. If a supplier slips, expediting costs appear. If testing runs long, both the finish date and the burn rate move in the wrong direction together. Looking at time risk and cost risk separately can create false confidence.

A better approach is to model them together and ask a more useful question: How confident are we that the project will finish by this date and under this cost at the same time? That is the idea behind an integrated cost and schedule risk model. And once you visualize it as a surface with time, cost, and confidence, you get a practical 3D view that leaders can actually use.

Why separate risk views often mislead

If you tell a sponsor, “We have an 80% confidence in the schedule” and then say, “We also have an 80% confidence in the budget,” it sounds reassuring. But that does not automatically mean you have an 80% confidence of achieving both together.

That matters because projects are judged on combined performance. Finishing on time but far over budget is not a win. Finishing under budget but months late usually is not either.

Here is the plain-language issue:

  • A schedule forecast gives you the chance of hitting a date.
  • A cost forecast gives you the chance of hitting a budget.
  • Leadership usually needs the chance of hitting both.

Even in a simplified world where time and cost were fully unrelated, 80% confidence on each side would only translate to 64% confidence of getting both. In real projects, they are usually linked, which is exactly why the combined picture matters more than the separate ones.
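That 64% figure is easy to verify by simulation, and the same sketch shows why linkage changes the answer. The distributions below are purely illustrative, not real project data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Independent case: time and cost share nothing
time = rng.normal(100, 10, n)
cost = rng.normal(1_000, 100, n)
t80, c80 = np.percentile(time, 80), np.percentile(cost, 80)
p_both_indep = np.mean((time <= t80) & (cost <= c80))
print(f"Independent: P(both at P80) ~ {p_both_indep:.2f}")  # near 0.8 * 0.8 = 0.64

# Linked case: a shared driver pushes time and cost together
shared = rng.standard_normal(n)
time_c = 100 + 10 * (0.7 * shared + 0.71 * rng.standard_normal(n))
cost_c = 1_000 + 100 * (0.7 * shared + 0.71 * rng.standard_normal(n))
t80c, c80c = np.percentile(time_c, 80), np.percentile(cost_c, 80)
p_both_corr = np.mean((time_c <= t80c) & (cost_c <= c80c))
print(f"Linked:      P(both at P80) ~ {p_both_corr:.2f}")  # above 0.64, still below 0.80
```

Linkage raises the joint figure above the independent 64%, but it stays below the 80% that each marginal suggests on its own, which is exactly why the combined number has to be computed rather than assumed.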

This is not about making project controls more academic. It is about making the answer match the real decision.

What “joint confidence” means in plain language

Joint confidence is simply the probability that two things are true at the same time:

  1. Total duration is at or below a target date.
  2. Total cost is at or below a target budget.

If you are used to percentile language, you can think of it like this:

  • A P80 schedule date means there is an 80% chance of finishing on or before that date.
  • A P80 cost means there is an 80% chance of staying at or below that cost.
  • But the joint confidence of being both on time and on budget might be much lower.

That is why leaders care. A portfolio review, steering committee, or capital approval board is not really asking for two separate promises. It is asking for one practical outcome: How likely is this project to land successfully within the approved box?

Imagine a simple target box:

  • Finish by Day 120
  • Spend no more than $900,000

A joint confidence calculation answers: What percentage of simulated project outcomes fall inside that box?

Once you frame the problem that way, the logic becomes much clearer.
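Counting outcomes inside the box is all the calculation there is. A minimal sketch using a made-up outcome cloud in place of a real simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical simulated outcomes; a real model would produce these
# from integrated schedule and cost logic
durations = rng.triangular(100, 115, 145, n)             # days
costs = 500_000 + durations * rng.normal(3_300, 300, n)  # dollars

# Target box: finish by Day 120, spend no more than $900,000
inside = (durations <= 120) & (costs <= 900_000)
joint_conf = inside.mean()
print(f"Joint confidence: {joint_conf:.0%} of outcomes land inside the box")
```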

What a 3D confidence model actually shows

The term “3D confidence model” sounds more technical than it needs to be.

Think of it as three dimensions:

  • X-axis: total project duration
  • Y-axis: total project cost
  • Z-axis: confidence, or probability

In practice, that means you build many possible project outcomes through simulation, then map how often each combination of duration and cost occurs.

You can picture it in a few different ways:

  • As a 3D surface where higher areas represent more likely combinations of time and cost
  • As a heat map showing dense and sparse regions of possible outcomes
  • As contour lines that connect points with similar confidence levels
  • As a target box overlay that highlights whether the approved date and budget sit inside a strong or weak confidence region

The point is not the graphics by themselves. The point is that a single view can show:

  • the most likely finish-and-cost combinations
  • the spread of possible outcomes
  • the relationship between schedule slip and cost growth
  • the confidence associated with any chosen date-and-budget pair

That is far more useful than placing one schedule S-curve on one slide and one cost S-curve on another and hoping people mentally combine them correctly.

How the model is built

At a high level, an integrated cost and schedule risk model combines three things:

  1. A logical schedule model
  2. A cost model tied to project work
  3. Uncertainty and correlation assumptions

Then it runs repeated simulations to produce a cloud of possible outcomes.

1. Start with the schedule logic

The schedule needs more than task dates in a spreadsheet. It should reflect actual delivery logic:

  • activities and milestones
  • dependencies
  • critical and near-critical paths
  • resource-sensitive work where delays can propagate
  • external drivers such as permits, interfaces, or vendor delivery dates

For risk modeling, each key activity or work package typically gets an uncertainty range, such as:

  • optimistic duration
  • most likely duration
  • pessimistic duration

Or, if the data is mature enough, a fitted probability distribution.
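A minimal sketch of turning three-point estimates into sampled durations, here with a triangular distribution (the package names and ranges are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (optimistic, most likely, pessimistic) estimates in days
three_point = {
    "Design":  (8, 10, 15),
    "Build":   (15, 20, 30),
    "Testing": (10, 14, 22),
}

samples = {
    name: rng.triangular(lo, ml, hi, 10_000)
    for name, (lo, ml, hi) in three_point.items()
}

for name, s in samples.items():
    print(f"{name}: mean {s.mean():.1f} d, P80 {np.percentile(s, 80):.1f} d")
```

Note how right-skewed ranges pull the sampled mean above the most likely value: Design averages about 11 days, not 10.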

2. Link cost to the work

The cost model should be connected to how the project actually spends money. Typical categories include:

  • labor
  • materials
  • equipment
  • subcontractors
  • indirect costs
  • owner’s costs
  • escalation
  • contingency draw mechanisms

Some costs behave mostly as fixed amounts. Others are strongly time-dependent.

For example:

  • Site management cost often rises when duration rises.
  • Construction supervision cost may burn weekly.
  • Rental equipment cost grows with extended use.
  • Delay can trigger rework, standby cost, or acceleration spending.

This is where integration becomes important. If cost is modeled as though time changes do not matter, the model misses the way projects really behave.
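The mechanics are simple: part of each package's cost is fixed, and part scales with its simulated duration. A sketch with illustrative rates:

```python
def package_cost(duration_days, fixed_cost, daily_cost):
    """Total cost for one work package: a fixed portion plus
    a time-dependent portion that grows with duration."""
    return fixed_cost + daily_cost * duration_days

# Same package, planned vs delayed duration (illustrative numbers)
planned = package_cost(20, fixed_cost=45_000, daily_cost=4_200)
delayed = package_cost(26, fixed_cost=45_000, daily_cost=4_200)

print(planned)  # 129000
print(delayed)  # 154200: six extra days add $25,200 of time-driven cost
```

A model that samples duration but holds `daily_cost * duration_days` at its planned value misses exactly this coupling.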

3. Add uncertainty and correlation

This is the part that separates a realistic model from a decorative one.

Uncertainty means key inputs can vary. Correlation means some of those variations move together.

Examples:

  • Poor ground conditions can extend excavation and increase cost together.
  • Design immaturity can cause both schedule delay and change-related cost growth.
  • A supplier issue can delay delivery and create expediting cost.
  • Weather can reduce productivity and increase labor hours.

Without correlation, a simulation may underestimate how often bad outcomes cluster together.
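One common way to impose correlation, used in the implementation later in this article, is a Cholesky factor of the target correlation matrix. A two-driver sketch showing how linkage makes bad outcomes cluster:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Target correlation between two risk drivers (illustrative)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
chol = np.linalg.cholesky(corr)

z = rng.standard_normal((n, 2))   # independent draws
linked = z @ chol.T               # correlated draws, same marginals

rho_hat = np.corrcoef(linked.T)[0, 1]

# How often do BOTH drivers land in their worst 20% at once?
bad = 0.8416  # ~80th percentile of a standard normal
both_bad_indep = np.mean((z[:, 0] > bad) & (z[:, 1] > bad))
both_bad_linked = np.mean((linked[:, 0] > bad) & (linked[:, 1] > bad))
print(rho_hat, both_bad_indep, both_bad_linked)
```

The linked pair lands in the joint bad tail noticeably more often than the independent pair, which is exactly the clustering an uncorrelated model fails to reproduce.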

What gets simulated

A typical Monte Carlo simulation runs thousands of trials.

In each trial, the model does things like:

  • draw random durations for uncertain activities
  • draw random values for uncertain cost elements
  • apply defined correlations between related variables
  • recalculate the project finish date
  • recalculate total project cost

The result from one trial is a pair:

  • total duration
  • total cost

After thousands of trials, you have a large set of possible project outcomes. That outcome set can then be plotted as:

  • a scatter cloud
  • a density map
  • cumulative curves
  • a joint confidence surface

That surface is just a way of summarizing the same simulation evidence in a form that supports decisions.

Reading the surface without getting lost in statistics

A good risk model should make decisions easier, not harder.

Here are the main questions leaders can ask when looking at an integrated confidence view.

1. Where is the highest-density region?

This shows the combinations of duration and cost that occur most often in the simulation.

It is often more informative than a single deterministic baseline because it reveals the likely neighborhood, not just one point estimate.

2. Is the approved target inside a strong-confidence area?

Suppose the approved target is:

  • finish by Day 120
  • stay under $900,000

The model can show whether that point sits in:

  • a high-confidence zone
  • a marginal zone
  • a low-confidence zone

If it sits in a low-confidence area, the project may be carrying more commitment risk than leadership realizes.

3. How steep is the tradeoff?

In some projects, a small gain in schedule confidence may require a large increase in expected cost. In others, modest cost relief may create a large increase in schedule exposure.

The shape of the surface helps reveal whether tradeoffs are gentle or severe.

4. How tightly are time and cost linked?

If the outcome cloud stretches diagonally, that often indicates a strong relationship between delay and cost growth.

If it is more circular or diffuse, the link may be weaker.

This matters when choosing mitigation actions. A strongly linked project may benefit from interventions that reduce root-cause uncertainty rather than treating time and cost separately.

A simple example

Assume a project simulation produces the following separate results:

  • P80 finish date: Day 125
  • P80 total cost: $950,000

A sponsor might be tempted to think, “Fine, let us approve Day 125 and $950,000.”

But the integrated model may show something like this:

  • Probability of finishing by Day 125: 80%
  • Probability of staying under $950,000: 80%
  • Probability of achieving both together: 68%

That changes the conversation.

If the sponsor wants a joint confidence of 80%, the target box may need to move to something like:

  • Day 130
  • $990,000

The exact numbers depend on the modeled relationship between time and cost, but the principle stays the same: separate P80 values do not answer the combined question.

Why this matters for governance

Executive decisions are usually binary in practice:

  • approve or do not approve
  • commit or do not commit
  • release funding or hold
  • promise the date or keep it tentative

Those decisions should be based on the confidence of the actual commitment, not on two disconnected confidence statements.

An integrated model helps governance in at least five ways.

More honest approvals

Approvals can be tied to the confidence of the combined target, not just optimistic baseline numbers.

Better contingency setting

Contingency can be discussed in relation to both time and cost together. This is often more practical than holding a cost contingency that assumes no schedule movement.

Clearer escalation triggers

If the approved box corresponds to low joint confidence, that can trigger stronger review, phased authorization, or added mitigation before commitment.

Better portfolio comparison

Two projects may each claim “P80” status, but one may have much lower joint confidence once cost and schedule are integrated. That gives portfolio boards a more consistent basis for comparison.

Stronger communication with stakeholders

Leaders can say, in plain terms, “At the approved date and budget together, we currently have a 62% confidence,” which is far clearer than presenting two separate charts and leaving the audience to infer the rest.

The role of correlation: the part many teams understate

If there is one concept that deserves extra attention, it is correlation.

Many project models include uncertainty, but not enough meaningful dependency between variables. That can create a misleadingly smooth picture.

Common sources of correlation

Correlation appears when one underlying cause affects multiple outcomes at once.

Examples include:

  • design maturity affecting both quantities and durations
  • labor productivity affecting both crew hours and progress rates
  • supplier reliability affecting both material timing and logistics cost
  • regulatory delay affecting both approval milestones and holding costs
  • adverse weather affecting productivity, access, and equipment utilization

Why weak correlation assumptions are risky

If the model assumes that schedule and cost move mostly independently, it may generate too many mixed outcomes like:

  • late but cheap
  • early but expensive

Some projects do produce those patterns, but many do not. In many delivery environments, bad schedule outcomes and bad cost outcomes tend to arrive together.

When that is true, a weakly correlated model can overstate joint confidence.

Practical guidance

Do not add correlation everywhere just to appear sophisticated. Add it where there is a credible causal connection and where the effect is material.

A few strong, well-justified dependencies are usually better than dozens of arbitrary correlations.

What data quality is good enough?

Teams often hesitate because they think integrated modeling requires perfect data. It does not.

What it requires is structured judgment plus traceable assumptions.

You can build a useful model if you have:

  • a logic-driven schedule
  • a cost estimate mapped to work scope
  • identified risk drivers
  • reasonable duration and cost ranges
  • a clear explanation of major dependencies

Of course, better data improves the model. But the absence of perfection is not a reason to keep making commitments from disconnected views.

A simple integrated model with explicit assumptions is usually better than two polished but unrelated forecasts.

Common modeling mistakes

Integrated models can be very powerful, but only if they are built carefully.

Mistake 1: Treating the schedule as a static date list

If the logic is weak, simulation quality will also be weak. The model must reflect how work really flows.

Mistake 2: Using cost ranges that ignore schedule effects

Time-dependent cost needs to react to time movement.

Mistake 3: Double counting risk

If a risk is already represented through uncertain activity durations and also added again as a separate cost event without adjustment, the model can overstate exposure.

Mistake 4: Ignoring risk ownership and response plans

A model should reflect current mitigation and response assumptions. Otherwise it may estimate a risk profile that no longer matches the actual delivery strategy.

Mistake 5: Confusing precision with accuracy

A 3D chart can look impressive, but visual polish does not guarantee validity. The assumptions and logic still matter more than the graphics.

How to explain the result to non-technical stakeholders

One of the best ways to present integrated confidence is to avoid leading with math.

Instead, use a sequence like this:

  1. State the commitment box.
  2. Show where that box sits in the simulated outcome space.
  3. State the percentage of outcomes that land inside it.
  4. Explain the main drivers that push outcomes out of the box.
  5. Show what actions would improve that percentage.

For example:

Our current approved target is June 30 and $12.5M. Based on the integrated simulation, 58% of outcomes achieve both together. The main reasons outcomes miss the box are vendor package delay, late design release, and field productivity uncertainty. If we lock the vendor earlier and reduce design interface uncertainty, the joint confidence improves materially.

That is a decision-ready message.

Useful outputs beyond a single confidence number

A mature integrated model can produce more than just one headline figure.

Confidence for any target pair

You can test many combinations of date and budget, not just the currently approved one.

Target pairs for a required confidence level

Instead of asking, “What confidence do we have at this target?” you can ask, “What target pair gives us 70% or 80% joint confidence?”

Conditional views

You can also ask practical questions such as:

  • If we must hold the date fixed, what budget is needed for 75% confidence?
  • If the budget cannot move, what date gives us 75% confidence?
  • If a specific risk is mitigated, how does the confidence surface change?
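Each of these is a simple query against the simulated outcome set. A sketch of the first question, using a toy outcome cloud and a hypothetical `budget_for_confidence` helper:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 20_000

# Toy outcome cloud; in practice this comes from the integrated simulation
results = pd.DataFrame({"total_duration_days": rng.triangular(95, 115, 150, n)})
results["total_cost"] = 400_000 + results["total_duration_days"] * rng.normal(4_000, 400, n)

def budget_for_confidence(df, max_days, confidence):
    """Hold the date fixed; return the smallest budget giving the requested
    joint confidence, or None if the date alone makes it unreachable."""
    on_time = df["total_duration_days"] <= max_days
    if on_time.mean() < confidence:
        return None  # no budget can rescue an unachievable date
    costs = np.sort(df.loc[on_time, "total_cost"].to_numpy())
    k = int(np.ceil(confidence * len(df)))  # runs needed inside the box
    return costs[k - 1]

budget = budget_for_confidence(results, max_days=130, confidence=0.75)
print(f"Budget for 75% joint confidence at Day 130: ${budget:,.0f}")
```

The other conditional questions work the same way: fix one edge of the box and search along the other.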

Driver analysis

You can identify which uncertainty sources do the most to reduce joint confidence. That helps focus mitigation money where it matters.
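One lightweight form of driver analysis: record each trial's driver samples next to its outcome, then compare driver averages between runs that miss the box and runs that land inside it. A sketch with a toy outcome model (the drivers and coefficients are invented for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 20_000

# Illustrative shared risk drivers sampled once per trial
trials = pd.DataFrame({
    "productivity": rng.standard_normal(n),
    "vendor_delay": rng.standard_normal(n),
    "weather": rng.standard_normal(n),
})

# Toy outcome model: duration reacts mostly to vendor delay and productivity
duration = 110 + 8 * trials["vendor_delay"] + 5 * trials["productivity"] + rng.normal(0, 3, n)
cost = 850_000 + 4_000 * duration + 20_000 * trials["productivity"] + rng.normal(0, 15_000, n)

missed = (duration > 120) | (cost > 1_350_000)

# Mean driver value among missed runs minus in-box runs:
# larger shifts point to the drivers doing the most damage
shift = trials[missed].mean() - trials[~missed].mean()
print(shift.sort_values(ascending=False))
```

In this toy setup, the shift for `vendor_delay` stands well above `weather`, which is exactly the ranking that tells you where mitigation money belongs.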

From model to management action

The value of integrated risk modeling is not the model itself. It is what management does because of it.

Typical actions include:

  • revising commitments to a more realistic target pair
  • adding schedule contingency and cost contingency in a coordinated way
  • funding targeted mitigations
  • changing execution strategy
  • resequencing work
  • increasing procurement urgency on long-lead items
  • introducing stage gates before full commitment
  • reserving management contingency for correlated downside scenarios

In other words, the model should shape decisions, not just reporting.

A lightweight implementation approach

If your organization is new to integrated cost and schedule risk analysis, you do not need to start with a massive enterprise model.

A practical first version can follow these steps:

Step 1: Select the right level of detail

Model the major work packages and major risk drivers. Do not try to simulate every minor task.

Step 2: Identify cost elements that move with time

Separate:

  • mostly fixed cost
  • variable cost driven by duration
  • event-driven cost
  • escalation-sensitive cost

Step 3: Define uncertainty ranges

Use historical data where available, and expert elicitation where needed.

Step 4: Capture the major dependencies

Document the few relationships that matter most, such as:

  • productivity affecting both duration and labor cost
  • supplier delay affecting both milestone dates and expediting cost
  • design maturity affecting both rework exposure and schedule performance
  • weather affecting both field progress and equipment cost

Start simple. A small set of credible dependencies is enough to make the model meaningfully more realistic.

Step 5: Run enough simulations to stabilize the picture

Use enough trials to make the confidence estimates reasonably stable. The exact number depends on model complexity, but the principle is straightforward:

  • too few trials can create noisy results
  • more trials usually produce a smoother and more reliable view
  • the model should be checked for stability before using it in governance
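A simple stability check, assuming a toy outcome cloud in place of the real model: compute the joint confidence on growing prefixes of the trial set and watch the estimate settle.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50_000

# Toy simulated outcomes (a real model would supply these)
durations = rng.triangular(100, 118, 150, n)
costs = 500_000 + durations * rng.normal(3_500, 350, n)

inside = (durations <= 125) & (costs <= 950_000)

# Running estimate of joint confidence as trials accumulate
running = np.cumsum(inside) / np.arange(1, n + 1)

for k in (500, 2_000, 10_000, 50_000):
    print(f"{k:>6} trials: joint confidence ~ {running[k - 1]:.3f}")
```

When the running estimate moves less than the tolerance you care about between checkpoints, the trial count is adequate for governance use.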

Step 6: Test the approved target and a few alternatives

Do not stop at one answer. Compare:

  • the current approved date and budget
  • a slightly relaxed target pair
  • a more conservative target pair
  • one or two mitigation cases

That gives leadership options, not just a warning.

Step 7: Use the outputs in real decisions

The first lightweight model should support specific actions, such as:

  • whether to approve a commitment now
  • whether to add contingency
  • whether to delay commitment until key uncertainties reduce
  • which mitigation deserves immediate funding

That is enough to create value without overengineering the first effort.

How often the model should be updated

An integrated model is most useful when it is treated as a living decision tool, not as a one-time study.

Typical update points include:

  • after baseline approval
  • at major design maturity gates
  • after key procurement awards
  • when critical risks are retired or intensified
  • before external commitments to date or budget
  • when forecast performance diverges materially from plan

The purpose of updating is not to produce endless analysis. It is to keep the commitment view aligned with current reality.

Who should be involved

The best integrated models are not built by one specialist working alone.

Useful participation usually includes:

  • project controls
  • scheduling
  • cost engineering or estimating
  • risk management
  • engineering or design leads
  • procurement
  • construction or delivery leads
  • project management and sponsorship

Why this matters:

  • schedulers understand delivery logic
  • cost teams understand spending behavior
  • delivery leads understand what really drives performance
  • sponsors help define the decision thresholds that matter

Cross-functional input improves assumptions and increases trust in the result.

What a good leadership discussion sounds like

Once an integrated model exists, the meeting itself gets better.

Instead of asking:

  • “What is the schedule confidence?”
  • “What is the cost confidence?”

the conversation becomes:

  • “What is our confidence in the approved target pair?”
  • “What are the top drivers pulling us out of the box?”
  • “Which actions improve joint confidence the most?”
  • “What target pair matches the confidence level we are willing to commit to?”

That is a much stronger governance discussion because it is tied directly to the actual decision.

Practical interpretation tips

A few simple habits can make the outputs easier to use.

Do not obsess over a single percentage point

The difference between 67% and 69% joint confidence is rarely the main issue. The important question is whether the commitment is:

  • clearly weak
  • borderline
  • reasonably robust

Use the model to understand the risk position, not to pretend that every decimal place is meaningful.

Compare scenarios, not just baselines

Integrated models are especially valuable when comparing choices:

  • current plan versus accelerated procurement
  • current plan versus added design resources
  • phased authorization versus full commitment now
  • supplier A versus supplier B

Often the clearest value is not the absolute confidence number, but how much a decision changes it.

Focus on the out-of-box drivers

If outcomes miss the target box, ask why. Usually a small number of causes explain a large share of the misses.

That is where management attention should go.

Match confidence to the decision type

Not every decision needs the same confidence level.

For example:

  • an internal planning target may tolerate lower confidence
  • a public commitment may require much higher confidence
  • a regulatory milestone may justify conservative assumptions
  • an early concept estimate may use broader ranges and less rigid thresholds

The right level depends on the consequence of being wrong.

Limitations to acknowledge

Integrated modeling is powerful, but it is not magic.

It still depends on:

  • the quality of schedule logic
  • the realism of the cost model
  • the credibility of uncertainty ranges
  • the validity of dependency assumptions
  • the discipline used in updating the model

It also cannot fully predict:

  • unprecedented external shocks
  • policy changes outside project control
  • strategic decisions that radically alter scope
  • organizational behavior that is not reflected in the model

That does not reduce its value. It simply means the model should inform judgment, not replace it.

A useful maturity path for organizations

Organizations do not need to become experts overnight. A practical maturity path often looks like this:

Level 1: Separate schedule and cost risk views

This is where many teams begin. It is better than deterministic planning alone, but still leaves the combined commitment question unresolved.

Level 2: Basic integrated target-box testing

At this stage, the team can estimate joint confidence for a chosen date and budget pair.

Level 3: Explicit dependency modeling

The organization starts modeling a limited set of meaningful correlations and time-driven cost behavior.

Level 4: Scenario-based decision support

The model is used to compare mitigation options, procurement strategies, and execution plans.

Level 5: Embedded governance use

Joint confidence becomes part of routine approvals, portfolio reviews, and commitment decisions.

Even moving from Level 1 to Level 2 can materially improve decision quality.

A concise way to summarize the concept

If you need to explain the whole idea in one minute, this usually works:

Separate schedule confidence and cost confidence do not tell us the confidence of achieving both together. Because time and cost often move together, we need an integrated model. By simulating duration and cost at the same time, we can estimate the probability of finishing by a chosen date and under a chosen budget simultaneously. That gives leadership a more honest basis for approvals, contingency, and mitigation decisions.

Final thought

Projects are not delivered in two separate worlds, one for dates and one for dollars. They are delivered in one world where time and cost interact constantly.

That is why the real management question is not:

  • “What is our schedule confidence?”
  • “What is our cost confidence?”

It is:

  • “What is our confidence in meeting the commitment we are actually making?”

A 3D integrated cost and schedule confidence model helps answer that question directly. And when leaders can see time, cost, and confidence together, the conversation shifts from isolated forecasts to realistic commitments.

Python Implementation Example

import numpy as np
import pandas as pd
import plotly.express as px
from datetime import datetime, timedelta

np.random.seed(42)

# ----------------------------
# Sample integrated project data
# ----------------------------
work_packages = pd.DataFrame({
    "work_package": ["Requirements", "Design", "Build", "Testing", "Training", "Go-Live Prep"],
    "optimistic_days": [5, 8, 15, 10, 4, 3],
    "most_likely_days": [7, 10, 20, 14, 6, 5],
    "pessimistic_days": [10, 15, 30, 22, 9, 8],
    "fixed_cost": [12000, 20000, 45000, 18000, 8000, 6000],
    "daily_cost": [1800, 2500, 4200, 3000, 1500, 1200]
})

project_start = datetime(2026, 5, 1)

# ----------------------------
# Simulation settings
# ----------------------------
n_simulations = 10000
target_finish_date = datetime(2026, 6, 30)
target_budget = 260000

# Shared risk drivers to create realistic correlation
# These affect both duration and cost together
driver_names = ["productivity", "vendor_delay", "testing_rework"]
driver_std = np.array([0.12, 0.10, 0.15])
driver_corr = np.array([
    [1.00, 0.35, 0.25],
    [0.35, 1.00, 0.30],
    [0.25, 0.30, 1.00]
])

# Cholesky decomposition for correlated random variables
L = np.linalg.cholesky(driver_corr)

# Exposure of each work package to each shared risk driver
# Columns: productivity, vendor_delay, testing_rework
driver_exposure = {
    "Requirements": np.array([0.30, 0.05, 0.00]),
    "Design":       np.array([0.40, 0.10, 0.05]),
    "Build":        np.array([0.70, 0.60, 0.20]),
    "Testing":      np.array([0.35, 0.15, 0.80]),
    "Training":     np.array([0.20, 0.05, 0.10]),
    "Go-Live Prep": np.array([0.20, 0.20, 0.15])
}

def sample_correlated_drivers():
    z = np.random.normal(size=len(driver_names))
    correlated = L @ z
    scaled = correlated * driver_std
    return dict(zip(driver_names, scaled))

def simulate_integrated_model(df, n_runs, start_date):
    results = []

    for _ in range(n_runs):
        drivers = sample_correlated_drivers()
        driver_vector = np.array([drivers["productivity"], drivers["vendor_delay"], drivers["testing_rework"]])

        total_days = 0
        total_cost = 0

        for _, row in df.iterrows():
            base_duration = np.random.triangular(
                row["optimistic_days"],
                row["most_likely_days"],
                row["pessimistic_days"]
            )

            exposure = driver_exposure[row["work_package"]]
            combined_driver_effect = np.dot(exposure, driver_vector)

            adjusted_duration = max(1, base_duration * (1 + combined_driver_effect))

            fixed_cost_shock = np.random.normal(0, row["fixed_cost"] * 0.03)
            adjusted_fixed_cost = max(0, row["fixed_cost"] + fixed_cost_shock)

            adjusted_daily_cost = row["daily_cost"] * (1 + 0.6 * combined_driver_effect)
            adjusted_daily_cost = max(0, adjusted_daily_cost)

            package_cost = adjusted_fixed_cost + adjusted_duration * adjusted_daily_cost

            total_days += adjusted_duration
            total_cost += package_cost

        finish_date = start_date + timedelta(days=int(round(total_days)))

        results.append({
            "total_duration_days": total_days,
            "finish_date": finish_date,
            "total_cost": total_cost
        })

    return pd.DataFrame(results)

def percentile_finish_date(series, percentile):
    sorted_dates = pd.Series(pd.to_datetime(series)).sort_values().reset_index(drop=True)
    index = int(np.ceil(percentile / 100 * len(sorted_dates))) - 1
    return sorted_dates.iloc[max(index, 0)]

def percentile_cost(series, percentile):
    return np.percentile(series, percentile)

def joint_confidence(df, target_date, target_cost):
    return ((df["finish_date"] <= pd.to_datetime(target_date)) & (df["total_cost"] <= target_cost)).mean()

# ----------------------------
# Run simulation
# ----------------------------
results_df = simulate_integrated_model(work_packages, n_simulations, project_start)

# ----------------------------
# Headline outputs
# ----------------------------
p50_finish = percentile_finish_date(results_df["finish_date"], 50)
p80_finish = percentile_finish_date(results_df["finish_date"], 80)
p50_cost = percentile_cost(results_df["total_cost"], 50)
p80_cost = percentile_cost(results_df["total_cost"], 80)

finish_confidence = (results_df["finish_date"] <= pd.to_datetime(target_finish_date)).mean()
cost_confidence = (results_df["total_cost"] <= target_budget).mean()
combined_confidence = joint_confidence(results_df, target_finish_date, target_budget)

print("Integrated Cost and Schedule Risk Results")
print("-" * 50)
print(f"Target finish date: {target_finish_date.strftime('%Y-%m-%d')}")
print(f"Target budget: ${target_budget:,.0f}")
print(f"Probability of finishing by target date: {finish_confidence:.1%}")
print(f"Probability of staying under target budget: {cost_confidence:.1%}")
print(f"Joint confidence of achieving both: {combined_confidence:.1%}")
print()
print(f"P50 finish date: {p50_finish.strftime('%Y-%m-%d')}")
print(f"P80 finish date: {p80_finish.strftime('%Y-%m-%d')}")
print(f"P50 total cost: ${p50_cost:,.0f}")
print(f"P80 total cost: ${p80_cost:,.0f}")

# ----------------------------
# 2D scatter plot of outcomes
# ----------------------------
plot_df = results_df.copy()
plot_df["finish_date_str"] = pd.to_datetime(plot_df["finish_date"]).dt.strftime("%Y-%m-%d")
plot_df["inside_target_box"] = np.where(
    (plot_df["finish_date"] <= pd.to_datetime(target_finish_date)) &
    (plot_df["total_cost"] <= target_budget),
    "Inside target",
    "Outside target"
)

fig_scatter = px.scatter(
    plot_df,
    x="total_duration_days",
    y="total_cost",
    color="inside_target_box",
    opacity=0.55,
    title="Integrated Cost and Schedule Risk Outcomes",
    labels={
        "total_duration_days": "Total Duration (days)",
        "total_cost": "Total Cost"
    },
    hover_data=["finish_date_str"]
)

fig_scatter.add_vline(
    x=(target_finish_date - project_start).days,
    line_dash="dash",
    line_color="red"
)

fig_scatter.add_hline(
    y=target_budget,
    line_dash="dash",
    line_color="red"
)

fig_scatter.add_annotation(
    x=(target_finish_date - project_start).days,
    y=target_budget,
    text="Target Box Corner",
    showarrow=True,
    arrowhead=1
)

fig_scatter.show()

# ----------------------------
# 3D confidence surface
# ----------------------------
duration_grid = np.linspace(
    results_df["total_duration_days"].min(),
    results_df["total_duration_days"].max(),
    35
)

cost_grid = np.linspace(
    results_df["total_cost"].min(),
    results_df["total_cost"].max(),
    35
)

Z = np.zeros((len(cost_grid), len(duration_grid)))

for i, cost_limit in enumerate(cost_grid):
    for j, duration_limit in enumerate(duration_grid):
        Z[i, j] = (
            (results_df["total_duration_days"] <= duration_limit) &
            (results_df["total_cost"] <= cost_limit)
        ).mean()

# Plot the surface with the value grids as x and y so both axes are in
# real days and dollars. The target lines can then be drawn in that same
# value scale instead of matrix index positions.
fig_surface = px.imshow(
    Z,
    x=duration_grid,
    y=cost_grid,
    origin="lower",
    aspect="auto",
    title="Joint Confidence Surface",
    labels={
        "x": "Duration Threshold (days)",
        "y": "Cost Threshold",
        "color": "Confidence"
    }
)

fig_surface.add_vline(
    x=(target_finish_date - project_start).days,
    line_dash="dash",
    line_color="red"
)

fig_surface.add_hline(
    y=target_budget,
    line_dash="dash",
    line_color="red"
)

fig_surface.show()

# ----------------------------
# Contour chart alternative
# ----------------------------
fig_contour = px.density_contour(
    results_df,
    x="total_duration_days",
    y="total_cost",
    nbinsx=30,
    nbinsy=30,
    title="Density Contour of Simulated Time and Cost Outcomes",
    labels={
        "total_duration_days": "Total Duration (days)",
        "total_cost": "Total Cost"
    }
)

fig_contour.add_vline(
    x=(target_finish_date - project_start).days,
    line_dash="dash",
    line_color="red"
)

fig_contour.add_hline(
    y=target_budget,
    line_dash="dash",
    line_color="red"
)

fig_contour.show()

Output Example

(The printed percentile summary and the three charts appear here when the script runs.)

Interpretation

Your current target is too aggressive, and the model says the chance of meeting both the date and the budget together is low.

Here is how to read each chart.

1. Scatter plot: Integrated Cost and Schedule Risk Outcomes

This is the most important chart.

  • Each dot is one simulation run.
  • The x-axis is total project duration in days.
  • The y-axis is total cost.
  • The red vertical line is your target duration.
  • The red horizontal line is your target budget.
  • The bottom-left area under both red lines is your success zone, meaning on time and on budget.

What it shows:

  • Most of the simulated outcomes are clustered to the right of 60 days and above $260k.
  • Your target corner at about 60 days and $260k sits near the edge or outside the main cloud.
  • The cloud slopes upward, which means when duration increases, cost also increases.
  • That tells you schedule and cost are strongly linked in this model.

Plain-English meaning:

  • If the project takes longer, it usually costs more.
  • Your chosen target pair looks tighter than what the simulation considers likely.
  • So joint confidence is probably quite low.
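
The duration-cost linkage visible in the cloud can also be quantified directly with a correlation coefficient on the simulation output. A minimal sketch, using synthetic correlated outcomes as a stand-in for the real results_df:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 5_000

# Synthetic stand-in for the simulation output: cost is driven
# partly by duration (labor burn) plus independent noise.
duration = rng.normal(68, 6, n)                              # days
cost = 150_000 + duration * 2_500 + rng.normal(0, 15_000, n)

demo_df = pd.DataFrame({
    "total_duration_days": duration,
    "total_cost": cost,
})

# Pearson correlation between duration and cost across runs
r = demo_df["total_duration_days"].corr(demo_df["total_cost"])
print(f"duration-cost correlation: {r:.2f}")
```

On real simulation output, a value well above zero confirms the upward slope of the cloud: schedule slips carry cost growth with them.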

2. Density contour chart

This is the same story, but cleaner.

  • The innermost contour lines show the most likely combinations of time and cost.
  • The outer contours show less likely, but still plausible, outcomes.
  • Your red crosshair shows the target date and budget.

What it shows:

  • The highest-density area appears around something like:
    • mid-60s to low-70s days
    • roughly $290k to $330k
  • Your target at 60 days / $260k is near the lower-left edge of the distribution.

Plain-English meaning:

  • The project is more likely to land later and more expensive than your target.
  • Your target is not impossible, but it is clearly optimistic relative to the modeled range.

3. Joint confidence surface

This chart shows:

  • for any duration threshold and cost threshold,
  • what fraction of simulations fall under both at the same time.

So, for example:

  • if you allow more days and more budget, confidence should go up,
  • if you tighten both, confidence should go down.
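
That monotonic behavior is easy to check numerically: loosening both thresholds can only admit more runs. A self-contained sketch, with synthetic outcomes standing in for the simulated results_df:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Synthetic correlated time and cost outcomes
duration = rng.normal(68, 6, n)                              # days
cost = 150_000 + duration * 2_500 + rng.normal(0, 15_000, n)

def joint_confidence(max_days: float, max_cost: float) -> float:
    """Fraction of runs that finish under both thresholds at once."""
    return float(np.mean((duration <= max_days) & (cost <= max_cost)))

tight = joint_confidence(60, 260_000)   # aggressive target box
loose = joint_confidence(75, 330_000)   # relaxed target box

print(f"tight box: {tight:.1%}, loose box: {loose:.1%}")
```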

One caution before trusting this chart: px.imshow() has a common pitfall that can make the surface look wrong or misleading.

Symptoms to watch for:

  • The high-confidence block sits in one corner while the red target lines appear disconnected from the surface, or miss it entirely.
  • This usually happens when the heatmap axes are in matrix index positions (0 to 34 here) while add_vline() and add_hline() are given coordinates in the original day and dollar scale, or the reverse. The fix is to plot the surface against the actual duration and cost grids and draw the target lines in that same value scale.

When the axes and the target lines agree, the reading is straightforward:

  • higher up and farther right means higher joint confidence,
  • lower left means lower joint confidence,
  • and this project's target point sits in a relatively low-confidence region.

If the target lines do not cross the surface where the scatter plot says they should, fix the axis scale before drawing precise conclusions from this chart.
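
The index-versus-value mismatch is easy to demonstrate numerically. A small sketch, using a hypothetical 41-point grid for round numbers (the same effect occurs with the 35-point grid in the code above):

```python
import numpy as np

# Hypothetical duration grid: 50 to 90 days in 1-day steps
duration_grid = np.linspace(50, 90, 41)
target_days = 60

# Matrix index position of the grid cell nearest the target...
idx = int(np.abs(duration_grid - target_days).argmin())

# ...versus the actual value stored at that position.
value = float(duration_grid[idx])

print(f"index position: {idx}, grid value: {value:.1f} days")

# A vline drawn at x=idx on an axis labeled in days lands near x=10,
# nowhere near the intended x=60. The surface and the target lines
# must share one coordinate system.
```

Passing the grids as `x=` and `y=` to `px.imshow` and drawing the lines at the real target values keeps everything in one scale.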

Overall business interpretation

Your model is telling a very clear story:

  • There is a positive correlation between delay and cost growth.
  • The most likely outcomes are later than 60 days and more expensive than $260k.
  • So the approved target box is probably too tight.
  • If leadership wants a stronger confidence level, they likely need:
    • a later target date,
    • a higher budget,
    • or meaningful mitigation that shifts the cloud down and left.
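
One practical way to frame that choice is to tabulate joint confidence for a few candidate target boxes so leadership can pick a risk tolerance explicitly. A minimal sketch on synthetic outcomes (substitute the real results_df columns in practice; the candidate boxes below are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
n = 10_000

# Synthetic correlated time and cost outcomes
duration = rng.normal(68, 6, n)                              # days
cost = 150_000 + duration * 2_500 + rng.normal(0, 15_000, n)

# Candidate (max days, max cost) target boxes, loosest last
candidates = [
    (60, 260_000),   # current approved target
    (70, 300_000),
    (75, 330_000),
    (80, 360_000),
]

trade_off = pd.DataFrame([
    {
        "max_days": d,
        "max_cost": c,
        "joint_confidence": float(np.mean((duration <= d) & (cost <= c))),
    }
    for d, c in candidates
])

print(trade_off.to_string(index=False))
```

Because each candidate loosens both limits at once, the confidence column can only rise down the table, which makes the cost of an aggressive target visible at a glance.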

What to say in project language

You could summarize the result like this:

The simulation suggests our current target of 60 days and $260,000 is aggressive. Most modeled outcomes fall beyond one or both limits, and time and cost move together. This means delays are likely to drive additional cost, not just schedule slippage. To improve confidence, we would need either more contingency, a relaxed target, or actions that materially reduce the main risk drivers.

Practical takeaway

Use the scatter plot and contour plot as your primary visuals here. They are both telling a consistent story:

  • current target = low confidence
  • time and cost are linked
  • most likely outcome = later and more expensive than target

Before relying on the heatmap, verify that its axes and target lines share the same value scale.
