Interactive Risk Dashboards: Creating Live S-Curves for Your Stakeholders

If you have ever walked into a steering committee with a risk report full of red, amber, and green flags, you already know the limitation. Stakeholders rarely want a static snapshot. They want to ask questions. What is the chance we finish by the board date? How much confidence do we have in that forecast? What dates correspond to P50 and P80? A PDF can report risk, but it does not support a live conversation very well.

That is where a Jupyter Notebook can be surprisingly useful. It may not be as polished as a full web dashboard, but it gives you a practical way to simulate outcomes, visualize an S-curve, and answer common schedule risk questions quickly. For many project managers, that is more than enough to move from static reporting to better decision support.

One of the best visuals for this type of discussion is the S-curve. In project risk terms, an S-curve shows the cumulative probability of finishing by different dates. It is simple enough for non-specialists to follow, but powerful enough to anchor serious conversations about confidence, contingency, and target dates.

Why Jupyter Notebook Is a Practical Starting Point

When people hear “interactive dashboard,” they often imagine a custom web application. That can be useful, but it is not always necessary. A Jupyter Notebook gives you a much faster way to test ideas, validate assumptions, and build a working prototype.

A notebook is especially useful when you want to:

  • Run Monte Carlo simulations quickly.
  • Adjust assumptions directly in code.
  • Display probability metrics immediately.
  • Show charts in the same working environment.
  • Build a proof of concept before investing in a full application.

This makes Jupyter a practical middle ground. It is more dynamic than a static report, but much easier to build than a production dashboard.

Why a Live S-Curve Is Better Than a Static Finish Date

A single forecast date hides too much uncertainty. If someone says, “We expect to finish on June 30,” that sounds clear, but it raises important questions.

  • Is June 30 a 50 percent probability date or an 80 percent probability date?
  • How wide is the range of likely outcomes?
  • Is the target date aggressive, realistic, or conservative?
  • How sensitive is the forecast to the duration assumptions?

A live S-curve helps answer those questions visually. Instead of compressing uncertainty into one statement, it shows the full relationship between finish date and confidence.

That changes the conversation. Instead of debating whether a single date is “good” or “bad,” stakeholders can see the range and discuss what level of confidence they actually want.

What This Notebook Example Does

The code below builds a simple schedule risk model using synthetic task data. Each task has three estimates:

  • Optimistic.
  • Most likely.
  • Pessimistic.

The notebook then:

  • Simulates many possible project durations using a triangular distribution.
  • Converts those simulated outcomes into finish dates.
  • Builds a cumulative probability S-curve.
  • Calculates key dates such as P50 and P80.
  • Calculates the probability of hitting a chosen target date.
  • Displays a Plotly chart directly inside Jupyter Notebook.

This is not a full enterprise schedule risk tool, but it is an excellent starting point for project managers who want something practical and easy to understand.

Step 1: Create a Small Sample Schedule

The first step is to define a simple schedule. In this example, the project includes six serial activities.

import numpy as np
import pandas as pd
import plotly.express as px
from datetime import datetime, timedelta

tasks = pd.DataFrame({
    "task": [
        "Requirements",
        "Design",
        "Build",
        "Testing",
        "Training",
        "Go-Live Prep"
    ],
    "optimistic": [5, 8, 15, 10, 4, 3],
    "most_likely": [7, 10, 20, 14, 6, 5],
    "pessimistic": [10, 15, 30, 22, 9, 8]
})

project_start = datetime(2026, 5, 1)

This table gives each activity a three-point estimate. That is often enough to create a useful first version of a schedule risk model. It does not require perfect data. It only requires a disciplined estimate of the likely range.

Step 2: Simulate Many Possible Project Durations

Next, we run a Monte Carlo simulation. For each simulation, the code samples one possible duration for every task, then sums those durations to produce one possible project finish.

def run_simulation(tasks, n_simulations=5000, start_date=datetime(2026, 5, 1)):
    total_durations = []

    for _ in range(n_simulations):
        sampled_durations = np.random.triangular(
            left=tasks["optimistic"],
            mode=tasks["most_likely"],
            right=tasks["pessimistic"]
        )
        total_duration = sampled_durations.sum()
        total_durations.append(total_duration)

    total_durations = np.array(total_durations)
    # Round rather than truncate, so fractional days do not bias finishes early
    finish_dates = [start_date + timedelta(days=round(d)) for d in total_durations]

    return total_durations, finish_dates

This approach uses a triangular distribution, which is a common way to model uncertainty when you have optimistic, most likely, and pessimistic estimates. It is simple, understandable, and practical for early risk analysis.

For project managers, the value is straightforward. A deterministic finish date suggests certainty that rarely exists in real life. Running thousands of simulations creates a range of outcomes, which is much closer to how projects actually behave.
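As a small standalone illustration, using the three-point numbers from the "Requirements" task above: a right-skewed triangular distribution has a mean above its most likely value, which is one reason deterministic plans built from most-likely estimates tend to run optimistic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Right-skewed triangular estimate (optimistic=5, most likely=7, pessimistic=10),
# matching the "Requirements" task in the sample schedule.
samples = rng.triangular(left=5, mode=7, right=10, size=100_000)

# The analytical mean of a triangular distribution is (a + b + c) / 3,
# which here is about 7.33 days, above the most likely value of 7.
analytical_mean = (5 + 7 + 10) / 3

print(f"simulated mean:  {samples.mean():.2f}")
print(f"analytical mean: {analytical_mean:.2f}")
```

Summing many such skewed distributions is exactly how the planned schedule drifts away from the "most likely" total.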

Step 3: Convert the Results Into an S-Curve

Once the finish dates are simulated, they need to be converted into a cumulative probability curve.

def build_s_curve(finish_dates):
    finish_series = pd.Series(pd.to_datetime(finish_dates)).sort_values()
    cumulative_probability = np.arange(1, len(finish_series) + 1) / len(finish_series)

    s_curve_df = pd.DataFrame({
        "finish_date": finish_series.values,
        "cumulative_probability": cumulative_probability
    })

    return s_curve_df

This function sorts the finish dates from earliest to latest, then assigns cumulative probability values across the range. If 80 percent of simulations finish on or before a certain date, then that date corresponds to the 80 percent cumulative probability point.

This is the point where raw simulation output becomes a stakeholder-friendly chart. Instead of a list of thousands of simulated values, you now have a visual that supports a real conversation.
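A miniature standalone example, using five made-up durations instead of thousands, shows how the sort-and-rank step assigns cumulative probabilities:

```python
import numpy as np
import pandas as pd

# Five illustrative simulated total durations in days (real runs use thousands).
durations = pd.Series([62, 55, 70, 58, 66]).sort_values().reset_index(drop=True)

# Rank each sorted value and divide by the count to get cumulative probability.
cumulative = np.arange(1, len(durations) + 1) / len(durations)

for d, p in zip(durations, cumulative):
    print(f"finish within {d} days: {p:.0%}")
# Sorted order is 55, 58, 62, 66, 70, so 62 days maps to 3/5 = 60%.
```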

Step 4: Calculate Confidence Dates and Target-Date Probability

Stakeholders usually ask a few predictable questions. What is the P50 date? What is the P80 date? What is the probability of hitting the target?

The following helper functions answer exactly those questions.

def get_percentile_date(finish_dates, percentile):
    sorted_dates = pd.Series(pd.to_datetime(finish_dates)).sort_values().reset_index(drop=True)
    index = int(np.ceil(percentile / 100 * len(sorted_dates))) - 1
    return sorted_dates.iloc[max(index, 0)]

def get_probability_by_target(finish_dates, target_date):
    finish_series = pd.Series(pd.to_datetime(finish_dates))
    probability = (finish_series <= pd.to_datetime(target_date)).mean()
    return probability

These metrics are useful because they convert probability into decisions. A target date with a 35 percent chance of success leads to a very different discussion than a date with 80 percent confidence. This is where schedule risk analysis becomes genuinely useful for governance.

Step 5: Run the Analysis in Jupyter Notebook

Because this version is written for Jupyter Notebook, it uses fixed Python variables rather than interactive controls (the sliders or inputs you would add in, say, an ipywidgets or Streamlit version). That makes it easy to run inside a notebook cell without any app server, and easy to tweak and re-run during a discussion.

simulation_count = 5000
target_date = datetime(2026, 6, 30)
confidence_level = 80

durations, finish_dates = run_simulation(tasks, n_simulations=simulation_count, start_date=project_start)
s_curve_df = build_s_curve(finish_dates)

p50_date = get_percentile_date(finish_dates, 50)
p80_date = get_percentile_date(finish_dates, 80)
selected_conf_date = get_percentile_date(finish_dates, confidence_level)
target_probability = get_probability_by_target(finish_dates, target_date)

print(f"Probability of finishing by target date: {target_probability:.1%}")
print(f"P50 date: {p50_date.strftime('%Y-%m-%d')}")
print(f"P80 date: {p80_date.strftime('%Y-%m-%d')}")
print(f"P{confidence_level} date: {selected_conf_date.strftime('%Y-%m-%d')}")

This gives you the core metrics directly in the notebook output. In practice, that already allows you to answer several important stakeholder questions before you even display the chart.

Step 6: Plot the S-Curve

Now the results can be visualized using Plotly.

fig = px.line(
    s_curve_df,
    x="finish_date",
    y="cumulative_probability",
    title="Project Finish Probability S-Curve"
)

fig.update_traces(line=dict(width=3))
fig.update_yaxes(tickformat=".0%")

fig.add_vline(x=p50_date, line_dash="dash", line_color="blue")
fig.add_vline(x=p80_date, line_dash="dash", line_color="orange")
fig.add_vline(x=pd.to_datetime(target_date), line_dash="dot", line_color="red")

fig.add_annotation(x=p50_date, y=0.5, text="P50", showarrow=True, arrowhead=1)
fig.add_annotation(x=p80_date, y=0.8, text="P80", showarrow=True, arrowhead=1)
fig.add_annotation(
    x=pd.to_datetime(target_date),
    y=min(target_probability, 0.95),
    text="Target",
    showarrow=True,
    arrowhead=1
)

fig.show()

This chart displays the cumulative probability of finishing by different dates, while also marking the P50 date, P80 date, and chosen target date. That makes the output much easier to interpret during a meeting or review.

Full Jupyter Notebook Code

If you want the complete notebook-ready example in one block, here it is.

import numpy as np
import pandas as pd
import plotly.express as px
from datetime import datetime, timedelta

tasks = pd.DataFrame({
    "task": [
        "Requirements",
        "Design",
        "Build",
        "Testing",
        "Training",
        "Go-Live Prep"
    ],
    "optimistic": [5, 8, 15, 10, 4, 3],
    "most_likely": [7, 10, 20, 14, 6, 5],
    "pessimistic": [10, 15, 30, 22, 9, 8]
})

project_start = datetime(2026, 5, 1)

def run_simulation(tasks, n_simulations=5000, start_date=datetime(2026, 5, 1)):
    total_durations = []

    for _ in range(n_simulations):
        sampled_durations = np.random.triangular(
            left=tasks["optimistic"],
            mode=tasks["most_likely"],
            right=tasks["pessimistic"]
        )
        total_duration = sampled_durations.sum()
        total_durations.append(total_duration)

    total_durations = np.array(total_durations)
    # Round rather than truncate, so fractional days do not bias finishes early
    finish_dates = [start_date + timedelta(days=round(d)) for d in total_durations]

    return total_durations, finish_dates

def build_s_curve(finish_dates):
    finish_series = pd.Series(pd.to_datetime(finish_dates)).sort_values()
    cumulative_probability = np.arange(1, len(finish_series) + 1) / len(finish_series)

    s_curve_df = pd.DataFrame({
        "finish_date": finish_series.values,
        "cumulative_probability": cumulative_probability
    })

    return s_curve_df

def get_percentile_date(finish_dates, percentile):
    sorted_dates = pd.Series(pd.to_datetime(finish_dates)).sort_values().reset_index(drop=True)
    index = int(np.ceil(percentile / 100 * len(sorted_dates))) - 1
    return sorted_dates.iloc[max(index, 0)]

def get_probability_by_target(finish_dates, target_date):
    finish_series = pd.Series(pd.to_datetime(finish_dates))
    probability = (finish_series <= pd.to_datetime(target_date)).mean()
    return probability

simulation_count = 5000
target_date = datetime(2026, 6, 30)
confidence_level = 80

durations, finish_dates = run_simulation(tasks, n_simulations=simulation_count, start_date=project_start)
s_curve_df = build_s_curve(finish_dates)

p50_date = get_percentile_date(finish_dates, 50)
p80_date = get_percentile_date(finish_dates, 80)
selected_conf_date = get_percentile_date(finish_dates, confidence_level)
target_probability = get_probability_by_target(finish_dates, target_date)

print(f"Probability of finishing by target date: {target_probability:.1%}")
print(f"P50 date: {p50_date.strftime('%Y-%m-%d')}")
print(f"P80 date: {p80_date.strftime('%Y-%m-%d')}")
print(f"P{confidence_level} date: {selected_conf_date.strftime('%Y-%m-%d')}")

fig = px.line(
    s_curve_df,
    x="finish_date",
    y="cumulative_probability",
    title="Project Finish Probability S-Curve"
)

fig.update_traces(line=dict(width=3))
fig.update_yaxes(tickformat=".0%")

fig.add_vline(x=p50_date, line_dash="dash", line_color="blue")
fig.add_vline(x=p80_date, line_dash="dash", line_color="orange")
fig.add_vline(x=pd.to_datetime(target_date), line_dash="dot", line_color="red")

fig.add_annotation(x=p50_date, y=0.5, text="P50", showarrow=True, arrowhead=1)
fig.add_annotation(x=p80_date, y=0.8, text="P80", showarrow=True, arrowhead=1)
fig.add_annotation(
    x=pd.to_datetime(target_date),
    y=min(target_probability, 0.95),
    text="Target",
    showarrow=True,
    arrowhead=1
)

fig.show()

What Stakeholders Can Learn From the Output

Once the notebook runs, stakeholders can quickly understand several things.

1. The probability of hitting the target date

This is usually the first question leadership asks. If the selected target has only a low probability of success, that is a clear signal that the date may be too aggressive.

2. The difference between likely and conservative planning

P50 often reflects a middle-ground forecast. P80 is a more conservative planning date. The gap between those two dates helps reveal how much uncertainty exists in the schedule.
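A rough standalone sketch of that gap, using a single triangular distribution as a stand-in for the simulated totals (the real model builds the totals from per-task samples, so the shape differs, but the P50-to-P80 idea is the same):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative stand-in for simulated total durations, in days; the bounds
# loosely echo the sample schedule's summed optimistic/likely/pessimistic values.
durations = rng.triangular(left=45, mode=62, right=94, size=10_000)

p50 = np.percentile(durations, 50)
p80 = np.percentile(durations, 80)

# The P50-to-P80 gap is a quick, rough measure of schedule uncertainty.
print(f"P50: {p50:.0f} days, P80: {p80:.0f} days, gap: {p80 - p50:.0f} days")
```

A wide gap is a prompt to ask why: unstable estimates, unresolved risks, or both.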

3. The shape of uncertainty

A steep S-curve means outcomes are clustered fairly tightly. A flatter curve suggests wider uncertainty. That can indicate unstable assumptions, unresolved dependencies, or weak estimate quality.

Why This Works Well in Practice

The real strength of this approach is not the chart by itself. The strength is the quality of conversation it creates.

Instead of saying, “The forecast date is June 30,” you can say:

  • Here is the probability of meeting June 30.
  • Here is the P50 date.
  • Here is the P80 date.
  • Here is the spread of uncertainty behind the forecast.

That is a much more useful discussion. It moves stakeholders away from false certainty and toward informed decision-making.

A Few Practical Improvements You Can Add Next

This notebook example is intentionally simple, but it can be extended without too much effort.

You could add:

  • More realistic project data loaded from a CSV file.
  • Milestone-level analysis instead of only final finish dates.
  • Histograms of finish dates alongside the S-curve.
  • Sensitivity analysis for high-risk tasks.
  • Working-day calendars instead of simple elapsed days.
  • Scenario testing for different target dates.

These changes can make the notebook even more useful while still keeping it lightweight.
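The working-day idea, for instance, can be sketched with NumPy's business-day helpers. The start date below matches the sample schedule, while the 10-working-day offset is arbitrary and purely illustrative:

```python
import numpy as np

# Convert a simulated duration in *working* days into a calendar finish date,
# skipping weekends (a holidays= list can also be passed to np.busday_offset).
start = np.datetime64("2026-05-01")  # the sample project's start, a Friday

# 10 working days later; roll="forward" moves a weekend start to the next
# business day before counting.
finish = np.busday_offset(start, 10, roll="forward")
print(finish)  # → 2026-05-15, since the two intervening weekends are skipped
```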

Important Limitations to Explain Clearly

A tool like this becomes useful only when people understand what it does and does not represent.

1. The example assumes a simple serial schedule

All task durations are summed in sequence. Real projects often include parallel work, dependencies, float, approval gates, and rework loops. A more advanced model would need to account for that.
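A minimal sketch of the parallel case, assuming a hypothetical two-branch network rather than the article's serial schedule: per simulation, the joint finish of parallel branches is the elementwise maximum of their sampled durations, not their sum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical example: Build and Training run in parallel, and both must
# finish before Testing can start (estimates borrowed from the sample tasks).
build_branch = rng.triangular(15, 20, 30, size=n)
training_branch = rng.triangular(4, 6, 9, size=n)

# The merge point waits for the slower branch in each simulation.
merge = np.maximum(build_branch, training_branch)

testing = rng.triangular(10, 14, 22, size=n)
total = merge + testing
print(f"mean total with parallel branches: {total.mean():.1f} days")
```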

2. The simulation is only as good as the estimates

If optimistic, most likely, and pessimistic values are unrealistic, the output will be unrealistic too. A notebook does not solve poor estimating discipline. It only makes the assumptions more visible.

3. Calendars are not included

This example uses simple elapsed days. It does not account for weekends, holidays, or resource calendars. In real environments, that can materially affect the finish date.

4. Risks are represented indirectly

The uncertainty is modeled through duration ranges, not through explicit risk events tied to tasks. That is fine for a starting point, but more mature models may handle risk drivers more explicitly.
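One common extension is a discrete risk event layered on top of the duration ranges. The sketch below is hypothetical: the 30 percent probability and the 10-to-20 day impact are invented for illustration, not taken from the article's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Illustrative base total duration in days.
base = rng.triangular(45, 62, 94, size=n)

# Hypothetical discrete risk event: a 30% chance that a key approval slips,
# adding a further 10-20 day delay whenever it occurs.
occurs = rng.random(n) < 0.30
impact = rng.uniform(10, 20, size=n)
total = base + occurs * impact

print(f"mean without risk event: {base.mean():.1f} days")
print(f"mean with risk event:    {total.mean():.1f} days")
```

Modeling risks this way lets stakeholders see the cost of a specific threat, not just a generally wider duration range.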

Final Thought

A static risk report can tell people where the project appears to stand. A notebook-based S-curve can help them understand what that actually means.

That difference matters. One format reports uncertainty. The other helps people discuss and manage it.

If you want a practical way to make schedule risk more visible, more understandable, and more actionable, Jupyter Notebook is an excellent place to start. It is fast to build, easy to test, and strong enough to support much better stakeholder conversations than a single forecast date ever could.

How To Land the Job and Interview for Project Managers Course:

Advance your project management career with HK School of Management’s expert-led course. Gain standout resume strategies, master interviews, and confidently launch your first 90 days. With real-world insights, AI-powered tools, and interactive exercises, you’ll navigate hiring, salary negotiation, and career growth like a pro. Enroll now and take control of your future!

Coupons

AI For Project Managers

Coupon code: 396C33293D9E5160A3A4
Custom price: $14.99
Starts: March 29, 2026 4:15 PM PDT
Expires: April 29, 2026 4:15 PM PDT

AI for Agile Project Managers and Scrum Masters

Coupon code: 5BD32D2A6156B31B133C
Custom price: $14.99
Starts: January 27, 2026
Expires: February 28, 2026

AI-Prompt Engineering for Managers, Project Managers, and Scrum Masters

Coupon code: 103D82060B4E5E619A52
Custom price: $14.99
Starts: January 27, 2026
Expires: February 27, 2026

Agile Project Management and Scrum With AI – GPT

Coupon code: 0C673D889FEA478E5D83
Custom price: $14.99
Starts: December 21, 2025 6:54 PM PST
Expires: January 21, 2026 6:54 PM PST

Leadership for Project Managers: Leading People and Projects

Coupon code: A339C25E6E8E11E07E53
Custom price: $14.99
Starts: December 21, 2025 6:58 PM PST
Expires: January 21, 2026 6:58 PM PST

Project Management Bootcamp

Coupon code: BFFCDF2824B03205F986
Custom price: $12.99
Starts: November 22, 2025 12:50 PM PST (GMT -8)
Expires: December 23, 2025 12:50 PM PST (GMT -8)
