Build a Sensitivity Analysis Pack

This guide explains how to build a sensitivity analysis pack in Model Reef by creating dedicated models that vary specific assumptions and then summarising their impact on key outputs.

Model Reef does not have a one-click data table feature. Instead, you produce a set of carefully designed models that each represent a point on the sensitivity curve.

Before you start

You should have:

  • A solid Base Case model.

  • A clear set of assumptions you want to test, for example:

    • Revenue growth.

    • Gross margin.

    • Capex levels.

    • Discount rate.

  • Valuation metrics configured (NPV, IRR, Money Multiple) if you want value sensitivities.

If needed, review:

  • Build a DCF Model (FCFF)

  • Build a Valuation Sensitivity Model

What you will build

  • A small family of models where only one or two assumptions change.

  • A summary of how outputs respond as each key assumption is moved.

  • A pack that shows risk and upside and can be shared with stakeholders.

1. Choose sensitivity dimensions and ranges

Select a small number of variables to test, for example:

  • Annual revenue growth rate.

  • Gross margin percentage.

  • Capex per year.

  • WACC or equity discount rate.

For each, define a range of values that you want to test, such as:

  • Base growth plus or minus 5 percentage points.

  • Margin increasing from 50 percent to 60 percent.

  • WACC between 8 percent and 12 percent.

These ranges should be realistic and focused on the questions people actually ask.
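
If it helps to keep the plan consistent, you can record the dimensions and ranges outside Model Reef as plain data. Below is a minimal Python sketch; the dimension names, base values, and test points are illustrative, not taken from any particular model.

```python
# A sensitivity plan kept outside Model Reef as plain data.
# All names and numbers here are illustrative placeholders.

sensitivity_plan = {
    # dimension: (base value, points to test)
    "revenue_growth": (0.20, [0.15, 0.20, 0.25]),  # base +/- 5 percentage points
    "gross_margin":   (0.55, [0.50, 0.55, 0.60]),  # 50 percent to 60 percent
    "wacc":           (0.10, [0.08, 0.10, 0.12]),  # 8 percent to 12 percent
}

for dimension, (base, points) in sensitivity_plan.items():
    print(f"{dimension}: base {base:.0%}, testing {[f'{p:.0%}' for p in points]}")
```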

2. Create sensitivity models from the Base Case

For each sensitivity dimension, create a dedicated set of models:

  • Duplicate the Base Case model for each point you want to test, for example:

    • Model - Sens - Growth 15pc

    • Model - Sens - Growth 20pc

    • Model - Sens - Growth 25pc

  • In each model:

    • Adjust only the targeted assumption, for example revenue growth drivers.

    • Leave everything else unchanged.

Repeat this process for each dimension, keeping changes isolated to one assumption per group of models where possible.
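
Because duplication and renaming are manual, it can help to generate the model names up front so they stay consistent. The Python helper below is hypothetical scaffolding that only produces names in the "Model - Sens - Growth 15pc" style used above; the duplication itself still happens in Model Reef.

```python
# A hypothetical naming helper; it does not interact with Model Reef.

def sensitivity_model_names(dimension_label: str, points: list[float]) -> list[str]:
    """Return one model name per test point, e.g. 'Model - Sens - Growth 15pc'."""
    return [f"Model - Sens - {dimension_label} {round(p * 100)}pc" for p in points]

print(sensitivity_model_names("Growth", [0.15, 0.20, 0.25]))
# ['Model - Sens - Growth 15pc', 'Model - Sens - Growth 20pc', 'Model - Sens - Growth 25pc']
```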

3. Update valuation and outputs in each model

In each sensitivity model:

  • Ensure valuation settings match the Base Case, unless the discount rate itself is the assumption being tested.

  • Record key outputs for that point, for example:

    • Revenue in key years.

    • EBITDA in key years.

    • Project NPV and IRR.

    • Equity IRR and Money Multiple.

    • Minimum cash balance.

You can export these values or copy them into an external summary table.
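
One lightweight way to do this is to keep all recorded outputs in a single CSV file. The sketch below assumes you copy the numbers out of each model by hand; the file name, column names, and figures are illustrative placeholders rather than real results.

```python
# Collect recorded outputs into one summary CSV.
# The NPV and IRR figures below are placeholders, not model results.

import csv

rows = [
    {"model": "Model - Sens - Growth 15pc", "growth": 0.15, "npv": 42.1, "irr": 0.14},
    {"model": "Model - Sens - Growth 20pc", "growth": 0.20, "npv": 55.7, "irr": 0.17},
    {"model": "Model - Sens - Growth 25pc", "growth": 0.25, "npv": 71.3, "irr": 0.21},
]

with open("sensitivity_summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["model", "growth", "npv", "irr"])
    writer.writeheader()
    writer.writerows(rows)
```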

4. Build sensitivity tables and charts outside the models

Once you have collected outputs from each sensitivity model, arrange them into tables or charts, for example:

  • Growth rate along the horizontal axis and NPV on the vertical axis.

  • Margin assumptions versus IRR.

  • Capex levels versus minimum cash balance.

This step is typically done in an external tool or documentation, using Model Reef as the engine that generates the underlying numbers.
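
As one example of such an external tool, the Python sketch below reads the illustrative summary CSV from the previous step and plots growth against NPV using matplotlib. The file name and column names are the assumed ones from that sketch.

```python
# Plot growth against NPV from the illustrative summary CSV.

import csv

import matplotlib.pyplot as plt

with open("sensitivity_summary.csv", newline="") as f:
    rows = list(csv.DictReader(f))

growth = [float(r["growth"]) for r in rows]
npv = [float(r["npv"]) for r in rows]

plt.plot(growth, npv, marker="o")
plt.xlabel("Annual revenue growth rate")
plt.ylabel("Project NPV")
plt.title("NPV sensitivity to revenue growth")
plt.savefig("npv_vs_growth.png", dpi=150)
```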

5. Create a narrative sensitivity pack

For communication purposes, assemble a pack (for example a deck or report) that includes:

  • A short description of each sensitivity dimension and why it matters.

  • The tables and charts built from your models.

  • Key non-technical messages, for example:

    • Value is more sensitive to margin than to growth beyond a certain point.

    • Small changes in WACC have a meaningful effect on valuation, as the sketch below illustrates.

    • High capex strategies significantly increase funding risk.

This pack gives stakeholders a structured view of risk and leverage points in the model.
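
As a toy illustration of the WACC message, the sketch below discounts an assumed flat cash-flow stream at different rates. The cash flows are invented for illustration and are not outputs of any model.

```python
# A toy NPV calculation showing how WACC moves valuation.
# Ten years of 100 per year is an assumed stream, not model output.

cash_flows = [100.0] * 10

def npv(rate: float, flows: list[float]) -> float:
    """Discount year-end cash flows starting one year out."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows, start=1))

for wacc in (0.08, 0.10, 0.12):
    print(f"WACC {wacc:.0%}: NPV {npv(wacc, cash_flows):.1f}")
# Roughly 671 at 8%, 614 at 10%, 565 at 12%: about a 16 percent drop from 8% to 12%.
```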

6. Keep the sensitivity family in sync with model updates

When the Base Case changes, the sensitivity models may become stale.

  • Decide how often to refresh the sensitivity pack, for example:

    • After major assumption updates.

    • After board meetings or financing events.

  • When refreshing:

    • Start from the updated Base Case.

    • Recreate only the sensitivity models that are still relevant.

Keeping a small, focused set of sensitivities is easier than maintaining a very large grid of cases.

Check your work

  • Sensitivity models are identical to the Base Case except for the targeted assumptions (see the comparison sketch after this list).

  • Outputs are consistently measured across all models.

  • Tables and charts clearly show how outputs respond to assumption changes.

  • Stakeholders can understand both the range of outcomes and which assumptions matter most.
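
For the first check, a quick way to spot unintended differences is to compare each sensitivity model's assumptions against the Base Case as name-value pairs. The sketch below assumes you have listed the assumptions by hand; the names and values are illustrative.

```python
# Report every assumption that differs from the Base Case, so any
# unintended change stands out. Values here are illustrative.

base_case = {"revenue_growth": 0.20, "gross_margin": 0.55, "wacc": 0.10}
sens_model = {"revenue_growth": 0.25, "gross_margin": 0.55, "wacc": 0.10}

diffs = {name: (base_case[name], sens_model.get(name))
         for name in base_case if base_case[name] != sens_model.get(name)}
print(diffs)  # expect exactly one entry: {'revenue_growth': (0.2, 0.25)}
```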

Troubleshooting

Too many models become hard to manage

Reduce the number of sensitivity points and focus only on the most informative ones.

Sensitivity results look erratic

Check that assumptions are being changed consistently in each model and that there are no unintended structural differences.

Stakeholders are overwhelmed by detail

Highlight the top few sensitivities and summarise the rest in an appendix or technical note.
