Build a Machine-Readable Model for Automation
This guide explains how to design and maintain a Model Reef model that is easy for external tools and automation layers to consume. The goal is to make the model predictable and self-describing so that APIs, scripts or downstream systems can integrate with it without bespoke interpretation each time.
Model Reef itself is the modelling engine. A machine-readable design makes it much easier to plug that engine into other systems as automation features and integrations evolve.
Before you start
You should have:
A working model with three-statement logic in place.
A good understanding of variable types, branches, categories and the Data Library.
A clear idea of which outputs you want external tools to consume, for example:
Forecast cashflows.
Scenario summaries.
Key KPIs or drivers.
If you are still designing the model structure, see:
Model Structure Principles
Build a Central Assumption Library
What you will build
By the end of this guide you will have:
A model with:
Stable naming conventions for branches, variables and Data Library entries.
Consistent category and sub-category structures.
Clearly tagged assumptions and outputs.
A set of exportable series and reporting views that external tools can rely on.
Define a stable naming scheme
Machine readability starts with predictable names. Define conventions for:
Branches:
Use simple, consistent names such as Group, Division Retail and Division Online.
Avoid frequent renaming once integrations depend on them.
Variables:
Prefix with type and purpose, for example Revenue - Online, COGS - Retail and Opex - Marketing - Paid Ads.
Data Library entries:
Use names that include the role, for example Assumption - Inflation - General and Driver - Units - Retail.
Document these conventions in your internal documentation or GitBook so the whole team follows them.
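As a minimal sketch of how an automation layer might enforce these conventions, the script below encodes the example patterns above as regular expressions. The patterns are illustrative assumptions, not a Model Reef rule set; adapt them to whatever your team documents.

```python
import re

# Illustrative convention checks mirroring the examples above
# ("Revenue - Online", "Assumption - Inflation - General").
VARIABLE_PATTERN = re.compile(r"^[A-Za-z]+(?: [A-Za-z]+)*(?: - [A-Za-z ]+)+$")
LIBRARY_PATTERN = re.compile(r"^(Assumption|Driver) - [A-Za-z ]+ - [A-Za-z ]+$")

def violations(names, pattern):
    """Return the names that do not match the agreed convention."""
    return [name for name in names if not pattern.fullmatch(name)]

print(violations(["Revenue - Online", "misc_adj"], VARIABLE_PATTERN))
# ['misc_adj'] -> flag for renaming before integrations depend on it
```

Running a check like this before each export catches drift early, while a rename is still cheap.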
Standardise categories and sub-categories
External tools need a consistent mapping between variables and report lines. To support this:
Use a stable category hierarchy for the P&L, Balance Sheet and Cashflow Statement.
Avoid frequent reclassification of variables between categories once the model is integrated.
When new lines are added, place them within the existing hierarchy where possible rather than inventing new categories ad hoc.
A consistent structure makes it much easier to automate extraction and mapping of outputs.
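A stable hierarchy can be mirrored in integration code as a simple lookup. This is a sketch, assuming the integration keeps its own variable-to-report-line map; the category names are illustrative.

```python
# A stable mapping an integration might maintain between model
# variables and report lines. Names here are examples only.
CATEGORY_MAP = {
    "Revenue - Online": ("P&L", "Revenue"),
    "COGS - Retail": ("P&L", "Cost of Sales"),
    "Opex - Marketing - Paid Ads": ("P&L", "Operating Expenses"),
}

def report_line(variable_name):
    """Resolve a variable to its (statement, category) report line.

    Failing loudly on unknown names surfaces ad hoc category additions
    early, instead of silently dropping lines from automated reports.
    """
    try:
        return CATEGORY_MAP[variable_name]
    except KeyError:
        raise KeyError(f"Unmapped variable: {variable_name!r}") from None
```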
Centralise assumptions in the Data Library
Automation layers benefit from having all assumptions in one place. To enable that:
Store key assumptions in the Data Library rather than in many separate variables.
Tag assumptions clearly, for example ASSUMPTION, DRIVER, FX or TAX.
Avoid embedding important hard-coded values deep in formulas where they are hard to scrape or interpret.
This allows external tools to read a single table of assumptions and know what each entry represents.
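As a sketch of what this enables, the helper below reads one exported assumptions table into a script. The column names (name, tag, value) are an assumed layout, not a documented Model Reef export format; adapt them to your actual files.

```python
import csv

def load_assumptions(path):
    """Read an exported assumptions CSV into a dict keyed by entry name."""
    with open(path, newline="") as f:
        return {
            row["name"]: {"tag": row["tag"], "value": float(row["value"])}
            for row in csv.DictReader(f)
        }

# assumptions = load_assumptions("data_library.csv")
# assumptions["Assumption - Inflation - General"]["value"] -> e.g. 0.025
```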
Tag and document machine-facing series
For series you expect external tools to consume, such as:
FCFF and FCFE cashflows.
Scenario-level KPIs.
Specific driver series.
Use tags or naming to signal that they are machine-facing, for example:
Tag EXPORT on Data Library entries you plan to extract via CSV or future APIs.
Add notes with concise descriptions and units.
Maintain a simple index document in GitBook that lists these series and their intended use.
This is particularly important if the model is large and complex.
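The index itself can also be kept as structured data, so scripts can check an export against it before processing. A minimal sketch, with illustrative series names and units:

```python
# A machine-readable mirror of the GitBook index of exported series.
SERIES_INDEX = {
    "FCFF": {"units": "currency, thousands", "tag": "EXPORT"},
    "Driver - Units - Retail": {"units": "units per period", "tag": "EXPORT"},
}

def missing_series(exported_names):
    """Return indexed series that are absent from an export."""
    return sorted(set(SERIES_INDEX) - set(exported_names))

print(missing_series(["FCFF"]))  # ['Driver - Units - Retail']
```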
Create stable reporting views
Design reports and dashboards that will act as canonical views for automation, for example:
A standardised Cashflow Waterfall configuration.
A multi-year P&L view at a fixed level of detail.
A valuation summary table with NPV, IRR, Money Multiple and selected metrics.
Simple KPI dashboards.
External tools can then use these views as structured exports rather than having to assemble numbers from scratch.
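A canonical view then maps naturally onto a simple loader. The sketch below assumes a layout with report lines as rows and period labels as column headers; adjust it to the views you actually export.

```python
import csv

def load_report_view(path):
    """Load a canonical report export as {line name: {period: value}}."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        # Assumed layout: first column is the line name, the rest are
        # period labels such as FY2025, FY2026.
        name_col, *periods = reader.fieldnames
        return {
            row[name_col]: {p: float(row[p]) for p in periods}
            for row in reader
        }

# pnl = load_report_view("pnl_multi_year.csv")
# pnl["Revenue"]["FY2026"] -> one unambiguous number per line and period
```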
Plan for scenario integration
If you intend to automate scenario analysis:
Keep scenario models structurally identical.
Use the same naming conventions and category structures in each.
Ensure that machine-facing outputs carry the same names and tags in all scenarios.
This lets external tools treat scenarios as multiple instances of the same schema, rather than as unrelated models.
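With a shared schema, scenario comparison reduces to a dictionary operation, as in this sketch (the figures are placeholders for values read from each scenario's canonical export):

```python
# Because every scenario carries the same series names, comparison needs
# no bespoke reconciliation.
base = {"Revenue": 1200.0, "EBITDA": 240.0}
upside = {"Revenue": 1450.0, "EBITDA": 330.0}

def scenario_deltas(a, b):
    """Per-metric difference; a schema mismatch fails fast by design."""
    if a.keys() != b.keys():
        raise ValueError("Scenarios have drifted out of the shared schema")
    return {metric: b[metric] - a[metric] for metric in a}

print(scenario_deltas(base, upside))  # {'Revenue': 250.0, 'EBITDA': 90.0}
```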
Use exports as an interim integration channel
Until direct APIs are in place, you can still use exports as a machine interface, for example:
Export a CSV of selected Data Library entries marked EXPORT.
Export report tables to CSV for ingestion into BI tools or scripts.
Use predictable file naming, for example company_modelname_metricname_date.csv.
Automation scripts can then consume these exports on a schedule, knowing that structures and names will remain stable.
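A consuming script then only needs to parse the filename convention. The sketch below assumes underscore separators and an ISO date, which is one way to realise the pattern above; adapt the split to whichever separator you standardise on.

```python
from datetime import date
from pathlib import Path

def parse_export_name(path):
    """Split company_modelname_metricname_date.csv into its parts."""
    company, model, metric, datestr = Path(path).stem.split("_")
    return company, model, metric, date.fromisoformat(datestr)

def latest_export(folder, metric):
    """Pick the newest export of a given metric, or None if absent."""
    matching = [p for p in Path(folder).glob("*.csv")
                if parse_export_name(p)[2] == metric]
    return max(matching, key=lambda p: parse_export_name(p)[3], default=None)
```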
Check your work
Names and categories are consistent and follow documented rules.
Important assumptions live in the Data Library and are tagged appropriately.
Machine-facing series and reports exist and are clearly labelled.
You can explain to an engineer or data analyst exactly where to find the series they need.