Physical climate modelling in an investment context: Sorting the insight from the noise

In the first in a series of articles looking at how climate modelling is affecting organizations’ everyday business processes, we consider how some of the world’s largest financial institutions are grappling with the new world of physical climate risk data.

In the last few years, the collective realization that climate change is not a possibility but an emerging operating reality has confronted the business and financial community with a new question: How do you invest confidently in a world where, within the investment horizon, a changing climate can impact the viability of an asset or an entire business?

The climate agenda has found its way to the boardroom of many organizations, driven both by prudent risk management and by growing regulatory pressure. Whether it’s mandated stress tests for banks or public reporting for others, such as via the Task Force on Climate-related Financial Disclosures (TCFD) framework, it boils down to the need to ingest new and complex climate data, connect that data to existing functions and models, and do so at pace.

For many, it represents a greater reliance on third-party data and models than is natural or comfortable given historic risk management approaches. This does not mean climate models themselves should be taken on faith, but rather that understanding them, and what good looks like, is becoming a fundamental new imperative for leadership.

Where do I start?

While climate modelling in this context is relatively new, its precursor, natural catastrophe modelling, is a well-established industry. Primarily focused on serving the insurance world for present-day pricing and capital modelling, the distillation of physical risk into an expected view of loss (“risk today”) means a lot of the component parts of “risk tomorrow” are well tested and well audited. (It also has the helpful bonus that if an insurer is likely to use the same model, it should come to similar conclusions on a property’s risk profile, for example.) And, while a 30- to 100-year climate model cannot be audited against present-day outcomes in the same way, one should expect common underpinnings of the same quality, for example:

  • Highly granular spatial resolution (such as 100m or finer) for the perils that require it (see box out) and for the use cases demanding it (for instance, modelling individual properties). 
  • Where appropriate, public defences against the likes of flood risk are taken into account in the property’s risk profile.
  • An ability to translate physical damage (flood depth and wind speed, for instance) into financial loss via an accepted process (typically using vulnerability functions) built on insurance loss data specific to the building types in question; a minimal sketch of this translation follows the list.
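To illustrate the mechanics, here is a minimal sketch of how a vulnerability (depth-damage) function might translate flood depth into financial loss for a single property. The curve breakpoints and figures are illustrative assumptions, not values from any real model:

```python
# A minimal, illustrative vulnerability (depth-damage) function.
# The breakpoints below are assumptions for demonstration, not real model values.

def flood_damage_ratio(depth_m: float) -> float:
    """Map flood depth (metres) to a damage ratio by linear interpolation
    over an illustrative depth-damage curve for a generic building type."""
    curve = [(0.0, 0.00), (0.5, 0.15), (1.0, 0.30), (2.0, 0.55), (4.0, 0.85)]
    if depth_m <= curve[0][0]:
        return curve[0][1]
    for (d0, r0), (d1, r1) in zip(curve, curve[1:]):
        if depth_m <= d1:
            return r0 + (r1 - r0) * (depth_m - d0) / (d1 - d0)
    return curve[-1][1]  # cap at the deepest point on the curve

def financial_loss(depth_m: float, rebuild_cost: float) -> float:
    """Translate physical severity into financial loss for one property."""
    return flood_damage_ratio(depth_m) * rebuild_cost

print(financial_loss(1.2, 400_000))  # 1.2m of water against a 400,000 rebuild cost
```

In practice, such curves are calibrated on insurance loss data per building type, as the bullet above notes.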

Bolting on the climate-adjusted aspect of risk is far from trivial, but equally, no trend analysis is meaningful without a clear and reasonable view of risk today as a starting point.

All the above focuses on how to assess an individual property, but for this kind of modelling it’s very difficult to produce a meaningful portfolio-level answer any other way. So whether modelling a mortgage book, a real estate investment trust, or the operating assets of a company, the specific locational context of each asset needs careful consideration.

To bring this to life, we can compare this bottom-up approach to the aspiration of many organizations looking to create postal code-level databases of climate risk. The smart way to do this is to aggregate climate data on the basis of the individual property risk profiles in the postal code; the alternative is to create an average of some kind over the land mass, regardless of where properties are located, which will likely skew the view of risk (for example, no one would build a house in the river that flows through the middle of the postal code).
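As a concrete illustration, the sketch below aggregates postal-code risk bottom-up from hypothetical property-level expected losses; the postcodes, field names, and figures are invented for demonstration:

```python
# A minimal sketch of bottom-up aggregation: postal-code risk is built from the
# properties actually in the postal code, not averaged over the land mass.
# Postcodes and loss figures are illustrative only.
from collections import defaultdict
from statistics import mean

properties = [
    {"postcode": "AB1 2CD", "annual_expected_loss": 120.0},
    {"postcode": "AB1 2CD", "annual_expected_loss": 95.0},
    {"postcode": "AB1 9ZZ", "annual_expected_loss": 2_400.0},  # riverside outlier
]

losses_by_postcode = defaultdict(list)
for prop in properties:
    losses_by_postcode[prop["postcode"]].append(prop["annual_expected_loss"])

# Summarize each postal code from its individual property risk profiles.
summary = {
    pc: {"properties": len(losses), "mean_loss": mean(losses), "max_loss": max(losses)}
    for pc, losses in losses_by_postcode.items()
}
print(summary)
```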

Similarly, properties built in an area known to be riskier today should be built to a higher standard of resilience to that peril. This can be as simple as using more nails on roof tiles in windy areas, or as dramatic as building houses on stilts in flood-prone areas. One can therefore readily see the danger of under- or over-reporting risk when the interaction between risk and property design is not understood.

What climate perils should I care about?

It’s very important for an organization to arrive at its own view of which climate risks need to be modelled, rather than to accept a data vendor’s view, which will likely align with its own product offering.

Several inputs can be useful here:

  • Present-day loss data and/or frictions in achieving insurance
  • National risk reports, such as those released by the Environmental Protection Agency (EPA) in the US, to give a broader view of operating context
  • The input of the academic weather and climate community

Notably, for property portfolios, it’s common for flood-related risks, wind/hurricane, and wildfire to drive the majority of direct impacts, while extreme heat, water stress, and drought cause longer-term indirect impacts, such as business disruption (down days or adjusted work hours).

There will likely be overlap between an organization’s priority list and what is available from the climate model community, but not a perfect fit. Presently, there are few climate-adjusted landslip or tornado models available, for example. That said, virtually all climate model vendors have ambitious roadmaps for augmenting capability: given that a typical vendor will work with a client for five-plus years (as the integration into processes can run very deep), it’s as important to understand (and potentially influence) these roadmaps as it is to assess current capabilities. We have included in the box out some potential questions to ask regarding commonly sought peril models.

What outputs should I seek to get, and how are they delivered?

No two vendors produce the same suite of outputs. Some focus on a score system, which promises simplicity, while others offer an array of data points seeking to describe the climate-risk distribution. For investment modelling, we tend to recommend working with the latter. Given the non-linear relationship between financial stress and creditworthiness, it matters whether an organization is exposed to frequent, nuisance-level risk or to highly unlikely catastrophic risk (the latter typically being more insurable than the former). For this reason, it’s important to focus on data structured in return periods or similar, and to avoid score-based averages, which invariably reduce granularity.
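As an illustration of what return-period-structured data allows, the sketch below integrates a hypothetical set of return-period losses into an average annual loss, while the raw points remain available to distinguish frequent nuisance losses from rare catastrophic ones. All figures are invented:

```python
# Hypothetical return-period losses: the loss expected to be exceeded once in N years.
return_period_losses = {10: 5_000, 50: 40_000, 100: 90_000, 250: 180_000}

def average_annual_loss(rp_losses: dict) -> float:
    """Trapezoidal integration of loss against annual exceedance probability."""
    # Convert return periods to exceedance probabilities, highest probability first.
    pts = sorted(((1.0 / rp, loss) for rp, loss in rp_losses.items()), reverse=True)
    return sum((p0 - p1) * (l0 + l1) / 2.0 for (p0, l0), (p1, l1) in zip(pts, pts[1:]))

print(f"average annual loss ~ {average_annual_loss(return_period_losses):,.0f}")
```

Note that the data collapses into a single number only when you choose it to; a vendor score does that collapsing for you up front.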

We have typically found that organizations create their own score-based system from this data in order to aid decisioning. The difference here is that the simplification is owned and understood in-house rather than dictated by the vendor. This also has the advantage that the organization is less tied to the data architecture of a particular vendor: return-period-structured data is fairly common, whereas scores are always bespoke.
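Continuing the sketch above, an in-house score might band annual expected loss against asset value; the thresholds here are illustrative assumptions, not a recommendation:

```python
def internal_score(annual_expected_loss: float, asset_value: float) -> str:
    """Band expected loss as a share of asset value into an in-house score.
    Thresholds are illustrative and would be set (and owned) internally."""
    ratio = annual_expected_loss / asset_value
    if ratio < 0.0005:
        return "low"
    if ratio < 0.002:
        return "medium"
    return "high"

print(internal_score(3_260, 2_000_000))  # "medium" under these example thresholds
```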

It’s worth noting that virtually all vendors focus on property damage rather than business interruption risk, as the latter requires much more information about the nature of the enterprise, the criticality of various sites to its ability to generate revenue, and so forth. It’s perfectly possible for an investment house to build this up using a vendor’s damage data as an input, but few vendors can meaningfully provide this answer from the outside in. The same is true for value chain analysis: if this is not first mapped by the investment house, it’s unlikely that the data vendor will be able to model it meaningfully.
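For instance, a simple in-house build-up might weight each site’s expected downtime (informed by a vendor’s damage output) by that site’s share of revenue; all names and figures below are illustrative assumptions:

```python
# A minimal sketch of an in-house business interruption view: the firm's own
# map of site criticality combined with downtime estimates informed by a
# vendor's property damage data. All fields and figures are illustrative.
sites = [
    {"name": "plant_a", "revenue_share": 0.6, "expected_down_days": 12},
    {"name": "plant_b", "revenue_share": 0.3, "expected_down_days": 3},
    {"name": "office",  "revenue_share": 0.1, "expected_down_days": 1},
]
annual_revenue = 50_000_000

# Revenue at risk: each site's revenue share, scaled by its expected downtime.
revenue_at_risk = sum(
    s["revenue_share"] * (s["expected_down_days"] / 365) * annual_revenue
    for s in sites
)
print(f"indicative annual revenue at risk ~ {revenue_at_risk:,.0f}")
```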

We have also found that how vendors choose to characterize their product varies wildly:

  • Many concentrate on the quality/experience of the interface (web portal, API integration), as this can be compelling and obvious even to a non-technical user.
  • Some concentrate on auditability, with an ability to click through various calculation stages and/or to access metadata.
  • Others concentrate on the quality of the underlying model in terms of completeness, granularity, and noise reduction — this is the most critical for risk-decisioning, but also the hardest to audit during a tender process.

Again, an organization’s use cases will dictate how important each of these dimensions is. Ease of execution might be appropriate for experimental use or very high-level reporting, while the quality and granularity of the model will likely matter more when making active investment decisions.

How do I know if it’s right?

No physical risk vendor will claim to have perfect foresight, and the data can be inherently noisy. Auditing a vendor’s approach is the best source of comfort, and is typically achieved through the following:

  • Detailed model documentation, including methodologies, assumptions, and limitations.
  • Access to the technical team responsible for the modelling for further interrogation on approaches taken.

Where practical and helpful, you should back-test present-day risk for certain perils and geographies against real-world outcomes. You should expect locations hit by major recent events to show as higher risk in the model. This is best undertaken by an independent party, though the vendor should have an equivalent internal process.
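A minimal back-test along these lines might compare modelled present-day risk at locations of recent major events against comparable unaffected locations; the data below is invented for illustration:

```python
from statistics import mean

# Illustrative only: modelled risk at locations with and without a
# recent major event of the relevant peril.
locations = [
    {"modelled_risk": 0.8, "recent_event": True},
    {"modelled_risk": 0.7, "recent_event": True},
    {"modelled_risk": 0.2, "recent_event": False},
    {"modelled_risk": 0.3, "recent_event": False},
]

at_events = mean(l["modelled_risk"] for l in locations if l["recent_event"])
elsewhere = mean(l["modelled_risk"] for l in locations if not l["recent_event"])
print(f"mean modelled risk at event sites {at_events:.2f} vs elsewhere {elsewhere:.2f}")
# A material gap supports (but does not prove) model skill for that peril.
```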

What do I do with all this information?

The outputs of these models are useful across scenario analysis and reporting (such as the stress tests and TCFD reporting mentioned above), internal testing of existing risk positions, and front-book decision making. Ideally, an organization will use the same model across all these exercises (and across business units, such as debt and equity investments) in order to ensure consistency of appetite.

The final step to consider is how to increase the resilience of the organization’s assets. This can mean many interventions with the occupier, such as:

  • Allocating/provisioning additional funds specifically for adaptation that improves site resilience, or making investment decisions conditional on such changes.
  • Adjusting covenants to require specific insurances to be in place, such as increasing limits for flood insurance.
  • Expecting to see and audit active risk management plans, such as creating wide and well-maintained firebreaks around a property.
  • Repurposing sites so higher-risk locations are less critical to overall operation, as well as wider resilience strategies to mitigate potentially catastrophic business model issues.

The above suggestions represent better overall economic outcomes than rendering assets and businesses stranded, but may require a step-change in engagement from certain investors.

Depending on both how quickly the physical risk profile evolves and the length of the investment, many changes can be incorporated into existing operational risk management processes with limited disruption. The key is to bring forward the analysis and communication of risk to maximize the window for resilience building and adaptation.

Concluding thoughts

Physical risk data modelling is a rapidly evolving technical area. It’s small wonder that large investment organizations have onboarded staff fluent in the design and use of these models to ensure full value is extracted from them.

At the same time, these models will only ever capture a subset of the physical risks a location or entity might face. They are not designed, for example, to conduct what-if analyses of how a commercial location might be affected by nearby residential abandonment arising from physical risk, and none will tell you directly how to keep a property insurable in adverse circumstances (thus preventing technical default through breach of covenants).

Still, their shortcomings are no reason to delay their integration into your organization: doing so safeguards against adverse selection, builds a better understanding of long-term risk, and begins to develop capability for the realities of climate change.


Top questions to ask a vendor for select perils and features

Perils (subset only shown)

  • Flood risk
    • Is the model high resolution (at least below 100m, ideally below 30m)?
    • Does the model cover each of surface water, river, and coastal flooding (including both sea level rise and surge)?
    • Are public defences taken into account across all countries of interest?
  • Wind
    • Does the model cover both hurricane/typhoon and synoptic wind?
    • For hurricane, does the model account for poleward shift arising from warming seas?
  • Wildfire
    • Does the model allow for wildfire resilience standards where present?
  • Subsidence
    • Does the model translate into financial damage (versus offering a risk score)?
  • Extreme Heat
    • Is the reported cut-off temperature adjustable, to align with, for example, the technical working limits of particular sites?

Features

  • Back-testing
    • When/how was the model last bias-corrected (for example, adjusted for recent real-world events)?
  • Translation of damage
    • What control can we have over the translation of physical severity into damage by property type?
  • Outputs
    • Does the model produce simple risk score outputs or provide fuller visibility into the climate risk distribution?
    • What visibility does the model vendor provide into the methodology to calculate each output?

Our people

George Baldwin
Climate Resilience and Strategy Advisory Leader, UK, and Climate Centre of Excellence Co-Leader

Callum Ellis
Physical Risk Modelling Lead, Climate Resilience & Strategy, Marsh Advisory UK
