Associate Professor John Evans

Dr Amandha Ganegoda

The Rumsfeld classification

In the Finsia Journal of Applied Finance in 2008, we used the “Rumsfeld” classification of risks to identify the different types of operational risks and how to best approach each type of risk from a quantification perspective. The Rumsfeld approach characterised risks into:

  • known/known (ie, we know the risk exists and we know how to model the outcomes);
  • known/unknown (ie, we know the risk exists, but we don’t know how to model it with any reliability); and
  • unknown/unknown (ie, we have no idea what risks might exist and, by definition, no idea how to model the risks).

This classification system, in terms of risk management, implies:

  • for known/known risks, it is appropriate for the institution to accept these risks and to manage them through economic capital;
  • for known/unknown risks, it would be prudent to transfer these risks to another entity; and
  • for unknown/unknown risks, these should not be taken on at all, and are best managed by “positive” contract wording that specifies what risks the institution is exposed to.

The Diebold classification

In their 2010 book on risk management, Diebold et al advocated a slightly different classification system, identifying risks as Known (K), Unknown (u) or Unknowable (U), but the implication of this classification system was the same as that of our Rumsfeld approach.

Both the Rumsfeld and Diebold classification systems are based on the extent of knowledge available as to the occurrence of an event, and how to model the outcome of the event. Under the Diebold system, K refers to situations where there is broad agreement between experts on the relevant theories and the underlying models, u refers to situations where there is more than one competing theory or model, none of which has reached the status of a paradigm, and U refers to situations where there is no theoretical model.

Our extension of the Diebold classification: Ambiguity

In our soon-to-be-published paper in the Australian Journal of Management, we have extended the classification system to include an “Ambiguity” group of risks, as there are situations in which future outcomes are vaguely defined due to ambiguous behaviour of the market participants, but the risks are neither K nor u. In this particular context, we use the term Ambiguity to refer to the uncertainty created by market participants’ ability to respond differently to certain events and circumstances.

Although often ignored, Ambiguity is an important source of indeterminacy: it is difficult to measure and manage in the way risk is, nor can it be reduced by investing in knowledge, as a u-type risk can be. A primary source of Ambiguity is the ability to understand something in more than one way and to respond differently.

Examples of ambiguous situations are surprisingly common. A classic example of how Ambiguity can lead to disastrous outcomes is the highly publicised and controversial credit rating system for asset-backed securities (ABS), which served as a catalyst for the recent subprime mortgage crisis and the consequent global market meltdown. The originate-to-distribute lending model of the banking industry, which has been partly blamed for the subprime mortgage crisis, depended heavily on the ability of the rating agencies to value ABS accurately. Rating agencies rated the default probabilities of ABS on a scale similar to the one they used to rate bonds. It was only later realised that although a BBB tranche of an ABS may have the same expected loss as a BBB corporate bond, the loss distributions of the two are significantly different. Even though the BBB ABS was priced higher than the BBB corporate bond, most market participants failed to notice this due to the Ambiguity created by the similarity in the notation of the rating systems. Had rating agencies used a different notation from the one they used for bonds, investors might not have made this false interpretation.
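
To make the distinction concrete, the sketch below uses purely hypothetical default probabilities and losses (they are not the historical BBB loss rates) to show how two instruments can share the same expected loss while one concentrates its losses in a rare, near-total tail event.

```python
# Hypothetical illustration: same expected loss, very different loss distributions.
# The probabilities and losses below are invented; they are not historical BBB rates.

# Corporate bond: moderate default probability, partial loss given default.
bond_p_default = 0.04            # 4% chance of default (assumed)
bond_loss_given_default = 0.50   # lose 50% of face value on default (assumed)

# Securitised tranche: rarer default, but a near-total loss when it happens.
tranche_p_default = 0.02         # 2% chance of default (assumed)
tranche_loss_given_default = 1.00

bond_expected_loss = bond_p_default * bond_loss_given_default           # 0.020
tranche_expected_loss = tranche_p_default * tranche_loss_given_default  # 0.020

print(f"Expected loss - bond:    {bond_expected_loss:.3f}")
print(f"Expected loss - tranche: {tranche_expected_loss:.3f}")

# A tail view tells a different story: conditional on default, the bond loses
# half its face value while the tranche loses essentially all of it.
print(f"Loss if default occurs - bond:    {bond_loss_given_default:.0%}")
print(f"Loss if default occurs - tranche: {tranche_loss_given_default:.0%}")
```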

Why introduce Ambiguity?

Whereas previously we included Ambiguity risks in the u category, we feel that these risks differ significantly from u-type risks and that specific attention should be drawn to them. Our classification would then become the following (summarised, together with its management implications, in the sketch after the list):

  • K, where we know the risk exists and can be confident of the modelling;
  • A, where we know the risk exists, but recognise that there is a range of outcomes, each of which can be modelled, but where we are uncertain as to which outcome will occur due to the difficulty of predicting human actions and counteractions;
  • u, where we know the risk exists, but we are not confident of how to model the outcome; and
  • U, where we have no idea what risks exist.
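
The summary below is only an illustrative restatement of the four classes and of the treatments discussed in this article (economic capital, scenario analysis, transfer and stress testing, contract wording and crisis readiness); the wording of the entries is ours.

```python
# Illustrative summary of the K/A/u/U classification and the management
# responses discussed in this article. The structure is a sketch, not a model.
RISK_CLASSES = {
    "K": {
        "description": "risk known and modellable with confidence",
        "management": "accept the risk and hold economic capital against it",
    },
    "A": {
        "description": "range of modellable outcomes, but human responses are ambiguous",
        "management": "scenario analysis, including behavioural (non-rational) scenarios",
    },
    "u": {
        "description": "risk known, but no reliable model for the outcome",
        "management": "transfer or spread the risk; stress and reverse stress testing",
    },
    "U": {
        "description": "risk not identifiable in advance",
        "management": "positive contract wording plus proactive crisis readiness",
    },
}

for label, entry in RISK_CLASSES.items():
    print(f"{label}: {entry['description']} -> {entry['management']}")
```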

Issues remaining

There are still some issues in using this classification system. First, even though we might have a model for a K-type or A-type risk, care needs to be exercised in understanding the model rather than just blindly adopting it, as the assumptions may not be appropriate in all circumstances. Second, in terms of the implications of this classification (as well as the Rumsfeld and Diebold versions), there is a serious problem in that it would inhibit financial product innovation, as most innovation would involve at least u-type and U-type risks. If introduced as part of a prudential regulatory process, it could result in inefficiencies in the capital markets. To some extent, the effect of the introduction of u-type risks could be ascertained by stress testing, but care needs to be exercised in accepting the results, as the process is highly dependent on historical experience. Reverse stress testing, where one searches for the events that would cause insolvency, can also be useful. It may be possible to manage u-type risk by spreading it across the capital markets so that the impact on any one institution is minimised, but, as the global financial crisis showed, poor information systems may fail to detect cumulative risks of this type, with disastrous consequences.
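
A minimal sketch of the reverse stress testing idea mentioned above, assuming an illustrative capital base and an illustrative loss response to a market shock: rather than asking what a given shock would cost, we search for the smallest shock that would exhaust the capital.

```python
# Reverse stress test sketch: find the smallest market shock that would exhaust
# the capital base. The capital figure and the loss function are assumptions
# made for illustration, not a calibrated model.

capital = 100.0  # available capital, in $m (assumed)

def portfolio_loss(shock: float) -> float:
    """Assumed loss ($m) from a given fractional fall in market prices.

    The loss grows faster than linearly, mimicking concentrations and
    forced selling in a severe downturn.
    """
    return 400.0 * shock + 1500.0 * shock ** 2

# Walk up the shock size until the loss first breaches the capital base.
shock, step = 0.0, 0.001
while portfolio_loss(shock) < capital and shock < 1.0:
    shock += step

print(f"Smallest shock that exhausts capital: roughly a {shock:.1%} fall in prices")
print(f"Loss at that point: {portfolio_loss(shock):.1f}m against capital of {capital:.0f}m")
```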

There is also another issue with this classification: it may imply that, once classified, a risk remains in its class, which is not correct, and there is the possibility of misclassifying a risk as type K when it is in reality type u. For example, a u-type risk may, with the passage of time, become a K-type or A-type risk, and it is to be hoped that it will in fact be reclassified, reducing the risk. Similarly, once a U-type risk occurs, it progresses to become a u-type or A-type risk.

Matching the correct tools to each classification

While the proposed classification system will highlight the differences between risks, it is then important to develop tools for assessing these risk types. History is littered with misunderstandings of what is an appropriate tool for measuring a particular risk. A classic example is the family of models built on the assumption that daily market returns are Normally distributed. Using daily returns of the Dow Jones Average, Estrada pointed out that the lowest of the best 10 daily returns during the period 1900 to 2006 was 8.6 standard deviations above the mean. If we assume that the life of planet Earth is around 4.5 billion years, then under the Normal assumption one return of this magnitude or larger should occur every 223,014 lives of our planet; and yet 10 such returns were observed during a period of 107 years. The degree of the error is enormous, yet the Normal assumption is widely used in the pricing of derivatives, as well as in economic capital calculations.

Furthermore, Estrada showed that for an investment made between 1900 and 2006, missing the best 10 days of the market would have reduced terminal wealth by 65%, whereas avoiding the worst 10 days would have increased terminal wealth by 206%, relative to a passive investment strategy. The aggregate outcome (in this example, the long-term performance of the investment) is largely determined by just a few extreme observations, which demonstrates the disproportionate impact of extreme events and the importance of properly accounting for them. In other words, in contrast to light-tailed risks, the accuracy of a risk measure for a heavy-tailed risk is largely determined by the ability to estimate the probabilities of extreme events; the mean and standard deviation hardly provide any useful information.
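
The two calculations behind Estrada's point can be sketched as follows. This is a sketch only: the waiting-time arithmetic depends on the assumed number of trading days per year and on rounding of the 8.6 figure, and the terminal-wealth comparison uses a simulated toy return series in place of the actual Dow Jones history, so it illustrates the method rather than reproducing Estrada's exact numbers.

```python
import numpy as np
from scipy.stats import norm

# --- 1. How rare is an 8.6 standard deviation daily return under Normality? ---
tail_prob = norm.sf(8.6)                        # P(Z >= 8.6) for a standard Normal
expected_wait_days = 1.0 / tail_prob            # expected trading days between such returns
expected_wait_years = expected_wait_days / 250  # assuming ~250 trading days per year
planet_lives = expected_wait_years / 4.5e9      # planet age of ~4.5 billion years
print(f"One such return expected every {planet_lives:,.0f} lives of the planet")

# --- 2. Impact of the few most extreme days on terminal wealth ---
# A simulated heavy-tailed toy series stands in for the Dow Jones data Estrada used;
# daily moves are capped at +/-50% to keep the toy series sane.
rng = np.random.default_rng(0)
daily_returns = np.clip(rng.standard_t(df=3, size=26_000) * 0.01 + 0.0002, -0.5, 0.5)

def terminal_wealth(returns: np.ndarray) -> float:
    """Wealth multiple from compounding the daily returns (order does not matter)."""
    return float(np.prod(1.0 + returns))

passive = terminal_wealth(daily_returns)
without_best_10 = terminal_wealth(np.sort(daily_returns)[:-10])   # drop the 10 best days
without_worst_10 = terminal_wealth(np.sort(daily_returns)[10:])   # drop the 10 worst days

print(f"Missing the 10 best days:   {without_best_10 / passive - 1.0:+.0%} vs passive")
print(f"Avoiding the 10 worst days: {without_worst_10 / passive - 1.0:+.0%} vs passive")
```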

In dealing with A-type risks, the analysis must be very different from that for K-type and u-type risks. Cognitive psychologists have studied behavioural traits of humans, including herding, framing, mental accounting, loss aversion, overconfidence, conservatism and anchoring. These behavioural traits can often help to explain (and maybe even to predict) market anomalies and events, such as bubble formations, erratic trading activities and overreactions to information. A risk manager who pays attention to the cognitive behaviour of market participants will have a better chance of understanding financial markets and their future direction. The findings of behavioural economics show that financial markets and their participants may not always behave rationally. Hence, it is always a good idea to perform a scenario analysis by relaxing the "rational" assumption of traditional finance theory to see what can happen when market participants behave irrationally.

Finally, when dealing with U-type risk, we believe that the focus should be on proactive crisis management. Even though a solid risk management framework will aid in containing risk, unforeseen events will still occur simply because we are unable to predict all possible future states.

Taleb famously defines such events as black swan events. Even though black swan events are almost impossible to pre-identify, this does not necessarily mean that we cannot prepare for an unforeseen crisis. Diebold et al point out that although crisis events may have unique and unanticipated causes, the required post-crisis responses are often quite similar. Thus, readiness for a known possible crisis can prove useful in responding to a surprise crisis. For example, several experts have pointed out that the system redundancies developed in New York City in anticipation of the Y2K bug (which never materialised) became indispensable for the fast recovery of the city's transportation and telecommunication systems after the 9/11 attacks.

The lesson here is that even if one cannot anticipate the nature of a possible black swan event, it is still possible to have some sort of contingency plan in place to assist in a crisis situation. The ability to steer through a crisis depends more on the decisions made before the crisis than on the decisions made in the midst of it.

We identify four important traits a company should have in order to successfully steer through a crisis. First, it is important to have an established process for monitoring near-miss events, which can provide early detection of problems as well as help to avoid possible future crises. Second, a corporate culture that encourages the reporting of problems rather than the habit of hiding them is vital for early detection and an appropriate response. Third, flexibility of organisational structure is important: as in the evolution of species, only organisations with the flexibility to adapt and innovate will survive, while the others become extinct. Last, but not least, firms need to maintain good public relations and collaborate with relevant parties; unless they develop cooperative relationships with their partners, they cannot expect preferential assistance, whether in a crisis or when an opportunity arises.

The message

The message is clear: in developing a risk management strategy, it is critical to classify risks according to their characteristics in order to better understand possible outcomes, and then to build models that are appropriate for ascertaining those outcomes. “Shoehorning” inappropriate models into a situation to save costs will lead to disasters, as we saw in the global financial crisis.

John Evans is an Associate Professor and Head of Risk & Actuarial Studies at the Australian School of Business. Dr Amandha Ganegoda is a Research Assistant at the Australian School of Business. This article first appeared in Risk Management Today.