Everyone does risk assessments - but do we do them right?

Risk assessments have become ubiquitous. While they are mostly performed unconsciously and "automatically," they are increasingly mandatory in the work environment and regulated by laws and standards. Yet there is little concrete guidance on how to actually carry them out, and those involved therefore regularly make more or less obvious mistakes. This article presents approaches for improving risk assessments.


Smaller and larger risk assessments are carried out every day, for example in road traffic, in sports or in medical decisions. In addition, many risk assessments are carried out in a professional environment for a variety of reasons: among other things, they are used to decide on investments, to estimate costs, to ensure the safety of products or to guarantee the reliability of services.

It is interesting to note that risk assessments are carried out at all levels of a company, both on a small and a large scale, and that the scope of the resulting decision does not depend on the hierarchical level. For example, a security guard's assessment of a potential terrorist threat may have even more far-reaching short-term consequences than an investment decision by management. The quality of risk assessments and of the resulting decisions therefore matters across all levels of a company. Many of these assessments are made unconsciously, implicitly or automatically - even in a business environment. Some, however, are carried out explicitly because they are required for governance and/or compliance reasons.

Findings and typical mistakes

At the very least, risk assessments carried out for reasons of good governance or compliance should comply with certain rules and minimum standards. Regardless of how important a risk assessment is, however, we regularly make mistakes in the process - sometimes consciously, sometimes unconsciously. From the author's point of view, these can be roughly divided into three categories:

  1. Definitional sources of error
  2. Procedural sources of error
  3. Cognitive sources of error

Risks that are not risks: The group of definitional sources of error includes, among other things, incorrect, inadequate or not mutually agreed definitions of risks, which make a correct assessment more difficult. On closer inspection it becomes apparent that there is often no common understanding among the people involved - a problem in itself. Furthermore, risks are usually not consistently linked to corporate objectives, which means that their relevance is not necessarily given. Finally, cause-effect chains are rarely examined more closely when the risks are defined. Instead, simplifications are often made "for pragmatic reasons", but these rarely do justice to the actual complexity of the risks.

All these factors mean that many risk portfolios do not only list risks, but also contain either risk drivers, i.e. factors that influence an actual risk, or uncertainties, i.e. situations or scenarios that may - but do not necessarily - turn into an actual risk at a later date. The distinction between risk and uncertainty is made on the basis of probabilities and the degree of concreteness of the consequences. Since uncertainties can also have a positive effect, they are sometimes also referred to as "opportunities".

In sum, it is not uncommon for risk assessments to be performed for risks that are not risks. This makes assessment very difficult and controllability almost impossible.

Risk assessment process - immature, improper, incomplete: Deficits in the definition have a direct impact on the assessment process. Apart from the definitional sources of error mentioned above, new, additional sources of error lurk in the assessment process itself. In addition to rather obvious ones (e.g. manual input errors, individual misjudgements, intentional misstatements), there are also less obvious sources of error, some of which are mentioned here as examples:

- The selection and composition of participants in a risk workshop alone can significantly influence the outcome.

- The choice of method has a massive influence on the result and the resulting decisions. The financial crisis provides a prominent example of an unsuitable method choice: before the crisis, banks relied almost exclusively on the value-at-risk method to calculate risk and realized gigantic losses when the mortgage markets collapsed. This method only produces realistic risk values as long as the general conditions remain constant; in the event of a complete market collapse, the values calculated with it are simply wrong (a sketch following this list illustrates the problem).

- Assessing risks on the basis of probabilities: there are cases in which this works well and makes sense. Often, however, suitable statistical data for a correct assessment is lacking, or those involved misjudge the probabilities massively. Moreover, probabilities offer considerable potential for error in the subsequent interpretation of the results. Researchers such as Gigerenzer and Kahneman & Tversky have impressively demonstrated that probabilities are rather unsuitable for risk assessment in many cases.

- At the end of a risk assessment process, the result is usually presented as a visual matrix. Although this so-called risk matrix is very widespread, the condensation of information it entails often leads to blatant misinterpretations.

- Ranking lists are even more dangerous, because they are misleading. Especially problematic are ranking approaches based on multiplying the two axes (probability times extent of damage): apart from the fact that independent quantities should not be mixed in this way, the multiplication is not even mathematically meaningful, because the axes are ordinal scales (a second sketch following this list illustrates this).
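
A minimal, purely illustrative sketch of the value-at-risk failure mode is shown below. The return data, window length and loss figure are invented for illustration and are not taken from the article or from any real institution: a 99% value-at-risk is estimated from calm historical data by historical simulation, and a crash-day loss far outside that history shows how little the figure says once the general conditions change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily returns from a calm market regime (illustrative only).
calm_returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

# 99% one-day value-at-risk by historical simulation: the loss exceeded
# on only 1% of the observed days.
var_99 = -np.quantile(calm_returns, 0.01)
print(f"99% one-day VaR estimated from calm data: {var_99:.2%}")

# A regime change (e.g. a market collapse) produces losses far outside the
# observed history; the figure above simply says nothing about them.
crash_day_loss = 0.12  # hypothetical 12% single-day loss
print(f"Hypothetical crash-day loss: {crash_day_loss:.2%} - far beyond the VaR figure")
```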
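
The second sketch illustrates why multiplication on ordinal scales is not meaningful: ordinal labels only encode order, so any order-preserving relabelling of the classes is an equally valid representation - yet it can reverse the ranking produced by the product. The risks, class values and labellings below are hypothetical examples, not taken from the article.

```python
# Two hypothetical risks, each rated on 5-point ordinal scales (1 = lowest).
risks = {
    "Risk A": {"prob": 2, "impact": 5},
    "Risk B": {"prob": 4, "impact": 3},
}

def rank_by_product(prob_map, impact_map):
    """Rank risks by probability class times impact class under a given labelling."""
    scores = {name: prob_map[r["prob"]] * impact_map[r["impact"]]
              for name, r in risks.items()}
    return sorted(scores, key=scores.get, reverse=True), scores

# Labelling 1: the class numbers themselves.
identity = {i: i for i in range(1, 6)}
# Labelling 2: an equally valid, order-preserving relabelling of the same
# five probability classes (1 < 3 < 3.5 < 4 < 5).
relabelled = {1: 1, 2: 3, 3: 3.5, 4: 4, 5: 5}

print(rank_by_product(identity, identity))    # Risk B ranks first (12 vs. 10)
print(rank_by_product(relabelled, identity))  # Risk A ranks first (15 vs. 12)
```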

Cognitive traps: The third group of potential errors comprises cognitive effects that, depending on the case, strongly influence the entire assessment process and thus the result. The following effects are an extremely abbreviated selection of findings by Gigerenzer and by Kahneman & Tversky, who have worked intensively on these questions for many years. For the interested reader, book recommendations for further reading are included at the end of the article:

First - Illusion of certainty: People tend to seek certainty. Sometimes this goes so far that a "precise risk value" (even if it was derived in a questionable way or may even be wrong) is preferred to an approximate range (even though that range is much more realistic).

Second - Anchor effect: In a group discussion on risk assessment, someone may consciously or unconsciously set a so-called "anchor" by proposing a value X. The subsequent discussion then often revolves only around adjusting this value instead of questioning it altogether or proposing a completely different one.

Third - Overestimating oneself: Managers tend to overestimate their own judgment, or rather the quality of their judgment. This effect tends to be amplified in particularly successful people (because, in their perception, they have done many things right so far), by certain personality traits and by situational factors.

Fourth - Framing effect: This effect describes the fact that, in theory, identical problems should be decided identically by the same people under the same conditions, even if they are formulated slightly differently. Precisely this is not the case: even slight deviations in the problem description (i.e. the "frame") can lead to completely contradictory or opposing decisions.

There are various other cognitive traps, such as the conjunction effect, the gambler's fallacy or aspects of groupthink, which can play a not insignificant role in the risk assessment process. Gigerenzer and Kahneman & Tversky also speak of "heuristics and biases" in this context.

In summary, risk assessments are massively influenced by such cognitive effects on the part of the people involved. In combination with the definitional and procedural aspects described above, a risk assessment can thus quickly lead to a politically desired but possibly quite unrealistic result.

The initiator or moderator of a risk assessment must therefore be aware of these effects and ensure, through appropriate measures, selection of participants and moderation, that they are eliminated or balanced out as far as possible. The newly revised 2019 version of the ISO 31010 standard provides very effective assistance in this regard.

ISO 31010:2019 - a helpful standard for "Risk Assessment"

With the completely revised edition of ISO 31010:2019 "Risk assessment techniques", published in 2019, a new reference work is available. On roughly 40 pages, the standard explains in detail how, in the experts' view, a risk assessment should proceed, which aspects should be considered in the process, and which of the 41 methods described could be used, and when. Like all ISO standards, it is based on a minimum consensus: there are no mandatory requirements, only recommendations. A user who follows the recommendations can be regarded as following "good practice".

The previous version of ISO 31010 from 2009 had significant weaknesses in terms of content and was too strongly oriented towards its "parent standard" ISO 31000 (risk management). The following aspects in particular are new in the 2019 version:

Scope expanded: In the course of the revision, the scope was first cleaned up and then expanded. The basic idea is that 31010 is no longer applicable only in the context of ISO 31000, but covers as many different cases as possible in which a risk assessment could be required. It should be emphasized that uncertainty can now explicitly be an application case as well.

Core concept and process revised: While the original version was created in close accordance with ISO 31000, the new version focuses on application and benefits. Therefore, the entire text and the entire "Risk Assessment" process have been revised from the ground up.

Implementation aspects added: Based on the same considerations, an additional chapter was added with aspects and notes that, from the experts' point of view, should be taken into account during an implementation.

Link to the annex: One of the main criticisms of the previous version was the missing link between the process and the many methods in the annex. Special attention was therefore paid to this aspect during the revision, and appropriate cross-references were added in numerous places in the new version.

Method selection: Additional guidance and criteria were designed to make it easier for users to select a method suited to their situation and needs from the multitude of methods listed in the annex.

Revision of the annex: Last but not least, the inventory of methods has been completely revised. Of the 31 methods previously listed, some were deleted and others added. For all 41 methods now listed in the standard, a uniform two-page template was defined, which ensures more comprehensive information and better comparability across methods.

All in all, this revision of ISO 31010:2019 represents a quantum leap in content compared to the previous version. It should be clearly pointed out, however, that the standard - figuratively speaking - is not to be understood as a set of assembly instructions, but rather as a guide to the adequate use of the available "toolbox".

Outlook and tips for the next assessment

Given the quality of the new ISO 31010:2019, it is an obvious recommendation to consult the standard for the next assessment. With regard to some of the weaknesses in risk assessments explicitly addressed above, the following tips may be particularly helpful:

- Make sure that the chosen approach is appropriate to the actual complexity of the issue. This also includes selecting an "appropriate tool/method" for the assessment.

- Since probabilities are error-prone in many cases, it is advisable to work with frequencies instead (the first sketch after this list shows a worked example).

- Ordinal scales should be avoided wherever possible and replaced by numerical scales, which make it possible to calculate with the values later on.

- Work and evaluate in ranges: exact values are usually not attainable at all, fit only under special conditions, and thus offer a false sense of security (the second sketch after this list shows one way to work with ranges).

- Take a closer look at cause-effect chains to distinguish real risks from drivers. Risks can be assessed, drivers can be controlled.

- If necessary, consult a specialist.

- Ensure that the assumptions and framework of the risk assessment are well documented for future reference.
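
As a worked illustration of the frequency tip above, the first sketch restates a screening-style question of the kind Gigerenzer uses in natural frequencies; the 1% base rate, 90% hit rate and 9% false-alarm rate are illustrative assumptions, not figures from the article.

```python
# Natural frequencies for 1,000 cases instead of conditional probabilities
# (all rates are illustrative assumptions).
population = 1000
affected = 10                                  # 1% base rate
true_positives = 9                             # 90% of the 10 affected
not_affected = population - affected           # 990
false_positives = round(0.09 * not_affected)   # about 89 false alarms

positives = true_positives + false_positives
print(f"{true_positives} of {positives} positive results are real cases "
      f"(about {true_positives / positives:.0%})")
```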
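
The second sketch shows one possible way to work and evaluate in ranges, under the assumption that a triangular distribution over a minimum, most likely and maximum loss is acceptable; all figures are hypothetical. The result is a band of outcomes rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(1)

# Instead of one "exact" damage figure, start from a plausible range:
# minimum, most likely and maximum loss (all figures hypothetical).
low, most_likely, high = 50_000, 120_000, 400_000

# Sample a triangular distribution over the range and report a band of
# outcomes rather than a single value.
samples = rng.triangular(low, most_likely, high, size=100_000)
p10, p50, p90 = np.percentile(samples, [10, 50, 90])

print(f"Loss band: {p10:,.0f} to {p90:,.0f} (median roughly {p50:,.0f})")
```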

Conducting risk assessments is a learning process, and if at least the majority of the above points are taken into account, it will be possible to significantly improve the quality of the next risk assessment.

About the author

Axel Sitt is Practice Lead Cyber Security & Privacy at AWK Group AG. He holds a PhD in risk management and has been working in the areas of risk and crisis management, ICS and IT security for 20 years. Since 2012 he has represented Switzerland as a delegate to ISO, where he was involved in the update of ISO 31000 and played a significant role in the revision of ISO 31010.

 

References

  • Gerd Gigerenzer: Das Einmaleins der Skepsis - about the right way to deal with numbers, Piper Verlag

  • Daniel Kahneman: Thinking, Fast and Slow, Penguin Books

  • ISO 31010:2019, Risk assessment techniques (available via SNV)

