About Me

This blog carries a series of posts and articles, mostly written by Anthony Fitzsimmons under the aegis of Reputability LLP, a business that is no longer trading as such. Anthony is a thought leader in reputational risk and its root causes: behavioural, organisational and leadership risk. His book 'Rethinking Reputational Risk' was widely acclaimed. Led by Anthony, Reputability helped business leaders to find, understand and deal with these widespread but hidden risks, which regularly cause reputational disasters. You can contact Anthony via the contact form.

Friday 21 November 2014

How Culture can affect Honesty

We are delighted to welcome a guest blog post from Professor Ernst Fehr, Professor Michel Maréchal and Dr Alain Cohn, working at the Department of Economics at the University of Zurich. Their research, just published in Nature, suggests that culture can influence honesty.

Bank employees are not more dishonest than employees in other industries. However, the business culture in the banking industry implicitly favours dishonest behaviour. That is the conclusion of a behavioural study at the University of Zurich.

A change in cultural norms would thus be important not only to improve the battered image of the industry but also to improve actual banker behaviour.

In recent years there have been many cases of fraud in the banking industry, which have led to a considerable loss of image for banks. Are bank employees by nature less honest people? Or does the business culture in the banking sector favour dishonest behaviour? These questions formed the basis for our new study at the Department of Economics at the University of Zurich.

Our results show that bank employees are not in themselves more dishonest than their colleagues in other industries. The findings indicate, however, that the business culture in the banking sector subtly encourages dishonest behaviour. The results suggest that the implementation of a healthy business culture is of great importance in order to restore trust in the banking industry.

Our experiment

We recruited approximately 200 bank employees, 128 from a large international bank and 80 from other banks. Each person was then randomly assigned to one of two experimental conditions.

In the experimental group, the participants were reminded of their occupational role and the associated behavioural norms with appropriate questions. In contrast, the subjects in the control group were reminded of their non-occupational role in their leisure time and the associated norms.

Subsequently, all participants completed a task that would allow them to increase their income by up to two hundred US dollars if they behaved dishonestly. The result was that the bank employees in the experimental group, who had just been reminded of their occupational role in the banking sector, behaved significantly more dishonestly.
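
To make the comparison concrete, here is a minimal simulation sketch in Python of the kind of analysis behind this result. The task parameters (ten unverifiable coin tosses, twenty dollars per reported win) are illustrative assumptions rather than details given in this post; the signature of dishonest reporting is a group reporting more "wins" than chance allows.

```python
# Illustrative sketch only: simulates the kind of comparison behind the
# result described above. The task parameters (10 unverifiable coin tosses,
# $20 per reported win, up to $200) are assumptions made for illustration.
import random

N_TOSSES = 10          # tosses per participant (assumed)
PAYOFF_PER_WIN = 20    # dollars per reported winning toss (assumed)

def reported_wins(p_misreport, n_tosses=N_TOSSES):
    """Number of wins a participant reports: a fair coin wins half the time,
    and with probability p_misreport a losing toss is reported as a win."""
    wins = 0
    for _ in range(n_tosses):
        won = random.random() < 0.5
        if not won and random.random() < p_misreport:
            won = True  # the unverifiable misreport
        wins += 1 if won else 0
    return wins

def mean_reported_rate(group_size, p_misreport):
    total = sum(reported_wins(p_misreport) for _ in range(group_size))
    return total / (group_size * N_TOSSES)

random.seed(1)
control = mean_reported_rate(100, p_misreport=0.0)    # primed with leisure-time identity
treated = mean_reported_rate(100, p_misreport=0.15)   # primed with occupational identity
print(f"Control reported win rate:   {control:.1%} (chance is 50%)")
print(f"Treatment reported win rate: {treated:.1%}")
print(f"Implied extra earnings per person: ${(treated - control) * N_TOSSES * PAYOFF_PER_WIN:.2f}")
```

A reported win rate materially above 50% in the occupational-identity group, but not in the control group, is the pattern the experiment detected.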

A very similar study was then conducted with employees from various other industries. In this case too, the employees were reminded of either their occupational roles or those associated with their leisure time.

Unlike the bankers, however, the employees in these other industries were not more dishonest when reminded of their occupational role.

Our results suggest that the social norms in the banking sector tend to be more lenient towards dishonest behaviour and thus contribute to the reputational loss in the industry.

We therefore believe that a change in norms is needed in the banking industry.  Social norms that are implicitly more lenient towards dishonesty are problematic, because people’s trust in bank employees’ behaviour is of great importance for the long-term stability of the financial services industry.

We suggest that concrete measures could be used to counteract the problem.  The banks could encourage honest behaviour by changing the industry’s implicit social norms.  Several experts and supervisory authorities suggest, for example, that bank employees should take a professional oath, similar to the Hippocratic Oath for physicians. If an oath like this were supported with a corresponding training program in ethics and appropriate financial incentives, this could lead bank employees to focus more strongly on the long-term, social effects of their behaviour instead of concentrating on their own, short-term gains.

Alain Cohn, Ernst Fehr, and Michel Maréchal 
Department of Economics at the University of Zurich
http://www.econ.uzh.ch/index.html 

Reputability LLP
London
www.reputability.co.uk

Friday 7 November 2014

ORSAs: Evaluating Risks to Insurers


After years of uncertainty, Solvency II, the risk-based capital regime for insurers, will come into force on 1 January 2016. Even more comprehensive than banking's Basel III, it requires an enormous amount of work over the next twelve months or so to ensure that companies can meet its capital, risk management and reporting requirements. Most think that it is a step forward from the existing ‘one-size-fits-all’ regime of Solvency I, but there are still concerns that it is excessively onerous and bureaucratic.

Introducing the ORSA

What's new in Solvency II is the Own Risk and Solvency Assessment (ORSA), a development paralleled by the USA’s National Association of Insurance Commissioners (NAIC), which also requires ORSAs. This is all part of a coordinated global approach to ORSAs, driven through the International Association of Insurance Supervisors (IAIS), which means that national and regional standards are converging. An important driver of the introduction of ORSAs was the near-collapse of AIG in 2008, an event that took regulators completely by surprise.

Insurers have always accepted that a robust regulatory capital requirement is essential for customer protection, whilst feeling that sensitive implementation is essential to ensure it doesn't get in the way of efficient management of the firm. The ORSA addresses this dilemma through an annual report to the regulator which includes the Board's own view of the capital needed to run the business in future irrespective of the regulatory requirements. The aim is to demonstrate to the regulator that the Board understands the business, the risks and challenges it faces, and that it has adequate capital to achieve its strategic plans. Thus the ORSA requires a description of the risks based on the business model, assets and liabilities with a strategic, forward-looking perspective, written from the standpoint of the Board. It cannot be outsourced.

There is no pre-defined process, but EIOPA, the European Insurance and Occupational Pensions Authority, has indicated that the report should comprise the elements below (a simple illustrative tracking sketch follows the list):
  • Summary of current business strategy and risk appetite
  • Current risk profile against risk appetite
  • Required regulatory capital (SCR) and economic capital
  • Available funds to meet the capital requirement
  • Expected future risk, capital and solvency profile – with a capital plan and contingency planning as required
  • Potential risk, capital and solvency profile under various stressed conditions
  • An independent review of the ORSA
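
As a purely illustrative aid, the short Python sketch below shows one hypothetical way a risk team might track whether each of these elements has been drafted and reviewed by the Board; the class and field names are invented for illustration and are not drawn from Solvency II or EIOPA texts.

```python
# Hypothetical, illustrative checklist for tracking the EIOPA-indicated
# contents of an ORSA report. Names are invented; nothing here is taken
# from regulatory text.
from dataclasses import dataclass, field

@dataclass
class OrsaSection:
    title: str
    drafted: bool = False
    board_reviewed: bool = False  # the ORSA is the Board's own assessment and cannot be outsourced

@dataclass
class OrsaReport:
    year: int
    sections: list = field(default_factory=lambda: [
        OrsaSection("Current business strategy and risk appetite"),
        OrsaSection("Current risk profile against risk appetite"),
        OrsaSection("Required regulatory capital (SCR) and economic capital"),
        OrsaSection("Available funds to meet the capital requirement"),
        OrsaSection("Expected future risk, capital and solvency profile"),
        OrsaSection("Risk, capital and solvency profile under stressed conditions"),
        OrsaSection("Independent review of the ORSA"),
    ])

    def outstanding(self):
        """Titles of sections not yet both drafted and Board-reviewed."""
        return [s.title for s in self.sections if not (s.drafted and s.board_reviewed)]

report = OrsaReport(year=2015)
report.sections[0].drafted = True
print(f"{len(report.outstanding())} of {len(report.sections)} sections still outstanding")
```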

The place of reputational, behavioural and organisational risks in the ORSA

Historically, quantifiable regulatory requirements (such as for capital) have tended to eclipse unquantifiable factors.

However, research into corporate failures of the last few decades, such as the FSA’s McDonnell Report (2003), the Airmic/Cass Business School report 'Roads to Ruin' (2011) and Reputability's report 'Deconstructing Failure' (2013), has revealed the potentially fatal weakness of this approach. Individual and collective human behaviour plays a key role in corporate failure. These unquantifiable weaknesses, now widely called ‘behavioural’ and ‘organisational’ risks, are regularly found to be the root causes of crises and of subsequent reputational collapse.

Regulators are recognising these new insights with alacrity. Reputational, behavioural and organisational risks are explicitly dealt with in the latest Financial Reporting Council Guidelines to boards on risk, and the importance of hard-to-quantify risks is increasingly recognised by global regulators including the Basel Committee on Banking Supervision and the IAIS.

In the insurance sector, IAIS, EIOPA and the NAIC have highlighted examples of risks they expect to see covered in the ORSA. Insurance, market, liquidity and counterparty risks are obvious to all, but the lists also include ‘operational risks’, with both the NAIC and EIOPA making clear that while risks such as reputational, strategic and operational risks can be hard to quantify, they must nonetheless be evaluated.

The range of ‘operational risks’ has also been clarified. It is now reasonably clear that, as regards the European ORSA, ‘operational risks’ include:

  • non-quantifiable risks in general,
  • reputation risks,
  • risks from organisational complexity, and
  • risks from human behaviour, whether individual or collective.
As to the USA, the NAIC guidance on the ORSA explicitly mentions complexity risk, and operational and reputational risks, as examples of hard-to-quantify risks. Behavioural risks are not mentioned by name. However, since they are ‘operational risks’ and the most frequent root causes of both crises and reputational collapses, behavioural and organisational risks will be investigated by insurers that understand reputational risks. Those that lack this understanding can expect to be singled out by supervisors for remedial work. In turn, supervisors who lack this insight can expect enlightenment from the IAIS.

Implications for Insurers

Since the extent to which behavioural or organisational risks both cause crises and tip them into reputational catastrophes has only recently been recognised, ignorance has kept these risks out of risk registers. Nowadays there remain two main reasons why it is difficult to get these risks onto risk registers and into ORSAs.

First, it has often been personally dangerous, even for risk professionals, to bring these risks to the attention of their leaders. This is because the ultimate source of many of these risks is often the company’s leadership, both Board and Executive. Fortunately, recent regulatory requirements on leaders to deal with these risks are beginning to address this problem.

However, the second reason, a serious practical problem of cognitive bias, remains. It makes behavioural and organisational risks notoriously difficult for insiders to find, recognise and understand. These risks are most easily seen by outsiders with sectoral experience who are trusted with insiders’ knowledge and given the authority to identify risks of these kinds and explain them to insiders. Given sufficient independence, such trusted outsiders can also be relied on to explain any painful truths to leaders without putting the risk team in danger of reprisals. We have developed tools to find these risks systematically and to help insiders, at all levels, understand their implications.

Conclusion

The ORSA is an important development that has the potential to become a valuable tool for management as well as supervisors when, in 2016, it becomes a regulatory requirement throughout the EU. As part of the exercise, insurers will need to make an objective assessment of their behavioural and organisational risks.

Will insurers have the tools to accomplish this task? Or will the behavioural and organisational risk gap remain as a result of self-delusion? Market history shows this is a residual risk that could have devastating consequences for firms, for the reputation of regulators and for the stability of the market as a whole.

Regulators should also put this risk on their own risk register. The painful lessons of the AIG debacle should not be forgotten.

Professor Derek Atkins
Anthony Fitzsimmons
Reputability LLP
London


Anthony Fitzsimmons is Chairman of Reputability LLP and, with the late Derek Atkins, author of “Rethinking Reputational Risk: How to Manage the Risks that can Ruin Your Business, Your Reputation and You”.

Tuesday 4 November 2014

“Where Were The Auditors?” The Inevitability of Behavioural Risk


We are delighted to welcome a Guest Post from Jim Peterson, a US lawyer with long experience of audit firms.

Three immediate predictions were available when British food retailer Tesco announced in September that its first-half profits were overstated by some £250 million because of irregular accounting for supplier discounts and rebates:
  • The number would grow.
  • The affected time period would extend.
  • The four company executives promptly suspended would soon be joined by others.
It is still early days, but in addition to an investigation by the Serious Fraud Office and the predictable start of shareholder class actions in the United States, news from the company is that:
  • The number is now at least £263 million.
  • The accounting issues extend back at least to last year.
  • The casualty count of suspended executives is up to eight, plus the announced resignation of chairman Sir Richard Broadbent.
And as shouted yet again, by the Financial Times, no less – “Where were the auditors?”

It’s the same hue and cry that has been raised with every outbreak of serious financial scandal since independent audit was invented in the Victorian era.

The business, legal and regulatory structure under which the Big Four accounting tetrapoly – Deloitte, EY, KPMG and PwC – provides statutory assurance on the financial statements of the world’s large companies – the model of Big Audit – remains firmly in place. But with much current exploration of corporate governance and board effectiveness, the notion of “auditor effectiveness” faces this fundamental question:

Is the traditional form of audit assurance – a “true and fair view,” or its American cognate, “fairly presented in all material respects” – fit for purpose, if it only works generally well, most of the time – up to the point, as in Tesco, when it seems not to work at all?

Lessons from my graduate-level business and law school course in Risk Management and Decision-Making are useful, going to the very nature of the core expectations and limitations of Big Audit, at levels deeper than have been plumbed to date.

Recent progress on the subject of corporate risk has been considerable – chiefly on the company side of the complex relationships among issuers of financial information, the communities of information users both inside and outside those companies, and the auditors themselves – to identify such systemic threats as risk blindness, ambiguous leadership attitudes about ethos and culture, defects in communications, and skewed performance and compensation incentives.

Behavioural scholarship suggests that these concerns are no less present in the audit function – with its execution dependent on the behavioural limitations of fallible human beings. Which in turn suggests that, if deeper insights into the causes of financial failures were credibly brought forward into the standards and practices of Big Audit, progress might be made in areas that have long defied improvement.

That is, forty years of research have identified sources of bias that are hard-wired into our human DNA and are therefore pervasive, inevitable and unavoidable. These biases are shared, along with the rest of humanity, by auditors, who are no better genetically designed or equipped than anyone else to address them. In consequence, they work to suppress the exercise of professional skepticism, to inhibit the delivery of negative or critical messages and, conversely, to tilt the default in favor of confident agreement with management’s upbeat assertions.

Inquiry quickly goes beneath the customary charge that auditors are incentivized to accommodate long-standing clients – a facile assertion yet to be supported by anything like credible research or evidence. Examples from the catalog of broadly observable, instinctive and dangerous shortcuts – “heuristics,” to use the term explored since the 1970s by, among others, Nobel economics prize-winner Daniel Kahneman and his late lamented research partner Amos Tversky – include the following (a small illustrative simulation follows the list):
  • Confirmation bias: The readiness to accept that premises are valid as presented, rather than do the harder work of seeking out and evaluating disaffirming or contrary evidence – in the audit context, the too-ready willingness to agree with submissions and judgments of management.
  • Over-confidence: Good news is simply preferred over bad; as the saying popularised by John F. Kennedy has it, “Victory has a thousand fathers; defeat is an orphan.” So too, a bullish quarterly earnings estimate or a Goldilocks reserve calculation will be preferred over the pessimistic alternatives.
  • Representativeness: Giving credence to that which is familiar and ready to hand – e.g., “this year’s results look much like last year’s – so the trends and estimates can be taken as acceptable.”
  • Herding and Groupthink: The convergence of individual conclusions with those of the group, especially as led from the top – influenced, for example, by pressures of time, budget and engagement evaluations from peers or superiors – perniciously seen in the message unsubtly conveyed to junior staff that “If you raise a problem, then you become the problem!”
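
As a purely illustrative sketch, and not something drawn from the post itself, the short Python simulation below shows how the first of these shortcuts, confirmation bias in sample selection, can depress the detection of misstatements; every parameter in it is hypothetical.

```python
# Purely illustrative: how confirmation bias in sample selection can
# suppress detection of misstatements. All parameters are hypothetical.
import random

random.seed(7)

N_ITEMS = 1000            # population of transactions
MISSTATEMENT_RATE = 0.05  # assumed share of misstated items
SAMPLE_SIZE = 60          # items examined per audit

# Each transaction: (looks_consistent_with_management, is_misstated).
# Assume misstated items are less likely to look consistent on the surface.
population = []
for _ in range(N_ITEMS):
    misstated = random.random() < MISSTATEMENT_RATE
    looks_consistent = random.random() < (0.3 if misstated else 0.9)
    population.append((looks_consistent, misstated))

consistent_items = [t for t in population if t[0]]

def detected(sample):
    """Count the misstated items in an audit sample."""
    return sum(1 for _, misstated in sample if misstated)

def avg_detected(pool, trials=2000):
    """Average misstatements found per audit over many sampling runs."""
    return sum(detected(random.sample(pool, SAMPLE_SIZE)) for _ in range(trials)) / trials

# An unbiased auditor samples from the whole population; a 'confirming'
# auditor samples only items that already look consistent with management.
print("Average misstatements found per audit:")
print(f"  random sampling:            {avg_detected(population):.2f}")
print(f"  confirmation-biased sample: {avg_detected(consistent_items):.2f}")
```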

From the persistent ubiquity of these influences it would follow – and the regular recurrence of financial irregularities over the decades would confirm – that Big Audit as now designed and performed is of seriously diminished value and effectiveness.

But, with the auditors apparently fated to operate under seemingly universal human behavioural constraints, it is neither necessary nor helpful to demonize them by ascribing either malign intent or corrupt motives.

If they are instead doing as well as their human limitations allow, even if not to the level of expected systemic satisfaction, then whole conferences and symposia should be convened to design and explore the application of new tools and approaches, and to bring these limitations out of the academic shadows into the realm of daily interactions among the auditors, their clients, users and the regulators.

In other words, auditors may very well be able to bring their competence and expertise to bear on issues of real importance. But not while they – and all the other players involved in Big Audit – are shackled to a model that demonstrably fails to allow for the ever-present and increasingly well-recognized frailty of human beings – a group that (sometimes reluctantly acknowledged) does include the auditors.


Jim Peterson is a US-trained lawyer, concentrating on complex multi-national matters involving corporate financial information. His blog, "Re:Balance", is the successor to "Balance Sheet", his financial and accountancy column, which appeared bi-weekly in the International Herald Tribune.

Since 2009 he has been teaching a graduate-level course in Risk Management at business and law schools in Chicago and Paris.

Monday 3 November 2014

Error Management: Lessons from Aviation's Success


We are delighted to welcome a Guest Post, on how error can be managed positively, from Professor Jan Hagen, author of Confronting Mistakes: Lessons from the Aviation Industry when Dealing with Error.



The financial markets crisis began in 2007 and unfolded with increasing severity. At the time, we were dumbfounded that big-name banks had taken such disproportionately high risks with their structured securities.

Many of us saw the investment banking sector’s remuneration system and the associated asymmetric risk distribution as the main causes of the crisis. We asked how things could have spiralled so far out of control, especially as even before the crisis some parties within the banks had urged caution.

The question is why these warnings went unheard. Were they overlooked? Underestimated? What mistakes were made? How did they come about? Who failed to pick them up? And how were they allowed to trigger a series of further errors that ultimately had such dramatic consequences?

Banking is by no means an exception, however. There have been mistakes, errors, poor decision making, infringements, affairs and scandals in any and every industry and organisation you care to mention.

None of them appears to have had effective controls in place that would have allowed it to intervene in time to prevent things going awry. Instead, those involved could only watch as fate ran its course.

Let us take a look at normal day-to-day operations in a company.

What happens if someone makes a mistake or takes the wrong decision? The issue here is not intentional misconduct, fraudulent behaviour, gross negligence or large-scale mismanagement. I am referring to the little mistakes, errors, and poor decisions that occur every single day. Mostly, errors are the result of momentary blackouts, a temporary short circuit in the brain, false impressions, deceptive memories, dots wrongly joined, fragments of conversation that we interpret incorrectly, prejudices, momentary feelings of mental imbalance, disorientation, stress and other disturbances.

All this we could perhaps accept, but our problem is that we believe we can and should be “right” when in reality we start out with “quasi-right” at best and adjust our decisions and actions as we proceed. The alternative – believing that we are right and later realising that we were wrong – creates a state of confusion, leading to uncomfortable questions as to whether our self-belief is justified.

Recently, working with colleagues at the European School of Management and Technology in Berlin, I looked at how managers discuss errors made by their employees and how they are informed about their own mistakes. According to our study, most managers accept errors as a normal part of the work culture.

Yet there is one aspect that does not match this conviction, namely the overwhelming preference for discussing errors in private and involving as few people as possible: of those questioned, 88% claimed that they generally address errors made by others in private, and only 4% would do so openly in front of a group of people.

What does this mean for companies? No doubt, most still have a long way to go before error management becomes a regular part of day-to-day work life, given this overwhelming preference for discussing errors in private and involving as few people as possible. Mistakes, in other words, are still associated with shame and embarrassment.

In aviation, the concept of open error communication was developed as a consequence of a number of high-profile, human-factors-related accidents at the start of the 1980s. Today it forms part of a concept we call Crew Resource Management (CRM), in which error management has evolved as the central element to ensure that flight crews interact effectively as team members in the complex environment of aviation. Another major part of CRM is assertiveness on the part of junior crew members vis-à-vis their superiors, to ensure that information flows freely in the cockpit and is not blocked by hierarchy. Leadership, communication and decision making are the other components of CRM.

Of course, the question is how a system as highly successful as CRM can be implemented in everyday business life. After all, unlike most other industries, aviation is a high-risk industry. Most managers do not arrive at work each day knowing that they are responsible for the safe transport of hundreds of people. They are, however, responsible for business processes, for the success of their particular division and for keeping their workforce employed. So the number of errors they make should be limited as well.

From this perspective, the answer to the question above is simple: error management is relevant to every organisation that wishes to reduce error volumes be it in a high-risk industry or not. In fact, most organisations will already have taken steps in this direction by trying to eliminate potential error sources and attempting to analyse and resolve errors that do occur.

Still, there is a fundamental difference between the traditional approach to preventing errors and the error management strategies used in CRM.

Conventionally, errors are stigmatised as individual weaknesses, whereas modern error management accepts them as an unavoidable aspect of human behaviour. While both strategies seek to avoid errors, the former puts them in a negative light and associates them with embarrassment, shame, fear and punishment. This is in stark contrast to the practice in aviation, where individuals are not only protected from blame but actively encouraged to report their errors. The logic is simple: every identified and trapped error provides an opportunity for a learning process that is not confined to the individual and allows the organisation as a whole to benefit.

As far as error management in the wider organisation is concerned, its implementation has to start as a top-down management decision, though overall success will depend on individuals and teams. The scale of the change of mind-set must not be underestimated, however: in aviation it took pilots more than ten years to accept CRM – but today's safety record speaks for its success.

Jan Hagen 
European School of Management and Technology
Berlin 


Jan Hagen is an associate professor, European School of Management and Technology (ESMT), Berlin, Germany, and author of Confronting Mistakes: Lessons from the Aviation Industry when Dealing with Error (Palgrave MacMillan, UK, 2013)