An Introduction to Human Risk

We’re hearing lots about how Robotics and Artificial Intelligence (AI) are going to transform risk functions, with machines taking over many of the tasks currently done by people. That doesn’t mean we’re all going to be replaced by Risk Robots, because no matter how far technology develops there will still be a need for humans. In part, that’s because stakeholders, especially regulators, are never going to allow all responsibility and decision making to be abdicated to AI.


We are, however, going to need to change what we do, and the Risk Officers Of The Future will need to develop new skills. It’s why I support the idea that we should all learn the basics of coding. We don’t need to be experts, but understanding the risks in a technologically advanced world requires an understanding of what is going on inside the machines.


It also means that we’ll need to be more “human” and focus on doing those things that the machines can’t. Whilst they can analyse, process and spot patterns better than we can, AIs can’t (yet!) inspire, challenge, persuade or use intuition. Even if they come with a friendly voice like Amazon’s Alexa or Apple’s Siri, they have no Emotional Intelligence (EI). To succeed in this world, we’ll need more EI than ever before. It’s one of the reasons I’m a keen student of behavioural science.


To err is human…


Another is the increasing importance of something I’m calling Human Risk:


The risk of people doing something they shouldn’t, or not doing something they should


You only have to look at the number of times organisations explain things that went wrong by reference to “human error”. Even in situations where the human element might not initially be obvious, such as an IT outage like this:


British Airways Flight Outage: Engineer Pulled Wrong Plug

British Airways pointed to human error as the cause for mass flight cancellations that grounded at least 75,000 passengers last month and led the carrier’s passenger traffic to decline 1.8 per cent.


An engineer had disconnected a power supply at a data center near London’s Heathrow airport, causing a surge that resulted in major damage when it was reconnected, Willie Walsh, chief executive officer of parent IAG SA, told reporters in Mexico. The incident led BA’s information technology systems to crash, causing hundreds of flights to be scrapped over three days as the airline re-established its communications.

Source: Bloomberg


Even things like natural disasters, which we can’t (yet) prevent, can be made that much worse by human action or inaction. You might not be able to stop a hurricane, but you can substantially worsen its impact by not having appropriate disaster recovery planning in place.


To properly reduce operational risk, we need a better understanding of why people behave the way they do, so that we can appropriately influence it. This isn’t straightforward. As we all know from our own behaviours, human beings aren’t always rational, so simply telling them what to do isn’t enough; we need to find ways to incentivise them to do the right thing.


Senior Risk

One of the challenges of managing Human Risk is that it isn’t simply mitigated by experience. Received wisdom tells us that “practice makes perfect”; the more we do something, the better we get at it. Of course, that’s true, up to a point. Play more tennis and you’ll get better at it, regardless of your natural talent.


But that logic doesn’t always apply, especially in organisations with a strong hierarchy. We’ve seen plenty of recent examples of senior leaders getting things wrong.


Take PwC partner Brian Cullinan, who had responsibility for handing out envelopes to presenters at this year’s Oscars ceremony. When it came to the Best Picture award, a seemingly distracted Cullinan handed the wrong envelope to the presenters. It is hard to imagine that, outside the glamour of the Oscars, someone of Cullinan’s status would ever have opted to hand out envelopes. You only have to watch the footage of the event to see that the original mistake was then compounded by a delayed response. It’s the kind of error you might expect from someone with no experience, rather than a senior partner. As the Academy’s CEO Cheryl Boone Isaacs put it:


They have one job to do. One job to do!


Then there’s the case of Barclays CEO Jes Staley, who was found to have twice attempted to unmask the author of letters to the Firm’s board that raised concerns about someone he had hired. It’s not difficult to see why having the integrity of the Firm’s whistleblowing process undermined by its CEO is a bad thing. And yet his actions did just that. Unsurprisingly, the Firm’s regulators are unimpressed.

As we know, Human Risk within organisations is heavily influenced by the “tone from the top”. But with the onset of automation, it will also become more critical at all levels of organisations. As the roles that humans perform become more cognitive and less repetitive, the inherent risk of the activities they’re performing increases substantially.


Robot Risk

Which brings me back to the robots: they won’t make mistakes; they’ll simply do as they’re told. Without a good understanding of behavioural science, we run the risk of deploying AI that mimics our bad habits. We’ve all heard of Unconscious Bias, but we also need to understand concepts like Narrative or Confirmation Bias (once we’ve decided something, we look for data that confirms we’re correct and ignore data that doesn’t) and Moral Licence (using the fact that we’ve done something good to justify then doing something bad). We’re all susceptible to these, whether we know it or not. That’s bad in humans, but it’s really bad in machines.


Even when we program machines “correctly” to undertake logical processes, there can be unintended consequences. Take Uber’s “Surge” algorithm, which hikes the price of rides when demand is high. It’s a legitimate business practice (airlines do it all the time) and it works seamlessly. But it’s not so good from a reputational risk perspective when natural disasters or terrorist incidents increase demand and leave the company open to legitimate accusations of profiting from emergency situations. The machine does what it’s programmed to do, but on a human level it produces totally the wrong outcome.
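
To make that concrete, here’s a deliberately simplified, hypothetical sketch in Python of the kind of rule a surge algorithm applies. It is not Uber’s actual code; the function name and numbers are invented for illustration. The point is that the logic sees only demand and supply; it has no way of knowing whether a spike in requests is a Saturday night rush or people fleeing an emergency.

# Hypothetical sketch of a surge-style pricing rule, invented for illustration.
# It responds only to the demand/supply ratio; it has no notion of WHY demand spiked.
def surge_multiplier(ride_requests: int, available_drivers: int) -> float:
    """Return a price multiplier based purely on demand versus supply."""
    if available_drivers == 0:
        return 3.0  # arbitrary cap for this toy example
    ratio = ride_requests / available_drivers
    return min(3.0, max(1.0, ratio))  # prices rise with demand, capped at 3x here

# The same inputs give the same answer during a concert rush or a disaster.
print(surge_multiplier(ride_requests=500, available_drivers=100))  # prints 3.0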


Amazon’s “Frequently Bought Together” feature is good for customers in that it recommends products that go well together: buy a printer, and it recommends the right cartridges to go with it. It’s also good for Amazon, as it increases sales. It’s less good when the same algorithm that powers that feature ends up generating news headlines like this:


Potentially deadly bomb ingredients are ‘frequently bought together’ on Amazon

A Channel 4 News investigation can reveal how Amazon’s algorithm can guide users to the chemical combinations for producing explosives.


When I think about what technology can do for us, I’m really excited. But I’m also absolutely convinced that whilst we need to learn to code the machines, we also need to de-code the humans.
