
What a spying scandal in football can teach us about Compliance

Usually, when newspaper headlines involving football refer to cheating, it is reasonable to assume they’re about player activity on the pitch. However, this weekend in the UK we had an off-pitch incident involving Managers, which the sports press somewhat predictably named “Spygate”.


Frank Lampard, the Manager of Derby County, accused Marcelo Bielsa, the Manager of rivals Leeds United, of having sent a spy to secretly watch a Derby training session ahead of a game between both clubs. A game that Derby went on to lose 2-0.


After the match, Lampard said that he felt that Bielsa had not acted in the spirit of the game. In response, Bielsa freely admitted that he had sent someone to spy and that he had been doing it for a number of years. Bielsa, who is Argentinian, started working in the UK in the middle of last year having previously worked in a number of other countries where, he claimed, it was a common ‘tactic’. In his post-match interview he said:


“I found out by calling him that he thought I didn’t respect the fair play rules. If we take into account this fact, Frank Lampard is right, Derby County are right and their fans are right.


The conclusion that has been drawn is that I had a sporting advantage and that my behaviour was not right. I can explain my behaviour but I don’t justify it because I have to respect the norms that are applied in the country where I work.”


Note that word “norms”; we’ll come back to it later. Reinforcing what Bielsa said, Leeds United also put out a statement saying that they had subsequently reminded him of the “integrity and honesty which are the foundations of the Club.”


Bielsa hasn’t yet been charged with anything by the Football Association (FA) and I suspect it’s unlikely he has actually broken any rules. But it’s clear, from the widespread condemnation he received and the Club’s response, that he has breached an unwritten code of conduct.


I’m fascinated by this from a Human Risk perspective. During my hiatus from blogging, I’ve been working on “Bringing Behavioural Science (BeSci) to Compliance”; deploying BeSci to deliver better outcomes from the two disciplines I cover: Compliance and Operational Risk.

Both are largely focussed on influencing human decision-making. Compliance, because organisations can’t be compliant of their own accord; it’s the people within them that determine that status. Meanwhile, the largest single factor behind Operational Risk, whether root cause or accelerant, is people.


The Bielsa case highlights something I’ve been thinking about a lot: how can we get people to do what we want them to? In particular, how can we write rules and policies that are more effective at influencing human behaviour?


Your perspective or mine?




Rules are most commonly written from the perspective of the organisation writing them, rather than the perspective of the target audience. The implicit presumption is that because a particular outcome is required by the organisation, “if we write it, they will comply”. In predictable environments, this approach makes perfect sense. I want the ground and air crews of the plane I’m about to fly on to have completed standard pre-flight checklists, regardless of whether or not they think it’s a good idea. Though obviously those flying with me have a huge incentive to do so.


However, in less predictable environments, this can lead to rules that read like model answers to exam questions: they codify a theoretical “black and white” world, rather than the “grey” real one in which people find themselves.


To cover every eventuality, rule-writers often err on the side of writing longer, rather than shorter, rules. This makes logical sense. Having lengthy, all-encompassing rules seems like a good way to mitigate risk. After all, if you cover everything, there’s no room for confusion. Ironically though, the more prescriptive the rules, the greater the potential risks.


In a fast-moving and more complex world, people will increasingly find themselves in situations that were never in the direct contemplation of the rule-writers. In “rule-centric” environments, people are incentivised to behave like electricity seeking the path of least resistance: they look for the “rule of least resistance”.


As a result, “good” people can end up doing the wrong thing having tried to do the right thing by following the rules, whilst “bad” people can use this dynamic for arbitrage.


Perception is everything

Even where the rules do contemplate a particular situation, what is often ignored is the target audience’s pre-existing perception of that situation and of the authority seeking to control their behaviour. If the target audience already has a view on how it would approach a situation, then a rule that supports this is likely to gain more natural traction than one that goes against it. But equally, a rule that codifies something the target audience would do naturally of their own accord can backfire, if there’s a sense that it is patronising or coercive. And if the target audience thinks the rule-makers have overstepped their authority, then there’s a likelihood of a backlash.


This doesn’t mean just writing rules the target audience likes. What it does mean is giving careful thought to how they are likely to react and finding smart ways to codify rules that will maximise their propensity to comply. If we want rules to be effective, then they need to be as user-centric as possible.

I recognise that in global organisations, this can be challenging. Not least because Social Norms may differ according to culture. But just because it’s difficult, doesn’t mean we shouldn’t think about it. The moment we employ people, these dynamics exist whether we want them to or not.


In the Knowledge Economy, there is a trend towards hiring people to do the things that machines can’t. These tasks involve cognitively advanced skills like judgement, nuance and intuition. If we want smart people that have been hired for those skills to perform at their best, then we need to craft rulebooks that give due consideration to them as “thinking” rather than “unthinking” users.

The best example of the implementation of “thinking compliance” in a user-centric framework that I’ve come across is the one deployed by Netflix. Their Freedom & Responsibility culture begins with the principle that there are only two kinds of non-negotiable rules: those that prevent irrevocable disaster; and those that cover moral, ethical and legal issues (for example bullying). Other rules can “shapeshift” according to circumstances. Hence their Expensing, Entertainment, Gifts & Travel policy is 5 words long: Act In Netflix’s Best Interests.

Obviously, this leaves huge room for interpretation, but it’s not a rule that can be arbitraged or that any employee can deem unreasonable. Compliance becomes a matter of judgement and requires employees to justify their behaviour in absolute terms, rather than relative to a prescribed norm in the rule. It’s admittedly very bold and not something every company will be able to adopt, but it shows how rules can be crafted in a more user-centric way. For more on this, I recommend Powerful: Building a Culture of Freedom and Responsibility by Patty McCord, the former Netflix Chief Talent Officer.


The reason I suspect there’s no rule against what Bielsa did is that the FA probably intuitively engaged in some Netflix-style thinking. There’s no rule because no-one ever thought it was necessary to have one. After all, if the Social Norm is well understood, then why would you need a rule? The answer might be that with an increasingly global market for players and managers, a domestic Social Norm might no longer be understood by everyone.


To understand whether a rule is needed and if it is, how it is best written, it is necessary to have a good appreciation of the perception and perspective of the target audience.

Assuming I’m right that there isn’t a rule against what he did, and the FA decided it did want to write one, how would they do it? One thing’s for sure: it wouldn’t be easy. Don’t take my word for it. Frank Lampard made precisely that point in his post-match interview:


“I don’t know what the rules are. I believe there’s not an absolute clear-cut rule about it but we can’t open the door to this thing happening every week. What kind of farce would that be, of everyone sending undercover people, drones, whatever, into training. It would be farcical. So something has to be done, I just don’t know what it is and it’s not my decision.”



Perhaps unsurprisingly, what I’ve been discovering is that it’s very easy to write rules. It’s a lot harder to write rules that are effective in delivering the intended outcome in all situations. To misquote a well-worn aphorism: I’m sorry my policy is so hard to comply with, but I didn’t have time to write one that made it easy.


Bringing Science to Compliance

Fortunately, I’ve also been discovering that the hunch which set me off on my quest, that BeSci can help us solve this and other Compliance challenges, is proving to be correct. In my research, I’ve come across some really innovative thinking. Here are just three examples:


The first comes from a practitioner friend of mine, Maarten Hoekstra, who has created the wonderfully named Broccoli Model for Compliance.



(c) Maarten Hoekstra


This user-centric approach delivers Compliance objectives by helping people to do the right thing, rather than just telling them what to do. By focusing on “Moments” (the real-life situations in which people find themselves) and factoring in Social Norms, the Broccoli Model works with the grain of human thinking rather than against it. To find out more, and to discover why Maarten gave it this unusual name, watch his presentation.


The second is from the field of Behavioural Law. In The Law Of Good People, a book I highly recommend, Professor Yuval Feldman explores the factors we need to consider about “the rule of law in a world populated by individuals with different levels of awareness of their own unethicality”.



Two things he covered gave me particular pause for thought.


Firstly, there is a temptation to talk and think about Compliance as if all requirements are designed with the same intent. They’re not. Take Whistleblowing as an example; you want people to feel incentivised to whistleblow if, and only if, they come across situations that necessitate it. You don’t want everyone doing it all of the time. In Feldman’s words, this is a situation where “it only takes one to help”. Contrast that with other things which you need everyone to do all of the time, because “it only takes one to harm”. What we’re asking people to do in these two instances is very different, yet we tend to incentivise both behaviours in the same way. Not least in the language we use to write the rules.
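To make the asymmetry concrete, here’s a minimal back-of-the-envelope sketch (my illustration, not Feldman’s; the numbers are invented): with a “one to help” requirement like whistleblowing, only one person out of many needs to act, whereas with a “one to harm” requirement, a single lapse is enough to cause the damage.

```python
# Toy illustration of the "one to help" vs "one to harm" asymmetry.
# Figures are hypothetical and assume people act independently.

def p_at_least_one_helps(p_act: float, n_people: int) -> float:
    """Probability that at least one person acts (e.g. blows the whistle)."""
    return 1 - (1 - p_act) ** n_people

def p_no_one_harms(p_comply: float, n_people: int) -> float:
    """Probability that nobody lapses."""
    return p_comply ** n_people

n = 100
# Even if each individual is only 10% likely to speak up,
# the chance that somebody does is close to certain...
print(f"At least one helper: {p_at_least_one_helps(0.10, n):.5f}")  # ~0.99997
# ...whereas even 99% individual compliance leaves roughly a 2-in-3
# chance that at least one person causes harm.
print(f"Nobody causes harm:  {p_no_one_harms(0.99, n):.3f}")        # ~0.366
```

Which is precisely why incentivising the two behaviours in the same way, and in the same language, is unlikely to work equally well for both.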


Secondly, Feldman opened my eyes to the concept of “Quality” of Compliance, which I’d never really thought about before. Some tasks are binary in nature: you either are, or are not, compliant. But others have a quality component. Whistleblowing is one. At least with whistleblowing, you can assess that quality after the event. For some things, you can’t.


Many Firms ask people to track their activity, either for client billing or resource allocation purposes. For this exercise to have any value, people need to feel incentivised to take it seriously and provide accurate data. That isn’t always easy to do, and the individuals tracking their activity will probably be aware of that. Whilst it is possible to do some form of Quality Assurance (QA) on the reported activity of individuals doing predictable tasks or working on specific clients or activities, it is much harder for those that don’t, particularly in countries like Germany with strong data privacy laws. Yet it’s the people with the least QAable data whose input potentially offers the most interesting perspectives; after all, if we have no means of QAing it, then what they’re giving us is by default new insight. So for this and other activities where we need people to undertake tasks involving an element of quality, we need to ensure they’re positively engaged and not acting under duress.
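As a very rough illustration of what that QA might look like for predictable tasks, here’s a minimal sketch; the data, names and threshold are all hypothetical, and in practice any such check would also need to respect local data privacy rules.

```python
# Illustrative only: flag reported hours that deviate sharply from the
# historical pattern for a predictable, repeatable task.
from statistics import mean, stdev

historical_hours = [7.5, 8.0, 7.0, 8.5, 7.5, 8.0, 7.5]  # hypothetical history

def flag_outlier(reported, history, z_threshold=3.0):
    """Return True if a reported figure sits more than z_threshold
    standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reported != mu
    return abs(reported - mu) / sigma > z_threshold

print(flag_outlier(8.0, historical_hours))   # False: consistent with history
print(flag_outlier(15.0, historical_hours))  # True: worth a conversation
```

The point, of course, is that this kind of check only exists where there’s a stable pattern to compare against; where there isn’t, engagement is the only lever left.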


Finally, I am indebted to Dr Roger Miles for bringing something called The Table of Eleven (T11) to my attention in his highly readable book on Conduct Risk Management.



T11 is a framework created by the Dutch government to help legislators consider potential reasons for non-compliance. By analysing why people might not want to do something, it helps rule-writers craft better rules with a greater likelihood of people willingly complying with them.
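To give a flavour of how a rule-writer might use that kind of framework, here’s a minimal sketch of scoring a draft rule against a checklist of non-compliance factors. The factor names below are paraphrased placeholders rather than the official wording of the T11 dimensions, and the scores are invented for the example.

```python
# Illustrative only: score a draft rule against a checklist of reasons
# people might not comply, then surface the weakest factors first.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    score: int  # 1 (works against compliance) to 5 (supports compliance)

draft_rule_assessment = [
    Factor("Target audience knows and understands the rule", 2),
    Factor("Costs of complying are acceptable to the audience", 4),
    Factor("Audience accepts the rule as reasonable", 3),
    Factor("Informal social control discourages breaches", 5),
    Factor("Breaches are likely to be detected", 1),
]

# The lowest-scoring factors are where a redraft, better communication
# or a different enforcement approach would pay off most.
for factor in sorted(draft_rule_assessment, key=lambda f: f.score):
    marker = "!!" if factor.score <= 2 else "  "
    print(f"{marker} {factor.score}/5  {factor.name}")
```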


I know some readers will see this thinking as overly complex and will prefer a simpler world where rules are simply written and obeyed. Experience tells us that this doesn’t work, and that the billions spent on writing, maintaining and policing rules aren’t necessarily money well spent. By making it easier for people to do the right thing, we’ll get better outcomes and almost certainly save money. People don’t do things just because we tell them to or just because we employ them. And if we need quality of Compliance, then we risk not achieving it by thinking in those terms.


To rule or not to rule…

It will be interesting to see whether the FA actually has a rule that covers spying on other clubs. If it does, then Frank Lampard doesn’t know about it. But then he doesn’t need to; he’s compliant because of a Social Norm. His behaviour is intrinsically motivated. That’s far more powerful than the extrinsic motivation of complying because a rule tells you to.

If there isn’t a rule, then the FA should think hard about whether one is really necessary.

Writing a rule suggests you think there isn’t a powerful Social Norm on the topic, and you risk undermining the one that’s already there. Not only has the publicity surrounding this case demonstrated quite how powerful that Norm is, it also theoretically means that no-one in English football can be unaware of the issue. That is probably enough to impose the Social Norm on those who weren’t already abiding by it, at least in the short term. The exception is perhaps foreign managers arriving in the future, though I’d argue there are probably better ways to reach them than through the rulebook.


If there’s one thing I think the Bielsa case illustrates, it’s that if we want to get the right outcomes, then my mission to Bring Behavioural Science to Compliance is the right way forward.
