Output ≠ Outcome
by Hui CHEN and Zach COSEGLIA
DOJ has just issued its latest memo on FCPA enforcement. It directs the Criminal Division to focus FCPA enforcement on “individuals…and not attribute nonspecific malfeasance to corporate structures.” It reinforces Attorney General Bondi’s Day One memos by focusing on connections to cartels and transnational criminal organizations. Finally, citing the Executive Order, the Criminal Division is to focus on cases where there is “economic injury to specific and identifiable American companies or individuals.” There is no mention anywhere of a compliance program, which makes perfect sense as individuals and cartels don’t typically have compliance programs.
This latest memo is only further confirmation that the days of relying on the fear of DOJ to bolster compliance programs are over. The important question today is: how do we justify our existence? Well, let’s start by being able to demonstrate that our efforts and activities have accomplished what we set out to do. This question—the question of “effectiveness”—is fundamental to any endeavor.
In ethics and compliance, our goal is to prevent, detect, and remediate misconduct. No matter how elaborate or expensive a program is, if it is not accomplishing these intended goals, there is little value in its existence. While our profession has been wrestling with this question for years, few can say, in measurable terms, how much better their organizations are at preventing, detecting, and remediating misconduct today than they were ten years ago.
One reason: despite all the talk about “effectiveness,” the compliance industry remains overly focused on the measurement of output: how often are pro-compliance messages delivered, how often do trainings occur and how many people attend, how many policies and procedures exist, how many third parties go through due diligence. None of these metrics tells us much, if anything, about whether these efforts accomplished their intended goals: they merely confirm that efforts took place.
By contrast, an outcome is about the final results and the differences they make: Have the pro-compliance messages led to the desired behaviors? Has the training made a difference in people's decision-making? Have the policies and procedures given employees clear expectations? Are third parties conducting themselves according to the company's expectations? And what measurable improvements have we made in preventing, detecting, and remediating misconduct?
Effectiveness is about outcomes, not output. It is not some mystical “sweet spot” of programmatic activities that, when balanced just right, will meet an imaginary line used by government agencies to assess your program. It is about whether everything you are doing is working towards its intended goal. Even if your programmatic decision-making is driven by government expectations (note: we’d suggest you think more broadly), what better way is there to satisfy those expectations than through evidence that demonstrates the ultimate desired outcome: the prevention, detection, and remediation of misconduct?
This Is About Our Credibility
When we work with organizations that tell us “We have a strong culture” or “The risk is low in this area,” we always ask (as a regulator, prosecutor, or board member would): “How do you know? What is your evidence?” When pressed to answer these questions, though, many compliance professionals have little actual data or evidence to share in response.
As compliance professionals, we owe it to ourselves, our profession, and our programs to find out whether what we are doing is making an impact. Why? Because without measuring outcomes, we’re unable to test the standards, guidance, and formulas of the corporate compliance programs we design and operate; we’re unable to communicate the difference, if any, our efforts have made; and we’re completely in the dark about whether any one approach is more effective than another in accomplishing the program’s goals. In short, we need the courage to overcome FOFO (that’s “the fear of finding out”) and the curiosity to take our profession to the next level.
A recent study, for example, tested the impact of different versions of a policy on employees’ knowledge and behavior. The researchers found no difference in employees’ knowledge or behavior whether they read a 19-page policy, a 4-page policy, a one-page infographic, or no policy at all. What did influence employees’ beliefs and behavior was what they considered to be the social norm. This is exactly the type of research and testing that should inform more compliance efforts. Let’s turn our assumptions into hypotheses and put them to the test!
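We cannot reproduce that study here, but as a minimal sketch of what “turning assumptions into hypotheses” can look like in practice, the snippet below compares behavior across policy formats using a standard chi-square test of independence. The counts and group labels are entirely made up, and this is not the cited researchers’ actual methodology.

```python
# A minimal sketch: did the policy format make a measurable difference?
# Hypothetical counts of employees who did / did not follow the expected
# behavior under each policy format (illustrative numbers only).
from scipy.stats import chi2_contingency

observed = [
    # [followed, did_not_follow]
    [112, 38],   # 19-page policy
    [108, 42],   # 4-page policy
    [115, 35],   # one-page infographic
    [110, 40],   # no policy at all
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No detectable difference between policy formats at the 5% level.")
else:
    print("Policy format appears to be associated with behavior.")
```

The point is not the particular statistic; it is that an assumption (“a shorter policy works better”) becomes a testable claim with an observable answer.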
How Do We Do It?
So, how do we move towards a more outcomes-driven approach? We begin by being more disciplined in our use of the word “effective.” Effectiveness is not a fully realized end in itself: it needs to be tied to a specific goal. “Is our training effective?” is a starting point, but we need to round it out by defining what the training is intended to accomplish. Is it to provide knowledge? Create awareness? Drive understanding? Reduce policy breaches?
Once effectiveness has been defined against specific goals, our focus sharpens and the data relevant to measuring those goals become clearer. For example, if one frames the question as “Is our anti-bribery and corruption program effective in detecting and preventing bribery by our staff?”, an effectiveness analysis would focus on data indicative of the occurrence and avoidance of bribery. This might include metrics that address employee perceptions; employee approaches to dilemma-based hypotheticals; the nature and occurrence of control breaches; transactional anomalies and high-risk expenditures; audit exceptions; detection time; reporting statistics; and investigation data.
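As a purely illustrative sketch of how a sharply framed question points to concrete data, the snippet below computes two of the indicators named above, detection time and reporting statistics, from a hypothetical investigations extract. The column names and figures are assumptions for illustration, not a prescribed data model.

```python
import pandas as pd

# Hypothetical investigations extract (columns and values are illustrative).
investigations = pd.DataFrame({
    "case_id": ["C-101", "C-102", "C-103", "C-104"],
    "opened": pd.to_datetime(["2023-01-10", "2023-02-02", "2023-03-15", "2023-04-01"]),
    "misconduct_start": pd.to_datetime(["2022-11-01", "2023-01-20", "2022-12-05", "2023-03-25"]),
    "source": ["hotline", "audit", "manager", "hotline"],
    "substantiated": [True, False, True, True],
})

# Detection time: how long misconduct persisted before it surfaced.
investigations["days_to_detect"] = (
    investigations["opened"] - investigations["misconduct_start"]
).dt.days
print("Median days to detect:", investigations["days_to_detect"].median())

# Reporting statistics: where cases come from, and how often they are substantiated.
print(investigations.groupby("source")["substantiated"].agg(["count", "mean"]))
```

Trend these figures over time and they begin to answer the framed question: is the program getting faster at detecting, and are reports coming from the channels we expect?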
One important consideration to keep in mind is that the data will be different depending on the question posed. The data for measuring the detection and prevention of money laundering activities will be very different from those for bribery. Even within the same subject matter, the data used to measure overall programmatic outcomes (prevention, detection, and remediation) are different from those measuring component outcomes (policy, training, etc.). Success will require clear definition of the goals of the programs and components, critical thinking on what the leading and lagging indicators may be for those goals, and deep knowledge of the business to understand what data exist.
Once the relevant data have been identified, they must be harnessed, analyzed, and presented. A single data set, such as investigations data, is limited in its ability to provide insights. However, when that data set is combined with others, for example, personnel data on the seniority and locations of those involved in investigations, or financial data on patterns of transactions related or similar to those under investigation, pictures begin to emerge about the who, what, when, where, and why behind the investigations. It is not only necessary to know how to compose this canvas with the right sets of data: it is necessary to be able to present the canvas in a way that helps all stakeholders understand the picture.
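As a minimal illustration of that combining exercise, the sketch below joins hypothetical investigations, personnel, and transaction tables to surface who is involved, where, and what spending surrounds them. Every name and number here is invented; real sources would be the case-management, HR, and ERP systems.

```python
import pandas as pd

# Hypothetical extracts from three separate systems (illustrative only).
investigations = pd.DataFrame({
    "case_id": ["C-101", "C-102", "C-103"],
    "employee_id": ["E-9", "E-4", "E-9"],
    "allegation": ["bribery", "expense fraud", "bribery"],
})
personnel = pd.DataFrame({
    "employee_id": ["E-9", "E-4"],
    "seniority": ["senior manager", "analyst"],
    "location": ["Country A", "Country B"],
})
transactions = pd.DataFrame({
    "employee_id": ["E-9", "E-9", "E-4"],
    "amount": [14800.0, 9200.0, 310.0],
    "category": ["gifts & hospitality", "consulting fees", "travel"],
})

# Total spend per employee, then join everything onto the investigations.
spend = (
    transactions.groupby("employee_id")["amount"]
    .sum()
    .rename("total_spend")
    .reset_index()
)
enriched = (
    investigations
    .merge(personnel, on="employee_id", how="left")
    .merge(spend, on="employee_id", how="left")
)
print(enriched[["case_id", "allegation", "seniority", "location", "total_spend"]])
```

The output is a small canvas rather than a dashboard tile: cases sit next to the seniority, location, and spending patterns of the people involved, which is where the who, what, and where start to become visible.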
To make this happen, the compliance profession needs to look further and wider for inspiration and talent. We need to get out of—or at least, expand—our current echo chamber. We need people who can do more sophisticated data analyses (with both hard quantitative and soft qualitative data relating to human cognition and behavior); we need to leverage the expertise and knowledge of social scientists; and we should look to other prevent-and-detect industries, such as health, safety, and crime prevention, for inspiration. Heck, in a recent episode of our podcast, we found inspiration in a carpet!
Compliance teams have begun this journey: it’s exciting, for example, to see more and more compliance teams embedding data scientists and analysts. However, we need to move past creating dashboards and toward generating insights. Likewise, we’re thrilled that some in our profession are increasingly curious about behavioral science. But behavioral science is so much more than nudges and the knowledge of various social biases: at its core, it is the application of the scientific method to the study of human behavior. One thing we need to learn from existing research is how to test our assumptions through trials and data.
Conclusion
Measuring output tells a story about how hard we tried. Measuring outcomes tells us whether we have succeeded. Our business colleagues are judged on their outcomes rather than their output: not how many sales calls were made, but the revenue generated; not how many marketing campaigns were run, but the incremental revenue achieved. If we want our seat at the table, we must hold ourselves to the same standard: provide measurable evidence of the difference we have made.