Ep.26: The Metrics Not Enough of Us Track
Remember, you can always listen here or follow us on Apple Podcasts or Spotify. Either way, thanks for supporting us.
About this episode. In this episode of The Better Way?, Zach and Hui shine a light on some of the compliance metrics that rarely get used but matter most—from behavioral signals and free‑text insights to knowledge‑acquisition and retention and early risk indicators buried inside culture surveys and HR systems. They argue that traditional outputs like training completions and policy attestations tell us almost nothing about trust, behavior, or risk.
They walk through high‑value but overlooked metrics that reveal whether a program actually works—how long misconduct went undetected, which managers drive repeated issues, and the questionable choices and risky behaviors that systems and controls managed to prevent in real time. These underused data points—especially when connected—offer a far more accurate picture of prevention, detection, and real‑world effectiveness.
Who? Zach Coseglia + Hui Chen, CDE Advisors
Full Transcript:
ZACH: Welcome back to The Better Way? Podcast brought to you by CDE Advisors. Culture. Data. Ethics. This is a curiosity podcast for those who ask, “There has to be a better way, right? There just has to be.” I'm Zach Coseglia and I am joined as always by the one and only, Hui Chen. Hi, Hui.
HUI: Hi Zach. Hi everyone out there. Hope your day's going well.
ZACH: Indeed. All right. So, it's just us today, no guest, but we have a really good topic and one that's probably long overdue for us to dive into—and it really is all focused on data. What we wanted to do today was to have a more robust discussion about some high value metrics that are often missing from people's storytelling or their internal analysis of risk or programmatic performance.
HUI: Yes. And we run into this a lot as we work with folks on the question of measuring effectiveness. And you know, as we dive into that topic, we realize that there's a lot of data that companies are not necessarily collecting or putting together today. So we thought this would be something that drives towards that big question of how to measure effectiveness.
ZACH: That's right. I mean, we've had a lot of discussions about the need to move from output to outcomes, and along the way we've definitely talked about some of the ways in which folks might do that. But the goal today is to really get pretty specific and pretty practical around specific metrics that folks could use, that some folks do use, but that most folks don't use to do those measurements.
HUI: Yep, that's right.
ZACH: All right. And so, we're going to actually structure our conversation around different facets of data or different elements of the program. And so, Hui, let's start by talking about speak up and trust—what metrics folks currently use to measure speak up and the health of trust within the organization, and then we'll dive into some of the ones that maybe they don't. But let's start with: what are the common ones that folks are using around speak up and trust health?
HUI: So almost every time, we see two types of metrics that people use on this question: the number of reports that come into their speak up system and the percentage of anonymous versus named reports. Those are the two most standard metrics that we see in this space. But boy, Zach, there can be so much more, right?
ZACH: There can be so much more. And before we even get into some of those, what's really interesting about this one is that it's a really hard set of metrics to use and a really challenging analysis, because it's one of the few places where too many reports may be a reflection of a robust speak up environment but potentially a problematic compliance environment, while too few are likely a signal of folks not being willing to speak up. That dynamic underscores the need to not rely exclusively on the number of reports, or even the nature of reports, and to dig a little bit deeper. So, one of the things that some folks use, but that's not used enough, is information from company engagement surveys or surveys that are used to poll employee perceptions of the organization. So, for example, you might ask employees, just at a baseline, whether they know how to report, before we even get to whether they're comfortable doing it. Do they know how to do it?
HUI: Yeah. And I think, you know, so much of this is in how you ask that question, right? Because it's one thing to say, do you know how to report, yes or no? I mean, who's going to say no? A different way to ask that question may be: if you had a concern, here are all these different options, which ones would you choose? And then there's yet another way: if you had a concern, just free text, where would you go, right?
ZACH: That's right.
HUI: Let them list what they know. I would trust the second and the third a lot more than I would trust the first way of asking.
ZACH: Absolutely. The first of those is something that we like to do a lot, whenever we're doing employee polling or even focus groups. It's about giving people dilemmas and then asking them to tell us how they would behave in those moments, rather than just relying on the standard multiple choice, strongly agree or strongly disagree with a particular statement. And the second option, the free text, which we also have a positive bias in favor of . . . we're now powered with tools that we didn't have even just a few years ago that enable us to analyze those results at scale (using common AI tools, sentiment analysis and the like), in ways that would have been really difficult to do if you had thousands of employees. And you know, I think that's been the real barrier to folks using that kind of free text response, because they're like, well, someone's going to have to go through and read all of these, and that's not necessarily the case today. So, I fully, fully agree with that.
So, then, once you have a better idea of whether people know how to report, it's about understanding whether or not people feel safe reporting. And obviously, historically, folks have used that sort of anonymous versus named metric as one way of doing this, but another way of doing this is to actually ask people. And you know, again, it's about asking the question in a little bit more of a nuanced way than just do you feel safe reporting things?
HUI: Right. I mean, a more creative way of asking is: what do you think happens after we report something? Again, that shows our bias towards free text. But also, keep in mind that an electronic survey is not the only way you can do this. You can also collect this kind of data in casual but systematic conversations with people, like focus groups, or just going to different parts of the business. Make sure you individually talk to 100 people.
ZACH: Yeah, and that's one of the things I really want to emphasize across the board of additional data sources: I think sometimes we see folks get stuck because they think the data needs to be collected from thousands of people, that it needs to be big data. And of course, there's a place for that and there's a lot of value in that, but there are also a lot of biases associated with it. So find these places where you're getting information, whether it's from conversations you're having about people's willingness to speak up or information about people's knowledge and familiarity with your policies and compliance expectations. Find a way to quantify those things, a way to track them, so that you can turn the conversations you're having every day into a meaningful data point that can then be part of your story.
Alright, look, before we move on, just a couple other things to think about with respect to speak up. One is: look at how long it takes you to follow up with people, and see whether or not you're following up with folks in a time frame that is consistent with your values and with the way you want that program to operate. Ask people who have reported what the experience of reporting was like for them, so that you can get a better understanding of how the process that you've built is actually operating end to end.
HUI: Yes, yes. Yup.
ZACH: And anything else?
HUI: Also, look at issues that are raised informally to managers versus coming directly to you. We do see that a lot. That's not necessarily a bad thing, but having an understanding of when and why people choose to go to their managers first rather than come to you would be helpful in understanding the perception of your program.
ZACH: Yeah, for sure. All right, let's move on to another bucket. This one is a big one. We're calling this bucket behavioral impact of training. So, we talk a lot about effectiveness. We talk a lot about understanding outcomes rather than just output. This is the place where we wind up having that conversation most frequently. So Hui, tell me, most teams track the completion of training. How do we feel about that?
HUI: I think we have been pretty clear about how we feel about that. It is a metric that shows your power of compulsion and nothing else, really. You made people go to a training, they showed up, and that's what it shows. And the other thing is the quiz score, right? The knowledge assessment at the end or throughout the thing. And typically, these are the kind of quizzes you can take an infinite number of times until you get everything right.
ZACH: My issue with the way the quizzes are used today is that they're really not used that much at all. And this is what I mean by that. Most people will say, we had a quiz at the end and we had a 100% pass rate. But as you said, the reason they got that 100% pass rate is that people had a multiple choice question and, you know, five tries to get it right, and they eventually got it right. What I'm interested in seeing is, if you're going to do that kind of quizzing, let's actually tell a story with that data. Let's actually understand how many people got it right the first time. How many people got it right the second time? What do the behaviors around people's experience taking that quiz tell us about whether they were actually paying attention and whether they acquired some knowledge? Do people actually learn things?
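The attempt-level story Zach describes can be computed directly from quiz logs. Here is a minimal sketch in Python, assuming a hypothetical LMS export with one record per quiz attempt; the `learner_id`, `attempt_no`, and `passed` field names are illustrative, not any particular system's schema:

```python
def first_attempt_pass_rate(attempts):
    """Given attempt records (learner_id, attempt_no, passed),
    report what share of learners passed on their first try,
    what share eventually passed, and the average number of
    attempts it took to pass. Assumes at least one learner passed."""
    learners = set()
    first_pass = set()
    tries_to_pass = {}  # learner_id -> earliest passing attempt number
    for a in attempts:
        learners.add(a["learner_id"])
        if a["passed"]:
            if a["attempt_no"] == 1:
                first_pass.add(a["learner_id"])
            prev = tries_to_pass.get(a["learner_id"])
            if prev is None or a["attempt_no"] < prev:
                tries_to_pass[a["learner_id"]] = a["attempt_no"]
    n = len(learners)
    return {
        "first_attempt_rate": len(first_pass) / n,
        "eventual_pass_rate": len(tries_to_pass) / n,
        "avg_attempts_to_pass": sum(tries_to_pass.values()) / len(tries_to_pass),
    }
```

Reporting the first-attempt pass rate alongside the headline 100% figure is what turns the quiz from a checkbox into a knowledge-acquisition signal.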
HUI: Right. And when we talk about the outcome we want from training, it really isn't just knowledge. I mean, whether knowledge inside people's heads is useful, I don't know, particularly knowledge of areas of law or your company's regulations or internal rules. The real outcome that matters is behavior. And right now, almost nothing is being used to measure that outcome of training.
ZACH: Very little, very little. So look, some high value metrics that I'd like to see, that are missing today: if we're going to measure knowledge, which I think we should, let's set a baseline. Let's understand what people know going into the training, then measure what people know coming out of it, and what they know six months down the line, so that we can really tell a story about knowledge acquisition and knowledge retention. At a baseline, I'd love to see that. I don't think there is enough of that happening today, but I think we can also do more than that.
And one thing that I think we can do is to ask questions and collect data not just about people's mechanical understanding of the quote-unquote rules, but instead try to understand how people will act when confronted with a potential dilemma, and we don't see a lot of that happening. These questions tend to be more: here are the rules; now I'm going to quiz you to see whether or not you've got them. Rather than: here are the rules and expectations and values that we hold as a company; now let me put you in a scenario where those things may be challenged, and let's understand how you would act in that situation. And maybe if you do that right, there are multiple right answers or multiple wrong answers, but we're getting an opportunity to create a data point that serves as a surrogate for actual behaviors, which are oftentimes harder to come by when we're looking to tell the story about the behavioral influence of the training.
HUI: Right. And I'm going to show my bias for free narrative responses again, because I have seen those scenario-based trainings, but the questions you're supposed to answer out of a set of multiple-choice answers are so obvious. I mean, you know, right? It's like,
ZACH: Oftentimes, yeah. This is right.
HUI: Okay, so we told you how to drive, and here you are in front of a pedestrian crosswalk. Should you hit the pedestrian, accelerate, or stop? Or drive to the side onto somebody's lawn? The choices are just too obvious.
ZACH: Yeah, agreed. I do think there is a way to craft . . . I mean, we've done it many times: multiple choice questions that maybe are a little easier to analyze than those free text questions, but that really test whether, one, people were paying attention and, two, give us some hints, some ideas, of how they will make decisions in the moment. And I think we need more of that. What are some other high value metrics that we want to see in this space?
HUI: I think correlations between areas where you have done training and areas where you're seeing substantial violations or problems, based on your monitoring, investigations and audits. That really is an underutilized data comparison, because if you're doing a lot of training in an area and yet you continue to see violations there, then maybe you should question whether your training is working.
ZACH: That's right. That's right. I mean, we often hear folks say, oh, the issue arose and we did training in response to make sure that everyone understood what was expected. We fixed it. And that always leads me to say, well, how do you know you fixed it? And then that always leads to, well, it popped up again. So, you didn't fix it. So doing that analysis is critical.
HUI: We have actually, interestingly, worked with a number of clients who have come to us precisely with this problem: we've had the same problem occur over and over again, and my God, we've done training after each time. That's when they realized that perhaps the training that had been done was not really solving the problem, and it's time to think of new solutions.
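The comparison Hui describes, training coverage versus subsequent substantiated violations, can start as a simple cross-tab by risk area. A sketch under the assumption that training and case records each carry a shared risk-area label; the field names and the "two or more of each" flagging rule are illustrative choices, not a standard:

```python
def training_vs_violations(trainings, violations):
    """Cross-tabulate, per risk area, how many trainings were
    delivered and how many substantiated violations were found,
    then flag areas where repeated training coexists with
    repeated violations (a sign the training may not be working)."""
    areas = {t["area"] for t in trainings} | {v["area"] for v in violations}
    table = {}
    for area in areas:
        table[area] = {
            "trainings": sum(1 for t in trainings if t["area"] == area),
            "violations": sum(1 for v in violations if v["area"] == area),
        }
    flagged = sorted(a for a, row in table.items()
                     if row["trainings"] >= 2 and row["violations"] >= 2)
    return table, flagged
```

Even this crude join surfaces the question Hui poses: if an area keeps getting trained and keeps producing violations, the training is not the fix.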
ZACH: That's right. That's right. And look, to take it back a couple of steps: here's something I think folks already have that they're not using to its fullest extent. This is not about knowledge, and it's not about behaviors, but it is an important metric that will help you get a better sense for how people are experiencing your trainings.
And that's something as simple as the seat time. How long is it taking people to go through this training? That is a data point that I think most folks who are using any sort of LMS system have, but they're not really looking at it for this purpose.
If you're seeing pockets of people who are getting through a training that you expect to take 30 minutes in 5 minutes, I think you should ask yourself: how is it that they're doing that? And what does that say about whether people are really paying attention or whether they're just trying to get through this thing? Then, if you do quizzes or questions, how are those people performing on them? That's the kind of thing where you have to look at it and be willing to do a little bit of introspection. Perhaps it's the tool that you've created that is creating that behavior. Maybe you've created something that isn't relevant to these folks, or something that isn't connecting with them in the way you need it to.
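The seat-time check Zach describes is essentially an outlier screen. A sketch assuming a hypothetical LMS export with minutes spent and a quiz score per completion; the field names and the 25%-of-expected-time cutoff are illustrative assumptions you would tune to your own data:

```python
def flag_speedrunners(records, expected_minutes, threshold=0.25):
    """Split completions into 'speedrunners' (finished in under
    threshold * expected_minutes) and everyone else, and compare
    average quiz scores between the two groups."""
    def avg_score(group):
        return sum(r["score"] for r in group) / len(group) if group else None

    cutoff = expected_minutes * threshold
    fast = [r for r in records if r["minutes"] < cutoff]
    rest = [r for r in records if r["minutes"] >= cutoff]
    return {
        "speedrunner_share": len(fast) / len(records),
        "speedrunner_avg_score": avg_score(fast),
        "everyone_else_avg_score": avg_score(rest),
    }
```

If the speedrunner group is large and its scores lag, that supports the introspection Zach calls for: the problem may be the training, not the audience.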
HUI: Or . . . or they already know the topic that you're trying to force them through a training on. So, you know, we are advocates of people considering the option of testing out of training, because testing out gives you the opportunity to do the baseline setting. You can say: we're not just putting you through the training; here is a test out option. Taking that test gives you the baseline, and based on their performance, people might get additional training, and the additional training might even look different based on their scores.
ZACH: That's right. That's right. A couple other things to think about, and this probably applies equally to policies, which we'll talk about in a moment, but it's also relevant to training. It echoes something I said before about information that comes your way that often isn't tracked and therefore can't really be quantified or measured. Do you have people coming to you with a lot of questions about something that you've trained on? What could that tell you? Well, it could tell you that your policy or your training isn't clear, and that as a result, people have a lot of questions.
It could also be telling you that people aren't really reading your policy or engaging with your training, and therefore it may not be doing a whole lot to actually transfer knowledge to them. It also may be telling you that people either lack confidence in their ability to make these decisions or they're on autopilot. Any one of these is a problem. And so if you're getting those kinds of questions, find ways to track that so that you can add that to the story you're telling about, you know, the strength of the controls that you're otherwise touting as helping manage the risk.
HUI: And it's not just the number and type of questions, but also the timing. Are a lot of them coming in right after a training you've launched, or a new policy, or a new system? Look at the clustering of the questions: are there particular pockets of the organization that are asking those questions more than others?
ZACH: Absolutely. And that's something that probably applies to a lot of the categories we're going to talk about today: doing some amount of profiling of the people who are involved in issues, or the people who are asking questions, or the people who are performing a certain way on the quizzes that you're giving. One of the examples I always think about, from many years ago, is a client who had come to us because they were seeing repeat issues, over and over and over again. And they were like, “let's just retrain, help us create a new training, let's retrain everybody.” And we said, well, hold up. Before we just go retrain everyone, let's understand where these questions and these issues are coming from. And when we profiled the people who were making mistakes, we realized that it was a very specific group of people who were making these repeat mistakes: about 95% of them had only been with the company for three to six months. And suddenly we had a whole lot more information than we had before. One thing we knew was that there's a very discrete population of people we need to focus on for this issue, rather than spending a whole lot of time training thousands of people. It also told us that there was a gap around people who had just joined the organization and hadn't yet gotten that information, or hadn't yet assimilated to the cultural norms that those policies and procedures articulated. So again, building profiles around people is such a huge power play.
HUI: That just so echoes a point that we talked about a couple of episodes ago: do not train the whole company on misuse of corporate credit cards when only 2% of the company have those credit cards.
ZACH: Absolutely, absolutely. All right, let's move on to another one. And this is one where, in most cases, there's not a whole lot happening . . . and that's manager and leader conduct signals, or data that tells a story about manager and leader accountability. So, Hui, talk to us a little bit more about what we'd like to see in this space.
HUI: Well, let's start with why the two of us always roll our eyes when people talk about tone from the top. You know, it's interesting: when we, when I, wrote the Evaluation of Corporate Compliance Programs, we intentionally changed tone from the top to conduct from the top. And of all the deep dives that people did into that document, that was somehow missed. To this day I don't really understand it: people seemed to be poring over every word of that document, especially when it first came out, and they missed that. Because it's the conduct that matters. Counting how many times leaders sent a pre-prepared e-mail to the company or said something like “do the right thing” at a town hall really isn't important. People are watching how the leaders behave. So, some very common data that people surely already have, but may be underutilizing: where are the investigation findings, audit findings, and complaints coming from? You and I have both seen in our investigative work that the same manager leads a team that just has problems over and over again. At some point, hopefully not a point very far down the timeline, somebody should ask: is there some contextual problem with this team? It's not just the people, because we put new people in and they make the same mistakes.
ZACH: That's right. It seems so obvious, but I can think back on my time as an investigator, my time leading an investigative function. When you're in an environment where you have a very busy docket, a lot of matters coming in, and you're in a high-risk part of the world, I see all the time folks churning through those investigations. They're doing them well, and oftentimes doing them quickly, but they're resolving them as though they don't operate within the larger ecosystem of the company and of teams and of people.
And so, what you'll often see is a very junior person making a mistake and getting fired or getting a warning letter, and that happening time and time and time again. The subject in those investigations is always the more junior person, either because it was found that way through an audit or monitoring, or because it was reported that way through your hotline or other channels. But what we started doing was looking at the broader environment and saying, hold up, these people we're all investigating all report to the same person, or they're all part of the same broader team. And what started happening in my experience, and I don't see this a lot now, was that we started making those people the subject of investigations. No one ever made an allegation about them. There was never anything that they personally did non-compliantly, except that they were supervising a team engaging in all kinds of non-compliant conduct, and that became the non-compliant conduct attributed to them.
HUI: That's right.
ZACH: So, if we talk about manager accountability, if we talk about the role that managers play in promoting compliance, then we have to look at the role that they're playing around issues that have been investigated and substantiated.
HUI: It's interesting . . . when I was at DOJ, there was one time a company came in, and as we were discussing the misconduct in question, they showed us an org chart. And there was one team that was clearly red: everybody on that team, or most of them, was indicated in red as being involved. And sitting on top of that team was a manager whose box was green. So I asked the people who were presenting to DOJ: did you ask this person to explain to you what is going on on his team? And they said, well, he wasn't involved. And I said, well, neither are you, yet you're here explaining it to us.
ZACH: Right, right. That's right. That's right. It makes so much sense, but it actually doesn't happen nearly enough. And it goes to this broader point about how data can be used more effectively, beyond just adding more metrics. It's about being curious about these things. I can't, you can't, we can't today articulate every potential data point or every potential inquiry you might want to use when analyzing your data. We can give you some; we can give you a lot. But at the end of the day, the real power is in being curious with what you have in front of you and figuring out those connections, so that you have a more robust understanding of behaviors and of the actual situation, not just the one staring you right in the face.
HUI: Yep.
ZACH: So, a couple of other things that some folks use, but that we'd certainly like to see more of: how information from your culture work, your culture assessments, contributes to an understanding of how leaders are perceived and how teams operate. That could be questions about individual employees' perceptions of their manager or of leadership. It could be done through analysis of values alignment between employees and leaders. But there are also some data sources that typically sit within HR that could be used for these purposes. So Hui, why don't you talk a little bit about those, because they really aren't used nearly enough.
HUI: Some of the HR data can be very helpful in understanding what's going on in the company. One is exit interviews. HR functions often do conduct exit interviews, and if they don't, I would urge you to work with them to make sure they do, because that is a point when someone really does feel free to talk about their experience in the company. So that is a very valuable source of candid feedback about what is going on. Oftentimes when HR does these interviews, the data may not be systematically quantified or shared with compliance at all. And I'm not talking about actual violations being reported in these interviews, but about the culture dynamics of the team: the sense of psychological safety, people's perception of their managers and of the leadership. Those things are valuable data, and they certainly can be collected, quantified, and analyzed.
ZACH: Absolutely. There's other HR data too, around things like turnover rates and retention rates within teams, and what that potentially tells you about the strength of the bond within individual micro-cultures. One of the things that I remember from some work we did with a client not too long ago was realizing that, over the past five years, the company had a very high turnover rate: somewhere between 40% and 50% of their people had been with the company for less than three years. That might tell you something about management and leadership and values, but what it certainly tells you is that the things you think of as social norms, deeply ingrained within your organization, may not actually be so deeply ingrained, because you have a sizable population of people who haven't been around that long and haven't assimilated to your organizational culture. It underscores the need to double down on some of those efforts if you're counting on your culture, your values, those social norms you hold close, to actually function as controls when it comes to compliance.
HUI: Yeah.
ZACH: All right, let's move on, and let's talk now about issue life cycle friction and bottlenecks. The way that I think about this, and what I'm most interested in, is how we can use data about how long it takes us to do things as a reflection of the effectiveness of our program. Most people are tracking things like how many cases they opened or how many cases they closed. But I'm also interested in: how long did it take us to get here, and what does that tell us about the strength of our detection and remediation efforts? Hui, do you want to expound on that a little bit more?
HUI: Yeah. Let me just clarify: we're basically talking about investigations. So, how long does it take for us to react to risks and misconduct that have arisen? Commonly, people do track cases opened and cases closed. Some track investigation cycle time from opening to closing, and that is helpful data, but it can be so much more helpful. The most important metric when it comes to an organization's ability to detect misconduct is: how long does it take for misconduct to be discovered by the organization? From someone starting to misbehave, do they get discovered in a week, in a month, in five years? We have seen those, right? That requires you to collect this data point and document it in your investigation in a way that can then be quantified into an actual number of days. So, when you're investigating, you really have to dig in: how long ago did this start? When did the first transaction that never should have happened take place? When was the first instruction that never should have been given? Give an estimate, find that origin date, and get it in your report. Have a field in your system that captures those dates so that you can do the calculation. That almost never happens.
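Once a case system captures the two dates Hui describes, the detection-lag metric is simple date arithmetic. A sketch with hypothetical field names (`origin` for the estimated start of the misconduct, `discovered` for when the organization first learned of it):

```python
from datetime import date
from statistics import median

def detection_lag_days(cases):
    """For each case, compute days between the estimated start of
    the misconduct and the date it was discovered, then summarize
    the distribution: median lag, worst lag, and how many cases
    went undetected for more than a year."""
    lags = [(c["discovered"] - c["origin"]).days for c in cases]
    return {
        "median_days": median(lags),
        "max_days": max(lags),
        "over_one_year": sum(1 for d in lags if d > 365),
    }
```

The median and the over-a-year count tell two different stories: typical detection speed, and the tail of long-running misconduct that Zach notes is often measured in years.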
ZACH: It almost never happens. And the truth is, one of the reasons why this is so important is that it is not often measured in days. I mean, if we're being realistic, it's maybe measured in months and oftentimes measured in years.
HUI: Yes. Years, yes.
ZACH: And if we saw that it was measured in years, how would we react? Well, one, we might say we're not doing a great job of detecting this stuff. It might not tell us just something about issue cycle friction, bottlenecks, and our detection abilities; it might also tell us something about the strength of our speak up controls, which is where we started: that it took this long for this thing to get to us. It may also tell us something about those mechanisms if the way it got to us was surprising. Maybe we just happened to be doing an audit in that area, and the audit identified the issue long after it actually happened, rather than it coming to us proactively. So there's a whole host of stories to be told if you're actually tracking this, analyzing it, and looking for insights around it.
HUI: And you may even have to look system wide, not just in your investigation data. I have seen one of the biggest cases DOJ had done originate in a hospital where multiple complaints had been made by patients about the quality of service they were receiving. And it turns out they were having those complaints because the people who were supposed to be delivering those services were never there. That was why there were problems, right? So this issue had been flagged to the organization through customers and through staff as complaints about service. If anybody had adequately looked into that right from the start, they could have discovered it.
ZACH: Absolutely.
HUI: So, it also gives you a sense of how responsive your organization is as a whole, not just your investigative function. But there's another point that investigators are often concerned about. In this measurement of investigation cycle time, there's often a fight as to whether cases are closed upon the conclusion of the investigation or upon the completion of remediation. Why? Because remediation often takes a long time and is oftentimes outside the control of the investigator. But we don't want to close the case, because we want to make sure it stays on somebody's radar that remediation still needs to be done. So, if we don't close the case, then the cases look long and investigators don't like being blamed for it. And if they do close it, then it drops off everybody's radar, and in the end the fear is that nothing gets done.
ZACH: Yeah, I've never met an investigator who likes the cycle time metric, because it feels very personal. It sometimes feels like a bit of an affront, but it really is important. So yeah, continue.
HUI: So, you know, we have always told people it's very important to be able to mark that timeline in your system. This is the day the investigation is closed; now your clock starts anew. The case is not closed yet, but you start a new clock, which is the remediation phase. What you want is to present both of these metrics: the investigation cycle time, which ends at the completion of the investigation, and then the remediation cycle time. That set of metrics measures a whole different set of stakeholders.
ZACH: Yeah. Absolutely, absolutely. So just to recap: we want to measure the cycle time because investigators should be accountable. They're an important part of the process, and we want to make sure that investigations are happening swiftly and within a reasonable time frame. That would be measured from the time the issue was brought to the attention of the investigators to the time the investigation was complete. And I want to put a finer point on that. A lot of you probably don't close the investigation at that point. It's not that it's closed; it's that the work the investigator is accountable for is complete. The other things, like implementation of remediation and corrective action, fall, as you said, on someone else, and that's something that should be measured separately.
HUI: Yes.
ZACH: Now, some of the things being measured during that period are things like delays caused by business resistance. Sometimes you don't even know what the corrective action is because you have a business that's pushing back on your recommendations or on the findings. We want to measure that. That's an important component of the story of the overall effectiveness of our program, and of the way leaders and managers think about the program and their relationship with compliance. So being able to tell a story around delays caused by business resistance is important. But sometimes it's not even substantive business resistance; sometimes it's just participants in the process not doing what they need to do swiftly enough. And we want to measure that too.
HUI: Exactly. And you want to have that metric so that those in governance roles can know whom they should be looking at when there are problems with it.
ZACH: Yeah. One of the other things I want to mention here, which I think is important and which some folks do but a lot of folks don't, is having a process by which you categorize your investigations by risk or by severity. I personally am a big proponent of that, because I don't think every investigation is equal. Sometimes an investigation is as simple as someone not doing a process in the right order. And sometimes the investigation is that someone was sexually harassed, or someone was discriminated against, or someone engaged in a multimillion-dollar fraud. Those two things should be treated differently when we're quantifying them as part of a more data-driven program. But it's also important because you can have different standards around remediation and cycle times based on the level of severity and complexity of a matter. If you're not tracking that, if you're not using that language, then you're treating everything equally, and as a result you might wind up taking too much time on the really important investigation because it just happened to come in later than the big stack of unimportant things on your docket.
HUI: Yep.
ZACH: All right, let's move on and talk about policy effectiveness and early risk indicators. What we see a lot of people using as they begin their journey on data-driven compliance is metrics like the number of policies that exist, the number of policies that have been updated, the timing of when those policies were pushed out to people, and attestations, you know, with 100% attestation around policies. We see folks talking about communications similarly: how many communications were put out. And none of those things really tells a story about whether those policies are effective. They tell a story about what you've done to get the word out, or what you've done to sort of build out your . . .
HUI: Output.
ZACH: . . . it’s output; it's about building the guardrails. One of the things that's so important to telling a story about prevention is having data around systems and controls preventing things from happening. So much of the data we have is about something that has happened, which we've now found through audit, investigations, or monitoring. But what about all of the things that were input into the system, or that someone tried to do, that couldn't even happen because the system or the controls prevented them from . . . I think that is the single biggest roadblock, or gap, or blind spot for compliance teams in telling this effectiveness story, because effectiveness is about prevention first and foremost. And we simply don't have much data around prevention.
HUI: And like you said, it's not because it's impossible, right? It drives me crazy when I hear people say it's impossible to measure prevention. It's precisely what you said: if you have set up controls that have worked, you stopped the payment from going out that never should have gone out; you stopped a third party from being used when it never should have been used. That is how you score prevention. That is prevention in its core definition.
ZACH: Absolutely. One of the other things that we don't see a lot of, and that I'm really interested in seeing more of, is a more intentional analysis and quantification of boundary pushing. And I think there's a real place for analytics to play a role here. So much of compliance for so long has been about right and wrong, yes or no, black and white, compliant and not. If we were to simply look at how many deviations we had, a lot of folks with a good compliance program might not have a lot of deviations . . . but what about things that fall short of an actual deviation but might be reflective of a potentially problematic behavior, a problematic instinct, or a problematic culture? For example, and we've talked about this before, you have a cap in place to say you can't spend more than this amount, or in aggregate you can't spend more than this amount, or maybe you have a fair market value protocol in place to govern anything relating to transfers of value, fees, or third-party relationships. You might not have those thresholds being exceeded, but what if you look at the way people are spending money, the way people are actually behaving, and see that when the cap is $25.00, everyone is spending $24.99? That could be reflective of a variety of things. One is that maybe people are actually smart to your controls and are processing things in a way that's intended not to be caught. But it also could just be that people are on autopilot: they're not really thinking about how much to spend, they're not really thinking about this transaction, they're not thinking about the values and the policies that you've put in place. They just know that they can't spend more than $25, so they're spending the absolute most that they possibly can. And that is almost equally dangerous. Intentional misconduct and complete negligence, or autopilot, often lead to the same place.
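One way to quantify the boundary pushing Zach describes is to measure how much spend clusters just under the cap. A minimal sketch; the amounts, cap, and 5% tolerance here are hypothetical illustrations, not a prescribed threshold:

```python
def share_near_cap(amounts, cap, tolerance=0.05):
    """Fraction of transactions landing within `tolerance` (default 5%)
    below the cap: none exceed the threshold, but clustering just under
    it is the boundary-pushing signal."""
    near = [a for a in amounts if cap * (1 - tolerance) <= a <= cap]
    return len(near) / len(amounts)

# Hypothetical expense amounts under a $25 cap.
spend = [24.99, 24.99, 24.50, 12.00, 8.75, 24.95, 24.99, 5.25]
print(f"{share_near_cap(spend, 25.00):.1%} of transactions sit just under the cap")
# prints: 62.5% of transactions sit just under the cap
```

No single transaction here is a deviation; the signal only appears when the distribution is examined as a whole.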
HUI: It’s not good fiscally for the company to have your people spend the absolute most possible under a threshold. You want people to spend what is reasonable to spend. So are your people actually just wasting your company's money, for one?
ZACH: Absolutely.
HUI: Another area where I think we could be collecting interesting data, and where I haven't seen anyone do it, is what I call intervention data. Just as your controls stop things from happening, your people are also controls. We do see questions asking people: in the last 12 months, have you seen misconduct? What about asking people: in the last 12 months, have you stopped something that shouldn't happen from happening? You know, maybe someone . . .
ZACH: Hmm.
HUI: . . . raised an idea that really was just not the best idea, in the sense that it could land you in trouble, and someone else in the room says, "I don't think that's a good idea, because there's a policy against that. We really can't do that." I think that does happen, but we're just not capturing it. Every time you have an employee, a manager, a colleague saying "let's not do that," your prevention has worked. It's scored. You're just not capturing the score.
ZACH: That's right. And I actually think it's even more nuanced than that. The fact that someone stopped it is a really wonderful story, as you say, about prevention. But the fact that someone was willing to raise the idea doesn't necessarily mean that you have problematic people or problematic behaviors. It means people are comfortable raising ideas, and sometimes an idea is a bad idea. If we only allowed good ideas to be articulated, a whole host of things that we love, that are innovative and that are part of our lives, probably never would have come to fruition. So there's power in the bad ideas too. And so I think being . . .
HUI: Yes. Yes, and we emphasize that that dynamic is healthy: someone raising something, and someone carefully deliberating it and expressing concerns or cautions. Those are all healthy dynamics.
ZACH: Yeah. Absolutely. All right, let's talk about retaliation. We started by talking about speak up; as we near the end of our discussion, we're going to talk about what sometimes happens when people do, and obviously we don't want to see that. But the kind of data most folks are using around retaliation is formal retaliation complaints . . . and it happens. What would be interesting, though, is a little more sophisticated analysis of that. So not just looking at how many people have reported retaliation, or how much retaliation we've substantiated, but also looking down the line at post-reporting career outcomes. We think we've stepped in, we think we've addressed it, but did we actually? Does it look like perhaps their career stalled in some way? Is there information to suggest that, perhaps outside the compliance function, people who speak up are not progressing the way they should? Is there information around turnover among reporters that suggests that people who raise issues wind up leaving the organization, and what does that tell us about how things were addressed and the experience they've had? Again, it goes to this point about being another step curious around the issues that we care about.
HUI: It's also important to remember that when you collect this kind of data about your reporters, it needs to be contextualized in your overall data. It's not enough to say X percentage of our reporters got promoted, or were denied promotion, or left; how does that compare to the general population? That's when you make meaning out of it. Just, you know, having those data sets about your reporters.
ZACH: Thank you.
HUI: Having them is a very important first step, and then the next step, to make meaning out of them, is to compare that with the general population.
ZACH: Indeed. And it's a theme that applies to a lot of the stuff we're talking about, one we should probably spend another episode, or a portion of another episode, on: not just what metrics we should use, but a bit more about how to analyze and utilize metrics to tell a story. The importance of the point you just made is that we need to normalize these things in some way. We need to create comparisons that will generate valuable insights. Sometimes presenting a data point on its own creates the "so what?" factor, where no one really knows what it means; and sometimes, if we don't do that thoughtful mathematical analysis, the normalization of certain metrics, we wind up telling the wrong story because we've not analyzed the data the way we should.
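The normalization being described comes down to comparing a group rate against a baseline rate. A minimal sketch, where all the counts are hypothetical illustrations:

```python
def rate(events, population):
    """Share of a group experiencing some outcome (promotion, exit, etc.)."""
    return events / population

# Hypothetical HR counts: 12-month turnover among employees who reported
# an issue versus the workforce overall.
reporter_turnover = rate(events=9, population=60)    # 15%
overall_turnover = rate(events=80, population=1000)  # 8%

# The ratio is what makes the reporter number meaningful: reporters here
# leave at nearly twice the overall rate, a signal worth digging into.
relative_risk = reporter_turnover / overall_turnover
print(f"relative turnover risk for reporters: {relative_risk:.2f}x")
```

The 15% figure on its own invites the "so what?" reaction; it's the ratio against the 8% baseline that tells the story.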
ZACH: All right, we're just about at the end. But before we go, let's talk a bit more about data integration and telling a holistic story, because it's not always just, hey, here's a metric, here's a KPI. Sometimes the power is in bringing multiple data points together and seeing what they mean once we've connected them.
HUI: That's, in fact, what we've been talking about throughout this entire episode, right? It is not just about the data points you could collect, but about what you can put together, like what we just said about the reporters: putting their data together with the general population's comparable metrics. We talked about putting training data together with your investigation, audit, and monitoring data to see if the training has in fact worked. We also talked about putting investigation data together with other HR data. All of this is about puzzle pieces that come together.
ZACH: Yes, yes. It really is.
HUI: It is about using all these different pieces to put together a bigger picture of what is happening in your effort to prevent, detect, and remediate misconduct in your organization. A lot of compliance presentations will be: today's topic is training, here's our training plan for the year, here's how training dovetails with communications, this is our rhythm, whatever. Okay, next meeting we're going to talk about investigations; here are the trends. But how are you putting those together to tell that story?
ZACH: Yeah, yeah. I mean, whenever we talk to clients about this, I actually push back on having the conversation we're having today too early on, because we'll get to the discussion about metrics, we'll get to the discussion about data. But at the end of the day, if you're going to put together a strategy around this stuff, we don't want to start there. What we want to start with is: what are the questions you want to ask? What are the questions you need answers to? What are the questions your stakeholders are going to ask you, and what do you want to be able to tell them? So, to wrap it up: no one really wants to go into their board, no one really wants to go into their executive committee, no one wants to sit around a table with a bunch of compliance folks and only be able to answer questions like: how many trainings did we deliver? How many policies do we have? How many cases did we open? The questions you ultimately want to answer are: do people trust leadership? Are we comfortable with the amount of time it takes us to detect issues? Are we comfortable with the amount of time it takes us to remediate issues? Are we preventing issues before they actually happen? Is our culture as communicated aligned with the lived experience of our people? These are much bigger, more substantive, more important questions. So, start by asking: what are the questions you want to answer? And I guarantee you the number of people who took your training, or the percentage who completed it, is not going to be the answer to any of the questions you really care about.
HUI: Yep, quite sure of that.
ZACH: All right, Hui, thank you so much. I hope folks will continue the discussion with us and share some of the metrics that . . . that they use or some of the methodologies that inspire them. But this has been a really great chat.
HUI: As always.
ZACH: Thank you.
ZACH: And thank you all for tuning in to The Better Way? Podcast. For more information about this or anything else that’s happening with CDE Advisors, visit our website at www.CDEAdvisors.com, where you can also check out the Better Way blog. And please like and subscribe to this series on Apple Podcasts or Spotify. Finally, if you have thoughts about what we talked about today, the work we do here at CDE, or just ideas for Better Ways we should explore, please don’t hesitate to reach out. We’d love to hear from you. Thanks again for listening.