Ep.20: End of Year Review
Remember, you can always listen here or follow us on Apple Podcasts or Spotify. Either way, thanks for supporting us.
About this episode. It’s the last episode of the year, and Zach Coseglia and Hui Chen are taking a step back to reflect on what made 2025 so memorable. From a new presidential administration and major regulatory and enforcement changes to questions about the rule of law and the future of DEI, they talk through what these shifts mean for businesses and compliance professionals trying to stay grounded in an increasingly unpredictable world.
They also dive into the year’s other big headline: artificial intelligence. How much are people really using it at work? Do they trust it? And what does “AI-powered” actually mean? They’ll share some surprising data, practical insights, and a few laughs as they explore how companies can move beyond the hype to use technology in smarter, more meaningful ways. Plus, they highlight some of the most rewarding work they’ve done this year—helping organizations listen better and solve problems faster. It’s a thoughtful, honest wrap-up of a year full of change.
Who? Zach Coseglia + Hui Chen, CDE Advisors
Full Transcript:
ZACH: Welcome back to The Better Way? Podcast brought to you by CDE Advisors. Culture. Data. Ethics. This is a curiosity podcast, for those who ask, “There has to be a better way, right? There just has to be.” I'm Zach Coseglia, and I am joined, as always, by Hui Chen.
Hi, Hui.
HUI: Hi, Zach.
ZACH: How's it going?
HUI: Pretty good. It's just us today and I cannot believe that we're at the end of 2025 already.
ZACH: I know, an entire year has gone by. We are 20 something episodes in now.
HUI: I can't believe it.
ZACH: I know. So, this is our year in review episode. We're going to look back on all the wonderful, strange, unfortunate, sad, exciting things that have happened over the course of the past year.
HUI: We're going to try anyway.
ZACH: So, the way that we structured our discussion today is we have a couple of big topics that we're going to talk about, things that really have impacted our world in the broadest sense. But then we're going to take them home and talk about how those things are very much impacting the world of compliance and culture. And so, Hui, you're going to get us started with a very big topic, which is probably the highlight, or, maybe better said, the headline of 2025: a change in presidential administration.
HUI: Right. So, it seems like just yesterday we resumed our podcast series with the pause on FCPA enforcement. That was in January, and here we are in December, and what we have seen is really a lot of regulatory rollbacks and narrowed white collar enforcement priorities. The administration has brought a lot of changes all across the landscape, and those are really not to be minimized in any sense. In fact, in some ways I feel like they're very important in the context of: this is where we live, this is the society in which we live and work. And even if you're listening from outside the US, things that happen in the US have global repercussions, like it or not. So, we have to keep in mind that the year has brought a lot of disruption, and this is an administration that takes pride in not doing things the way they used to be done. There are people who like that and people who don't, but what it has brought is a lot of disruption. And like I said, we started the year with the FCPA pause. We have since seen other executive orders rolling back environmental rules, financial reporting requirements, and agency rulemaking in all of those areas. Agencies have been directed to revisit or rescind some of the rulemaking from previous administrations, and we can expect continuing changes across agencies like the SEC, the EPA, the Department of Labor, and certainly at DOJ. We're looking at a more deregulatory emphasis.
We're also looking at a narrowed enforcement focus, and this is based on all the various DOJ memos and policy documents, and the speeches accompanying them, that have come out, right? They revised the Corporate Enforcement and Voluntary Self-Disclosure Policy to make declinations more automatic. They have announced less use of monitorships and have terminated some existing monitorships. There are shorter, or at least there's the intention of having shorter or more streamlined, investigations to make things go quicker, with faster resolutions to reduce burdens on corporations. And they also issued the 2025 White Collar Enforcement Plan, also known as the Galeotti Memorandum, which refocuses DOJ resources on a defined set of ten high-impact areas: fraud that victimizes U.S. investors, trade and customs fraud, money laundering, national security and financial crimes, federal program fraud, and so on. Now, having said that, this is what they have said they would do. What I think is really interesting is looking at what's actually happening. They said they were going to use the FCPA only when it's tied to these high-impact priority areas, that they don't even really like how the law has been used. And yet, for the first time in 15 years, a company was indicted under the FCPA, this year in October, in Florida. This is the Smartmatic case. Smartmatic is a London-based company with a US subsidiary in Florida. Last year, the Biden administration indicted two of the company's senior executives for conspiring with others to bribe government officials in the Philippines, and that conduct happened in the 2010s. So think about how long that took, right?
So, from everybody's understanding this has been an ongoing case: DOJ had previously decided not to charge the company and only charged the two individual executives. Then, surprise. In October, after ten months of talking about how the FCPA is going to be used differently, DOJ issued a superseding indictment in the case that added the company to the two individuals being charged. No new facts; it just added the company. Why is that? Everybody's puzzled about it, and at least from the outside, there is not a lot of information as to what seems to be a reversal of the prior decision not to charge the company. One of the interesting things to know is that Smartmatic has a pending $2.7 billion lawsuit against Fox News. It has accused Fox News of deliberately spreading lies that the company helped to steal the 2020 election for Biden. Fox had previously settled a similar lawsuit with Dominion Voting Systems for almost a billion dollars, $787 million. And now you have a company engaged in a $2.7 billion lawsuit against Fox News and suddenly, surprise, given all the rhetoric, here comes an indictment, the first FCPA indictment of a company since 2010.
ZACH: Yeah.
HUI: What do we make of this, right?
ZACH: Well, I think you've set it up for folks to make much of it, in fact.
HUI: I think there are a lot of people who are making much of it, and I think for good reason. It doesn't seem to fit the priority areas that have been identified. It doesn't seem to fit with all the rhetoric and the policies and the memos that DOJ has issued on this topic. Why is that? And I think one of the things that we said during our FCPA pause podcast in January . . .
ZACH: . . . All those months ago . . .
HUI: All those months ago . . . was, I think you said it: you believed that the FCPA would probably still be enforced from time to time, but that it would be enforced based more on considerations of a company's alliances and political interests.
ZACH: Yeah. That's right. Well, what I take away from what you've shared are really two important things I think folks have to remember. One of them is: let's not just focus on what people are telling us; let's look at what they're actually doing. That's a really good lesson for all of us in just about every facet of our lives, to look at actions over words. But beyond that, it is what we talked about many months ago, which, framed a little more diplomatically, is unpredictability. Being prepared for unpredictability. We've always been a little bit skeptical of those who try to prognosticate in any administration, but here it's particularly challenging, because the rules have seemingly changed, and the factors that those in positions of power may use when exercising the power of our government may not be the kinds of things that we've historically seen motivating decision-making. If anything is predictable, it's unpredictability.
HUI: That's very true. I think the way that companies calculate risk, the risk of enforcement, is a different calculation in this world than it had been previously.
ZACH: Yeah.
HUI: So that's a very significant development in 2025, I think, and, as far as we know, it will certainly continue for the remainder of this administration.
ZACH: Absolutely. Yeah.
HUI: The other area, I think, on a similar theme, is this whole notion of the rule of law and where we are on the rule of law. One of the things I remember so strikingly from my days at DOJ under the first Trump administration was a discussion a prosecutor colleague and I had about drawing the line of what you believe is unacceptable behavior beforehand, because she expressed a fear of just inching down that slippery slope. Before you know it, you have done something that you really shouldn't have done, because you never realized you were that far down the slope; you had been inching down it. So, she said, if I'm ever asked to defy a court order, that's the line for me. I actually haven't checked to see if she's still at DOJ, because that seems to have happened in this administration. I think there are stances that have been taken by the administration that you can say seem to demonstrate a defiance of the rule of law. We have the administration defying court orders, certainly in some of the deportation cases that we have seen. We have the unlawful firing of inspectors general, who are essentially the compliance officers, the watchdogs, for these federal departments. Their firing is permissible, but it has to be preceded by a 30-day notice to Congress, which was not done in these cases.
We also have the unlawful freezing of congressionally appropriated funds. The Office of Management and Budget issued a blanket freeze on billions of dollars of congressionally approved spending, including foreign aid and domestic grants. The Constitution is written in such a way that Congress has the power of the purse, and when Congress allocates funds, it really shouldn't be unilaterally decided by the administration that the money is now frozen. So, that's happened. I mention these things because they affect the credibility of the things that we're trying to do as ethics and compliance professionals. Because what is compliance but respect for the rule of law, doing what the law requires of you? And ethics is even more than that, right? We all love to talk about tone from the top, but when, at the very highest level, you have a defiance of the rule of law, how do we cope with that? As people who want to promote respect for the rule of law in our organizations, it really is very challenging.
ZACH: It is. I think about this often, because there was a talk that I gave probably a decade ago, when I was based in China and came back to the US for a period, and I was asked to speak about the rule of law from a US and a China perspective and about the differences between the two. I think about what I said back then, and I could stand by much of it, for sure. Let's be clear: the United States and China are still very politically and socially distinct.
HUI: Indeed.
ZACH: But it does feel like we're on a slope, and perhaps that slope is a bit slippery when it comes to the treatment of the rule of law in the US. There's a lot of what I said then that I don't feel nearly as confident about today, based on just what's happened in the last eight or nine months. And that's a really, really . . . it's a scary thing. You know, I'm also just mindful of things like the tone of the rhetoric that we hear in politics today: the disrespect that we see displayed by senior political officials, including our president, and the name-calling that we hear. And while I think that most people we know and work with, irrespective of their political leanings, would agree that that kind of behavior shouldn't be and wouldn't be tolerated within their companies, when you see it at that level, coming from someone in that position of power and trust, I just can't help but think that it begins to erode our understanding of what is decent in a civilized society.
HUI: This is troubling in the sense that we all want to work in respectful workplaces. We work toward promoting respectful workplaces. You and I go around and work with companies to encourage them to listen to their employees in a respectful way. And when you're seeing behavior at the very top of the world that is really contrary to the kind of behavior we're promoting, very contrary, almost at the other extreme, it is very, very difficult to hold that tone in that kind of environment.
I also think this leads us to the point about DEI. There have been executive actions dismantling federal DEI programs and related personnel changes, and this puts private-sector DEI programs under potential legal and reputational pressure in certain jurisdictions. What we have seen in some of the employee surveys that organizations have done is that employees want DEI initiatives, just like they want respectful workplaces. But these programs now present a risk, a potential enforcement or regulatory risk. So part of the problem is that companies that try to maintain their DEI commitments now really have to fly under the radar. You're not hearing in the press about companies trying to stay committed to those programs. What you do hear in the press is companies retreating from them, and that creates an effect of: the more you hear it, the more you think this must be true, this must be something that I have to do. You're not aware that you're not hearing about the ones that have not retreated from these commitments.
ZACH: Yeah. I mean, I have a very similar concern here, which is, it's about authenticity. It's about how reactionary so much of what we've seen over the course of the past nine months has been. And we talk all the time about, look, in a world filled with unpredictability, the one thing that can remain constant is your North Star, your values, the things that define who you are and what you believe and who you want to be and how you want to be perceived by all of your stakeholders. And yet it does feel like in these months we've seen folks really, at least publicly, pull back. And even if it's just public, I'm challenged by it because the public pull back, I think impacts the authenticity of what may still be said and done internally. If we're that willing to move with the swing of the pendulum, I've said this before: what happens when the pendulum inevitably swings back?
So, to wrap up this part of our discussion, Hui: what else do we want to say to the ethics and compliance and culture champions out there about the world that we're in and the future that we see over the course of the next several years?
HUI: I think you've said it, Zach. So one is unpredictability. And in this world of unpredictability, really the best thing you can do is to hold on to your North Star.
ZACH: Yeah.
HUI: But be smart about it, right? Hold on. Understand what your values are. Understand who you are as a person, as an organization. Live according to your values, but understand the risks.
ZACH: Absolutely. All right. So, the second big theme or topic for 2025 (I'm guessing most people could probably predict that we would say this, because how could you not?) is the role of artificial intelligence and the continued sophistication of technology, frankly. And it feels a little silly to highlight the developments in AI, because they're all around us. We don't need to talk about them; we can just look around our world. It's sort of the very definition of the obvious. But I think you could say the same about politics and the administration.
HUI: Indeed.
ZACH: But rather than sharing some top-line developments, what I thought we could do, as the data guy, is actually share with you some data that we've seen in the marketplace around perceptions and use of AI. So I want to share a couple of stats. The first is from a Pew Research study that was just completed this fall, and we'll share the details in the program notes, but one of the things they looked at was the share of US workers using AI on the job. The question was: what percentage of employed adults say they are using AI in their work? In 2024, 63% said “not much or none.” In 2025, a year later, that number went up to 65%. Not a huge change from 63 to 65, and I was surprised.
HUI: Surprising. I am very surprised.
ZACH: I was surprised by this. Now, there was a series of options: “all of their work,” “most of their work,” “some of their work.” Those are all bundled together, and in 2024, 17% chose one of those. In 2025, it was 21%. So, it was a four-percentage-point shift year over year, but again, I would have expected these numbers to be much larger.
HUI: I wonder sometimes if people don't realize they're using AI. I think this goes to what people's understanding of AI is. Because for example, when you're doing a Google search today, you're using AI.
ZACH: Yeah, I think that's right. There are a couple of other stats that come later that I think could also potentially explain some of this, but we'll put a pin in that. The last number makes sense, which is that the share of people who said they've not heard of workplace AI usage went down. That's what makes sense, but the percentages still seem fairly high: 16% last year and 12% this year said they've not heard of workplace AI use. Now, part of this probably is attributable to the diversity of the population they were reaching. This doesn't just include the kind of white collar office workers that we often work with; this was a very diverse national study. But still, I was surprised by these numbers.
Now, what we also saw in some of the research that we were looking at, and this is what I think could potentially be an explanation for the surprisingly low numbers in some of these places, is, one, trust. There was a separate study that we looked at, done by KPMG, I think in partnership with the University of Melbourne, and again, we'll put a link to the research and the methodology in the notes. It made very clear that trust continues to be a critical challenge for AI, with only 46% of people globally being willing to trust AI systems. Now, what's interesting, though, is that I actually thought this seemed kind of high. Nearly half of the people surveyed say that they trust AI systems. I feel like in my casual, and more formal, business communications, there's a deep amount of distrust. And I guess these numbers don't contradict that, but I certainly thought that there was more distrust than this one study showed.
HUI: I'm not terribly surprised, though, because I do think a lot of people use it without much thought about what happens to “all this data that I'm inputting.”
ZACH: Yeah, well, that is certainly true and raises a whole host of risk and governance issues, which we've separately talked about on the podcast.
HUI: But also on the output side, right? We have seen plenty of people who have erroneously relied on the results of AI searches or AI-generated output in their work. And those are only the ones that we have heard about publicly, so think about how often that actually happens in instances that we never hear about.
ZACH: In fact, this study showed that a lot of people, like a lot, rely on AI without evaluating accuracy. I think that number was something like 66%, in fact. And 56% report making mistakes in their work due to AI. I mean, it's fascinating because look, I find these tools very helpful. We use them in our work. We use them in collaboration with clients, at times. We see how our clients are using them. But the examples that we've seen where things have really gone wrong are when people have allowed it to replace the human. And in all of our work, we will find errors in the AI's output, and it's incumbent upon us as the human to check those things. Just as we would if we were delegating work to a new hire or a green employee.
The last data point that I wanted to share, which I thought was very interesting, again from the KPMG and University of Melbourne study: 57% admitted to hiding their use of AI at work. And the sense I got from this particular statistic wasn't what may immediately come to mind for folks, which is that they were using systems that weren't approved (that could potentially be part of it), or that there were restrictions on what could be done under the company's governance and policy framework. It was that the use was being hidden so that people could present AI-generated content as their own. So, it was sort of shadow, secretive use of it.
HUI: Interesting. That is fascinating.
ZACH: Yeah. And this is sort of what I was thinking could potentially . . . I guess these were two different studies that we're pulling some of this from, but that's what I was thinking could potentially be at play in the surprisingly low stats around AI usage: there may be folks who aren't quite comfortable admitting that they're using these tools, but who are very comfortable utilizing them to make their work better and want to present that content as their own.
HUI: That is really very interesting.
ZACH: So, I have a few thoughts for compliance on this. My first is: I think in our world of ethics, compliance, and culture, people are still getting their feet wet. I haven't seen a lot of folks who are really ready to dive in headfirst. In fact, I see people doing things from scratch that could easily be done using generative AI to save time and to generate new ideas. Why write a policy from a blank piece of paper when you could use one of these generative AI tools to give you, at the very least, a first draft? And in my experience, a pretty good first draft.
HUI: Agreed. So true. I have certainly used AI to draft documents, draft presentations. It never is the final product. In fact, oftentimes the final product looks very different. But boy, having AI do that first draft really saves me a lot of time and it gives me a good place to start.
ZACH: Definitely. I also think, for those who hear that and maybe feel a little uncomfortable with it, or who feel like there's some dishonesty in it because it's creating a starting point that isn't their own: I just think of how we used to do things. If we were going to write a policy, if we were going to draft a new compliance control framework, we were always thirsty for examples. Let's go pull other people's codes of conduct from online. Let's try to get some example policies. Let's look at how we'd done things in the past. Let's do the benchmarking that we always sort of criticize. That's really not all that different from using these generative AI tools to give you a head start, to give you a starting point. So, for those who are maybe feeling a little uncomfortable, I say: let it go. Focus on things that are more important. If there's a computer that can give you a head start on something and that enables you to make time for something of higher value, do it.
HUI: I also think, if you feel that level of discomfort, you can still always do your homework and make that part of your prompt for the AI. So, for example, you want to draft a policy, and you've given it some thought. You say: I really want this policy to address these five points; the policy has to include these five things. Then you put that in your prompt.
ZACH: Yes, it's such a good point, and it's the difference between asking it to do your work for you and asking it to truly just make your work easier and faster.
HUI: Exactly. And I hate to say it, but oftentimes I find the drafts coming out of generative AI better than most drafts that I have seen from lawyers in the past.
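To make that requirement-driven prompting concrete, here is a minimal sketch. It assumes the OpenAI Python SDK purely for illustration; the model name, the policy topic, and the five requirements are hypothetical placeholders, and any chat-style API would work the same way.

```python
# A minimal sketch of requirement-driven drafting, as described above:
# you decide the substance, the model only produces a first draft.
# Assumes the OpenAI Python SDK; the model name and the five example
# requirements below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirements = [
    "Scope: applies to all employees and third parties acting on our behalf",
    "Clear definitions of what counts as a gift, meal, or entertainment",
    "Pre-approval thresholds and who approves",
    "Recordkeeping and expense-reporting obligations",
    "How to raise questions or report concerns, with non-retaliation language",
]

prompt = (
    "Draft a first-draft gifts and entertainment policy for a mid-sized "
    "company. The draft must address each of the following points:\n"
    + "\n".join(f"{i + 1}. {r}" for i, r in enumerate(requirements))
    + "\nUse plain language and keep it under two pages."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your organization has approved
    messages=[{"role": "user", "content": prompt}],
)

# A starting point only; a human still reviews and rewrites the draft.
print(response.choices[0].message.content)
```

The design point is the one made in the conversation: the five requirements come from your own homework, and the output is a draft to edit, not a finished policy.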
ZACH: Yeah, truth. Truth. The second thing that comes to my mind here, or the reflection that I have as I look back on 2025, is that the focus seems to be very much still on saving time rather than on doing the work better. And it's funny, because when I think back on my time in-house building an advanced analytics capability for compliance a decade ago, we were talking about predictive analytics and machine learning as tools to help us spot risk, to identify patterns and behaviors in data in ways that had never been done at scale before. The way technology has developed in the past decade is dramatic. The things that we were building then could be built today at a fraction of the cost, and we could take them not just to the next level but multiple levels of sophistication beyond where I started. And yet, I don't see a lot of people doing that. I was asked recently whether I think these tools are in fact making things faster, and I think the answer is yes; I truly believe that is the case when it's done well. But that's not the only question we should be asking. We always talk about wanting things to be effective. What do we want them to be effective at doing? I want to see these tools used to help us make our programs smarter, to make our programs stronger, to help us identify risk in ways that we couldn't have imagined before because the technology wasn't there or it would have been too expensive. Those are the outcomes that I'm much more interested in assessing and the things that I'd like to see AI being used for.
HUI: So can you give an example?
ZACH: The most obvious example is around, you know, monitoring of financial transactions, which is what I was doing a decade ago using advanced analytics. And to this day, I still see folks doing more traditional process-based transaction testing. Pulling paperwork and looking.
HUI: No way.
ZACH: She's being sarcastic, folks. Pulling paperwork and actually looking at . . . doing an audit, a paper-based audit of things. And yes, there's been a lot of advancement in many industries, and there are lots of wonderful examples. But I still think that those wonderful examples of folks doing things in a more sophisticated way are the exception, not the rule. Today, we have the ability to look at hundreds of thousands of transactions involving our vendors and our suppliers and to spot patterns in the spend, and risk associated with those relationships, that we couldn't have possibly found before. We also have the ability to go beyond quantitative measurements and actually interrogate that data using our own language: show me the ten highest-risk transactions, or show me transactions that are statistically anomalous across a particular data set. We can do these things now without the kind of heavy system development that was required even just a few years ago. And yet, we just don't see it a lot. We see folks getting pressure to utilize artificial intelligence, but it's often utilized in ways that, if I'm going to be cynical, ultimately support the organization's broader interest in reducing headcount and cost rather than reducing cost and risk by making the program stronger.
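As a rough sketch of the kind of statistical screening described here, the snippet below scores vendor transactions for anomalies and answers "show me the ten highest-risk transactions" as a query rather than a manual audit. The file name and column names are hypothetical, and IsolationForest is just one of many techniques that could be used.

```python
# A rough, hypothetical sketch of statistically screening vendor spend,
# one of many ways to do the kind of analysis described above.
# The CSV file and its column names are placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction extract: one row per payment.
tx = pd.read_csv("vendor_transactions.csv")  # columns: vendor_id, amount, approver_count, country

# One-hot encode the categorical column so the model sees only numbers.
features = pd.get_dummies(tx[["amount", "approver_count", "country"]], columns=["country"])

model = IsolationForest(contamination=0.01, random_state=42)
tx["flagged"] = model.fit_predict(features)        # -1 means flagged as anomalous
tx["risk_rank"] = -model.score_samples(features)   # higher means more unusual

# "Show me the 10 highest-risk transactions."
print(tx.sort_values("risk_rank", ascending=False).head(10))
```

A screen like this does not replace human review; it points the reviewer at the transactions most worth pulling the paperwork on.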
HUI: And also, there's the default to what many compliance officers are comfortable with, which is policies and procedures and training, right? So we see more usage in those areas: a chat function that helps people answer questions about policies and procedures, using AI to generate training. Not that there's anything wrong with using AI in these areas, but we do see more of a default to these areas that tend to be the more traditional focus of compliance officers.
ZACH: Absolutely. I mean, how wonderful would it be (and if there are folks out there who can do this, who have done this, or who have developed software that does this, please come calling). We've been talking a lot about root cause analysis, and we've increasingly been advising clients who have been taking an interest in root cause analysis. How wonderful would it be if, instead of doing a more traditional root cause analysis, you were actually able to use artificial intelligence to take all of your handwritten reports, in PowerPoint or in memos in Word, and extract the root causes from them, giving you an analysis of the patterns seen across years and years of investigations, so that you could then use that to ask better questions, tell a better story, and ultimately improve your program? I haven't seen anything like that, but I'd love to see more energy put toward things like that than toward just, hey, we'll write that report for you so that we can save you time.
HUI: Exactly, yes.
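Nothing here refers to an existing product; it is a hypothetical sketch of the idea just described: pull the text out of years of investigation memos, have a model tag each one with a root-cause category, and count the patterns. The folder name, category list, model name, and the use of python-docx and the OpenAI SDK are all illustrative assumptions.

```python
# Hypothetical sketch of mining root causes from past investigation memos.
# The folder, the categories, and the model name are placeholders.
from collections import Counter
from pathlib import Path

from docx import Document   # pip install python-docx
from openai import OpenAI   # pip install openai

client = OpenAI()

CATEGORIES = [
    "inadequate training", "pressure to meet targets", "unclear policy",
    "weak oversight or controls", "third-party management gaps",
]

def memo_text(path: Path) -> str:
    """Extract the plain text from a Word memo."""
    return "\n".join(p.text for p in Document(str(path)).paragraphs)

counts = Counter()
for path in Path("investigation_memos").glob("*.docx"):  # hypothetical folder of past reports
    prompt = (
        "Below is an internal investigation memo. Choose the single closest "
        f"root-cause category from this list: {CATEGORIES}. "
        "Answer with the category only.\n\n" + memo_text(path)[:8000]
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    counts[reply.choices[0].message.content.strip().lower()] += 1

# The pattern across years of cases: a starting point for better questions.
for category, n in counts.most_common():
    print(f"{category}: {n}")
```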
ZACH: All right, the third observation that I have here, or the third bit of . . . I won't call it wisdom, because I don't think that's what it is . . . but the reflection is that AI continues to be a buzzword, a buzz term. I do think the buzz is real. But I want to make sure that folks are being curious and questioning when they're talking to people who tell them they can offer an AI-powered solution. And I'll be a little cynical here, but I think a lot of what we see labeled as AI-powered today is some form of window dressing or marketing, because companies know that AI sells. Companies know that your leadership is looking to you to show that you're on the cutting edge and using AI the way the world thinks you should be. And so, if you are skeptical, or if you're just not sure, ask questions like: What does the AI model you're selling me actually do? What machine learning techniques are being used? How does the system learn from data over time? How does it handle situations that it wasn't explicitly programmed for? Can it handle situations that it wasn't explicitly programmed for? How does it adapt to change? Change, if you've been listening to everything we've been talking about for the past year, or just for the last 40 minutes, is everywhere. Change is a constant. So how does it adapt to that change? And what new capability, beyond automation, does the AI you're selling me actually provide? These are just some of the questions that I think folks should ask, and I think it's healthy to be a little bit skeptical of people who are selling AI in a world so driven by the buzziness of it all.
HUI: Yeah, really getting beyond that buzzword being used as a marketing tool. I mean, didn't you and I have a conversation about an AI-powered washing machine?
ZACH: Yes, I love watching a good Price Is Right every now and then. And I was watching it, and they were describing the prize, and they were talking about how it's an AI-powered washing machine. And I was like, I don't know, maybe someone can educate me about why that's something I should be excited about. But to me, washing the clothes . . . it's really water and soap that matter. So yes, it's used everywhere.
HUI: Exactly. I mean, yes. And I think you're homing in on something that we always, always emphasize: be curious, be critical in your thinking.
ZACH: Absolutely. All right. So those are some reflections on big developments, things that have just sort of naturally been part of our world over the course of the past year. But Hui, we've each also thought about something that we've observed in the past year that we thought was noteworthy, that was worth sharing with folks. So I'll go to you. Why don't you start?
HUI: So over the last year, we've had the opportunity to work with companies on a different way of engaging their stakeholders, which I find very exciting. I think it's been some of the most rewarding professional experience I've had over the last year. So I'll give a couple of examples.
One is a company that came to us. They're a parent company with a lot of subsidiaries, and there's not a lot of consistency and alignment across all these different entities. We got everybody together from the different entities, and we spent a day together; we not only listened to where everyone was, but we helped craft an action plan that would bring more consistency and cohesion, toward an overall strategy, for the entire enterprise. And what I find really exhilarating and rewarding about all of this, given our talk about AI just now, is that there are certain things you just have to do with people, in person. It truly took listening to people in a structured way, so people are not just ranting and raving or bragging about their programs; it's very focused, facilitated listening that then led to a very structured discussion about how to move forward in a way that everybody is on board with. And it's one of those situations where people told us that in that one day they made more progress than they had seen in years. The joy of doing something like that is that not only do you get the opportunity to solve a problem, you actually see the solution being planned out, with everybody signing on, all in one day.
ZACH: Yeah.
HUI: And I found that to be something I truly enjoyed doing. So that's one example. The other example was similar: companies are seeing repeated compliance issues in certain areas of their enterprise, and they have come to the realization that they just can't keep doing the same things they have always done, which is: we have a problem, so we do more training, we take some disciplinary action, we maybe revise the policy, and guess what? Next year, the same problems come back again. So, a couple of these companies realized they have to do something different this time. We had the opportunity to work with them, again with all the critical stakeholders in the room, in person, doing a workshop that leads to a real identification of (you talked about root cause analysis) the root causes that are contributing to these issues. What is acceptable to us? What is not acceptable to us? What are the realistic things we can do that might make a difference? And again, at the end of those workshops, coming up with a SMART plan (specific, measurable, actionable, relevant, timed) to make a real change in these business units that have been seeing repeated problems.
And you know, what I found rewarding at the end of all this was people telling us, “You know, I have never been to a compliance workshop like this.” And this is coming from business leaders. And they really loved being heard and being able to contribute to a real solution. And I have just loved doing these and I have so appreciated these companies that are curious enough and courageous enough to say: “we're going to have to do something different. We're going to try this and see what happens.” So that's something I have really appreciated seeing and I hope to see more of.
ZACH: Absolutely. I share your enthusiasm about that kind of work, and it has been a really wonderful trend that we've seen across the companies we work with over the course of the past year, and credit to them for being curious, as you say, and for being interested in trying innovative techniques to solve some of their problems. But yeah, it's been a shift toward: let's really get the right people in a room and hack an important problem today, rather than spending three months working on a project. And I would so much rather do the former than the latter.
HUI: Yeah.
ZACH: And I think the reason we're seeing more of this is not only because we love it and we're out there talking to folks about the value of it, but also because compliance is always strapped from a budget perspective, perhaps more so now than ever. And so, the idea that we could really take a day or two to solve an important problem, versus spending months and months working on an extended project, is very appealing, I think, to everyone involved.
HUI: And it certainly is rewarding when you see those solutions being mapped out at the end.
ZACH: Yes, indeed.
HUI: You don't get that in a lot of corporate meetings.
ZACH: We're going to do a whole episode on meetings in short order, so folks can stay tuned for that, because we've had some doozies that are worthy of discussion.
HUI: That's right.
ZACH: But my trend, or observation, or reflection to end with is similar to yours, in that I think the common thread is listening. One of the things that we have done a lot of this year with the companies we work with, and again, credit to them for wanting to do this, for seeing the value in it, and for being excited about it, is not just doing traditional culture surveys or pulse checks or questionnaires, but sending us or others out there to do focus groups, to do observational exercises, to get in a room with people and amplify their voices in a way that you simply can't in a questionnaire or a survey.
And we have done this in the United States with fairly small companies. We have also done this globally with multinational companies with tens of thousands of employees. We've talked to folks about doing this work for a long time, but it really wasn't until the last year that more and more of the companies we work with got comfortable doing it at scale. And I think part of that is because folks really are understanding that culture, ethics, and integrity are complex topics that warrant a more sophisticated approach, a more human touch. It's been really exciting to see that and to be a part of it.
HUI: We had the privilege of doing quite a lot of focus groups this year. We have also trained folks on how to conduct them so that they can do it on their own. One of the things that we always found interesting, and that we emphasize when we train people, is that it's not just about the words people say. So much of a focus group is about observing body language, the kinds of cues participants give out in different ways, toward each other and toward us as facilitators. One focus group that I remember very clearly was conducted online, and maybe seven, not quite ten, minutes in, one of the members privately messaged me and said he noticed that another member was not really doing a lot of speaking, and he asked me to encourage her to speak. I just thought that was such a clear sign of how they relate to each other. These people are not even really colleagues in the normal sense, and they're not on the same team, but here was someone who noticed that somebody was not speaking up and wanted to encourage that.
ZACH: Yeah.
HUI: I have also heard, when someone described a bad experience they had, another colleague say, “I'm really sorry to hear that that has been your experience,” even though his own experience was different. So I have just so appreciated seeing those moments.
ZACH: Yeah. Absolutely. And sometimes you see the opposite, too. Sometimes you see people being a little combative or defensive, and all of those things contribute additional layers to our analysis and our assessment. So yes, it's very much focus groups and the observational benefit of them, and sometimes even purely observational exercises, where it's not a focus group at all, but we just get an opportunity to see how people interact with each other day-to-day.
One of the best memories I have of this year in our work was one of the program assessments we did where we were on site, which is, by the way, kind of our preference: to be able, again, to make something not a three- or four- or six-month project, but to come on site and do it in a much tighter time frame. And the insight that we got there, the one I'm thinking of, just from walking around the office, just from being there and seeing how people treat each other and how they work, was very impactful.
HUI: Yeah, to see the artifacts in the buildings.
ZACH: Yes, absolutely. Well, Hui, it's been a really challenging but exciting year. And thank you all for listening and for sticking with us over the course of the last 12 months. We're not going anywhere. We're going to take a little bit of a break for the holidays, but we'll be back in January with all kinds of other wonderful content and exciting guests and provocative topics. So thank you all for listening.
HUI: Thank you and happy holidays to all.
ZACH: And thank you all for tuning in to The Better Way? Podcast. For more information about this or anything else that’s happening with CDE Advisors, visit our website at www.CDEAdvisors.com, where you can also check out the Better Way blog. And please like and subscribe to this series on Apple Podcasts or Spotify. And, finally, if you have thoughts about what we talked about today, the work we do here at CDE, or just have ideas for Better Ways we should explore, please don’t hesitate to reach out—we’d love to hear from you. Thanks again for listening.