Apr 18, 2024 | 26 min read

Metrics That Drive Performance with Leon Chism

By: Patrick Emmons


Today we're sharing another insightful presentation from our most recent Innovative Executives League Summit, where Leon Chism, the Vice President of Engineering at Evolve, delivered a powerful lesson on collecting critical metrics for organization-wide success.

As an experienced technologist and executive, Leon leads teams toward unparalleled growth and innovation. In this presentation, he explains how collecting metrics that examine speed and quality, paired with human-driven evaluation and consistent reporting, is the key to success.

In this episode, Leon first dives into DORA metrics and the significance of collecting and reporting those figures of speed and quality. He gives an overview of how he customizes the data he collects; in one example, he looks closely at aging reports to determine where work is getting stuck and gains a live perspective on getting those tasks unstuck by allocating more resources. Turning to the last place he recommends introducing metrics, Leon offers a compelling outlook on examining team balance and individual metrics. ("You want to measure the process and not the people.") In further support of optimizing processes rather than people, Leon shares his perspective on leaderboards, comparison, and other human-oriented metric frameworks of note.

In the final segment, Leon answers audience questions ranging from setting WIP limits (they're never low enough) and developer satisfaction to the communication around metrics that creates a shared understanding and identifies the value beyond the data.

  • (02:16) – DORA metrics
  • (07:39) – Aging Report
  • (10:15) – Balance and individual metrics
  • (12:22) – Metrics in the boardroom
  • (13:35) – SPACE Framework
  • (15:45) – Manual metric collection
  • (17:19) – Developer satisfaction
  • (18:48) – Gaming the metrics
  • (20:26) – WIP limits
  • (21:45) – Shared metrics and collaboration
  • (26:00) – Hardware, software, firmware
  • (27:05) – Communicating the metrics
  • (28:26) – Value beyond the data

About Our Guest

Leon Chism is the Vice President of Engineering at Evolve. As an experienced technologist and executive, he has led innovation and technology at Jellyvision, DialogTech, Rewards Network, Analyte Health, PowerReviews, and Orbitz. He earned a bachelor’s degree from the Gies College of Business at the University of Illinois Urbana-Champaign.

Subscribe to Your Favorite Podcast

If you'd like to receive new episodes as they're published, please subscribe to Innovation and the Digital Enterprise in Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts. If you enjoyed this episode, please consider leaving a review in Apple Podcasts. It really helps others find the show.


Podcast episode production by Dante32.

Full Show Transcript 

Patrick: Welcome, pioneers of innovation. This is Patrick Emmons, your guide on this journey of discovery and advancement. In today's episode, we're thrilled to spotlight a gem from our recent Innovative Executives League Summit. For those of you new to the podcast, the Innovative Executives League is a prestigious, invitation-only circle of forward thinkers, entrepreneurs, and changemakers united by their relentless pursuit of innovation.

Born from a vision we had over five years ago, this community aims to weave a tapestry of innovative minds across Chicago and beyond, fostering connections that propel us all forward. Our highlight today is a presentation by none other than Leon Chism. Leon is a standout speaker from our October Summit. Leon's session, Metrics that Drive Performance for CEOs and CTOs, left us all with invaluable insights and we're excited to share them with you here.

But first, let me paint you a picture of Leon. Imagine a technology leader whose very essence is innovation. Leon isn't just a holder of patents. He's a visionary who breathes life into teams, steering them toward strategic innovation and unparalleled growth. He's the mastermind behind AI-driven products that not only secure patents but redefine market landscapes, ensuring profitability and stellar customer experiences.

Leon's expertise spans from enhancing organizational agility, to ensuring compliance with the most stringent security standards. He's a business sage guiding firms to successful exits and IPOs, all while championing Agile and Scrum to streamline product development. Most recently, Leon embarked on a new adventure as the VP of engineering at Evolve. Today, we're diving deep into Leon's presentation.

To ensure you get the full experience, we've even rerecorded the audience Q&A for maximum clarity. Buckle up. Let's dive into an episode filled with groundbreaking ideas and the wisdom of one of the sharpest minds in our community. Here we go.

Leon: So there are four metrics that are the big four from the DORA organization: two focus on speed, two focus on quality. The two that focus on speed are about change lead time and deployment frequency. A lot of the principles behind the DORA metrics come from lean manufacturing and the Toyota process. It's all about getting things through the process from, "Hey, this is a good idea," to it's in front of our customers and they're using it.

Or better, it's in front of our customers and they're paying for it, and minimizing that time lag. Change lead time runs from the moment it's decided, "Hey, this code is good. We should have it in production," to the moment it's actually in production. You want that time to be as short as possible. You want to reduce the time that inventory sits before it's in front of your users. The second is deployment frequency.

This is really a proxy for small batch sizes. The Toyota process and the lean process tells us that small batches move more quickly, they move more smoothly, they move more reliably, and they have fewer errors. Deployment frequency is really the best proxy we have for measuring batch size. So as deployment frequency goes up, batch size is going down, things are going better.

Then the check against that, to make sure we're not just throwing crap into production, are these two quality metrics. One is change failure rate: the percentage of changes you make in production that either need to be rolled back or need subsequent changes to come after them to remediate an issue. This isn't just code deployments; it's firewall configuration changes, network configuration changes, database schema changes.

Then the last is mean time to resolution. When you have a production problem, how long does it take to go from when the problem started to when service is fully restored? I think what's really interesting about the DORA research and the big four metrics is that those metrics, the speed and the quality, tend to move in the same direction. They're not moving against each other.

If you're old like I am, for decades we were taught that quality and speed are trade-offs. If you go fast, you're going to release low-quality software, and to release high-quality software you had to go slow. That's not true. The DORA metrics prove it, and they prove it year after year after year, that the companies and the teams that are performing the best with speed, are also the teams that are performing the best in terms of quality.

It turns out that with modern development practices, with the tooling that we have, with the focus on automation, the things you do to speed up and automate are also the things you do to improve quality. This is a grab from the 2022 State of DevOps Report, which caused its own little bit of controversy. It was the first time since they started doing the report that the self-reported numbers actually got worse year over year.

Everyone's waited with bated breath for the 2023 report to see if we're back on the right track or not. But you can see high-performing teams are releasing on demand, their lead time for changes is between a day and a week, they take less than a day to restore service, and their change failure rate is between 0% and 15%. Those are the targets, and you can see the low and the medium teams are obviously doing worse than that in every category, and you can see slower is worse.
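As a purely illustrative aside, here is a minimal Python sketch of how the four figures Leon describes might be computed from your own delivery data. The record shapes and field names (merged_at, deployed_at, failed, and so on) are hypothetical; real tooling such as Jira, GitHub, or GitLab exposes these events in its own formats.

```python
from datetime import datetime
from statistics import median

# Hypothetical event records; your deployment pipeline and incident
# tracker would supply these in their own shapes.
deploys = [
    {"merged_at": datetime(2024, 4, 1, 9), "deployed_at": datetime(2024, 4, 1, 11), "failed": False},
    {"merged_at": datetime(2024, 4, 2, 14), "deployed_at": datetime(2024, 4, 3, 10), "failed": True},
]
incidents = [
    {"started_at": datetime(2024, 4, 3, 10), "resolved_at": datetime(2024, 4, 3, 12)},
]

def dora_summary(deploys, incidents, period_days=30):
    lead_hours = [(d["deployed_at"] - d["merged_at"]).total_seconds() / 3600 for d in deploys]
    restore_hours = [(i["resolved_at"] - i["started_at"]).total_seconds() / 3600 for i in incidents]
    return {
        # Speed: how often we ship, and how long a finished change waits to ship.
        "deploys_per_day": len(deploys) / period_days,
        "median_change_lead_time_hours": median(lead_hours),
        # Quality: how often a change needs remediation, and how long
        # it takes to restore service when something breaks.
        "change_failure_rate": sum(d["failed"] for d in deploys) / len(deploys),
        "mean_time_to_resolution_hours": sum(restore_hours) / len(restore_hours),
    }

print(dora_summary(deploys, incidents))
```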

In any environment, I would always start with the DORA metrics because they're benchmarked, because it's a great way to have a conversation about how your team compares overall to how it should be performing. In most environments I've worked in, it's not enough. It's necessary, but it's not sufficient. I've always extended those DORA metrics with other metrics that work at an organization-wide level, and we report these out.

We would report the DORA metrics out every month, and these metrics would go with them every month. System uptime, pretty self-explanatory. Escaped defects, the number of times we found a bug in production that wasn't knowingly released into production. If you find a bug in staging and you decide to release the software anyway, that's not an escaped defect. But finding it in production, unintentionally, that's escaped. These next two were interesting evolutions.

At the last place I was at, we tracked systems not deployed within a certain period of time. Agile tells us if something stinks, do it more frequently and you get better at it. We started intentionally tracking things that we hadn't released very frequently and didn't need to release, which was maybe okay. But we knew anything that hadn't been released in a quarter or six months or a year was going to be a problem when we went to release it.

Something was going to go wrong and we'd need to allocate extra time for that, so we started tracking those. Then after-hours deployments were something we were using to track a lack of confidence in the system. For most of our systems, we would deploy any time of the day, any day of the week. There were a few systems, a few kinds of changes, that we weren't as confident about.

Tracking how often we stumbled across those let us know where there was a problem in our automation, where there was a gap in our testing process. Work categorization is a little bit different; this was used mostly in the conversations between product and technology for understanding where we were using our software engineering capacity. Were we using it for innovation, for driving new products and new features? Or were we using it for maintenance and security and compliance?

So understanding that balance mattered; we as a management team had targets we were trying to hit, and tracking that was important for us. Inevitably, in your environment you'll find things like this. Once you start tracking DORA metrics, you'll find other areas that you want to poke your head into and understand better. So extending DORA, I think, is a good idea once you've got those habits in place. DORA is great as a speedometer and warning lights on a dashboard.

It doesn't really tell you how to go faster, and it doesn't really tell you why the "service engine soon" light is on. For that, you need a different set of metrics, and there's a whole set of tools; it's a pretty robust tools marketplace at this point for pulling these metrics out. These are the ones that my teams have found most effective. These were metrics that we would use in our daily stand-up meetings.

We'd use them in one-on-ones with individual contributors. The first is an aging report, which shows how long each ticket in the current sprint has spent in its current state. Tickets are moving from left to right, from in development to deployed. For the teams, it really gave a live, during-the-sprint perspective on where stories were spending more time than we thought they should in a particular state.

For us, inevitably it was ready for test and in code review. Those were always where we were losing time. Queues are terrible in any kind of system, and lean certainly highlights that. For us, this aging report was a great way to understand how many stories were in each state, and where a story was getting stuck so we could get it unstuck. The team could do whatever they needed to do to find the right resources.
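For readers who want to see what that looks like in practice, here is a minimal sketch of an aging report, assuming you can export each ticket's current status and the time it entered that status. The field names are made up; tools like Jira or ActionableAgile provide this data in their own formats.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical board export: ticket key, current column, and when it entered that column.
tickets = [
    {"key": "APP-101", "status": "In Development", "entered_at": datetime(2024, 4, 15, tzinfo=timezone.utc)},
    {"key": "APP-102", "status": "Ready for Test", "entered_at": datetime(2024, 4, 12, tzinfo=timezone.utc)},
    {"key": "APP-103", "status": "In Code Review", "entered_at": datetime(2024, 4, 11, tzinfo=timezone.utc)},
]

def aging_report(tickets, now=None):
    """Group tickets by status and report how many days each has sat there."""
    now = now or datetime.now(timezone.utc)
    by_status = defaultdict(list)
    for t in tickets:
        by_status[t["status"]].append((t["key"], (now - t["entered_at"]).days))
    return by_status

for status, items in aging_report(tickets).items():
    print(f"{status}: " + ", ".join(f"{key} ({age}d)" for key, age in items))
```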

For PRs, for pull requests and really code reviews, time to first review and the size and complexity of the commit were the two metrics we were tracking. Again, small batch sizes move quickly. We want small, uncomplicated pull requests going through. It turns out that when an engineer who's doing their own thing sees a pull request that's hundreds of lines of code spread across 10 files, they're not really motivated to go and pick up that pull request and start doing the code review. Small code reviews get attention more quickly. They move through the system more quickly, and they get in front of your users more quickly.

QA is another place where we lost a lot of time. Normally, for us it wasn't in the automated QA, it was in the manual QA process. In those environments, for pull requests and for QA, often the solution is lowering a WIP limit.
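Before getting to WIP limits, here is a small sketch of those two pull-request metrics, time to first review and a crude size proxy, assuming you have already exported PR records from your source-control tool; the record shape here is invented.

```python
from datetime import datetime

# Hypothetical PR export; GitHub, GitLab, or a metrics tool would provide the real data.
pull_requests = [
    {"id": 41, "opened_at": datetime(2024, 4, 10, 9), "first_review_at": datetime(2024, 4, 10, 11),
     "lines_changed": 85, "files_changed": 3},
    {"id": 42, "opened_at": datetime(2024, 4, 10, 13), "first_review_at": datetime(2024, 4, 12, 16),
     "lines_changed": 640, "files_changed": 12},
]

for pr in pull_requests:
    hours_to_first_review = (pr["first_review_at"] - pr["opened_at"]).total_seconds() / 3600
    # Raw line and file counts are a crude size proxy; commercial tools weigh
    # churn, spread, and complexity rather than lines alone.
    print(f"PR {pr['id']}: first review after {hours_to_first_review:.1f}h, "
          f"{pr['lines_changed']} lines across {pr['files_changed']} files")
```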

We'd have a certain number of developers on a team, and we'd set a limit that we couldn't have more cases in ready for test than we had testers. You couldn't have more cases waiting for code review than you had code reviewers. That tended to put a stop to things really quickly, and it got people reoriented: "This isn't about me getting my work done. This is about the team getting the team's work done."
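To make the WIP-limit idea concrete, here is a tiny sketch that flags columns over their limit; the column names and limits are illustrative, not a prescription.

```python
# Illustrative board snapshot: stories currently in each column,
# and the limit the team agreed to for that column.
board = {
    "In Development": {"count": 4, "limit": 4},
    "Waiting for Code Review": {"count": 3, "limit": 2},  # roughly the number of reviewers
    "Ready for Test": {"count": 5, "limit": 2},           # roughly the number of testers
}

for column, state in board.items():
    if state["count"] > state["limit"]:
        # Signal for stand-up: swarm on this column before pulling new work.
        print(f"WIP limit exceeded in '{column}': {state['count']} > {state['limit']}")
```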

In a lot of environments without that kind of focus, without that mind shift, it becomes easy to let things sit. These metrics really surface that that's a problem and help teams get reoriented. This last one, balance: we actually didn't use team-level metrics for this. We used individual-level metrics. For individual metrics, we would track the number of days per week each developer was committing.

We would track the number of commits they were making per day. We would track the impact of those commits. A lot of the tools that operate in this space have some metric that they've created for themselves. It's proprietary; they'll give you some idea, but not a lot of detail, about how they calculate it to assess, in essence, how difficult a code change this was. It's not just lines of code.

There are lots of factors they normally pull into it, but it's true that big-impact code changes tend to move more slowly. They're harder to build, they're harder to build reliably, and they move through the system more slowly. That was something else we tried to keep an eye on, and then we kept an eye on each person's review activity. Going back to balance.

The reason it was important is we'd find teams where we'd have senior engineers and junior engineers, and the senior engineers were de facto the only ones doing reviews. Or there would be parts of the system that no one would want to even write the code, if it weren't that particular person on that particular team. Looking at the balance across the team, really would highlight those problems and help us figure out where we needed to recalibrate a team or reset expectations about code reviews.

Or who's working in particular parts of the system, to make sure that things were spread more evenly. I would say, on the topic of individual metrics, this is the last place to put metrics in your program. I would start with organization-wide metrics, and I would start with metrics that measure the process and not the people, because that's where you want to optimize. You want to optimize the process in service of the people.

You don't want to try and optimize people. When you've got the organization level and the team-level metrics in place, we found value in individual metrics. We found it a great thing for managers to bring into one-on-one meetings to inform the conversations with folks. We never had leaderboards. We never tried to compare developer A and developer B. We never tried to compare team A and team B. I don't think those are reasonable.
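As a rough illustration of the balance check described above, here is a sketch that tallies how code reviews are distributed across a team, shown as a distribution rather than a leaderboard. The review log is hypothetical, and the 50% threshold is just an example.

```python
from collections import Counter

# Hypothetical review log: one entry per completed code review, by reviewer.
reviews = ["alice", "alice", "alice", "bob", "alice", "carol", "alice", "bob"]

counts = Counter(reviews)
total = sum(counts.values())
for reviewer, n in counts.most_common():
    share = n / total
    flag = "  <- carrying an outsized share" if share > 0.5 else ""
    print(f"{reviewer}: {n} reviews ({share:.0%}){flag}")
```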

Every position on the team is a little bit different. What we really care about is the team performance, not the individual engineer's performance. Then each of these metrics has its own place where it fits. In the boardroom and at all-company meetings, we always talked about DORA. We would share those company-wide every month with an explanation. Normally, the dashboard would take about five lines of the email.

There would be another page and a half of explanation: "Why did the numbers move the way they did? What are we doing about it? Is it good? Is it bad?" Constantly, a reminder about why we should care about this as a tech team, and why we should care about it as a company. We used the team-level metrics in the sprint reviews and in daily stand-up meetings. They were great for shaking things loose and making sure that stories were moving through the process quickly.

We'd use the team metrics and we would set OKRs for them, or whatever your goal-setting tool du jour is at your company. We would have the teams set for themselves improvements in their core DORA metrics, whichever ones were lagging, whichever ones they thought would have the biggest impact for them. It was a quarterly thing for us to make sure that those numbers were moving, and moving in the right direction.

Then we used these metrics for one-on-ones at every level, from my one-on-ones with my boss all the way down to one-on-ones with individual contributors. Whatever the right metrics were, we would use them to inform those conversations. Since the release of DORA and the popularization of that approach to metrics, some of the folks behind it, most notably Nicole Forsgren, have gone on to create another framework.
She calls it the SPACE framework, which is an acronym for satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow. I think the thing they were worried about is an overreliance on the automated metrics that DORA encourages, and leaving behind some of the more human side of productivity and the more human side of process and optimization, which really comes back into play here, most notably with satisfaction.

This is a chart showing three different levels, individual, team, and system, across the five different facets of SPACE, with different metrics that might plug in to cover this grid of 15. They don't recommend tracking all 15. No one would suggest that, no one would do that, but the recommendation is to cover three of these categories. If you look closely, you'll see that DORA metrics really fit in activity and performance.

If you look really closely, you'll notice that code review shows up, I believe, in every single square on here in one form or another, which is really there to prove the point that, depending on what you're measuring about code reviews, they can satisfy all five of these categories. In the environments where I've worked where there was a lack of trust, the automated metrics have really been the focus.

I think if I have a regret about how I've implemented metrics programs, it's not moving quickly enough to getting developer satisfaction surveys and thinking about developer experience explicitly. It was happening, it wasn't getting the same level of support from me and from my leadership team, and I think that was a mistake. I think we should have gone there sooner.

Nicole has been on record, having been part of the DORA organization and then part of Google, as saying that at Google, when they found that the systematically collected metrics and the survey metrics disagreed, the survey metrics always turned out to be correct, which I think is an interesting finding.
It's not really where I'd start, but I think it's a great place to evolve to. I realize I've been prattling on. If there's questions, please ask.


Patrick: I was curious about gathering some of these metrics. I've seen some companies do it more manually. Do you have a recommended tool that collects most of these metrics?

Leon: That's a really interesting question. We were collecting them manually. When we started this, we either weren't aware that our tools could do this automatically, which is probably the case, or we weren't sure we were going to really build this into a habit. So we were doing it manually, and unfortunately, it backed us into a corner. Because what was important for us was month-over-month and quarter-over-quarter consistency in what we were collecting.

When we realized that we had tools in place that could automatically collect them, if we moved to that, then basically we would be blowing away, at the time, I think four or five months of collected data, because the definitions were a little bit different. I think when we finally looked, we realized that maybe four or five different tools we were already using either claimed to support or actually supported collecting DORA metrics.

I would highly recommend picking one of them and living with their definition of what these terms mean. Jira does some of it. I know GitLab does, GitHub probably does. We are using a tool called ActionableAgile, which has a lot of it as well. I think what's important is consistency. I'm not sure it matters tool A versus tool B. When you get into the developer metrics, it's a different question.

I've used GitPrime, which is now Pluralsight Flow. I've used Code Climate Velocity. There's other tools that play in that space. There's an open-source project called DevLake that's just getting spooled up. I liked Pluralsight Flow. I really like the experience of using Code Climate Velocity.

Patrick: I've had challenges getting management to act on developer satisfaction. How have you had success collecting that data and getting actionable, useful information?

Leon: Yeah. We were getting the data really informally, and I think to our detriment, we weren't being very mindful about what questions we were asking, how we were asking them, and how we were collating that data. We had created a developer experience task force. It wasn't a team; it was a task force that people were, in essence, volunteering their time to, and they were proposing the projects.

Then when it came around to proposing time with product to get those projects on the roadmap, we were actually using how those efforts would impact DORA metrics as the way to argue for getting time to do those projects. One of the more surprising things about this is we, on our own, decided to start collecting and reporting DORA metrics. We started sending this out to the whole company.

It very quickly just became accepted practice that this is how we measure technical team productivity, and improving these numbers is an important goal. Just by doing that, it actually gave us a good way to have conversations about why does it matter if we use this tool or that tool? Or why do we need to remove this step from our process?

Patrick: Leon, what is the name of the book you mentioned?

Leon: Accelerate. It's got a much longer, actual name, but the shortened name is Accelerate. Nicole Forsgren, Jez Humble and Gene Kim are the three authors.

Patrick: Won't having this many metrics also cause bias in an unhealthy environment? How do you strike a balance?

Leon: When you say bias, you're talking about people gaming the metrics? That's a great question. Where I've implemented metrics, I've not seen teams game the system. I've had people threaten to game the system. Specifically, in one of the environments I was in, I very stupidly (don't do this) started with individual-level metrics before we started sharing. We had already been collecting DORA metrics.

We weren't sharing them, so people thought we were starting with developer-level metrics. They were worried about leaderboards, and some of our engineers very quickly said, "Well, I'll write a script and I'll be committing 1,000 times a day." Luckily, they didn't do it, and I have not seen it happen.

I really think, and again, this could be a dumb thing, but I encourage the teams to game these metrics. I think if there's a spectrum of too little and too much when it comes to lean, most people think they're way closer to too much than they actually are. Think about the things you would have to do to game a metric like deployment frequency: what you'd have to automate in terms of test environments, in terms of test automation, in terms of automated deployment. Please, do it.

If you get to the point where you could release 1,000 times a day, great. Then we'll go back and we'll change the metric. But the tooling you would've had to create to get there, we're all benefiting from. Mean time to resolution is really tough to game. I guess the PR size could be gamed. We try to link PRs to stories. If you wanted to do a release, it needed to be linked to a story.

There's no quicker way to get a developer to stop something than have them go and write a Jira story to do it.

Patrick: You mentioned having numerous stories in a specific lane, and that when there's no more room left in the QA lane, engineers end up handling QA tasks. Was it expected for the engineers to assume QA responsibilities in that scenario?

Leon: Exactly right. When we put in WIP limits (and another truth I've learned from this is that WIP limits are never low enough), we had a team that went from four to two, and their throughput went up when they did it. They were shocked. The WIP limit was lower than the number of developers they had. What happens in those cases is you can't move your story to the next thing. You need to do something to go and unblock that.

Sometimes it was engineers going and helping with manual testing. But more often, it was engineers thinking about, "How can I use my skills as a software developer, or as an SRE engineer, or whatever my role is, to make the people who are doing this more effective and more efficient?" So one environment went from having one staging environment to having 10 staging environments.

Then the next step beyond that, when the WIP limit was hit again, was having test environments that were spun up on demand, so that multiple test lanes could be happening at the same time. The engineers think, "Well, wait, what if I can't do my job? I can't do my job if I can't write code." Like, "No, your job isn't to write code. Your job is to help our customers. Right now, the way you help our customers is by helping QA."

Patrick: Can you explain and give examples of shared metrics within product teams and organizations?

Leon: These metrics, when we think about a team, despite how their org structure may work and who reports to who, to us, a functioning team is product managers, software engineers, QA and operations. Anything less than that isn't actually a team, because they don't have all the resources they need to think of a good idea and get it in front of our customers. These metrics were the responsibility of all of those people.

We didn't want to get into a situation where product had one set of metrics, one set of KPIs, and one set of OKRs, and engineering had a different set. Because at the end of the day, we've got one team with one roadmap. If you've got one team, you've got one roadmap. We can argue about what should be on that roadmap and how much work should go toward improving the efficiency of the process, but that's really where that conversation needs to happen.

By having the whole team responsible for those metrics and moving them in the right direction, everyone got on board very quickly. And product was able to see that, "Oh wait, if we speed up this kind of development, maybe I don't see that as a loss of an engineer for two weeks to go and improve this thing. I see this as I get that time back in the next month."

Then after that, in essence, it's pure profit in terms of time. It was easy to have those conversations once we had these metrics rolled out and installed.

Patrick: How did the team collaboration begin? What steps were taken to establish coaching that extends beyond just the development team? Additionally, how did you ensure that metrics were applied across the entire team?

Leon: So the teams had been nominally operating that way when I got there, with "nominally" doing a lot of work in that sentence. They reported their results as a team. They were all in the room when we would do sprint reviews. They were all in the room when we were doing show-and-tells or whatever those mechanisms were. But ultimately, at the end of the day, product actually did have different goals than the technology team.

Or sometimes product had goals and the technology team's goals were just the product team's goals. When we started this process unilaterally within technology, collecting these metrics and promoting them, that quickly got the product managers on board with seeing that the team is responsible for these, and that they're responsible for them as well. It took a lot of collaboration with the product management leadership.
If product management leadership isn't on board, it's going to be tough to get buy-in. But with them on board, it was easy to get, at every level of the organization, to get buy-in from the individual contributors all the way up. These were metrics that mattered and collaborating on moving them in the right direction was good for them and was good for the whole company.

Patrick: You touched on aspects of what I'm seeking, but focusing more on the larger organization rather than individuals or high-level perspectives. How would you advise organizations with mixed methodologies at the team level? For instance, teams operating with Kanban alongside others utilizing different agile methodologies, all aiming for rapid, incremental progress. How should metrics be approached in such a context?

Leon: These metrics actually work equally well for sprinting teams, Kanban teams, or teams doing that hybrid in between. Actually, in these environments, teams were in all three of those states, but they were all reporting these metrics. We let the teams decide, and a few of them switched back and forth. They thought the sprints were getting a little too staid, so they tried Kanban.

Kanban didn't let them have enough visibility into what they were committing to, so they went back. They were always on the hook for these metrics; that didn't change. For the people who were on Kanban, we just nominally said, "We're going to talk about it in a two-week cycle." You're still not doing the sprint planning.

There's no sprint retrospective, but we still split it up into two-week chunks to do those retrospectives, to make sure that we were reflecting on how the stories were moving through the process. I put the requirement on the teams that this needed to be reported every week and every month. They could decide how they wanted to organize themselves, sprint or Kanban, but these metrics needed to be reported.

Patrick: How do you envision applying these metrics for product teams operating in an embedded environment where software, hardware and firmware are all integrated?

Leon: Yeah, I don't have an informed opinion on that. This is an environment I've not worked in. I recognize that the challenges there are very, very different. I know teams are doing it. I've not been a part of it. Frankly, I have to go and poke around and look it up, and figure out how teams are doing it. I know that's an environment in which some of these ideas get bogged down.

I guess maybe the question would be: can you bring the hardware into a virtual environment? Do you need to go to hardware to test the software that's embedded, or is there an emulator that you could test in? Because if there is, maybe you're using, and it's a little bit dangerous, a different definition of done.

It's not necessarily running on the embedded hardware, but done means working in the emulator, if you're highly confident that the emulator then projects onto the hardware successfully. I know also in highly compliant environments, some of these things can be a challenge. There's a different set of research around how to do this in a PCI environment, but I know it can be done.

Patrick: Can you expand on point number one? Why is culture important?

Leon: Yeah. When rolling these out, I think it's important to really focus on how you communicate the metrics and on getting buy-in that the metrics matter. What I found is that around the time I was getting sick of repeating, over and over and over, why it's these four metrics, why they predict tech team success, why they predict company success, that was when the message was starting to stick with people.

Before then, there were lots of people who would say, "I'm five months into this process. Why do we measure deployment frequency again? Why does that matter? I don't think that matters." I think getting people on board that lean is a reasonable way to think about the software development process.

Getting people to see that code in process is inventory, and it needs to get out the door quickly, is an important part of getting buy-in for implementing these metrics and getting a team oriented around them. I think it also works the other way: the metrics become a great way to drive that culture. But you need some sort of shared understanding about what we care about in terms of software engineering.

What we care about in terms of product development, to even start this process. That's really where it started for me: having a conversation with the C-suite about lean practices and why, in essence, inventory in the form of work in process is a bad idea.

Patrick: I thought data-driven was a good thing. Could you share why it might have negative aspects as well?

Leon: I think it depends. It depends on what you're doing with the data. If you go back to that McKinsey article, it would be very easy to make the jump that you take your bottom 10% of your engineers who have the least commits and show them the door. But what if those are your senior engineers who are doing all of the code reviewing? What if they're doing all the mentoring?

There's lots of ways that people add value. There's lots of good reasons why an engineer who normally commits five days a week and 20 times a day, didn't commit at all last week. You may forget that you assigned them to go and rewrite all the documentation for the API. You may forget that you had them go and investigate new APIs for product X, Y or Z.

For us, the data was the start of a story. "Okay, our numbers moved from this to this. Why?" I would pull the DORA metrics at the organization level and the team level the week before they went out to the whole company. I would just send them to my leadership team: "Here are your numbers, please tell me a story." It took a few cycles for them to figure out what needed explanation and what level of detail we needed to get to.

But it always came with a story. And whenever I sent the numbers out beyond the tech team, they always went with a story. Because I don't think it's necessarily easy for folks, especially folks who are five months in and don't know why we're tracking deployment frequency, to interpret what it means that the number went up by 10% or down by 10%. So it always came with a story.

Patrick: Leon had a phenomenal conversation with our audience at the summit, and he had some really great insights. I think it's safe to say everyone who attended learned something new. Thank you, Leon, for joining us.

If you'd like to receive new podcasts as they're published, you can subscribe by visiting our website at dragonspears.com/podcast, or find us on Apple Podcasts, Spotify or wherever you get your podcasts. This episode was sponsored by DragonSpears and produced by Dante32.

About Patrick Emmons

If you can’t appreciate a good sports analogy, movie quote, or military reference, you may not want to work with him, but if you value honesty, integrity, and commitment to improvement, Patrick can certainly help take your business or your career to the next level. “Good enough,” is simply not in his vernacular. Pat’s passion is for relentlessly pushing himself and others to achieve full potential. Patrick Emmons is a graduate of St. Norbert College with a Bachelor of Science degree in Computer Science and Mathematics. Patrick co-founded Adage Technologies in 2001 and in 2015, founded DragonSpears as a spin-off dedicated to developing custom applications that improve speed, compliance and scalability of clients’ internal and customer-facing workflow processes. When he is not learning about new technology, running a better business, or becoming a stronger leader, he can be found coaching his kids’ (FIVE of them) baseball and lacrosse teams and praising his ever-so-patient wife for all her support.

Recent Episodes

We interview leaders from early-stage start-ups to billion-dollar enterprises who distill their lessons from their victories and their failures. Learn how these high-performing leaders organize their teams, establish a growth-minded culture, and leverage new technologies such as DevOps and Cloud.