The Agile Within

When Metrics Misfire with Mark Paul-Sobral

Mark Metze Episode 83

Ever wondered why your Agile metrics might not be delivering the insights you need? Join us as we sit down with Mark Paul-Sobral, an experienced Scrum Master from Sheffield, England, to uncover the hidden pitfalls of over-relying on metrics in Agile practices. Mark’s unique perspective, blending his background in illustration with Agile methodologies, sheds light on the importance of visual thinking and the dangers of letting metrics turn into targets.

Struggling to find the right balance with your project metrics? In this episode, we explore how to effectively combine quantitative and qualitative measures to avoid common traps. With a compelling real-life example from an enterprise-level software project, we illustrate the benefits of balancing velocity with well-defined sprint goals and user journey maps. Learn how pre-mortems can help identify potential issues early, fostering a learning environment within your Agile teams and creating a more holistic view of progress.

Proper documentation can make or break large-scale initiatives. We delve into the critical role of documentation in project success, emphasizing the need for balance between performance and clear, effective documentation. Hear about the causal relationship between documentation creation and usage, and how to avoid the pitfalls of excessive, low-quality documentation. Finally, discover how automating metrics and simplifying data collection can streamline your processes, ensuring that your documentation efforts genuinely contribute to project goals and team efficiency. Don't miss out on these invaluable insights – connect with us on LinkedIn for further discussions!

Connect with Mark on LinkedIn:
https://www.linkedin.com/in/markpaulsobral/

Support the show


Follow us on LinkedIn:
https://www.linkedin.com/company/the-agile-within

Mark Metze:

Welcome to the Agile Within. I am your host, Mark Metze. My mission for this podcast is to provide Agile insights into human values and behaviors through genuine connections. My guests and I will share real-life stories from our Agile journeys, triumphs, blunders and everything in between, as well as the lessons that we have learned. So get pumped, get rocking. The Agile Within starts now. Well, I hope you're having a great day. This is Mark Metze, the host of the Agile Within. I have a guest from Sheffield, England, today, by the name of Mark Paul-Sobral. So, Mark, welcome to the Agile Within.

Mark Paul-Sobral:

Thank you very much, Mark. It's a pleasure to be here, pleasure to be chatting with you.

Mark Metze:

So Mark and I linked up quite a few years ago, I would say at least two, maybe three years ago, on a meetup, I think, and have stayed in touch. So I reached out to him and he was gracious enough to agree to be a guest here on the show. Mark, why don't you tell us a little bit about yourself?

Mark Paul-Sobral:

Yeah, I dare say it might even be four years. It feels like it was almost pre-COVID, right? So for a long time we've been catching up periodically, from time to time checking in on each other. It's been great. So I'm a Scrum Master. As Mark said, I'm UK-based. I've been a Scrum Master for a little over seven years now. And unlike the traditional routes a lot of Scrum Masters have come from, where they might be an ex-BA or product owner or ex-developer, I actually have a background in illustration, which is quite different to the topic that we've got here today. But yeah, I'm someone who really enjoys the human elements of Scrum and Agile, but also the visual elements of it. Visual thinking and Agile is something I'm very passionate about, graphic recording and that kind of thing, and I think that can tie in a little bit to visualizing data as well. But that's a little bit about me.

Mark Metze:

That's a great viewpoint to have. So, Mark, you're from Sheffield. If I were coming to Sheffield for a day and I'd never been there (which I haven't, but hopefully I will get there one day), what's one thing that you would say I couldn't miss doing?

Mark Paul-Sobral:

Funnily enough, I think what I'd probably say is you can get to Sheffield and then get out of it very quickly, and not because Sheffield isn't a great place to visit. It's known as the outdoor city. It's got loads of things to do, a fantastic cafe scene, which I also recommend exploring. But we also live about a 20-minute drive from the Peak District National Park here in England, and it is a gorgeous place to visit.

Mark Paul-Sobral:

It's got loads of nice scenery. You can go and catch some lovely sunsets over the peaks and the edges that it provides. If you're into trail running, like me, you can go and hit the trails there. You can go mountain biking, hiking or, if you want something a little bit more accessible, they've got loads of well carved-out trails and paths that you can take the family to and enjoy. So that's probably what I would say: come to Sheffield, enjoy the cafe scene and then go check out the Peak District National Park.

Mark Metze:

Very enticing. So the title of today's episode is "When Metrics Misfire". Mark, tell us what you mean by that.

Mark Paul-Sobral:

So "when metrics misfire", I think, speaks to a little bit of that shift that Scrum Masters and people in the Agile landscape have been seeing. So I don't know if it's the same over on your side of the pond, but there's been a bit of a shift away from Scrum Masters and that kind of coaching and introduction of Agile and the core concepts, and a little bit more towards roles that are being defined, like a delivery manager, for example, where it's much more about the delivery of, say, initiatives or projects, and that kind of changes the landscape a little bit for people like me, Scrum Masters, into being a little bit more data-driven. Now, to answer your question a little bit better, I would say Goodhart's law represents that harm, when metrics can, well, misfire. Goodhart's law, the paraphrased version of it, is "once a measure becomes a target, it ceases to be a good measure", and I think the true quote actually captures that misfire a little bit better. It says "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes", and I think the key words that I would take from that quote are "pressure" and "for control purposes".

Mark Paul-Sobral:

So when we take a measure, and as an example I'd think of something similar to throughput, it can really serve to check performance within the progress of a system, but not necessarily the wider context: for example, outcomes. And quite often metrics can be quite enticing to simplify, because the wider context can become a bit of, I suppose, a Pandora's box, and therefore there's that kind of nature to regress a little bit and try and simplify the problem and pick one particular metric. So that might be something like what points towards when an initiative might be delivered, and that on its own can provide us with an idea of, okay, when will any kind of work set be completed, but not necessarily when outcomes can be achieved.

Mark Metze:

Yeah. So what I'm hearing is we can lose sight of the goal. We get so obsessed with meeting the metric that we really are forgetting what our end goal is. And I know not all of our listeners out there are sports enthusiasts, but at the time of this recording we're right in the middle of the Olympics, so what better time to introduce a sports metaphor, right?

Mark Metze:

But let's say, for the game of basketball (hopefully everybody is at least a little bit familiar with basketball), when a shot is missed and you go up and grab the ball, that's called a rebound, right? And so if you were to incentivize a certain player, because maybe they're taller, maybe they're stronger, for getting more rebounds, then they just get obsessed with that; that's all they want to do. If that's how they're incentivized, they take the ball away from their own teammates, or they prevent their own teammates from getting rebounds and it goes to the other team, because that's what they are focused on.

Mark Metze:

Is that metric of getting more rebounds really what the team is after in the Olympics? No, they're after a gold medal, right? Maybe this person's primary goal is to get more rebounds, and that is to help the team. But look at the big picture. Keep the big picture in mind. Don't be so focused on this individual metric that you put your larger goal in jeopardy. That's where I'm taking this Goodhart's Law.

Mark Paul-Sobral:

Absolutely, and I think that's a good example. It's very topical of the time as well. And if we look at that rebound, a lot of people would put that exertive pressure on it to make it a goal. So they'll say, okay, what we're noticing is, if we get rebounds, we're going to get gold medals, because we're going to win more games, right? So that constant analysis is something that really helps everybody: to always be analyzing, is this really serving towards that ultimate goal, or is it more causal? And I think, almost as a tie-in to Goodhart's law, and quite adjacent to the example that you've given, you've got the cobra effect, and you've got the anecdote of the cobra infestation in Delhi and how the Raj government in India at the time, based on that anecdote, introduced a bounty system. The bounty system dictated that if members of the public were able to bring dead cobras to the government, they'd get a financial reward, and that would be great, and that would solve the infestation.

Mark Paul-Sobral:

But what did we see instead? Instead, what we saw was people started to notice that financial reward and, not really bearing in mind other impacts, just started to breed cobras, artificially changing the landscape in order to get a financial reward. Ultimately, in the anecdote, the British Raj caught on to that and cancelled the bounty system, and the people, disheartened, released all these bred cobras back out into the wild, not only drawing the infestation back to square one but even exacerbating the problem and making it worse. So I suppose what we can draw from that is that sometimes something like a rebound in basketball can be a proxy measurement that incentivizes people towards the goal, but it's not the actual goal itself. My question then becomes: what should you incentivize, if not rebounding? And should rebounds still be incentivized, but contextualized with other things?

Mark Metze:

So that's a really good point. Before we get to that, I do have to say the Cobra Effect story is so disturbing to me on so many levels, as someone who is deathly afraid of snakes. I love animals, I am an animal lover, I have dogs, I have cats, but just from an early age I have been absolutely petrified of snakes. So the thought of breeding cobras, and then having cobras populate even further in your community... I'm starting to sweat now, Mark.

Mark Paul-Sobral:

I know, I know exactly what you mean. I think we can chalk that up to another thing that we have in common, because I am also definitely terrified of snakes. And I actually grew up in Brazil, and over there you have snakes everywhere, so you would have thought I'd be just as accustomed to them as these anecdotal people breeding cobras, but no, I am terrified of anything that slithers.

Mark Metze:

Talk to us a little bit, then, about how you combine these metrics so that you aren't focusing on the wrong metrics or incentivizing the wrong things. Do you have any examples? Because I imagine our listeners are thinking, well, this is great that we're talking about basketball, we're talking about breeding cobras, but I don't know anything about either one of those. Can you help me out with something more tangible that I can actually use in my real life? Do you have a real-life example, from your lived experiences and those who influenced your career, that you could expand upon?

Mark Paul-Sobral:

But what I will say is, I mentioned velocity already. I think one of the scenarios where I've experienced Goodhart's law at play is working in an enterprise-level business, where you might have large-scale projects, projects that run for a long time and therefore need the influence of not one but maybe multiple teams as well. You can create a very big problem in the sense that you've got collaboration, you've got multiple minds injecting ideas into a project. Now that's all well and fine, but what you might find is things get lost along the way, especially if you're trying to use that metric of velocity towards a total amount of, say, story points to complete a project. So say you're trying to do something like a large server-based migration, or you're creating a large-scale platform, or an ecosystem within a business. These things can be very common to us, and they are a very big opportunity, a red carpet, for Goodhart's law to come in.

So one thing I was finding in such a project was, towards the late stages of those projects where multiple teams were working together, there was this huge creation of last-minute bugs being unearthed through testing, or additional requirements that came through analysis. What we found is that using that metric of total points towards completion, plus just one more metric that's still very much performance-driven, which was predictability of output, sprint on sprint, by teams, again using story points, became a very one-faceted way of looking at progress towards a goal. And so, yes, by all means you could have multiple teams that were rather predictable in terms of what they were willing to commit to a sprint and what they delivered by the end of an iteration. And at the same time, yes, you could see the points ticking along, and they could even look like they were very much on track for completion towards an estimated launch date, for example.

But then what we found, as I already mentioned, is we start to get those bugs and those late-stage requirements coming out of the woodwork, and then that delivery of the project and the estimated launch date start to get shifted along as these problems get raised, and these initiatives can become quite ballooned, and that can be a bit of a problem. So how can you address something like that? Like I said, we're looking at a very one-faceted part of the problem by looking at just something like an output or throughput measure. So one of the ways that we could mitigate for it was to conduct something like a pre-mortem, which is what we did going forward into future projects. So we analyzed, okay, what can serve us as not one but multiple North Stars, if you will, of outputs towards a goal. So we didn't exactly get rid of something like velocity in sprints or velocity towards project completion, but we started to compound it.

Mark Paul-Sobral:

Still looking at progress towards completion as an output, we started to look at more qualitative measures. So, for example: are sprint goals being well defined and active contributors towards that output of progress towards completion? And another one as well was acceptance criteria being satisfied on each of those iterations. Because, of course, you can have a team that is very much geared towards predictability, or finishing their sprint and meeting their commitment, and if that becomes a predominant proxy towards an output, they are more likely to cut corners, and that's not something that a team should be held accountable for or be told off for; there's no malicious element to it, right? Especially now, when we're talking within the realms of Agile, we should be looking for opportunities to learn and to discover. So those extra inputs, like sprint goals and acceptance criteria being satisfied, really held up as qualitative measures supporting a quantitative measure like velocity in sprints. And that was just one facet. Now, if you step back a little bit further away from progress towards completion, what you should start to see is that there are plenty of other outputs that feed towards that ultimate goal of delivering a project, which is fed by, I suppose, what you would want in this particular scenario that I faced: a reduction in last-minute bugs and requirements being raised.

Mark Paul-Sobral:

As we took that step back, we started to conduct those pre-mortems, and we started to see that things like executable backlogs were other measures that we wanted to hold in mind. And whilst you might think, yes, executable backlogs make sense, is that something that you're necessarily going to put to a metric? No, it might be something a bit more boolean, like: do Teams A, B and C that are contributing towards this project have executable backlogs, and do they have enough in those backlogs to remain sustainable and keep going through that work? And if executable backlogs are an output, what inputs would influence them? So that's where we discovered that user journeys, user journey maps, that is, can be a huge tool to feed towards executable backlogs. And when you're working on those kinds of large-scale projects, especially within software, which is the context that I'm coming from, you might have architectural direction as a key element of the puzzle. So those two things are very much artifacts whose presence we can measure, not necessarily the quality to which they're done, but that certainly can feed into those inputs themselves.

Mark Paul-Sobral:

As I start to describe this tale and this process that we went through, you start to see that it starts to become a little bit like a family tree.

Mark Paul-Sobral:

It can be a beautiful one or a very ugly one, depending on your perspective towards data, but you start to see that there is a causal relationship between inputs, outputs and outcomes.

Mark Paul-Sobral:

Those outcomes, like executable backlogs and progress towards completion, together help us to mitigate Goodhart's law. Going back to Goodhart's, what I would say is you've got three ways that people tend to respond to that pressure on a specific metric. They'll manipulate the data: if we use the cobra effect as an example, someone might take the body of a cobra, chop it in half and say, I've now got two cobras and I want more money for them. Is that feeding towards the cobra infestation solution? Probably not. Or, as they actually did, they bred cobras, therefore warping the system. So those are the two likely outcomes when you have that pressure on a metric. The third one becomes optimizing the system in order to more organically achieve the right set of outcomes, and that's what that pre-mortem and that analysis of the relationship between inputs, outputs and outcomes provide us. They provide that opportunity to more openly assess those relationships and optimize the system rather than game it.
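
The pairing Mark describes, a quantitative measure counterbalanced by qualitative checks, can be sketched in a few lines of code. All field names and thresholds below are illustrative, not taken from the episode:

```python
# Sketch: pair a quantitative measure (points delivered) with qualitative
# counterbalances, so a "healthy" number alone can't declare a sprint on track.

def assess_sprint(sprint):
    """Return warnings when the quantitative signal disagrees with its
    qualitative counterbalances (sprint goal, acceptance criteria)."""
    warnings = []
    velocity_ok = sprint["points_done"] >= sprint["points_committed"]
    if velocity_ok and not sprint["sprint_goal_met"]:
        warnings.append("points delivered but sprint goal not met")
    if velocity_ok and sprint["acceptance_criteria_satisfied"] < sprint["items_done"]:
        warnings.append("some 'done' items did not satisfy acceptance criteria")
    if not warnings and not velocity_ok:
        warnings.append("velocity below commitment (inspect, don't punish)")
    return warnings

good = {"points_done": 21, "points_committed": 20, "sprint_goal_met": True,
        "items_done": 6, "acceptance_criteria_satisfied": 6}
gamed = {"points_done": 21, "points_committed": 20, "sprint_goal_met": False,
         "items_done": 6, "acceptance_criteria_satisfied": 4}

print(assess_sprint(good))   # no warnings: number and context agree
print(assess_sprint(gamed))  # two warnings: the number looked fine, the context didn't
```

The point is not the tooling but the shape: the raw number alone never declares a sprint healthy, because its qualitative counterweights get a veto.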

Mark Metze:

So, Mark, I believe that, yes, there are times when metrics are gamed, like you say. You gave the example of the cobra infestation, where people are breeding cobras. But I find that, because of certain metrics being in place, it's almost subconsciously done. It's not that people are being malicious; it's just that they're being rewarded for certain behaviors or certain actions. And so we have to be careful about what we (and I'm giving air quotes, I always do that in a podcast, which nobody can see) "reward". Reward doesn't necessarily mean financial reward. And "incentivizing" is another hard word, not a great word to use. "Encouraging" is maybe the best word: how are you encouraging?

Mark Paul-Sobral:

Absolutely. So I think we can take a live example from what I was saying in my little tale there. When you have teams that are trying to aim for predictability, which is a very noble endeavor to take on, right? Predictability, in the eyes of a wider business, helps them create forecasts and understand what delivery looks like and what it looks like if scope needs to be adjusted. So when there is no pressure to perform to a specific set of outputs, but predictability is there, great. But when predictability becomes the focused metric, it will incentivize, in and of itself, even though it is a noble cause, a bit of a perverse reaction.

Mark Paul-Sobral:

So people might say, for example: I know that our definition of done might include smoke testing, but we are going to take that out so that we can get those five points over the line and into the done column and achieve our predictability, because that's what we committed to. We committed to those five points alongside other things, and that will be the last piece of that puzzle. So incentivizing is very rarely financial in that same kind of sense, and most frequently it will be because there is actually a worthwhile endeavor trying to be achieved, but we're not getting that idea of the wider picture. Yes, we can focus just on predictability, but what could we sacrifice in our endeavors to achieve it?

Mark Metze:

I hope these are things that come up in a retrospective, where teams are coming together and truly asking what's the right thing to do, and truly taking a look and seeing, are we really doing the right thing? You know, let's think about what's been done, let's think about what we've accomplished. Is that truly the best for our clients, for our customers, for our company, for us to deliver this piece of software, or whatever it is that you're delivering? Is this what we should be doing?

Mark Paul-Sobral:

Absolutely, and I think frameworks like Scrum really can serve in our favor when it comes to that, right? So we have the three pillars of Scrum already, and you were describing the retrospective, which falls within the two pillars of inspection and adaptation. But then transparency comes into it as well, and with it the Scrum values. Values like openness and respect really help us to hold those questions and elevate them. And then it becomes the responsibility and, I suppose, the accountability of leadership to not only nurture those values but also allow team members to uphold them and to be able to ask those bigger questions. Yes, predictability is a worthwhile metric to pursue, but what is the cost that we're suffering at the moment? And is there a bigger outcome that we should be aiming for as a result? And nobody should be penalized or, more realistically, hindered from being able to ask those questions and have the rest of their colleagues, peers and leadership support them in pursuing the answers.

Mark Metze:

I have this mental picture in my mind of a group having a conversation, just like you're saying: we need to be predictable; these smoke tests are going to take longer than we expected, so we're not going to be able to finish our sprint on time; so, just for this time, let's skip the smoke tests.

Mark Metze:

There's going to be time to smoke test later. And I can see the team arguing about that and, as people are, you know, you kind of have that groupthink where everybody is starting to convince everybody else. And then you have this one brave soul that stands up to say: do we really, in the short term, want to risk our ability to deliver and find issues sooner? Do we really want to skip smoke testing now and find those issues later, when it's going to be crunch time? That's what comes to my mind as I'm thinking of this. Maybe I need to write a book about this, Mark, something along those lines.

Mark Paul-Sobral:

Yeah, absolutely. The only problem would be how long that book could get. I think that's true. When we use these examples, it kind of ties into: yes, Goodhart's law is great, and you've got Campbell's law, which addresses the intensity of it all, and we can talk about that until the cows come home. But the final piece of that puzzle is motivation, and the Scrum values, and, you know, the courage for a team member to be raising the right questions in the right direction, which is what we're promoting in Agile, right? It's always about being able to adapt towards the next best thing to deliver value.

Mark Paul-Sobral:

Motivation really kind of ties into it all, because you can have all that rich tapestry of data available to us, but if there is no appetite to analyze it or to learn from it, it's a worthless exercise. At the end of the day, people need to be bought in as to why they need to be doing this. Perhaps this can be something as simple as the bounty reward, something financial, but sometimes motivation can be driven by very non-financial means. I think Dan Pink covers that with things like mastery, purpose and meaning, and other facets of motivation, which I would say are necessary in our pursuits to mitigate something like Goodhart's law when it comes to being a data-driven company or organization.

Mark Metze:

You've given us a great example here of talking about inputs leading to outputs and then also outcomes, and I'm curious to know do you have any other examples where maybe things seemed okay but then, as you got closer to the end, the team realized we're not on track anymore? We thought we were, but now we're further off than we thought we were. What other examples do you have from real life that you could share with us?

Mark Paul-Sobral:

Yeah. So continuing on from that same example of the endeavor of reducing last-minute bugs and requirements coming in late-stage into a project: one of the other facets that we found in outputs was general clarity on project outcomes, for both stakeholders and the teams themselves. What we found was that documentation became key as well. As I've already iterated a few times, multiple teams were necessary to deliver these kinds of large-scale initiatives, and I think that's something that's quite familiar to quite a few different people that might be listening to the podcast. Documentation sometimes can be one of those first pieces of the quality puzzle that gets dismissed in our efforts to achieve any particular metric, especially if we're talking about performance. A lot of people might anecdotally, or maybe even using data, say: actually, nobody refers to documentation, so we spend time writing documentation, and for what? Maybe one or two people to read it. And I would never dismiss that. I think if there is data especially to support such a claim, then the question shouldn't be, should we be writing documentation when we have that level of cross-team collaboration, but rather, are we documenting the right stuff? So that led to the inputs towards that clarity. And for us, yes, we wanted to have documentation usage, because that was held against us as a reason not to document. But what becomes the right stuff, right? So what we did is we separated it into three facets. The first was document creation: how much time will we spend creating documentation to support the onboarding of new teams, or for teams to be introduced to new stacks or new products that they might be working on? The second was usage, and those were two inputs that had a causal relationship.

What we found is, if documentation usage wasn't going up based on document creation, we needed to ask that question: was the right documentation being created? And it sounded like no. So we reviewed that, and what inputs were required to feed those inputs. And not everything needs to be documented, right? I think in Agile we're always trying to find a bit of a balance between delivery of valuable software and excessive documentation; it's one of the articles in the manifesto. And what I would say, to compound that: yes, sometimes interaction is better than documenting.

Mark Paul-Sobral:

And that was the third input in that little part of our tree: rating team collaboration was how we quantified whether a team was pleased with their collaboration, and that was captured in retrospectives, actually. We just had a little corner where it's, "Okay guys, out of 10, how would you rate our cross-team collaboration in this sprint?", and they would just pop a little figure on there. We used that to measure: are the teams collaborating in the right kind of way, and is that being really positive?

Mark Paul-Sobral:

If that's starting to tank, is it because there isn't enough understanding there? And that could influence documentation creation. Or we might find that a reduction in collaboration actually stems from the fact that people don't understand each other because of the absence of information that could be captured through documentation. So between those three inputs we managed to improve clarity in projects for the teams themselves, and therefore the stakeholders as well. How that influenced the reduction of late-stage bugs and last-minute requirements came about through documentation on things like governance and data storage, for example. Those didn't have to be last-minute considerations, because a team would have thought about them and drawn external stakeholders into the conversation to make sure those kinds of NFRs, non-functional requirements, were captured.

Mark Metze:

So I can imagine another story here, Mark. I can imagine these new teams coming in and saying that they were having difficulty coming up to speed because of a lack of documentation. Then the team says, well, yes, we need to document more, we need to have more documentation so that it doesn't take time away from pairing with other experienced developers who have been around. And then I can imagine somebody going off and saying, okay, well, I really don't have time, I really need to be coding, I don't need to be documenting, so I'm just going to have either ChatGPT or some language model generate some documentation for me. There, done, it's documented. Go look in our Confluence, all the documentation is there. And you see all these volumes and volumes of documentation that really don't say a whole lot. But they were doing what was asked: we need more documentation, right? So again, it's: are you really doing the right thing?

Mark Paul-Sobral:

Absolutely.

Mark Paul-Sobral:

I think you can take that kind of snapshot of any of these metrics and then say, okay, that becomes a pejorative goal now, in which case we're not mitigating for Goodhart's law, we're just shifting the focus elsewhere, and I think that's where it can become a bit problematic.

Mark Paul-Sobral:

And don't get me wrong, I'm actually a big fan of using ai as a facilitative tool, not a replacement tool for driving the right kind of behaviors, right.

Mark Paul-Sobral:

So if we want to have documentation, we have other ways that are non-AI-based, for example tools like Swagger, that help with automation of documentation. But again the question becomes: are we going to feed that by having useless documentation being created but then not really referred to for a very long time, because people will go to it, see that it's not of the quality that they'd expect, and then stop referring to it? So that's where that relationship between two metrics can help them influence and discard each other if needed. What we found is, if the only way to achieve other goals was to have very at-a-glance documentation that was created by AI, that's fine; we can remove that as a metric that influences our end goal. And I think that's probably the most important, underpinning mitigating factor for Goodhart's law: that constant analysis of finding authentic metrics that feed towards the goal, and pairing measures that counterbalance each other.

Mark Metze:

I love the counterbalances that you had with this, because what immediately comes to mind for me is what you said: is the documentation being used? We could be documenting the wrong thing, absolutely. So that's one aspect to look at. But then, how much time are we spending documenting this? I think that's another great lens, because you have some very analytical people, such as myself, I can be this way sometimes, who want to put together the perfect Mona Lisa of a piece of documentation. I can spend hours and hours and hours putting together the perfect documentation with the absolute best illustrations, the best of the best in there, and I can obsess over that. And is there an opportunity cost that we're missing by over-documenting and gold-plating? So I love that you have these different lenses to counterbalance the metrics.

Mark Paul-Sobral:

Yes, absolutely, and it's important to have that counterbalance and to unearth those metrics, and perhaps that can be isolated as a specific set of people's responsibility, and perhaps this is where leadership comes in, right? At least in my view, I believe that true leadership comes in providing the best environment for teams to thrive and aim towards those goals. So part of that, to me, would be to unearth and expose that piece of data so it becomes less onerous for teams to do it themselves. And it should be, to some degree, automatable, right? I think coming up with the numbers should be very low effort and automated when possible.

Mark Paul-Sobral:

And those qualitative pieces of data that might be populated through something like surveys should also be very easy and low effort. An example being that cross-team collaboration rating, where it's just two minutes of a retrospective: giving people time to pop a little number in, and then we move on. Somebody else can come along afterwards, aggregate that data across different teams or across different sprints, and do the work of exposing it back to the team: look, this is what the data looks like for you; is this, in your opinion, good or bad? And I'd certainly welcome opportunities to expose that data to teams, along with the question of whether this data is helping and influencing our goals at the moment. Sometimes, another problem we can find in trying to mitigate Goodhart's law is driving the authority that comes with being data-driven further up and away from the people who have the most information.
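Aggregating those two-minute retro ratings is exactly the kind of low-effort, automatable work being described. A rough Python sketch, with hypothetical team names and ratings:

```python
from statistics import mean

# Hypothetical retro data: per-sprint cross-team collaboration ratings
# (1-5), collected in two minutes at the end of each retrospective.
ratings = {
    "Team Falcon": {"Sprint 41": [4, 5, 3, 4], "Sprint 42": [2, 3, 2, 3]},
    "Team Osprey": {"Sprint 41": [5, 4, 4], "Sprint 42": [4, 5, 4]},
}

def aggregate(ratings):
    """Average the rating per team per sprint, so someone outside the
    team can surface the trend back and ask: is this good or bad?"""
    return {
        team: {sprint: round(mean(vals), 2) for sprint, vals in sprints.items()}
        for team, sprints in ratings.items()
    }

summary = aggregate(ratings)
print(summary["Team Falcon"]["Sprint 42"])  # → 2.5
```

The point of the sketch is the division of labour: the team spends two minutes providing numbers, and the aggregation and trend-spotting happens elsewhere, automatically.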

Mark Metze:

Well, mark, this has been absolutely incredible. This is a fascinating discussion and I feel like we could just go on for the whole rest of the day, but unfortunately we don't have that much time. So, as we wrap up here in summary for our listeners, how do we avoid metrics misfiring on us?

Mark Paul-Sobral:

I think, to summarize the whole discussion, what I would say is it's important to remember how teams, people and systems are likely to react to metrics when some form of pressure is applied to achieve them. One reaction is the manipulation of data; another is the manipulation of the system itself in order to achieve those outcomes; and the third one, which is what we want to aim for as a reaction, is the optimization of that system.

Mark Paul-Sobral:

That rich tapestry of analyzing inputs, outputs and outcomes is one of the better ways for us to take that pressure and spread it across the whole system, looking at it from a macroscopic view rather than trying to focus microscopically on just one output that leads to one goal, and bearing in mind the different types of Goodhart's law as well. So we had that difference between a proxy measure masquerading as the output towards a true goal, which is what the cobra effect was, and the absence of analyzing the causal relationship. I think that's really important. And then, to summarize the pathways to mitigation, the kinds of gems to have in your hands: conduct those pre-mortems, which can take the form of a futurespective or a similar forward-looking session, things that can help pave the way going forwards; treat it like a living document, just as you would that tapestry or tree of inputs, outputs and outcomes; and always measure the relationships to find that counterbalance between different inputs and different outputs, and how they feed towards a goal.

Mark Paul-Sobral:

And I think it's quite helpful as well to just generally broaden your success measures. With my example there, instead of just looking at progress towards completion in the form of story points, maybe look at other things: are the project outcomes clear enough to everybody, or do we have executable backlogs throughout the project? So it's broadening out and shifting that pressure from being at a focal point to being spread elsewhere. And I think we can analogize that by saying: if you're giving someone a massage and you just drive your thumb down into one particular point, sometimes that can be what's needed, but quite a lot of the time it's going to feel very painful. So instead we spread those hands and give a more nourishing massage that addresses multiple points in a more evenly distributed way.
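That broadening of success measures could be captured in something as simple as a dashboard that pairs every headline metric with a counterbalancing question. The metrics and values below are invented for the sketch:

```python
# Hypothetical dashboard: each headline metric is paired with a
# counterbalancing question and whether that counterbalance looks healthy.
dashboard = [
    ("Story points completed: 34", "Was the sprint goal met?", True),
    ("Docs pages created: 12", "Were the new pages actually viewed?", False),
    ("Velocity up 10%", "Did escaped defects stay flat?", True),
]

def goodhart_flags(rows):
    """Return the headline metrics whose counterbalance does not back
    them up: a rough smoke test for Goodhart's-law effects."""
    return [f"{metric} (but: {question})"
            for metric, question, healthy in rows if not healthy]

for flag in goodhart_flags(dashboard):
    print(flag)
# prints: Docs pages created: 12 (but: Were the new pages actually viewed?)
```

The design choice mirrors the massage analogy: no single number is allowed to carry all the pressure, because every metric is read alongside its paired check.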

Mark Metze:

Oh, great analogy. All right, so you've inspired me. I'm going to print out Goodhart's Law and put it on my wall here for a period of time, so that it's up in front of me and I can see it every day and make sure that I am not putting the focus on the wrong metrics. I'm sure our listeners out there will want to get in touch with you if they have questions. What's the best way for them to do that?

Mark Paul-Sobral:

The best way will probably be through LinkedIn. I welcome any messages. This is a topic I'm very passionate about, so by all means, do reach out. I'd love to get into those topics with anyone. I suppose I would also offer a reminder: my approach and my examples are just one way of addressing the bigger problem, right? The data that anyone comes up with for any kind of problem can look very different, and something like documentation could be one way to address it, but you might have something very different to address a very similar problem. But yeah, LinkedIn, send me a message. I'd love to get to know more people in the Agile world. Always good to make new friends.

Mark Metze:

Awesome. We'll make sure we put your profile in the show notes to make it easy for people to reach out to you. Well, this has been another fascinating episode. Again, Mark, thank you so much for being our guest. I've really enjoyed talking with you here about metrics and when they misfire. All right, everybody, that's it. This has been another episode of the Agile Within. We'll see everybody next time. Thanks for joining us for another episode of the Agile Within. If you haven't already, please join our LinkedIn page to stay in touch. Just search for The Agile Within, and please spread the word with your friends and colleagues. Until next time, this has been your host, Mark Metze.

People on this episode