Thoughts on Impact

This content originally appeared on text/plain and was authored by ericlaw

In this post, I talk a lot about Microsoft, but it’s not only applicable to Microsoft.

It’s once again “Connect Season” at Microsoft, a biannual-ish period when Microsoft employees are tasked with filling out a document about their core priorities, key deliverables, and accomplishments over the past year, concluding with a look ahead to their goals for the next six months to a year.

The Connect document is then used as a conversation-starter between the employee and their manager. While this process is officially no longer coupled to the “Rewards” process, it’s pretty obviously related.

One of the key tasks in both Connects and Rewards processes is evaluating your impact— that is to say, what changed in the world thanks to your work?

We try to break impact down into three rings: 1) Your own accomplishments, 2) How you built upon the ideas/work of others, and 3) How you contributed to others’ impact. The value of #1 and #3 is pretty obvious, but #2 is just as important– Microsoft now strives to act as “One Microsoft”. This represents a significant improvement over what was once described to prospective employees as “A set of competing fiefdoms, perpetually at war” and drawn eloquently by Manu Cornet:

Old Microsoft, before “Building Upon Others” culture

By explicitly valuing the impact of building atop the work of others, duplicate effort is reduced, and collaboration enhances the final result for everyone.

While these rings of impact can seem quite abstract, they seem to me to be a reasonable framing for a useful conversation, whether you’re a new Level 59 PM, or a grizzled L67 Principal Software Engineer.

The challenge, of course, is that measurement of impact is often not at all easy.

Measures and Metrics

When writing the “Looking back” portion of your Connect, you want to capture the impact you achieved, but what’s the best way to measure and express that?

Obviously, numbers are great, if you can get them. However, even if you can get numbers, there are so many to choose from, and sometimes they’re unintentionally or intentionally misleading. Still, numbers are often treated as the “gold standard” for measuring impact, and you should try to think about how you might get some. Ideally, there will be some numbers which can be readily computed for a given period. For instance, my most recent Connect noted:

While this provides a cheap snapshot of impact, there’s a ton of nuance hiding there. For example, my prior Connect noted:

Does this mean that I was less than half as impactful this period vs. the last? I don’t think so, but you’d have to dig into the details behind the numbers to really know for sure.

Another popular metric is the number of users of your feature or product, because this number, assuming appropriate telemetry, is easy to compute. For example, most teams measure the number of “Daily Active Users” (DAU) or “Monthly Active Users” (MAU).
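Computing DAU/MAU from telemetry really is that simple: count distinct user IDs per period. A minimal sketch in Python (the event data and function names here are illustrative, not any real telemetry API):

```python
from datetime import date

# Hypothetical telemetry: (event_date, user_id) pairs, one row per feature use.
events = [
    (date(2024, 3, 1), "alice"),
    (date(2024, 3, 1), "bob"),
    (date(2024, 3, 2), "alice"),
    (date(2024, 3, 15), "carol"),
]

def daily_active_users(events, day):
    """Count distinct users who used the feature on a given day."""
    return len({user for d, user in events if d == day})

def monthly_active_users(events, year, month):
    """Count distinct users who used the feature at least once in a month."""
    return len({user for d, user in events if d.year == year and d.month == month})

print(daily_active_users(events, date(2024, 3, 1)))   # 2
print(monthly_active_users(events, 2024, 3))          # 3
```

Note that MAU is not just DAU summed over the month– the distinct-user set deduplicates repeat visitors, which is exactly why the two numbers tell different stories about engagement.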

While I had very limited success in getting Microsoft to recognize the value of my work on my side project (the Fiddler Web Debugger), one thing that helped a bit was when our internal “grassroots innovation” platform (“The Garage”) added a simple web service where developers could track usage of any tool they built. I was gobsmacked to discover that Fiddler was used by over 35,000 people at Microsoft– at the time, more than one out of every three employees in the entire company.

Hard numbers bolstered anecdotal stories (e.g. the time when Microsoft’s CTO/CSA called me at my desk to help him debug something and I was about to guide him into installing Fiddler only to have him inform me that he “used it all the time.”)

When Fiddler was later being scouted for acquisition by devtool companies, I quickly learned that they weren’t particularly interested in the code — they were interested in the numbers: how many downloads (14K/day), how many daily active users, and any numbers that might reveal what users were doing with it (enterprise software developers monetize better than gem-farming web gamers).

A few years prior, my manager had walked into my office and noted “As great as you make Fiddler, no matter how many features you add or how well you build them, nothing you do will ever have as much impact as you have on Internet Explorer.” And there’s a truth to that– while Fiddler probably peaked at single-digit millions of users, IE peaked at over a billion. When I rewrote IE’s caching logic, the performance savings were measured in minutes individually and lifetimes in aggregate.

Unfortunately, there’s a significant risk to making “Feature Usage” a measure of impact– it means that there’s a strong incentive for every feature owner to introduce/nag/cajole as many people as possible into using a feature. This often manifests as “First Run” ads, in-product popups, etc. Your product risks suffering a tragedy of the commons effect whereby every feature team is individually incentivized to maximize user exposure to their feature, regardless of appropriateness or the impact to users’ satisfaction with the product as a whole.

“When a measure becomes a target, it ceases to be a good measure.”

– Goodhart’s Law

To demonstrate business impact, the most powerful metric is your impact on profitability, measured in dollars. Sadly, this metric is often extremely difficult to calculate: distinguishing the revenue impact of a single individual’s work on a massive product is typically either wildly speculative or very imprecise. However, once in a great while there’s a clear measure: My clearest win was nearly twenty years ago, and remains on my resume today:

Saving $156,000 a year in costs (while dramatically improving user-experience– a much harder metric to measure) at a time when I was earning around half of that sum was an incredibly compelling feather in my cap. (As an aside, perhaps my favorite example of this ever was found in the OKRs of the inventor of Brotli compression, who computed the annual bandwidth savings for Google and then converted that dollar figure into the corresponding numbers of engineers based on their yearly cost. “Brotli is worth <x> engineers, every year, forever.”)
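The Brotli-style conversion is just back-of-the-envelope arithmetic: price out the annual savings, then divide by the fully-loaded cost of an engineer. A sketch (every figure below is hypothetical, not Google’s or Brotli’s actual numbers):

```python
# Back-of-the-envelope math, in the spirit of the Brotli OKR anecdote.
# All figures are hypothetical placeholders.
bytes_saved_per_year = 2.0e17          # 200 PB/year of bandwidth saved
cost_per_gb = 0.01                     # $/GB of egress
engineer_fully_loaded_cost = 350_000   # $/year per engineer

dollars_saved = (bytes_saved_per_year / 1e9) * cost_per_gb
engineer_equivalents = dollars_saved / engineer_fully_loaded_cost

print(f"${dollars_saved:,.0f}/year saved, worth {engineer_equivalents:.1f} engineers, every year")
```

The output framing (“worth N engineers, every year, forever”) is what makes the metric land with leadership: it converts an abstract infrastructure win into headcount, a unit every executive already reasons in.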

Encouraging employees to evaluate their Profit Impact is also somewhat risky– oftentimes, engineers are not interested in the business side of the work they do and consider it somewhat unseemly — “I’m here to make the web safe for my family, not make a quick buck for a MegaCorp.” Even for engineers who consciously acknowledge the deal (“I recognize that the only reason we get to spend hundreds of millions of dollars building this great product we give away for free is because it makes the company more money somewhere“) it can be very uncomfortable to try to generate a precise profitability figure– engineers like accuracy and precision, and even with training in business analysis, calculation of profit impact is usually wildly speculative. You usually end up with a SWAG (silly wild-ass guess) and the fervent hope that no one will poke at your methodology too hard.

A significant competitive advantage held by the most successful software companies is that they don’t need to bother their engineers with the business specifics. “Build the best software you can, and the business will take care of itself” is a simple and compelling message for artisans working for wealthy patrons. And it’s a good deal for the leading browser business: when the product at the top of your funnel costs you 9 digits per year and brings in 12 digits worth of revenue, you can afford not to demand the artisans think too much about anything so mundane as money.


Of course, numbers aren’t the only way to demonstrate impact. Another way is to tell stories about colleagues you’ve rescued, customers you’ve delighted, problems you’ve solved and disasters you’ve averted.

Stories are powerful, engaging, and usually more interesting to share than dry metrics. Unfortunately, they’re often harder to collect (customers and partners are often busy and it can feel awkward to ask for quotes/feedback about impact). Over the course of a long review period, they’re also sometimes hard to even remember. Starting in 2016, I got in the habit of writing “Snippets”, a running text log of what I’ve worked on each day. Google had internal tooling for this (mostly for aggregating and publishing snippets to your team) but nowadays I just have a snippets.txt file on my desktop.
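A snippets file needs nothing fancier than appending one dated line per entry. A minimal sketch (the file location, entry format, and function name are my own choices, not the author’s actual tooling; the demo writes to a temporary file rather than a real snippets.txt):

```python
import tempfile
from datetime import date
from pathlib import Path

def add_snippet(text, path):
    """Append one dated line to the snippets file and return the line written."""
    line = f"{date.today().isoformat()}  {text}\n"
    with open(path, "a", encoding="utf-8") as f:
        f.write(line)
    return line

# Demo against a temporary file; in practice, point it at your snippets.txt.
demo_path = Path(tempfile.gettempdir()) / "snippets_demo.txt"
entry = add_snippet("Investigated Edge caching regression for Contoso.", demo_path)
print(entry, end="")
```

Because each line carries its date, a review-time grep for a date range reconstructs months of otherwise-forgotten work in seconds.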

Both Google and Microsoft have an employee “Kudos” tool that allows other employees to send the employee (and their manager) praise about their work, which is useful both for visibility and for record-keeping (since you can look back at your kudos years later). I also keep a Kudos folder in Outlook to save (generally unsolicited) feedback from customers and partners on the impact of my work. One thing I’ve heard some departing Microsoft employees note (and experienced myself) is that they often don’t hear about the breadth of their impact until the replies to their farewell emails start pouring in. When I left Microsoft in 2012, I got some extremely kind notes from folks that I never expected to even know my name (some not even Fiddler users!).

Even when recounting an impact story, you should enhance it with numbers if you can. “I worked late to fix a regression for a Fortune 500 customer” is a story– “…and my fix unblocked deployment of Edge as the default browser to 30,000 seats” is a story with impact.

A challenge with storytelling as an approach to demonstrating impact is that our most interesting stories tend to involve frantic, heroic, and extraordinary efforts or demonstrations of uncommon brilliance, but the reality is that oftentimes the impact of our labor is greater when competently performing the workaday tasks that head off the need for such story-worthy events. As I recently commented on Twitter:

We have to take care not to incentivize behaviors that result in great stories of “heroic firefighting” while neglecting the quiet work that obviates the need for firefighting in the first place. But quantifying the impact is hard– how do you measure the damage from a hurricane that didn’t happen?

My most recent Connect praised me as having done “a great job of being our last line of defense” which I found quite frustrating– while I do get a lot of visibility for fixing customer problems that have no clear owners, my most valuable efforts are in helping ensure that we fix problems before customers even experience them.


Related to this is the relationship of speed to impact— the sooner you make a course-correcting adjustment, the smaller that adjustment needs to be. Flag an issue in the design of the feature and you don’t have to update the code. Catch a bug in the code before it ships and no customer will notice. Find a bug in Canary before it reaches Beta and developers will not need to cherry-pick the fix to another branch. Fix a regression in Beta before it’s promoted to Stable and you reduce the potential customer impact by very close to 100%.

Similarly, any investment in tools, systems, and processes to tighten the feedback loop will have broad impact across the entire product. Checking in a fix to a customer-reported bug quickly only delights if that customer can benefit from that fix quickly.

Unfortunately, because speed reduces effort (a faster fix is cheaper), it’s too easy to fall into the trap of thinking that it had lower impact.

Effort != Impact

A key point arises here– impact is not a direct function of effort, which is just one of the inputs into the equation.

A friend once lamented his promotion from Level 63 to 64, noting “It’s awful. I can’t work any harder!” and while I’ve felt the same way, we also both know that even the highest-levelled employees don’t have more than 24 hours in their day, and most of them retain some semblance of work/life balance.

We’re not evaluated on our effort, but on our impact. Carefully selecting the right problems to attack, having useful ideas/subject matter expertise, working with the right colleagues, and just being lucky all have a role to play.

Senior Levels

At junior levels, the expectation is that your manager will assign you appropriate work to allow you to demonstrate impact commensurate with your level. If something outside your control happens (a PM’s developer leaves the team, so their spec is shelved), your “opportunity for impact” is deemed to be lower and taken into account in evaluations.

As you progress into the Senior band and beyond, however, “opportunity for impact” is implicitly deemed unlimited. The higher you rise, the greater the expectation that you will yourself figure out what work will have the highest impact, then go do that work. If there’s a blocker (e.g. a partner team declines to do needed work), you’re responsible for figuring out how to overcome that blocker.

Amid the Principal band, I’ve found it challenging to try to predict where the greatest opportunity for impact lies. For the first two years back at Microsoft, I was unexpectedly impactful, as (then) the only person on the team to have ever worked as a Chromium Developer– I was able to help the entire Edge team ramp up on the codebase, tooling, and systems. I then spent a year or so as an Enterprise Fixer, helping identify and fix deployment blockers preventing large companies from adopting Edge. Throughout, I’ve continued to contribute fixes to Chromium, investigate problems, blog extensively, and try to help build a great engineering culture. Many of these investments receive and warrant no immediate recognition– I think of them as seeds I’m optimistically planting in the hopes that one day they’ll bear fruit.

Many times I will take on an investigation or fix for a small customer, both in the hope that I’m also solving something for a large customer who just hasn’t noticed yet, and because there’s an immediate satisfaction in helping out an individual even if the process appears unscalable. While it cannot possibly be the most efficient method of maximizing impact, just helping, over and over, can bypass analysis paralysis or impactless navel-gazing.

Do all the good you can,
By all the means you can,
In all the ways you can,
In all the places you can,
At all the times you can,
To all the people you can,
As long as ever you can.

Learning is an Investment

Taking time to learn new technologies, skills, or even your own codebase does not typically have an immediate impact on the organization. But it’s an investment in the future, and it can pay off unexpectedly, or fairly reliably, depending on what you choose to learn.

Similarly, sharing what you’ve learned is an investment– you’re betting that the overall value to the recipient or the organization will exceed the cost of your preparation and presentation of the information. But beyond the value in teaching others, teaching is a great way to learn a topic more fully yourself, as gaps in your understanding are exposed, and the most dangerous class of ignorance (“Things you know that just ain’t so”) is pointed out to you.

Do things that Scale (and things that don’t)

When planting seeds, you don’t know which might bear fruit. Spending an hour mentoring a peer may have no obvious impact right away. But perhaps you will learn something to improve your own impact, or perhaps years down the line your mentee will pass along a cool role opportunity in another division. You never know.

Still, if you seek to maximize your impact, finding opportunities to scale is crucial. If fixing a problem has impact X, documenting the fix or creating a process to ensure it never happens again may have impact 10X. Debugging a developer’s error may have impact Y, while writing a tool to allow any developer to diagnose their own errors may have impact 1000Y. Showing a colleague how to do something may have impact Z, while writing a blog post, a book, or giving a conference talk may have impact 100Z.

Personally, I like to start helping with things in ways that don’t scale, because it allows me to learn enough about the problem space to maximize the value of my contribution. It also allows me to discover whether my solution even needs to scale, and if it does, how best to build that scalable solution.


As you move up the ranks, one popular way to increase your impact is to become a manager. As a manager, you are, in effect, deemed partly responsible for the output of your team, and naturally the impact of a team is higher than that of an individual.

Unfortunately, measuring your personal contribution to the team’s output remains challenging– if you’re managing a team of star performers, would they continue to be star performers without your overhead? On the other hand, if you’re leading a team of underachievers, the team’s impact will be low, and there are limits to both the speed and scope of a manager’s ability to turn things around.

As a manager, your impact remains very much subject to the macro-environment– your team of high performers might have high attrition because you’re a lousy manager, or in spite of you being a great manager (because your team’s mission isn’t aligned with the employees’ values, because your compensation budget isn’t competitive with the industry, etc.).

Beyond measuring your own impact, you’re now responsible for the impact of your employees– assigning or guiding them toward the highest impact opportunities, and evaluating the impact of the outcomes. You’re also responsible for explaining each employee’s impact to the other leaders as a part of calibrating rewards across the team. Perhaps unfortunately for everyone, this process is mostly opaque to individual contributors (who are literally not in the room), leaving your ICs unable to determine how effectively you advocated on their behalf beyond looking at their compensation changes.

Impact Alignment

One difficult challenge is that, “One Microsoft” aside, employee headcount and budgets are assigned by team. With the exception of some cross-division teams, most of your impact only “counts” for rewards if it accrues to your immediate peers, designated partner teams, or customers.

It is very hard to get rewarded for impact outside of that area, even if it’s unquestionably valuable to the organization as a whole.

Around 2009 or so, my manager walked into my office and irreverently asked “You’re an idiot, you know that right?” I conceded that was probably true, but asked “Sure, but why specifically?” He beckoned me over to the window and pointed down at the parking lot. “See that red Ferrari down there?” I nodded. He concluded “As soon as you thought of Fiddler, you should’ve quit, built it, and had Microsoft buy you out. Then you’d be driving that instead of a 2002 Corolla.” I laughed and noted “I’m no Mark Russinovich, and Microsoft clearly doesn’t want Fiddler anyway.” But this was a problem of organizational alignment, not value– Microsoft was using Fiddler extremely broadly and very intensely, but because it was not well-aligned with any particular product team, it received almost no official support. I’d offered it to Visual Studio, who made some vague mention of “investing in this area in some future version” and were never heard from again. I offered to write an article for MSDN Magazine, who rejected me on the grounds that the tool was “Not a Microsoft product” and thus not within their charter, despite its broad use exclusively by developers on Windows. Over the years, several leads strongly implied that my work on Fiddler was evidence that I could be “working harder at my day job.”

Nevertheless, in 2007, I won an Engineering Excellence award for Fiddler, for which I got a photo with Bill Gates, a crystal spike, an official letter of recognition, and $5000 for a morale event for my “team.” Lacking a team, I went on a Mediterranean cruise with my girlfriend.

Of course, there have been many non-official rewards for years of effort (niche fame, job opportunities, friendships) but because of this lack of alignment with my team’s ownership areas, even broad impact was hard for Microsoft to reward.


Our CEO once famously got in trouble for suggesting that employees passed over for promotion should be patient and “karma” would make things right. While the timing and venue for this observation were not ideal, it’s an idea that has been around at the company for decades. Expressed differently, reality has a way of being discovered eventually, and if you’re passed over for a deserved promotion, it’s likely to get fixed in the next cycle. In the other direction, one of the most painful things that can happen is a premature promotion, whereby you go from being a solid performer with level-appropriate impact to underachieving against higher expectations.

I spent six long years in the PM2 band before new leaders joined the team and recognized the technical impact I’d been delivering for years; within five months, I was promoted from level 62 to 63.

In hindsight, I was too passive in evaluating and explaining my impact to leaders during those long years, and I probably could have made my case for greater rewards earlier if I’d spent a bit more energy on doing so. I had a pretty dismissive attitude toward “career management,” and while I thought I was making things easier on my managers, the net result was nearly disastrous– I came close to quitting in disgust because “they just don’t get it.”

How do you [maximize|measure|explain] your impact?


Bonus Content – Career Advice

Back in 2015, I gave a talk about what I’d learned in my career thus far. You can find the recording and deck of Lucking In on GitHub; most of the key points ultimately come down to maximizing impact.

For career-advice contrast, I also enjoyed Moxie Marlinspike’s post.
