The fallacy of measuring performance: Why tech companies get it wrong

In the fast-paced world of software development, performance measurement is presented to us as a “necessary solution”. Companies and management want to ensure that their teams are productive, efficient, and delivering value. However, the way tech companies approach performance metrics often does more harm than good, sacrificing software quality in the long run in exchange for the illusion of efficiency and transparency.

The misuse of some common KPIs in software development

Many tech companies fall into the same trap of using simplistic key performance indicators (KPIs) to evaluate developers. These metrics include:

  • Story points completed per sprint: A tool designed for planning, not for performance evaluation.
  • Number of commits pushed to main: Incentivizing frequent, but not necessarily meaningful, contributions to the repository.
  • Lines of code written: Rewarding verbosity over readability and maintainability.
  • Bugs closed: Ignoring root cause analysis and focusing on “just solving it” instead.

These metrics create perverse incentives. Developers naturally start gaming the system: inflating estimates to look more productive, making unnecessary commits, or prioritizing speed over robustness. Instead of improving efficiency, these metrics degrade the quality of the codebase and demotivate engineers who genuinely care about the quality and sustainability of their code.

The complexity of measuring IT performance

Software development is not an assembly line. Unlike manufacturing, where output can be quantified in tangible units, software engineering is a creative and problem-solving discipline. Some of the most valuable contributions are not easily measurable:

  • Refactoring code to improve maintainability in the long run
  • Pair programming and mentoring junior developers, from which both participants learn a lot
  • Architectural decisions that prevent future tech debt
  • Writing clear documentation and tests to enhance knowledge sharing and code reliability
  • Identifying and mitigating security risks before they become incidents

None of these tasks directly contributes to a quantifiable KPI like “tickets closed,” but they are crucial for building high-quality software and add value to the project you’re working on. The failure to recognize this results in short-term efficiency at the cost of long-term sustainability.

As of this writing, I haven’t seen a single quantifiable KPI with “hard numbers” that covers all of the items above and properly rewards a software developer.

But wait, it gets even more complicated for IT consultants

In consultancy, measuring performance becomes even more complicated. Unlike in-house developers who work on a single product, IT consultants (such as myself) navigate multiple clients, industries, and needs. Their work extends beyond code – it includes stakeholder management, business analysis, solution design, knowledge transfer, communication strategies, and leadership support.

For a consultant, success isn’t about churning out more features; it’s about:

  • Understanding the client’s pain points and designing appropriate solutions
  • Bridging gaps between technical and non-technical teams, becoming an active voice and a trusted team member
  • Ensuring smooth handovers so clients can maintain the system after the engagement ends
  • Building trust and long-term relationships with clients, safeguarding the account and business success

None of these contributions can be boiled down to numbers. Trying to measure consulting work with standard software KPIs is not only misleading but also alienating. It disregards the nuances of client interactions, adaptability, and strategic thinking that make consultants valuable in the first place.

The so-called “soft skills” are, in my opinion, the most important ones a consultant or leader needs to have.

A better-but-not-yet-perfect approach to performance measurement

While traditional KPIs fail to capture real performance, some companies use a more holistic approach instead:

  • Team and manager feedback: Direct feedback from teammates and stakeholders provides a richer picture of contribution.
  • Customer satisfaction and business impact: Measuring the impact of solutions delivered, rather than the raw number of tasks completed.
  • Career growth and skills development: Recognizing contributions to team learning, mentoring, and upskilling.

The measurements above seem to do a better job of guiding IT consultants toward reasonable performance and more meaningful personal goals. In my opinion, however, they are still not perfect and should be applied with extreme care and transparency by any company that adopts them: make sure every participant is on the same page about how to set their own goals and how to work toward them, and offer valuable mentoring throughout the entire growth process.

As far as qualitative performance indicators go, there is always room for misuse, misinterpretation, and other issues, but that is something I can explore in a future blog piece…

The goal of measuring performance should not be to track meaningless numbers or indicators that tell the board of directors that the “company is doing ok”, but to foster an environment where people are motivated to do their best work without having to “play the game”. People will go to great lengths if they see meaning in what they’re doing.

In the end, why are you measuring performance?

Tech companies’ obsession with quantitative KPIs often leads to counterproductive behaviors that erode software quality and damage morale. In consultancy, the problem is even more pronounced, as rigid performance metrics fail to capture the multifaceted nature of client work. Instead of fixating on numbers, organizations should focus on qualitative assessments, fostering a culture that values growth in skills, problem-solving, and long-term impact. Only then can we create meaningful and sustainable software development practices.