AI Alone Won’t Save Performance Reviews, but It Can Make Managers See More Clearly

Performance reviews are broken. Not just flawed or outdated, but fundamentally broken. Despite the
hours poured into self-assessments, calibrations and ratings, 95% of managers are dissatisfied with the
review process. We kill the annual review process only to resurrect it six months or a year later because
we need a way to tie pay to performance. Even more telling, fewer than one in five employees walk away
from a review feeling inspired to improve. If performance management is meant to fuel performance, the
system is clearly falling short.
Now, as with everything else, we're asking: Can AI help fix it?
I think it can, but only if we stop expecting AI to do what humans never could (give perfectly unbiased
feedback) and start asking it to reduce the biases humans already exhibit in the process today.
For the past two decades, I've seen what happens when performance systems fail. A brilliant engineer I
knew (let's call her Tracy) helped half of her team deliver on a critical launch, but because she stayed
quiet during meetings, her manager only gave her a lukewarm review. At the other extreme, CHROs tell me
about people who got promoted because they were great at managing up and at hand-picking their peers for
360 reviews, only to be managed out soon after the promotion was announced because of major concerns
the review process was never built to surface.
We’ve normalized these inconsistencies, especially for the quiet, high-performing, high-potential
contributors. Some folks call them “shy-pos.” They don’t self-promote to their peers, but they are often the
silent glue holding teams together. And they’re often the first ones missed in traditional review cycles,
precisely because they aren’t optimizing for visibility.
AI can bring objectivity and actually reduce human bias
There’s a lot of concern that AI increases bias, and in some areas, like recruiting, that’s absolutely true.
Algorithms trained on bad hiring data replicate the same exclusionary patterns we’ve carried for decades.
But performance management starts with a different kind of problem: it’s already deeply biased.
We’ve built performance reviews on top of multiple compounding biases:
● Idiosyncratic rater bias: Two managers rating the same employee will often come to very
different conclusions based on their own reference points. One manager’s “exceeds
expectations” is another’s “meets expectations.”
● Selection bias in 360s: Employees pick who reviews them. The people most likely to give
constructive or critical feedback often aren’t even invited to participate.
● Recency bias: A single great project, or single stumble, in the last quarter can overshadow years
of work.
● Halo and horn effect: One trait (good or bad) gets generalized across all areas of performance:
“they’re friendly, so they must also be a great leader.”
● Proximity bias: Those who work in closer physical or social proximity to leadership
disproportionately receive higher ratings, better feedback, and faster promotions.
● Gender and race-coded language: Women still hear “abrasive” while men get “assertive.” Black
employees are called “intimidating” or “not a cultural fit” far more often. Quiet, introverted workers
get labeled “not a leader” regardless of their influence.
Where AI actually can help, if grounded in better data
AI can help managers see more clearly if it's grounded in better inputs that increase objectivity and
reduce recency bias. When AI systems are trained on richer signals, like Organizational Network Analysis
(ONA) data that captures who people go to for help, or actual project data that reflects meaningful
contributions, they can reduce some of the human distortions that managers fall back on.
Today, most managers (especially newer ones) are overwhelmed and improvising. Feedback becomes
either overly generic (“team player”) or subtly coded (“aggressive,” “emotional,” or “quiet”). AI can support
here, not by replacing managers, but by starting with data from actual work performed across systems
like Jira, Asana, Salesforce, or Slack.
Imagine a world where AI helps draft the first version of a review based on someone’s full body of work:
who collaborated, who supported, who led complex problem-solving, and how teammates organically
relied on one another. The manager still adds context, nuance, and developmental feedback but starts
with evidence rather than memory.
This doesn’t eliminate the manager’s role; it makes their judgment better informed.
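As a sketch of what "starting with evidence rather than memory" could look like, here is a minimal example that groups raw work events by person and source. The event schema, field names, and sample records are all hypothetical; real Jira, Asana, or Slack integrations would involve authenticated APIs and far richer data:

```python
from collections import defaultdict

def build_evidence_summary(events):
    """Group raw work events by person, then by source system, so a
    review draft can start from evidence instead of a manager's memory.

    Each event is a dict like:
      {"person": "tracy", "source": "jira", "action": "closed critical launch ticket"}
    (Illustrative fields only, not a real Jira/Slack schema.)
    """
    summary = defaultdict(lambda: defaultdict(list))
    for e in events:
        summary[e["person"]][e["source"]].append(e["action"])
    # Convert nested defaultdicts to plain dicts for a clean return value.
    return {person: dict(by_source) for person, by_source in summary.items()}

events = [
    {"person": "tracy", "source": "jira", "action": "closed critical launch ticket"},
    {"person": "tracy", "source": "slack", "action": "unblocked a teammate on deploy"},
    {"person": "sam", "source": "jira", "action": "filed the release checklist"},
]

evidence = build_evidence_summary(events)
```

A drafting step would then turn each person's evidence into prose for the manager to edit, keeping the human judgment while replacing recall with a record.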
The cycle we keep repeating, but never fix
For decades, companies have swung between performance review models, from formal annual ratings to
continuous feedback, hoping each time that the next approach will finally fix the system. But none of
these models solve the core problem: the data itself has always been limited to a single manager’s view,
or a hand-picked set of peers.
ONA has rarely been applied to performance reviews at all, and that’s the missed opportunity. Instead of
asking a few people to give feedback, ONA broadens the lens to capture how someone is seen across
the full network they work in.
When used well, ONA is one of the most powerful ways to understand how work actually gets done. Not
who shouts the loudest, but who others trust, rely on, and learn from. In one of our earliest ONA cycles at
Confirm, we found that half the names flagged as top contributors weren’t even on leadership’s radar.
Imagine making company decisions like promotions or layoffs without that visibility. And yet, because
ONA is hard to implement and easy to misuse, it's repeatedly sidelined.
We need to reframe our thinking. Traditional 360s were built on shaky ground and are often
designed around performance theater: pick a few peers, gather glowing feedback and repeat. But active
ONA flips that, asking open-ended questions to a broader group to identify who needs more support or
who employees turn to for advice.
ONA isn’t the popularity contest that peer 360s are; it’s an impact map.
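To make "impact map" concrete, here is a toy sketch of the active-ONA idea: ask everyone an open-ended question like "Who do you turn to when you're stuck?" and count how often each person is named by others, so no one can boost their own score. The names and responses are invented for illustration, and real ONA tooling weighs far more signals than raw in-degree:

```python
from collections import Counter

def advice_in_degree(responses):
    """responses: mapping from respondent -> list of colleagues they turn
    to for advice. Returns how often each person is named by *others*."""
    counts = Counter()
    for respondent, named in responses.items():
        for person in named:
            if person != respondent:  # self-nominations don't count
                counts[person] += 1
    return counts

# Illustrative survey answers to "Who do you turn to when you're stuck?"
responses = {
    "ana":   ["tracy", "raj"],
    "raj":   ["tracy"],
    "li":    ["tracy", "ana"],
    "tracy": ["raj"],
}

counts = advice_in_degree(responses)
# tracy surfaces at the top without ever nominating herself.
```

Because the reviewers aren't hand-picked by the person being reviewed, the quiet contributor's influence shows up in the network whether or not she speaks up in meetings.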
Can the ‘AI middle manager’ save money and good employees?
Mark Zuckerberg once joked about avoiding “managers managing managers, managing managers,
managing managers, managing the people who are doing the work.”
He’s not wrong.
A lot of performance management today feels like layers of process stacked on top of gut instinct, and
none of it is anchored in how work actually happens.
We hear the same frustrations from HR leaders: feedback is too sparse, too inconsistent, and too
disconnected from day-to-day work. Promotions get handed out based on perception and proximity, not
contribution. Compensation gets spread thinly across entire teams, regardless of who’s really driving
outcomes. Layoffs get justified by flawed ratings, or worse, no real data at all.
The result? We end up rewarding people who are good at managing up while missing the quiet
contributors actually holding the work together. It’s not just a people problem; it’s a math problem. For
knowledge workers, talent doesn’t follow a neat bell curve. It follows a power law, where a small
percentage of employees deliver a disproportionate share of the impact. Laszlo Bock called this out years
ago when he argued that companies should “pay unfairly,” because 10x contributors really do exist. And
studies like Maynard Goff’s continue to show how unevenly work and value are distributed inside
organizations.
The tragedy is that these quiet contributors, the ones who step up, mentor others, and solve hard
problems, often don't ask for credit. Traditional systems don't surface them. But AI, when grounded in
better data, can help. It won't replace managers, but it can give them clearer eyes. It can ease feedback
fatigue, support newer managers who haven't yet built deep judgment, and help ensure that the people
who make work actually work aren't overlooked again. ONA helps quiet contributors shine while also
uncovering toxic behavior, and together, AI and ONA surface both proactively.
I’m not hopeful about AI because it’s trendy. I’m hopeful because I’ve seen what happens when we finally
measure what matters. AI won’t fix broken systems on its own, but it can help us rebuild them, moving
from performance management to true performance mapping. From manager biases to more objective
data. From perception to reality. From who we think is driving impact to who actually is.
The post AI Alone Won’t Save Performance Reviews, but It Can Make Managers See More Clearly appeared first on TecHR.