Overview
As you continue sending, receiving, and completing reviews, you might notice a few scores popping up in your grids. If you're reminded of school mid-yearlies or annuals, we hope you're feeling nostalgic right now, not stressed! 😅
These scores are out of 100 and aim to give a general sense of how the review went. For employees, it's a gauge of how their managers perceive their performance. For managers and HR admins, the score highlights how their employees self-evaluate, what’s actually going well, what might need attention, and what’s worth a gold star (and maybe a well-earned pay rise 💰😉).
The end goal? Everyone walks away from the review with a clearer, shared understanding of expectations and how well they're being met. 🤝
How Are Reviews Scored?
In Indigo’s Performance Management, you’ll come across three types of scores, all out of 100:
Question scores: The score for an individual question (illustrated in the sketch below).
Only numeric question types count here: Multiple Choice, Numeric Rating Scale, and the 🌟 Rating question.
Section scores: The total score for a section, calculated as an average of all numeric question scores within it.
Global performance score: The final score, likewise based on all section scores.
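To make that first score type concrete, here's a minimal sketch of how a numeric answer could be normalised into a question score out of 100. The linear scaling and the `question_score` helper are purely illustrative assumptions, not Indigo's documented mapping:

```python
def question_score(answer: float, max_points: float) -> float:
    """Hypothetical normalisation: scale a raw numeric answer to 0-100.

    Purely illustrative; Indigo's actual mapping may differ.
    """
    return (answer / max_points) * 100

print(question_score(4, 5))   # 🌟 Rating: 4 of 5 stars  -> 80.0
print(question_score(7, 10))  # Numeric Rating Scale: 7 of 10 -> 70.0
```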
Each review will generate two versions of each score: one from the manager’s evaluation and one from the employee’s self-evaluation. Only the manager can see the employee's self-evaluation, but both are stored for future reference, with scoring breakdowns clearly available on various screens throughout the review process... more on that in the next section.
At a glance, these scores look like simple grades or percentages that build on each other, and that’s exactly what they are for the most part. Scoring a performance review can be this straightforward.
However, to keep the process both fair and precise while allowing for flexibility, several other factors are considered behind the scenes. Each one plays a part in shaping the final outcome. These include:
⚖️ Score weighting, which is set by the template maker and determines how much influence each question has on its section score, and how much each section contributes to the global score.
📋 The number of questions in each section, since more questions in a section dilute the individual impact of each one.
👱 Who answers the question: the employee, the manager, or both. This determines how many questions contribute to each version of the score (a rough sketch of how these factors combine follows below).
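Putting those three factors together, here's a rough sketch of how a weighted rollup could work. The `weighted_score` helper, the weighted-average formula, and the zero-weight exclusion are assumptions for illustration, based on the behaviour described in this article; the real formulae are covered in the secret article mentioned below:

```python
def weighted_score(scores_and_weights):
    """Hypothetical weighted average of (score, weight) pairs on a 0-100 scale.

    Zero-weight items are excluded entirely, mirroring the 'N/A' behaviour
    described later in this article. Questions a respondent didn't answer
    would simply be left out of the list for that version of the score.
    """
    counted = [(score, weight) for score, weight in scores_and_weights if weight > 0]
    total_weight = sum(weight for _, weight in counted)
    return sum(score * weight for score, weight in counted) / total_weight

# A section with three questions, weighted 2, 1, and 0 (the last is excluded):
impact = weighted_score([(80, 2), (60, 1), (95, 0)])  # -> 73.33

# Section scores roll up into the global score the same way:
overall = weighted_score([(impact, 2), (40, 1)])      # -> 62.22
print(f"{impact:.2f}, {overall:.2f}")
```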
Having all of these considerations means there’s some sophisticated maths under the hood—think lots of fractions and formulae—but don’t jump ship 🚢 just yet! If maths isn’t your thing, you’ll be glad to know that Indigo takes care of all the calculations in the background.
As a manager or an employee, you won't need to do anything different when sending or answering reviews to benefit from scoring. If you're an HR admin or a manager building your own templates, you'll just need to:
Make sure there’s at least one numeric question in your template.
Enable scoring for the template.
Lay out your questions into sections.
Assign score weighting.
📖 Tip: Need a hand setting up your template for scoring and weighting?
If you're feeling calculation-curious 🤓 and wish to have a deeper look at the maths being crunched in the background, check out this secret article! 🤫
Viewing Scores
As mentioned earlier, scores in Performance Management come in three types: question, section, and global performance scores. Of the three, the global performance score is the one you’ll come across most often, appearing in different places depending on the current stage of the review:
Once a review reaches the Appraisal stage:
Once a review is Closed:
Once a review reaches the Appraisal stage, respondents also gain access to the View Review screen, your one-stop shop for seeing all three score types. What’s visible on this screen depends on who’s looking: employees see only the manager’s score, keeping the focus on key feedback, while managers see both scores side by side, making it easier to spot where opinions aligned and where they didn’t.
📋 Reminder: This screen is accessed:
Missing and Confidential Scores
Some questions and sections may display a blue ‘N/A’ instead of a score. This can happen for one of two reasons:
The question was a non-numeric type (e.g. short or long text).
The question or section's score weight was set to 0 (which tells Indigo to exclude it from scoring).
Not seeing a few questions you remember answering? They might be confidential, and therefore hidden by default. To view them, click the hat-and-spectacles (incognito) icon. These questions appear in yellow instead of the usual blue.
👍 Don't worry! Confidential questions still count toward the section and global scores even when hidden, so the numbers won’t change. What will change is the question numbering: for example, if the first question is confidential, showing it will push every other question one place down the list.
Detailed Score Views
You can click on any total (global performance) or section score to open a detailed scoring breakdown. If you're a manager, you can click on any of your scores or the employee’s to open this screen. Once it's open, you can switch between the two evaluations at any time using the tabs in the top-right corner.
📋 Note: Sections appear as grey tabs—closed by default—so just click any tab as needed to open the section and view all the questions and their scores.
In the image above, some question and section scores might look worryingly low, even dipping below 50%, which can feel like a failing grade. But don’t be misled: these percentages don’t represent a pass or fail. Instead, they show how many ‘points’ the respondent has earned in the section, which directly contribute to the global performance score.
Take the Impact section, for example. If it's weighted twice as much as the Growth section, Impact accounts for 66.67% of the final score and Growth for 33.33%, together making up a full 100%.
So when the employee scores 59.25% in Impact, it’s not 59.25 out of 100 but 59.25 out of 66.67 possible points for that section. Meanwhile, Growth shows a modest 26.55%, which looks low at first glance, but is actually 26.55 out of the maximum 33.33 points. A strong result, just from a lighter-weighted section.
So even if individual scores look low at a glance, they may represent strong contributions within their weighted context.
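If you'd like to see that arithmetic written out, here's the example above as a quick sketch. The 2:1 weighting and the two section scores come straight from the example; the simple "add up the points" rollup is our reading of how the display works:

```python
# Weights from the example: Impact counts twice as much as Growth.
impact_weight, growth_weight = 2, 1
total_weight = impact_weight + growth_weight

# Maximum points each section can contribute to the global score:
impact_max = impact_weight / total_weight * 100   # 66.67 points available
growth_max = growth_weight / total_weight * 100   # 33.33 points available

# The displayed section scores are points already earned:
impact_points = 59.25   # 59.25 / 66.67 ≈ 89% of Impact's potential
growth_points = 26.55   # 26.55 / 33.33 ≈ 80% of Growth's potential

print(impact_points + growth_points)  # global performance score: 85.8
```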
💡 You've guessed it! This same logic applies one level down, too; individual question scores contribute to the section total based on their own weights, so everything scales up fairly.
🤔 Still thinking about it?
Where did that 33% and 67% come from?
How do question and section weights actually factor in?
Why are all these formulae haunting your dreams?
It's not magic but maths, crunching quietly behind the scenes.