As soon as reviewers begin submitting reviews, admins can start analyzing review data in the review cycle's Results tab.
Learn more about analyzing review analytics in our Administer Reviews and Leveraging Your Results webinar.
Before you start
- Analytics for multiple choice and multiple select questions are unavailable within the Results tab. To view responses to these questions, export review responses. Learn more in Review Cycle Exports.
- The Results tab is only available for org chart and project-based review cycles. Reviews created by automated rules do not have a Results tab and cannot be analyzed.
- Fields in review results are frozen at the time the review cycle was launched. For example, an employee who moved to a different department during the review cycle will appear under their original department.
View review results
- Navigate to Admin > Reviews > Performance reviews.
- Click to select the review cycle.
- Navigate to the Results tab.
- Click Select comparisons to choose up to ten questions you wish to analyze.
Filter your data
Use the filter bar at the top of the Results page to filter your data set by fields or review group (self, peer, upward, downward).
Adding multiple fields requires reviewees to meet all of the conditions, whereas applying multiple values within a single field shows reviewees who meet any of them. For example, filtering for Gender = Female and Department = Engineering + Product will show responses about all women in the Engineering and Product departments.
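As a rough sketch of that combination logic (the field names and data below are illustrative, not Lattice's internals):

```python
# Illustrative only: different fields combine with AND; multiple values
# within a single field combine with OR.
reviewees = [
    {"name": "Priya", "gender": "Female", "department": "Engineering"},
    {"name": "Dana",  "gender": "Female", "department": "Product"},
    {"name": "Sam",   "gender": "Male",   "department": "Engineering"},
    {"name": "Lee",   "gender": "Female", "department": "Sales"},
]

filters = {
    "gender": {"Female"},                      # one value
    "department": {"Engineering", "Product"},  # two values -> OR within the field
}

# A reviewee matches when every filtered field (AND) holds one of its
# selected values (OR).
matches = [
    r["name"]
    for r in reviewees
    if all(r[field] in values for field, values in filters.items())
]
print(matches)  # ['Priya', 'Dana']
```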
Group your data
Adjust how your data is presented by selecting Group by and choosing one of the following options:
- Individual
- Manager
- Gender
- Department
- Review group
- Custom attribute
The Group by filter groups review data by reviewee. Consider an example where an admin groups by Manager to view the analytics of a cycle that contains only upward feedback.
Example: At Degree, Inc., Stephen manages Ami, who manages Adnan, who manages no one.
For this review cycle, the admin will see a grouping for Stephen, who manages Ami, because Ami received upward responses in her review (from Adnan). However, there would be no grouping for Ami because her direct report, Adnan, did not receive any upward reviews (Adnan has no direct reports).
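A minimal sketch of that grouping logic, using the example above (the data structures here are illustrative, not Lattice's internals):

```python
# Illustrative only: a Manager grouping appears only when that manager's
# direct report received at least one upward response.
managers = {"Ami": "Stephen", "Adnan": "Ami"}  # reviewee -> manager

# Upward responses collected in the cycle: Adnan reviewed Ami.
upward_responses = [{"reviewee": "Ami", "reviewer": "Adnan"}]

groups = {}
for response in upward_responses:
    manager = managers.get(response["reviewee"])
    if manager:
        groups.setdefault(manager, []).append(response)

print(list(groups))  # ['Stephen'] -- no grouping for Ami, because Adnan
                     # received no upward reviews (he has no direct reports)
```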
Lattice defaults to displaying the first 8 options for any grouping. To add more results to the visualization, click the + button beneath the chart and check the boxes for the desired additions.
Explore your results
Results can be visualized with a bar graph, 9-box, heatmap, or distribution view. You can also download your visualizations as a PNG file, so that they can be easily used in presentations or documents.
Bar graph
The bar graph view shows results for a group of employees across multiple questions. This is helpful for comparisons of more than two questions.
The colors in the bars correspond to the icons below the chart. Hover over a bar to see the average rating for that group/individual.
Actual vs. normalized score
The bar graph gives the option to show the actual score or the normalized score.
- Actual Score: the actual average response to each question. For example, if one rating question is out of 3 and another is out of 5, hovering shows an average rating out of 3 or out of 5, respectively.
- Normalized Score: the score if all questions were placed on the same scale, expressed as a percentage out of 100. This is helpful when comparing responses for rating questions with different response scales (see the sketch below).
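As a worked illustration, assume normalization maps each average linearly onto 0-100 by dividing by the question's scale maximum (a simplifying assumption; Lattice's exact formula may differ):

```python
# Illustrative only: comparing rating questions with different scales.
# Assumption: normalized score = (average score / scale maximum) * 100.
def normalized(avg_score: float, scale_max: int) -> float:
    return avg_score / scale_max * 100

print(normalized(2.4, 3))  # 80.0 -> a 2.4 average on a question out of 3
print(normalized(4.0, 5))  # 80.0 -> the same normalized score as a 4.0 average out of 5
```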
9-Box
The 9-box view compares the results of two different questions across a group by placing them on the x- and y-axes. Each data point within the view shows where the group's average falls for each question. Hover over each point to see the average scores for the group.
Heatmap
The Heatmap view compares different groups against each other across multiple questions. It provides the most data in a single visualization. You can apply filters to cut your data even further to compare specific employee groupings.
Distribution
The Distribution view helps evaluate the effectiveness of the questions you're asking in your reviews by showing how many employees received each possible rating for up to four comparisons you've selected.
Apply filters for fine-grained insights about the performance of specific groups of employees.
Actual vs. normalized count
The Distribution view gives the option to view an actual count or a normalized count for the selected questions (see the sketch after this list):
- Actual Count: the number of times a rating was chosen.
- Normalized Count: the percentage of times a rating was chosen.
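A short sketch of the difference, using an illustrative set of responses to a single 1-5 rating question:

```python
# Illustrative only: actual vs. normalized counts for one rating question.
from collections import Counter

ratings = [5, 4, 4, 3, 5, 5, 2, 4]  # responses to a question rated 1-5
actual = Counter(ratings)           # how many times each rating was chosen
normalized = {rating: count / len(ratings) * 100 for rating, count in actual.items()}

print(actual[4])      # 3    -> the rating 4 was chosen 3 times
print(normalized[4])  # 37.5 -> i.e., 37.5% of all responses
```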
A note on question merging behavior in review analytics
When a review cycle has multiple templates with the same underlying question, Lattice consolidates those questions in review analytics as follows:
- For regular and goals questions: If questions use the same question text (case insensitive) between templates, they are aggregated in the results calculations as the same question. Questions with different text are treated as separate questions between templates.
- For competency-attached questions: When a review question from different templates references the same competency across reviewees (reviewees are assigned to different tracks that share a competency), these competency questions are aggregated in the cycle analytics. This happens regardless of whether the templates have different question text or rating scales.
- Note: If the question configurations differ between these templates, analytics may show discrepancies or unexpected results.
- Example: If two templates have questions referencing the same competency but use different rating scales (e.g., 1-5 vs. 1-10), analytics will aggregate the answers from both sets of questions instead of displaying them as separate questions. If a base rating scale is displayed in the UX, it will be the one from the template that was created first. This can produce metrics like "avg score of 5.5 out of 5" when the template using the 1-5 scale was created first (see the sketch below).
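A minimal sketch of the arithmetic behind that example (the answer data is illustrative):

```python
# Illustrative only: why pooling mixed scales can yield "5.5 out of 5".
answers_scale_5 = [4, 5]   # template A: 1-5 scale, created first
answers_scale_10 = [7, 6]  # template B: 1-10 scale, same competency

# Aggregation pools the raw answers instead of treating the questions separately.
pooled = answers_scale_5 + answers_scale_10
average = sum(pooled) / len(pooled)
print(average)  # 5.5 -- displayed against template A's 1-5 base scale
```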