Lattice provides detailed reporting on performance review cycles that managers can view for their teams or departments. As soon as reviewers submit their responses, their answers are reflected in the analytics.
Table of contents
- Before you start
- Navigate to review results
- Group and filter results
- Bar graph view
- 9-box view
- Heatmap view
- Distribution view
Before you start
- Review data varies based on visibility: managers of managers can view their direct and indirect reports' review data, while managers can view only their direct team's data.
- Review result analytics only apply to ratings, competencies, goal questions, and weighted scores.
- Multiple choice and multiple select questions are not included in analytics. Super admins can view sentiment scoring for open-ended questions.
- Fields in review results are frozen at the time a review cycle is launched. For example, an employee who was moved to a different department during the review cycle will appear under their original department.
Navigate to review results
- Navigate to the Reporting page on the discovery navigation.
- Enter the Reviews section.
- Find the review cycle and select View Progress.
- Click View results of cycle.
Group and filter results
Select Comparison
The first step to viewing results is to choose your comparison questions. You can select up to 10 questions to compare.
- Select only two comparison questions to view data via the 9-box view.
- Select up to four comparison questions to view data via the distribution view.
Filter Your Data
Next, filter and group your results by one of the default fields, including tenure, manager, department, and review group.
Note: Only admins can filter by custom fields.
Managers can filter based on department or manager. However, it's recommended that you group by Individual when possible to get a more relevant data set.
You can stack filters for different fields to get to the exact cut of data you want. For example, stacking Tenure = 0-3 months and Department = Engineering will show responses for new employees in the engineering department.
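Stacked filters combine as a logical AND. The sketch below illustrates this with a hypothetical in-memory data set and made-up field names; it is not Lattice's API, just a way to picture how the filters narrow the results:

```python
# Illustrative only: hypothetical employee records, not Lattice data.
employees = [
    {"name": "A", "tenure_months": 2, "department": "Engineering"},
    {"name": "B", "tenure_months": 14, "department": "Engineering"},
    {"name": "C", "tenure_months": 1, "department": "Sales"},
]

# Tenure = 0-3 months AND Department = Engineering
filtered = [
    e for e in employees
    if 0 <= e["tenure_months"] <= 3 and e["department"] == "Engineering"
]

print([e["name"] for e in filtered])  # ['A']
```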
Explore and analyze results
You can examine your results with a bar graph, 9-box, or heatmap. You can also download your visualizations as PNG files so they can be easily used in presentations or documents.
Bar Graph View
Best for comparing data for more than two questions
The Bar Graph view lets you see how a group of responders is doing across all questions. Each group is noted by a different colored bar. Hover over each bar to see a group's average rating for the question.
Note: You can export the table view for the Bar graph as a CSV by clicking Export CSV at the top of the table.
Add more groups to the dataset
Some groupings may include a large number of individual options. Lattice will default to showing the first eight options. To add more groups:
- Click the + sign.
- Select any other groups to include in the dataset.
Actual vs. Normalized score
While viewing the bar graph, you can show the Actual score or the Normalized score.
- Actual score: This is the actual average response to each rating question. For example, if one rating question is out of 3 and another is out of 5, hovering shows each question's average rating out of 3 or out of 5.
- Normalized score: This converts every score to a percentage of 100 so that all questions are viewed on the same scale, allowing a fairer comparison between questions with different rating scales. For example, a rating of 3 out of 3 shows as 100%, while 3 out of 5 shows as 60% (see the sketch below).
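To make the conversion concrete, here is a minimal sketch of the normalization math. The question names and scales are hypothetical examples, not taken from Lattice's implementation:

```python
# Illustrative only: hypothetical questions and rating scales.

def normalized_score(actual_score: float, scale_max: int) -> float:
    """Convert a raw rating into a percentage of its question's scale."""
    return round(actual_score / scale_max * 100, 1)

ratings = [
    {"question": "Meets expectations", "actual": 3, "scale_max": 3},
    {"question": "Collaboration", "actual": 3, "scale_max": 5},
]

for r in ratings:
    pct = normalized_score(r["actual"], r["scale_max"])
    print(f'{r["question"]}: {r["actual"]} out of {r["scale_max"]} -> {pct}%')
# Meets expectations: 3 out of 3 -> 100.0%
# Collaboration: 3 out of 5 -> 60.0%
```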
9-Box View
Best for comparing two different questions
The 9-box view compares how specific groups responded to two different questions. For example, by grouping by Manager, you can see how each manager's team scored on average across the two comparison questions you selected; hover over each dot to see the exact averages for that team. A sketch of the underlying aggregation follows.
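The sketch below illustrates the kind of aggregation behind this view: responses are grouped (here by manager), each group's averages for the two comparison questions are computed, and each group becomes one dot. The manager names and questions are made up; this is a conceptual illustration, not Lattice's implementation:

```python
from collections import defaultdict

# Illustrative only: hypothetical managers and comparison questions.
responses = [
    {"manager": "Avery", "performance": 4, "potential": 5},
    {"manager": "Avery", "performance": 3, "potential": 4},
    {"manager": "Blake", "performance": 5, "potential": 3},
]

by_manager = defaultdict(lambda: {"performance": [], "potential": []})
for r in responses:
    by_manager[r["manager"]]["performance"].append(r["performance"])
    by_manager[r["manager"]]["potential"].append(r["potential"])

# Each manager's team becomes one dot: (avg of question 1, avg of question 2).
for manager, scores in by_manager.items():
    x = sum(scores["performance"]) / len(scores["performance"])
    y = sum(scores["potential"]) / len(scores["potential"])
    print(f"{manager}: ({x:.1f}, {y:.1f})")
# Avery: (3.5, 4.5)
# Blake: (5.0, 3.0)
```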
Heatmap View
Best for comparing different groups of responders against each other across more than one question
The heatmap view allows you to find areas of improvement for each group using color-coding. The lowest score (or most significant negative delta) will be the brightest red, and the highest score (or most significant positive delta) will be the deepest green.
- Actual score: This is the actual average response for each question for each group.
- Delta: This is the difference between each group's average score and the overall team average (see the sketch below).
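For a concrete picture of the delta calculation described above, here is a minimal sketch using made-up group averages (the group names and numbers are hypothetical):

```python
# Illustrative only: delta = group average - overall (team) average.
group_averages = {"Engineering": 4.2, "Design": 3.6, "Sales": 3.9}

overall_avg = sum(group_averages.values()) / len(group_averages)

for group, avg in group_averages.items():
    delta = round(avg - overall_avg, 2)
    print(f"{group}: actual={avg}, delta={delta:+}")
# Engineering: actual=4.2, delta=+0.3
# Design: actual=3.6, delta=-0.3
# Sales: actual=3.9, delta=+0.0
```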
From here, you can continue to apply filters to cut your data even further to help find the specific area of improvement. For example, if you find that a department is scoring lower on average than the other departments in your org, you can continue to cut the data deeper by grouping by Manager to see if a specific team is having trouble.
It is not possible to filter on the field you are currently grouping by. For example, if you are grouping by department, you cannot apply a Department = Engineering filter, as it would remove the comparison.
Note: You can export the heatmap as a CSV by clicking Export CSV within the view.
Distribution View
Best for understanding the distribution of pre- and post-calibrated rating questions
The distribution graph is a histogram that shows how often each value appears in your data set, letting you see the bell curve for your questions. This is a great way to check whether scoring has been too lenient or too harsh by looking for an excess of 1s or 5s in the distribution. Note: if you did not input a post-calibrated rating for an employee, it will not appear in the post-calibrated rating section, even if it is the same as the pre-calibrated rating.
Use filters to cut the data even further. For example, below we have filtered by the Customer Success department to view their bell curve before and after calibration. Notice that there is no bell curve pre-calibration and that scoring was lenient; comparing with the post-calibration scores for Behavior Rating shows that calibration has worked.
Actual vs. Normalized count
When viewing the distribution, you can choose how the counts are displayed: Actual or Normalized (illustrated in the sketch below).
- Actual count: the number of times each score was submitted
- Normalized count: the percentage of all submissions that each score represents
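The sketch below shows the difference between the two counts for a hypothetical set of submitted scores; it illustrates the concept rather than Lattice's implementation:

```python
from collections import Counter

# Illustrative only: hypothetical submitted ratings on a 1-5 scale.
submissions = [3, 4, 4, 5, 3, 4, 2, 4, 5, 4]

actual_counts = Counter(submissions)
total = len(submissions)

for score in sorted(actual_counts):
    actual = actual_counts[score]
    normalized = actual / total * 100
    print(f"score {score}: actual={actual}, normalized={normalized:.0f}%")
# score 2: actual=1, normalized=10%
# score 3: actual=2, normalized=20%
# score 4: actual=5, normalized=50%
# score 5: actual=2, normalized=20%
```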
A note on question merging behavior in review analytics
In the scenario where a review cycle has multiple templates with the same underlying question, Lattice consolidates those questions in review analytics with the following behavior (a sketch of the text-matching rule appears after this list):
- For regular and goals questions: If questions use the same question text (case insensitive) between templates, they will be aggregated in the results calculations as the same question. Questions with different text are naturally treated as different questions between templates.
- For competency-attached questions: When a review question from different templates references the same competency across reviewees (for example, reviewees are assigned to different tracks that share a competency), these competency questions are aggregated in the cycle analytics. This happens regardless of whether the templates use different question text or rating scales.
- Note: If the question configurations differ between these templates, analytics may show discrepancies or unexpected results.
- Example: If two templates have questions referencing the same competency but use different rating scales (e.g., 1-5 vs. 1-10), the analytics will aggregate the answers from both sets of questions instead of displaying them as separate questions. If a base rating scale is displayed in the UX, it will be the one from the template that was created first. This may result in metrics like "avg score of 5.5 out of 5" when the template using the 5-point scale was created first.
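To illustrate the text-matching rule for regular and goals questions, here is a minimal sketch of case-insensitive merging. The templates, question text, and scores are hypothetical; this is a conceptual illustration, not Lattice's implementation:

```python
from collections import defaultdict

# Illustrative only: hypothetical templates, question text, and scores.
responses = [
    {"template": "Manager review", "question": "Rates communication skills", "score": 4},
    {"template": "Peer review", "question": "rates communication skills", "score": 5},
    {"template": "Peer review", "question": "Rates ownership", "score": 3},
]

merged = defaultdict(list)
for r in responses:
    # Same question text (case insensitive) across templates -> same bucket.
    merged[r["question"].lower()].append(r["score"])

for question, scores in merged.items():
    print(f"{question}: avg={sum(scores) / len(scores):.1f} from {len(scores)} response(s)")
# rates communication skills: avg=4.5 from 2 response(s)
# rates ownership: avg=3.0 from 1 response(s)
```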