Lattice provides detailed reporting on performance review cycles that managers can view for their teams or departments. As soon as reviewers submit their responses, analytics for those answers become available.
Before you start
- Review data will be different based on visibility. Managers of managers can view their direct and indirect reports' review data, while managers can view only their direct team's data.
- Review result analytics only apply to ratings, competencies, goal questions, and weighted scores.
- Multiple choice and multiple select questions are not included in analytics. Super admins can view sentiment scoring for open-ended questions.
- Fields in review results are frozen at the time a review cycle is launched. For example, an employee who was moved to a different department during the review cycle will appear under their original department.
Navigate to review results
- Navigate to the Reporting > Reviews section.
- Find the review cycle and select View Progress.
- Select View results of cycle.
Choosing your questions
- Choose questions that you would like to view the results of by selecting Add.
- From the list of questions, select the question and then the review directions you want to visualize.
- Once you have selected all questions and directions, select Done.
Group and filter results
Filter your data
Next, filter and group your results by one of the default fields, including tenure, manager, department, and review group.
Note: Only admins can filter by custom fields.
Managers can filter based on department or manager. However, it's recommended that you group by Individual when possible to get a more relevant dataset.
You can stack filters for different fields to get to the exact cut of data you want. For example, stacking Tenure = 0-3 months and Department = Engineering will show responses for new employees in the engineering department.
Explore and analyze results
You can examine your results with a bar graph, 9-box, or heatmap. You can also download your visualizations as a PNG file, so that they can be easily used in presentations or documents.
Datapoint bar graph
The bar graph provides a visual for the dataset you selected. You can change the view by adjusting the following:
- Timeframe: View data for the last 3, 6, or 12 months, or set a custom timeframe.
- Bar, Segmented bar, or Line: View your dataset using different graph types.
Employee table
The employee table gives more detail on how each employee was rated for each question and direction added.
Distribution
The distribution graph is a histogram that shows how often each value appears in your dataset, letting you see whether responses to your questions follow a bell curve. This is a great way to check whether scoring is too lenient or too harsh by looking for an excess of 1s or 5s in the distribution.
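If you prefer to sanity-check the shape of a distribution outside of Lattice, here is a minimal sketch assuming you have exported the 1-5 ratings for a single question; the scores list and the 50% threshold below are hypothetical, not a Lattice rule:

```python
from collections import Counter

# Hypothetical 1-5 ratings exported for a single review question.
scores = [3, 4, 4, 5, 5, 5, 5, 4, 5, 5, 5, 4]

distribution = Counter(scores)
total = len(scores)

# Print the histogram: how often each rating appears.
for value in range(1, 6):
    count = distribution.get(value, 0)
    print(f"{value}: {count} responses ({count / total:.0%})")

# A large share of top (or bottom) ratings hints at lenient (or harsh) scoring.
if distribution.get(5, 0) / total > 0.5:
    print("More than half of the responses are 5s: scoring may be too lenient.")
```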
Average score
The average score graph shows the average score each employee received across the review cycle. Use the Compare across dropdown to view average scores broken down by default and custom fields.
9-box view
Best for comparing two different questions
The 9-box view compares how specific groups responded to two different questions. For example, group by Manager to see how each manager's team scored on average for the two questions you selected, then hover over each dot to see the exact averages for that team.
Heatmap view
The heatmap view allows you to find areas of improvement for each group using color-coding. The lowest score (or largest negative delta) appears in the deepest red, and the highest score (or largest positive delta) in the deepest green.
- Actual score: The average response for each question for each group.
- Delta: How far each group's average sits from the team average (see the sketch below).
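As a rough illustration of how these two views relate, here is a minimal sketch using hypothetical group averages; it assumes the team average is a simple mean of the group averages, which is a simplification rather than Lattice's documented calculation:

```python
# Hypothetical per-group average scores for one question (the "Actual score" view).
group_scores = {
    "Engineering": 4.2,
    "Design": 3.6,
    "Marketing": 3.9,
}

# Team average, assumed here to be a simple mean of the group averages.
team_average = sum(group_scores.values()) / len(group_scores)

# The "Delta" view shows how far each group sits from that team average.
for group, actual in group_scores.items():
    delta = actual - team_average
    print(f"{group}: actual {actual:.1f}, delta {delta:+.2f}")
```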
From here, you can continue to apply filters to cut your data even further to help find the specific area of improvement.
You cannot filter by the field you are currently comparing on. For example, if you are grouping by department, you cannot apply a Department = Engineering filter, as doing so would remove the comparison.
A note on question merging behavior in review analytics
When a review cycle includes multiple templates that contain the same underlying question, Lattice consolidates those questions in review analytics as follows:
- For regular and goal questions: If questions use the same question text (case-insensitive) across templates, they are aggregated in the results calculations as a single question. Questions with different text are treated as separate questions across templates.
- For competency-attached questions: When review questions from different templates reference the same competency across reviewees (reviewees are assigned to different tracks that share a competency), those competency questions are aggregated in the cycle analytics. This happens regardless of whether the templates use different question text or rating scales.
- Note: If the question configurations differ between these templates, analytics may show discrepancies or unexpected results.
- Example: If two templates have questions referencing the same competency but use different rating scales (e.g., 1-5 vs. 1-10), analytics will aggregate both sets of answers instead of displaying them as separate questions. If a base rating scale is displayed in the UX, it will be the one from the template that was created first. This can produce metrics like "avg score of 5.5 out of 5", where the template using the 1-5 scale was created first (see the sketch after this list).
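As a rough illustration of why this happens, the sketch below aggregates hypothetical answers keyed on case-insensitive question text. The question text, scores, template names, and the simple-mean calculation are assumptions for illustration, not Lattice's actual implementation:

```python
from collections import defaultdict

# Hypothetical answers from two templates that reference the same competency.
# Template A (created first) uses a 1-5 scale; Template B uses a 1-10 scale.
answers = [
    {"question": "Communication", "template": "A", "score": 4},
    {"question": "Communication", "template": "A", "score": 5},
    {"question": "communication", "template": "B", "score": 8},   # same text, different case
    {"question": "Communication", "template": "B", "score": 10},
]

# Aggregation keyed on case-insensitive question text (a simplification of the
# merging behavior described above).
merged = defaultdict(list)
for answer in answers:
    merged[answer["question"].lower()].append(answer["score"])

for question, scores in merged.items():
    average = sum(scores) / len(scores)
    # The displayed base scale comes from the template created first (1-5 here),
    # so the aggregated average can exceed the visible maximum.
    print(f"{question}: avg score {average:.1f} out of 5")
```

Running this prints an average of 6.8 "out of 5", the same kind of over-scale metric described in the example above.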