Get a high-level overview of review reporting for your organization
As a manager or manager of managers, you need to make data-driven decisions for your direct and indirect reports. Lattice supports this with review cycle reporting for your organization. As soon as reviewers start submitting their responses, analytics for those responses become available.
Table of contents
- Before you start
- Navigate to review results
- Group and filter results
- Bar graph view
- 9-box view
- Heatmap view
- Distribution view
Before you start
The review data you can see depends on your visibility: managers of managers can view their direct and indirect reports' review data, while managers can view only their direct team's data. Keep in mind that review result analytics apply only to rating questions and scored attributes; multiple choice and multiple select questions are not included in analytics at this time. Your account admin can view sentiment scoring for open-ended questions.
Navigate to review results
Step 1: Navigate to the Reporting page on the discovery navigation.
Step 2: Enter the Reviews section.
Step 3: Find the review cycle and select View Progress.
Step 4: Enter the Results tab.
Group and filter results
The first step to viewing results is to choose your comparison questions or scores. You can select up to 10 rating questions or scored attributes. If you want to view data via the 9-box view, select only 2 comparisons. If you wish to view data via the distribution view, select up to 4 comparisons.
Filter your data
Next, filter and group your results by one of our default fields, including tenure, manager, department, and review group.
Although managers can filter based on department or manager, we recommend grouping by Individual when possible to get a more relevant data set.
You can stack filters for different fields on top of each other to get to the exact cut of data that you want. For example, stacking Tenure = 0-3 months and Department = Engineering will show responses for new employees in the engineering department.
Please note, only admins can filter by custom attributes.
Explore and analyze results
You can examine your results using a bar graph, 9-box, heatmap, or distribution view.
Bar graph view
Best for comparing data for more than two questions
The Bar Graph view allows you to see how a group of responders is doing across all questions.
Each group is shown as a different colored bar so you can easily visualize how each group responded. Hover over each bar to see a group's average rating for a question or score (e.g., the R&D team scoring an average of 2.93/5 on this rating question).
Add additional groups to the dataset
Some groupings may include a large number of individual options. Lattice will default to showing the first 8 options. You can add more groups by clicking on the + sign and selecting any other groups to include in the dataset.
Actual vs. Normalized score
While viewing the bar graph, you can show the Actual score or the Normalized score.
- Actual score: the actual average response to each rating question. For example, if you have one rating question out of 3 and another out of 5, you will see an average score out of 3 or out of 5 when hovering.
- Normalized score: shows all questions on the same scale, as a percent of 100. A normalized score is helpful when comparing rating questions or scored attributes measured on different scales. For example, if you have one rating question out of 3 and a scored attribute out of 5, the scored attribute will tend to have a higher raw score and the rating question will be under-represented. Normalizing turns every score into a percent, allowing you to compare attributes with different scales.
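The normalization described above can be sketched as a percent-of-maximum calculation. This is a minimal illustration, not Lattice's implementation; the function name and example scores are assumptions.

```python
# Illustrative sketch: normalizing scores from different scales to a
# percent of 100, so they can be compared side by side.

def normalize(actual_score, scale_max):
    """Return the score as a percent of its scale's maximum."""
    return actual_score / scale_max * 100

# A 2.4/3 rating question and a 4.0/5 scored attribute look far apart
# as raw averages, but normalize to the same percentage.
rating_question = round(normalize(2.4, 3), 1)   # 80.0
scored_attribute = round(normalize(4.0, 5), 1)  # 80.0
```

This is why the raw bar heights for questions on different scales are not directly comparable, while the normalized view puts every bar on the same 0-100 axis.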
9-box view
Best for comparing two different questions
The 9-box view is the best choice when you wish to compare how specific groups responded to two different questions. For example, by grouping by Manager and hovering over each dot, you can see each team's average score on the two comparisons you selected.
Heatmap view
Best for comparing different groups of responders against each other across more than one question
The heatmap view allows you to quickly find areas of improvement for each group using color-coding. The lowest score (or largest negative delta) appears in the brightest red, and the highest score (or largest positive delta) in the deepest green.
- Actual score: This is the actual average response for each question for each group.
- Delta: This is the difference in scores for each group from the team average.
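The Delta calculation above can be sketched as each group's distance from the overall team average. The group names and averages below are made-up example data, not Lattice output.

```python
# Illustrative sketch of the heatmap "Delta": each group's difference
# from the team average. Averages below are hypothetical.
group_averages = {"Engineering": 3.2, "Design": 4.1, "Sales": 3.8}

# Team average across all groups shown.
team_average = sum(group_averages.values()) / len(group_averages)

# Negative deltas (shaded red in the heatmap) flag areas of improvement;
# positive deltas (shaded green) flag strengths.
deltas = {group: round(avg - team_average, 2)
          for group, avg in group_averages.items()}
```

Here the team average is 3.7, so Engineering shows a delta of -0.5 and would stand out in red as the group to investigate further.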
From here, you can continue to apply filters to cut your data even further to help find the specific area of improvement. For example, if you find that a department is scoring lower on average than the other departments in your org, you can continue to cut the data deeper by grouping by Manager to see if a specific team is having trouble.
Please note, you cannot filter on the field that you are currently comparing on. So, for example, if you are grouping by department, you cannot apply a Department = Engineering filter.
Distribution view
Best for understanding the distribution of scored attributes (pre- and post-calibration) and rating questions
The distribution graph is a histogram showing how often each value in your data set appears, allowing you to see a bell curve for your questions or scores. This is a great way to check whether scoring is too lenient or too harsh by looking for too many 1s or 5s in the distribution.
Use filters to cut the data even further. For example, below, we have filtered by the Customer Success department to view its bell curve before and after calibration. Notice that there is no bell curve pre-calibration and that scoring has been lenient. Comparing against the post-calibration scores for the Behavior Rating scored attribute shows that calibration has worked.
Please note that you will be unable to “Group By” any other groups because the groups are automatically set to the response (rating scale).
Actual vs. Normalized count
When looking at your counts, you can choose between the Actual and Normalized count.
- Actual count: the number of times each score was submitted
- Normalized count: the percentage of all submissions that each score represents
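The distinction above can be sketched with a small tally. The submitted scores below are made-up example data for a 1-5 rating question; the variable names are assumptions, not Lattice internals.

```python
# Illustrative sketch: Actual vs. Normalized count for a rating question.
from collections import Counter

# Hypothetical submitted scores on a 1-5 scale.
submissions = [3, 4, 4, 5, 3, 4, 2, 3, 4, 5]

# Actual count: the number of times each score was submitted.
actual_counts = Counter(submissions)

# Normalized count: each score's share of all submissions, as a percentage.
total = len(submissions)
normalized_counts = {score: 100 * count / total
                     for score, count in actual_counts.items()}
```

With ten submissions, a score of 4 submitted four times shows as an actual count of 4 or a normalized count of 40%, and the normalized percentages always sum to 100.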