Get a high level overview of reporting in your review cycle
As soon as reviewers start submitting their responses, Admins will be able to see analytics around them. We offer several views to help you discover insights about your people and organization. To access review cycle analytics, click into the appropriate review cycle and click "Results" in the top navigation bar within the cycle.
Please note, review analytics apply only to rating questions and scored attributes. Multiple choice/select questions are not included in analytics at this time. Open-ended questions include a sentiment analysis feature. You can read more about Sentiment Analysis here!
You can select up to 10 rating questions or scored attributes to view the analytics for.
Filtering Your Data
The filter bar at the top of the Results page allows you to filter your results by any user attribute that you have uploaded to Lattice. This includes our default fields (gender, age, department, etc.), custom fields that you have uploaded into Lattice, and various performance metrics.
You can stack filters for different fields on top of each other to get to the exact cut of data that you want. For example, stacking Gender = Female and Department = Engineering will show responses from all the women in the engineering department.
Filtering your data into different groups
When analyzing your review, you can adjust how your information is grouped by clicking on the “Group by” filter. Here, you will see a list of all employee fields that are currently in Lattice, including any custom fields that you may have created.
Exploring your results
You can examine your results by bar graph, 9-box, heatmap, or distribution.
Bar Graph View
The Bar Graph view allows you to see how a group of responders is doing across all questions. This is helpful if you have more than two questions you are looking to compare.
The colors in the bars show you how each group responded. To see a group's average rating for a question, hover over that particular color in the bar.
Please note, if you are looking at the responses for more than 3 questions, the chart will become a horizontal bar graph rather than a vertical bar graph, as shown below.
When viewing the graphs, you can add more groups by clicking on the “+” sign (Lattice defaults to the first 8 options):
While viewing the bar graph, you can show the "Actual score" or the "Normalized score."
- Actual Score: the Actual Score shows the actual average response to each rating question. For example, if you have one rating question that is out of 3 and another that is out of 5, hovering will show an average score out of 3 or out of 5, respectively.
- Normalized Score: a "Normalized" score allows you to view all questions on the same scale (as a percent of 100). This is helpful if you are comparing responses for rating questions or scored attributes that use different scales. For example, if you have one rating question out of 3 and a scored attribute (or rating question) out of 5, the bar graph would always show a higher score for the 5-point item and under-represent the 3-point question. Normalizing turns everything into a percent, allowing you to compare items with different scales.
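The normalization described above can be sketched as a simple percent-of-maximum calculation. This is a minimal illustration of the idea, assuming "percent of 100" means the score divided by its scale's maximum; the function name is hypothetical, not part of Lattice.

```python
# Illustrative sketch: normalize ratings on different scales to a
# common 0-100 percentage so they can be compared side by side.
# (Assumes normalization = score / scale_max * 100.)

def normalize(score, scale_max):
    """Convert a raw rating to a percent of its scale's maximum."""
    return score / scale_max * 100

# A 2.4 average on a 3-point question and a 4.0 average on a
# 5-point attribute land on the same normalized value (~80),
# so the two become directly comparable:
print(normalize(2.4, 3))
print(normalize(4.0, 5))
```

On the raw scale, 4.0 out of 5 looks "higher" than 2.4 out of 3; normalized, both sit at roughly 80 percent, which is why the normalized view avoids under-representing questions on smaller scales.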
9-Box View
The 9-Box view is best for comparing two different questions. For example, if you wanted to compare every department against two questions, you would navigate to the 9-box view, click the "Group by" dropdown, and select "Department." From here, you'll be able to hover over each dot to see where the department fell in regards to those two questions.
Heatmap View
The Heatmap view is best for comparing different groups of responders against each other across more than one question.
While you're looking at a heatmap, you can still apply filters to cut your data even further. For example, after grouping by department, if you want to see how women in each of your departments feel, you could filter on Gender = Female in the filter bar. The heatmap then shows only responses to the questions from women across each department.
One thing to note is that you cannot filter on the field that you are currently comparing on. For example, if you are comparing across department, you then cannot apply a department = Engineering filter.
To export this heatmap view to a CSV, click the "Export" button.
Distribution View
The Distribution view is best for understanding the distribution of scored attributes (both pre- and post-calibration), rating questions, and sentiment. It is especially useful for comparing scored attribute distributions before and after calibration, letting you see the bell curve take shape.
While you're looking at the distribution, you can still apply filters to cut your data even further. For example, after selecting to compare pre- and post-calibration scores for the scored attribute "Behavior Rating," if you want to see how Customer Success employees were rated, you could filter on Department = Customer Success in the filter bar. The distribution histogram then shows how often each rating was given for this scored attribute. In this example, comparing the pre-calibration scores to the post-calibration scores for the Behavior Rating scored attribute shows that calibration has worked.
As you look at your counts, you can choose how to display them: "Actual Count" shows the actual number of times each score was submitted, while "Normalized Count" shows a percentage based on all scores submitted. One thing to note is that you'll be able to graph up to 4 data points. Also, keep in mind that you cannot change the “Group By” for this graph, because the groups are automatically set to the responses (rating scales).
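The Actual vs. Normalized Count distinction can be sketched with a quick tally. This is an illustrative example only, assuming "Normalized Count" means each score's count expressed as a percentage of all scores submitted; the variable names are hypothetical, not Lattice's.

```python
# Illustrative sketch: tally submitted ratings ("Actual Count") and
# convert each tally to a percentage of all submissions
# ("Normalized Count"). Data below is made up for the example.
from collections import Counter

scores = [3, 4, 4, 5, 4, 3, 5, 4]  # eight example submitted ratings

actual = Counter(scores)  # times each score was submitted
total = len(scores)
normalized = {score: count / total * 100 for score, count in actual.items()}

print(actual)      # tallies per score, e.g. the score 4 appears 4 times
print(normalized)  # the same tallies as percentages of all 8 submissions
```

With eight submissions, a score given four times shows an Actual Count of 4 but a Normalized Count of 50%, which makes groups of different sizes easier to compare.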