The 360-Degree Feedback Process De-Brief

When providing a 360 report to a participant (person being rated), an attachment explaining the report can be helpful.  It provides consistency of interpretation amongst participants, and can be a reference even after an in-person de-briefing process.  Here is one example of such an attachment that can be used as a template (I’ve included screen shots from the ActiveView 360 system):

One generally stated outcome of a 360-degree feedback process is to improve the behavior of a leader within selected competencies in order to move both the leader and the organization towards success.  You have just been through this feedback process, and now need to read the results and understand the messages within the numeric scores.


  • When reviewing your report, look for balanced feedback.  Balance is imperative; it is just as important to identify the things you do well as it is to point out areas for improvement.  In some cases, participants focus on the lower scores, but this might not be in your best interest.  For example, there may be an individual who is rated low on the “teamwork” category.  If that person is not part of any teams, the low ratings may be of little concern, and they may want to concentrate on other competencies instead.
  • Assume that raters take their role very seriously.
  • Sometimes you may think:  “Somebody was really upset with me that day and I know they gave me bad scores.”  Since the scores are anonymous, you can’t know this to be true.  Instead, look at the relativity of the scores.  Look at the four to six highest scoring questions and ask yourself, “Do I believe this is where I excel?”  Then look at the four to six lowest scoring questions, and ask yourself if those scores make sense.  If you agree with the relativity of the scores, this may help in the interpretation of the report.


On page XX you’ll find the introduction to your report.  In addition to notes on how to review the document, you’ll see information on what you are expected to do with the feedback from this process.

Mean Score by Competency by Relationship

On page XX of your report you’ll see a chart like the one above.  It provides an overview of the feedback, organized by competencies (sometimes called topics).  These scores are roll-up averages of the individual question scores within each competency (e.g., if there are six questions in “Communication,” the competency score is the roll-up average of those six question scores).  Understanding this section provides a foundation for understanding the rest of the report.
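As an illustration of this roll-up, here is a minimal sketch in Python (the question scores below are hypothetical, not taken from an actual report):

```python
# Hypothetical question scores within one competency (illustrative values).
communication_scores = [3.2, 3.5, 2.9, 3.4, 3.1, 3.3]  # six questions

# The competency score is simply the average of its question scores.
competency_mean = sum(communication_scores) / len(communication_scores)
print(round(competency_mean, 2))  # 3.23
```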

The rating scale used in your assessment was a 4-point scale: 1=Strongly Disagree, 2=Disagree, 3=Agree, 4=Strongly Agree.  There was also an N/A option for raters to select if they did not feel they had enough information to rate you on a certain question.  N/A responses are excluded from the score calculations.
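To make the N/A handling concrete, here is a hypothetical sketch (the response values are illustrative):

```python
# Hypothetical responses to one question; None represents an N/A selection.
ratings = [4, 3, None, 2, 4]

valid = [r for r in ratings if r is not None]  # drop N/A before averaging
mean = sum(valid) / len(valid)
print(mean)  # 3.25
```

Note that the divisor shrinks along with the list: an N/A neither raises nor lowers the average.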

  • In the illustration there is a 4-point response scale; in this case a higher score is a better score.
  • This report includes an overview of the Self scores, as well as the Supervisor, Peer, and Team Member scores.
  • The Total column is the average of all rater scores (the Self scores are not factored into this average). On a 4-point scale, generally, scores of 3.1 – 3.3 are good scores, scores of 3.4 – 3.7 are very good scores, and scores of 3.8 and above are outstanding scores.  Any score less than 3.1 may be an area for improvement. However, this is just a generalization.
  • Review how many people there are in each category; if there are only one or two people in one of the relationship categories, be sure to keep that in mind when weighing their scores in your discussion.
  • In going through this report, highlight several areas that you would like to understand in more detail.
  • When reviewing the Overview Report, look at the Self scores and how they compare to the other relationship scores (e.g., Supervisors, Peers).  This can provide insight into perception issues between the raters and the leader; the larger the gap, the greater the perception issue.

Unfavorable – Favorable Report by Topics
On page XX of your report you’ll see an Unfavorable / Favorable report.  On the 4-point response scale, the percentage of raters who selected the bottom two scale options for the questions in the competency (in this case “Strongly Disagree” and “Disagree”) are in red, or Unfavorable. The percentage of raters who selected the top two scale options for the questions in the competency (in this case “Agree” and “Strongly Agree”) are in green, or Favorable. Essentially red is bad, green is good.  No self-scores are included in this graph.
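As a hypothetical sketch of how these percentages are derived (the response values below are illustrative, not from an actual report):

```python
# Hypothetical responses across the questions in one competency (1-4 scale).
responses = [1, 2, 3, 3, 4, 4, 4, 2, 3, 4]

n = len(responses)
unfavorable = sum(1 for r in responses if r <= 2) / n  # bottom two options
favorable = sum(1 for r in responses if r >= 3) / n    # top two options
print(f"{unfavorable:.0%} unfavorable, {favorable:.0%} favorable")
# 30% unfavorable, 70% favorable
```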

Unfavorable Favorable Graph by Topics

This graph is important for several reasons:

  • Mean scores, as seen in the Overview Report, are a good first indicator of performance.  If someone gets a 3.8 (on a 4-point scale; 4 = high) it indicates very high scores, and if they receive a 1.9 it indicates very low scores.  However, if a participant has a mean score of 2.5, without more information it is impossible to tell whether most raters gave the leader a score of 2 or 3, OR whether about half of the raters gave the leader a 1 and about half gave a 4.  In both scenarios the mean score would be approximately 2.5, but the message would be very different.
  • In prioritizing issues, look for percent Favorables (greens) that are 85%-90% and above and then look for percent Unfavorables (reds) that are 10% and above.
  • This graph is effective for visual learners.  Instead of charts with a potentially overwhelming number of scores, this graph assembles the data into an easily understandable format.
  • As you think forward to the action-planning phase, there is a strategy in determining what to do about the scores.  Do you want to create an action plan that helps build on clearly defined strengths, or do you want to work on improving your weaknesses?  There is no right answer, and scores must be assessed as they relate to the individual leader.  The action plan you create may combine some of both of these strategies.
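The first point above (that a mid-range mean can hide two very different rating patterns) can be demonstrated with two hypothetical rating sets:

```python
# Two hypothetical sets of eight ratings with the same mean.
clustered = [2, 3, 2, 3, 2, 3, 2, 3]  # most raters chose 2 or 3
polarized = [1, 4, 1, 4, 1, 4, 1, 4]  # raters split between 1 and 4

print(sum(clustered) / len(clustered))  # 2.5
print(sum(polarized) / len(polarized))  # 2.5
```

Both sets average to 2.5, even though the underlying patterns, and the appropriate responses to them, are very different.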

Mean Scores by Competency by Question
This section of the report (on pages XX – XX) breaks down each competency/topic by its questions.

Review this part of the report with an emphasis on specific areas within each competency.  Are you over- or under-rating yourself?  This report is particularly useful in identifying the specific questions that may have raised or lowered your scores within a given competency.

There is a chart and associated graph of the questions for a single competency on each page.

Ranking Report
This report (page XX) shows question text with the associated competency and the mean scores given by all your raters. The scores in the chart are arranged from high scores to low scores.

There is typically a natural break (not a statistical break), both at the top of the chart and at the bottom of the chart.  Look for where this natural break occurs for you.

Gap Report
The Gap Report (page XX) measures the difference between the Self scores and the combined rater scores.  This Gap Report is sorted by the size of the gap (the Self score subtracted from the combined raters’ Others Mean).

In looking at the gap column, a positive gap reveals that you under-valued yourself, while a negative gap reveals where you over-rated yourself.  It is important to look for gaps of more than .8 (either positive or negative), and discuss why there might be a perception difference.
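As a sketch of this calculation (the competency names and scores below are hypothetical, included only to illustrate the arithmetic):

```python
# Hypothetical Self scores and Others Means by competency (illustrative values).
self_scores = {"Communication": 3.6, "Teamwork": 2.8, "Planning": 3.0}
others_means = {"Communication": 2.7, "Teamwork": 3.5, "Planning": 3.1}

# Gap = Others Mean minus Self score: positive means you under-valued
# yourself; negative means you over-rated yourself.
gaps = {c: round(others_means[c] - self_scores[c], 2) for c in self_scores}
large = {c: g for c, g in gaps.items() if abs(g) > 0.8}  # flag gaps over .8
print(gaps)   # {'Communication': -0.9, 'Teamwork': 0.7, 'Planning': 0.1}
print(large)  # {'Communication': -0.9}
```

In this hypothetical, only the “Communication” gap exceeds the .8 threshold (the leader rated themselves well above their raters), so that is the perception difference worth discussing.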

This last section of the report starts on page XX.  This qualitative part of the report may help you understand some of the quantitative scores.  Please do not try to figure out who said what; remember, you are looking for themes that you can learn from.