10 Tips for Successful 360s – Part 1

by Drew Bird, MSc, MA, from ClearPoint Leadership

Multi-rater or 360-degree feedback (360) tools are a great way to gather feedback from others. Not only do tools like the EQ 360 let people give feedback anonymously (apart from the Manager group), they also create a framework for identifying development opportunities and building strategies to address them.

Those new to 360 feedback often get bogged down in the detail of the instrument. Understanding the tool is important, but there are many other considerations if a 360 process is to provide maximum benefit to both the client and the subject.

Before we start, I will clarify some terminology. I will refer to the person who contracts with you to administer and debrief a 360 as the Client. The Subject is the person who is the focus of a 360. A Rater is someone who provides feedback through a 360 tool.

In this two-part blog, I will run down ten of the most important success factors that I have found when using 360s in client organizations.

  1. Be clear on why. Many people understand the value of a 360 and that it enables the collection of feedback that might otherwise be unavailable. But simply getting feedback is not (in my opinion) a good enough reason to use a 360. Using a 360 to identify development opportunities is great! Using a 360 so that a manager can determine remedial actions for a poor performer, not so much. In cases like that, a 360 is often simply a delay or diversion tactic when everyone knows what really needs to happen.
  2. Use the right tool. With so many tools available, there is a 360 for practically every dimension of performance. The challenge, then, is picking the right one for the Client's needs. Some tools, like the EQ 360, are great for helping people develop their core self-awareness, relationship skills, and so on. It is not as great a tool for assessing someone's project management or financial analysis skills (though I have heard arguments that attempt to justify any tool for any use; as the old phrase goes, 'if all you have is a hammer, all you see are nails'). Either equip yourself with a range of 360 tools so that you can provide Clients with the assessment that best suits their needs, or be honest with Clients when the tools you have will not work for them.
  3. Get free and informed consent (in writing). Subjects should submit to a 360 process willingly (how likely are they to use the results productively if they have been coerced into taking it?). They should also know why a 360 is being administered and who gets to see the results. If you don't know, and you should (see point 1), find out. It is highly likely that the Subject will ask you. I have a baseline for all 360s that I run – the results of a 360 belong to the Subject (not the Client – an important distinction). When I complete the debrief, I formally 'hand over' the results to the Subject. After that, the Subject can do with them what they will. Any exceptions to this are agreed upon at the outset with both the Client and the Subject.
  4. Help participants understand what the specific 360 is about. Most people have heard of a 360, and they may have done one before, but they may not be sure about the specifics of THIS one – what scales are being assessed, what the basis (scientific or otherwise) of the process is, and what the report looks like. Providing a sample report and spending 15 minutes on the phone with the Subject at the very beginning of the process helps them understand what THIS 360 report is about, which can be a big help in picking the right Raters. It is also the first step in preparing them for what they will see come debrief time.
  5. Coach Subjects on selecting Raters. The most common thing I hear from Subjects is that they want to hear 'the truth' or get 'real feedback'. This appears to be another way of saying 'ask the people who will rate me harshly'. There are two problems with this. First, when the 'constructive' feedback comes in, people dismiss it by saying things like 'well, they don't like me anyway'. Second, the truth about a leader (if such a thing is ever possible to hear) generally includes the whole gamut of feedback – good and not so good. With this in mind, encourage Subjects to pick a representative sample rather than going after one type of feedback or another. In terms of numbers, I believe the more the merrier, within reason. If someone has seven peers, they should invite them all. Twelve direct reports? Sure, that's a little high, but how will the Subject justify inviting some people and not others, and what will their rationale be for the selection? It is easier to invite them all and let them self-select as to whether or not they will participate. I only apply this guideline to the Peer and Direct Report Rater groups. Other non-manager Rater groups that many tools support I leave to the Subject to decide on.

That’s it for this blog. I will cover points six through ten in the next posting on Monday, June 3rd.
