Validity of 360 Surveys? … Not a Simple Question
Potential users shopping around for a good 360 process often ask, “What is the validity of your 360 process?” These potential users know that validity is a good thing in measurement theory but are often unclear about what they are actually asking. First, the question needs to be modified to ask, “Valid for what purpose?” A measurement is not intrinsically valid; validity needs a context. Second, the question needs to be clarified to specify what kind of validity.
There are three approaches to establishing validity:
- Predictive Validity – Predictive validity is established by showing a statistical relationship between one measure (the predictor) and another measure (the criterion). With selection assessment instruments, the most common criterion is job performance. The validity question in this context is, “Does performance on this selection test have a statistically significant relationship with subsequent job performance?” However, this framing does not make sense for a 360 measure, since a 360 is itself a measure of job performance. The intent of a 360 process is typically to drive competency improvement, so the appropriate question for a 360 instrument, given its intended purpose, is, “Does your 360 feedback process have a statistically significant relationship with subsequent competency improvements?”
- Content Validity – Content validity is established by showing that the content of a given instrument is a representative sample of a larger target content domain. In the case of a 360 process, the larger domain is the set of job behaviors that drive job success. Content validity is most commonly established through a process called job analysis. A typical job analysis would include incumbent and manager interviews focused on identifying the specific behaviors associated with effective performance on the job; these behaviors are then grouped into competencies. So, with 360 instruments, the appropriate content validity question is, “Does your 360 content reflect effective job behaviors…how were the competencies and behaviors in your 360 process developed?”
- Construct Validity – Construct validity is established by showing a relationship between one measure and other measures that purport to measure the same construct. In the case of 360 measurements, the underlying construct is competency-based job performance. Construct validity would be based on the correlation between the 360 data and other measures of job performance. The appropriate construct validity question is, “Does your 360 data actually measure job performance…how well do your 360 measurements compare to other measures of job performance?”
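To make the construct validity idea concrete, here is a minimal sketch of the kind of check it implies: correlating 360 competency scores with an independent measure of job performance for the same people. The scores below are invented for illustration only, and the Pearson correlation is just one common way to quantify the relationship.

```python
# Illustrative construct-validity check: correlate 360 competency scores
# with an independent measure of job performance (e.g., manager ratings).
# All numbers are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical 360 scores and manager performance ratings, same individuals
scores_360 = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5]
perf_ratings = [3.0, 4.3, 2.9, 4.4, 3.6, 2.7]

r = pearson_r(scores_360, perf_ratings)
print(f"r = {r:.2f}")  # a strong positive r supports construct validity
```

In practice one would also test the correlation for statistical significance and use a much larger sample; this sketch only shows the basic logic of comparing the two measures.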
Here are my answers to the three implicit questions behind, “Is your 360 valid?”
If you are talking about predictive validity, research has shown the biggest single predictor of subsequent competency development after a 360 is the level of self-other agreement. Our 360 process has demonstrated significantly higher self-other agreement than traditional 360 approaches.
If you are talking about content validity, our competency models are based on hundreds of actual job analyses and show substantial content overlap with other commercially popular models developed in a similar manner.
If you are talking about construct validity, OMNIview provides employers with the capability to compare competency performance with another measure of job performance. We provide a nine block report capability with our 360 process that categorizes individuals into nine cells based on their competency performance on one axis and manager ratings of results performance on the job on the other axis. It has been our experience that the individuals who achieve the highest results on the job are also the individuals with the highest competency performance.
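The nine-cell categorization described above can be sketched in a few lines. The band cut-offs, the 1–5 scale, and the sample data here are assumptions for illustration, not the actual OMNIview report logic.

```python
# Hypothetical sketch of a nine-block categorization: each person lands in
# one of nine cells based on a competency score (one axis) and a manager
# rating of results (the other axis). Cut-offs and data are illustrative.

def band(score, low=2.5, high=3.5):
    """Map a 1-5 score to a band: 0 = low, 1 = medium, 2 = high (cut-offs assumed)."""
    if score < low:
        return 0
    if score < high:
        return 1
    return 2

def nine_block(competency, results):
    """Return (results_band, competency_band), identifying one of nine cells."""
    return band(results), band(competency)

# Hypothetical individuals: (competency score, manager results rating)
people = {"A": (4.2, 4.0), "B": (2.1, 2.4), "C": (3.0, 4.1)}

for name, (comp, res) in people.items():
    print(name, nine_block(comp, res))
```

The pattern the text describes, that top results performers tend to be top competency performers, would show up as most individuals falling on or near the diagonal cells of this grid.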
There you have it…three different answers to what appears to be a simple question.