Brown and Hauenstein (2005) developed awg(1) to overcome a limitation of other agreement indices: they are confounded with the location of the group's average rating on the scale. The closer the average rating is to a scale endpoint, the smaller the possible variance in the ratings and, consequently, the higher the apparent agreement. This confounds all of the IRA statistics discussed above with the group mean, making them incomparable across groups with different means. Brown and Hauenstein therefore proposed awg(1), which uses as its null distribution the maximum possible variance (i.e., maximum disagreement) obtainable given the group's mean (a computational sketch follows below).

LeBreton and Senter (2008) suggested that interpretation standards for IRA might follow the general logic presented by Nunnally (1978; see also Nunnally and Bernstein, 1994). In particular, cutoff criteria should be stricter when decisions have significant consequences for those affected (e.g., performance ratings used for administrative decisions). Likewise, LeBreton and Senter (2008) added that cutoff criteria should take into account the type of theory underlying aggregation in multilevel research and the quality of the measures (e.g., newly developed measures might be expected to show lower agreement than established measures). For application to the rwg family, the following standards were recommended: 0–0.30 (lack of agreement), 0.31–0.50 (weak agreement), 0.51–0.70 (moderate agreement), 0.71–0.90 (strong agreement), and 0.91–1.0 (very strong agreement).
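The snippet below is a minimal sketch of these two ideas: an awg-style index that scales the observed variance against the maximum possible variance given the group mean, following the description in this article (the published Brown and Hauenstein, 2005, formulation may differ in scaling details), plus a helper applying LeBreton and Senter's (2008) verbal standards. The function and variable names are illustrative, not taken from either paper.

```python
import statistics

def max_variance_given_mean(mean, low, high, n):
    """Maximum possible rating variance for a group with a fixed mean,
    reached when raters split between the two scale endpoints.
    Population form is (mean - low) * (high - mean); N/(N-1) corrects
    it to a sample variance."""
    return (mean - low) * (high - mean) * n / (n - 1)

def a_wg(ratings, low, high):
    """awg-style agreement as described in this article: 1 minus the ratio
    of observed variance to the maximum variance given the group mean.
    1.0 = complete agreement, 0 = maximum disagreement given the mean.
    Sketch only; the published awg(1) may scale this ratio differently."""
    mean = statistics.mean(ratings)
    s2 = statistics.variance(ratings)  # sample variance (n - 1 denominator)
    s2_max = max_variance_given_mean(mean, low, high, len(ratings))
    if s2_max == 0:
        return 1.0  # all raters at a scale endpoint: complete agreement
    return 1 - s2 / s2_max

def interpret_agreement(value):
    """Verbal standards recommended by LeBreton and Senter (2008)
    for the rwg family, applied here for illustration."""
    if value <= 0.30:
        return "lack of agreement"
    if value <= 0.50:
        return "weak agreement"
    if value <= 0.70:
        return "moderate agreement"
    if value <= 0.90:
        return "strong agreement"
    return "very strong agreement"

# Example: six raters on a 5-point scale, clustered around the midpoint.
ratings = [3, 3, 4, 3, 2, 3]
index = a_wg(ratings, low=1, high=5)
print(f"awg = {index:.2f} ({interpret_agreement(index)})")
```

For these ratings the observed sample variance is 0.4 against a maximum of 4.8 given the mean of 3, yielding an index of about 0.92, which the LeBreton and Senter bands label very strong agreement.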
While these standards carry different implications and meanings for the different forms of rwg and the different null distributions discussed above, LeBreton and Senter (2008) proposed them for all forms of rwg. There is therefore a strong disincentive to report versions of rwg that create the appearance of lower IRA (e.g., by using a normal null distribution; LeBreton and Senter, 2008, p. 836). Nevertheless, they urged researchers to use theory to select the most appropriate form of rwg (particularly when identifying an appropriate null distribution), in the hope that professional judgment would take priority. Future research will show whether or not researchers adopt the practices recommended by LeBreton and Senter (2008).

Klein, K. J., Conn, A. B., Smith, D. B., and Sorra, J. S. (2001). Is everyone in agreement? An exploration of within-group agreement in employee perceptions of the work environment. J. Appl. Psychol. 86, 3–16. doi: 10.1037/0021-9010.86.1.3 Provides a simple and direct index of agreement.

LeBreton, J. M., and Senter, J. L. (2008). Answers to 20 questions about interrater reliability and interrater agreement. Organ. Res. Methods 11, 815–852. doi: 10.1177/1094428106296642 Using σ²mv (the maximum possible variance given the group mean) as the null, a value of 1.0 indicates complete agreement, 0.5 indicates agreement equivalent to that expected under the uniform null distribution, and 0 indicates maximum theoretical disagreement.
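As a quick check on these anchor values, consider a worked example (illustrative, not drawn from the article itself; it assumes a 5-point scale with the group mean at the midpoint and ignores the N/(N − 1) small-sample correction, where L and H are the scale minimum and maximum, X̄ is the group mean, and A is the number of response options):

```latex
\sigma^2_{mv} = (\bar{X} - L)(H - \bar{X}) = (3 - 1)(5 - 3) = 4,
\qquad
\sigma^2_{uniform} = \frac{A^2 - 1}{12} = \frac{5^2 - 1}{12} = 2
```

A group whose observed variance equals the uniform null variance therefore obtains 1 − 2/4 = 0.5, matching the middle anchor above.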