Measuring and Promoting Inter-Rater Agreement of Teacher and Principal Performance Ratings [electronic resource] / Matthew Graham, Anthony Milanowski and Jackson Miller.

As states, districts, and schools transition toward more rigorous educator evaluation systems, they are placing additional weight on judgments about educator practice. Since teacher and principal observation ratings inherently rely on evaluators' professional judgment, there is always a question of how much the ratings depend on the particular evaluator rather than the educator's actual performance. To help states, districts, and schools measure and maximize consistency of evaluator observation ratings, this paper: (1) draws a distinction between inter-rater reliability and inter-rater agreement, (2) reviews methods for calculating inter-rater reliability and agreement, (3) reviews thresholds for inter-rater agreement scores, and (4) identifies practices that can improve inter-rater reliability and inter-rater agreement. Two appendixes present: (1) More on Intra-Class Correlations; and (2) Frame-of-Reference Training Outline. (Contains 7 footnotes and 7 tables.) [This report was produced by the Center for Educator Compensation Reform (CECR).]
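
The report's key distinction, between inter-rater agreement (do two evaluators assign the identical rating?) and inter-rater reliability (do their ratings rise and fall together?), can be illustrated with a short sketch. The code below is not drawn from the report; the toy ratings and function names are illustrative assumptions.

    # Illustrative sketch (not from the report): contrasting inter-rater
    # agreement with inter-rater reliability on hypothetical 4-point
    # observation ratings for five educators.
    from statistics import mean, pstdev

    def percent_exact_agreement(a, b):
        # Agreement: fraction of educators given the identical rating by both raters.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def pearson_r(a, b):
        # Reliability (consistency): correlation of the two raters' scores;
        # raters can disagree on level yet still correlate perfectly.
        ma, mb = mean(a), mean(b)
        cov = mean((x - ma) * (y - mb) for x, y in zip(a, b))
        return cov / (pstdev(a) * pstdev(b))

    rater1 = [1, 2, 3, 1, 2]
    rater2 = [2, 3, 4, 2, 3]  # rater 2 is exactly one level more lenient

    print(percent_exact_agreement(rater1, rater2))  # 0.0 -> no exact agreement
    print(pearson_r(rater1, rater2))                # 1.0 -> perfect consistency

In this toy data the raters never agree exactly (agreement = 0.0) even though their scores are perfectly correlated (r = 1.0), which is why the paper treats agreement statistics and reliability statistics as answers to different questions.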

Bibliographic Details
Online Access: Full Text (via ERIC)
Main Authors: Graham, Matthew; Milanowski, Anthony T.; Miller, Jackson
Format: Electronic eBook
Language: English
Published: [S.l.] : Distributed by ERIC Clearinghouse, 2012.
Subjects:

MARC

LEADER 00000nam a22000002u 4500
001 b7341160
003 CoU
005 20120703162551.5
006 m d f
007 cr |||||||||||
008 120201s2012 xx |||| ot ||| | eng d
035 |a (ERIC)ed532068 
035 |a (MvI) 3V000000514262 
040 |a ericd  |c MvI  |d MvI 
099 |a ED532068 
100 1 |a Graham, Matthew. 
245 1 0 |a Measuring and Promoting Inter-Rater Agreement of Teacher and Principal Performance Ratings  |h [electronic resource] /  |c Matthew Graham, Anthony Milanowski and Jackson Miller. 
260 |a [S.l.] :  |b Distributed by ERIC Clearinghouse,  |c 2012. 
300 |a 33 p. 
500 |a Sponsoring Agency: Department of Education (ED).  |5 ericd. 
500 |a Abstractor: As Provided.  |5 ericd. 
500 |a Educational level discussed: Elementary Secondary Education. 
516 |a Text (Reports, Descriptive) 
520 |a As states, districts, and schools transition toward more rigorous educator evaluation systems, they are placing additional weight on judgments about educator practice. Since teacher and principal observation ratings inherently rely on evaluators' professional judgment, there is always a question of how much the ratings depend on the particular evaluator rather than the educator's actual performance. To help states, districts, and schools measure and maximize consistency of evaluator observation ratings, this paper: (1) draws a distinction between inter-rater reliability and inter-rater agreement, (2) reviews methods for calculating inter-rater reliability and agreement, (3) reviews thresholds for inter-rater agreement scores, and (4) identifies practices that can improve inter-rater reliability and inter-rater agreement. Two appendixes present: (1) More on Intra-Class Correlations; and (2) Frame-of-Reference Training Outline. (Contains 7 footnotes and 7 tables.) [This report was produced by the Center for Educator Compensation Reform (CECR).] 
524 |a Online Submission.  |2 ericd. 
650 0 7 |a Interrater Reliability.  |2 ericd. 
650 0 7 |a Evaluators.  |2 ericd. 
650 0 7 |a Observation.  |2 ericd. 
650 0 7 |a Principals.  |2 ericd. 
650 0 7 |a Administrator Evaluation.  |2 ericd. 
650 0 7 |a Teacher Evaluation.  |2 ericd. 
650 0 7 |a Measurement.  |2 ericd. 
650 0 7 |a Training.  |2 ericd. 
650 0 7 |a Selection.  |2 ericd. 
650 0 7 |a Accountability.  |2 ericd. 
650 0 7 |a Scoring Rubrics.  |2 ericd. 
650 0 7 |a Pilot Projects.  |2 ericd. 
650 0 7 |a Video Technology.  |2 ericd. 
650 0 7 |a Correlation.  |2 ericd. 
700 1 |a Milanowski, Anthony T.,  |e author. 
700 1 |a Miller, Jackson,  |e author. 
856 4 0 |u http://files.eric.ed.gov/fulltext/ED532068.pdf  |z Full Text (via ERIC) 
907 |a .b73411607  |b 07-06-22  |c 04-02-13 
998 |a web  |b 04-02-13  |c f  |d m   |e -  |f eng  |g xx   |h 0  |i 1 
956 |a ERIC 
999 f f |i 2adcf678-6dd9-52b9-a66a-56c133461dda  |s 63295cce-458e-5d26-8745-f2afd4f51aab 
952 f f |p Can circulate  |a University of Colorado Boulder  |b Online  |c Online  |d Online  |e ED532068  |h Other scheme  |i web  |n 1