Kappa statistic considerations in evaluating inter-rater reliability between two raters: Which, when and context matters
In research designs that rely on observational ratings provided by two raters, assessing inter-rater reliability (IRR) is a frequently required task. However, some studies fall short: they misapply statistical procedures, omit information essential for interpreting their findings, or inadequately address the impact of IRR on the statistical power of subsequent hypothesis tests.
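For two raters assigning nominal categories, the kappa statistic the title refers to is typically Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance from each rater's marginal label frequencies. As a minimal sketch (the function name and example ratings are illustrative, not from the article):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items with nominal categories."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    # Kappa rescales observed agreement beyond chance onto [.., 1].
    return (p_o - p_e) / (1 - p_e)

# Example: 6 items, binary ratings (hypothetical data).
kappa = cohens_kappa([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 1])
```

Here the raters agree on 5 of 6 items (p_o ≈ 0.83) while chance agreement from the marginals is 0.5, giving kappa ≈ 0.67 — "substantial" on common benchmark scales, though, as the abstract notes, interpretation depends on context.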