{"id":712,"date":"2020-05-18T07:28:00","date_gmt":"2020-05-18T11:28:00","guid":{"rendered":"http:\/\/pressbooks.library.upei.ca\/montelpare\/?post_type=chapter&#038;p=712"},"modified":"2020-08-24T14:19:14","modified_gmt":"2020-08-24T18:19:14","slug":"measures-of-association-part-2-the-kappa-statistic","status":"publish","type":"chapter","link":"https:\/\/pressbooks.library.upei.ca\/montelpare\/chapter\/measures-of-association-part-2-the-kappa-statistic\/","title":{"raw":"Measures of Association -- Part II: The Kappa Statistic","rendered":"Measures of Association &#8212; Part II: The Kappa Statistic"},"content":{"raw":"<h1>Part II: The Kappa Statistic to Measure Agreement<\/h1>\r\nGiven that the results of the McNemar Chi-Square statistic, calculated in the previous chapter, were not significant, then the question becomes, \"if the outcome variables representing the results of a participant's performance on each test are not statistically significant in their difference, does that necessarily mean that the outcome scores are in agreement?\"\r\n\r\nSince the Kappa statistic is a measure of agreement we can test this notion using the Kappa statistic applied to the fourfold or 2 x 2 table. Converse to the McNemar Chi-square which processes the data in the off-diagonal elements (cell \"b\" and cell \"c\"), the Kappa computations focus on the data in the major diagonal from upper left to lower right (cell \"a\" and cell \"d\"), examining whether counts along this diagonal differ significantly from what is expected to occur by chance. 
If no agreement exists beyond chance, then the observed counts on the major diagonal should be similar to the counts expected by chance from the proportions of individuals scoring high or low on the lab and field tests.

Similar to the computation of the McNemar Chi-square, the Kappa statistic uses the row and column probabilities of the 2 x 2 table. The exact computations for Kappa are shown as follows:
<ol>
 	<li>COMPUTE ROW AND COLUMN PROPORTIONS</li>
</ol>
Row 1 Proportion: p1. = (a+b) ÷ N; p1. = (23+12) ÷ 86 = 0.41

Row 2 Proportion: p2. = (c+d) ÷ N; p2. = (19+32) ÷ 86 = 0.59

Column 1 Proportion: p.1 = (a+c) ÷ N; p.1 = (23+19) ÷ 86 = 0.49

Column 2 Proportion: p.2 = (b+d) ÷ N; p.2 = (12+32) ÷ 86 = 0.51
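These marginal proportions can be verified directly from the four cell counts. The following Python sketch is illustrative only (the chapter's own analysis uses SAS); the variable names are mine, not from the text:

```python
# Cell counts from the 2 x 2 lab-by-field table: a, b (row 1); c, d (row 2)
a, b, c, d = 23, 12, 19, 32
N = a + b + c + d  # 86 participants in total

p1_row = (a + b) / N  # row 1 proportion, p1.
p2_row = (c + d) / N  # row 2 proportion, p2.
p1_col = (a + c) / N  # column 1 proportion, p.1
p2_col = (b + d) / N  # column 2 proportion, p.2

print(round(p1_row, 2), round(p2_row, 2), round(p1_col, 2), round(p2_col, 2))
# 0.41 0.59 0.49 0.51
```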
<ol start="2">
 	<li>COMPUTE THE [latex]{\pi}[/latex] TERMS</li>
</ol>
<strong>OBSERVED:</strong> [latex]{\pi}_{obs}[/latex]

[latex]{\pi}_{obs}[/latex]: <em>the observed proportion in the main diagonal elements</em>

[latex]{\pi}_{obs}[/latex] = ((cell a) ÷ N) + ((cell d) ÷ N);

[latex]{\pi}_{obs}[/latex] = ((23 ÷ 86) + (32 ÷ 86));

[latex]{\pi}_{obs}[/latex] = (0.27 + 0.37);

[latex]{\pi}_{obs}[/latex] = 0.64

<strong>EXPECTED:</strong> [latex]{\pi}_{exp}[/latex]

[latex]{\pi}_{exp}[/latex]: <em>the expected proportion in the main diagonal elements</em>

[latex]{\pi}_{exp}[/latex] = ((p1. * p.1) + (p2. * p.2));

[latex]{\pi}_{exp}[/latex] = ((0.41 * 0.49) + (0.59 * 0.51));

[latex]{\pi}_{exp}[/latex] = (0.20 + 0.30);

[latex]{\pi}_{exp}[/latex] = 0.50
<ol start="3">
 	<li>COMPUTE KAPPA [latex]({\kappa})[/latex]</li>
</ol>
Kappa = [latex]({\pi}_{obs} - {\pi}_{exp}) \div (1 - {\pi}_{exp})[/latex]

Kappa = ((0.64 − 0.50) ÷ (1 − 0.50))

Kappa = (0.14 ÷ 0.50)

Kappa = 0.28

The computed Kappa value is <strong>κ</strong> = 0.28. Our next task is to determine whether this reflects true agreement or agreement that could occur by chance. Therefore, in order to evaluate this Kappa statistic we need to determine whether the computed value is significantly different from 0.
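The π terms and Kappa can be reproduced from the cell counts without intermediate rounding; this illustrative Python sketch (not the chapter's SAS code) recovers the more precise value 0.2759 reported in the SAS output later in the chapter:

```python
a, b, c, d = 23, 12, 19, 32  # 2 x 2 cell counts
N = a + b + c + d

pi_obs = (a + d) / N               # observed agreement on the main diagonal
p1, p2 = (a + b) / N, (c + d) / N  # row proportions p1., p2.
q1, q2 = (a + c) / N, (b + d) / N  # column proportions p.1, p.2
pi_exp = p1 * q1 + p2 * q2         # agreement expected by chance

kappa = (pi_obs - pi_exp) / (1 - pi_exp)
print(round(pi_obs, 2), round(pi_exp, 2), round(kappa, 4))
# 0.64 0.5 0.2759
```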
We can do this by first computing the standard error of the Kappa statistic and then using this value to determine the z statistic for Kappa, comparing that value to the normal curve. Recall that 95% of scores on the normal curve fall between −1.96 and +1.96. Therefore, if our Z<strong>κ</strong> score is between −1.96 and +1.96, we fail to reject the null hypothesis that <strong>κ</strong> = 0.

To compute the standard error for our computed <strong>KAPPA SCORE</strong> we use the following procedure under the null hypothesis Ho: <strong>κ</strong> = 0.
<ol start="4">
 	<li>COMPUTE THE SUM OF PROPORTIONS</li>
</ol>
p1. = 0.41; p.1 = 0.49; p2. = 0.59; p.2 = 0.51

sumP = (p1. * p.1 * (p1. + p.1)) + (p2. * p.2 * (p2. + p.2));

sumP = (0.41 * 0.49 * (0.41 + 0.49)) + (0.59 * 0.51 * (0.59 + 0.51));

sumP = (0.20 * (0.90)) + (0.30 * (1.10));

sumP = (0.18) + (0.33);

sumP = (0.51);
<ol start="5">
 	<li>COMPUTE THE STANDARD ERROR</li>
</ol>
std error = [latex]\frac{1}{(1 - {\pi}_{exp})\sqrt{N}} \times \sqrt{{\pi}_{exp} + {\pi}_{exp}^{2} - sumP}[/latex]

std error = 1 ÷ ((1 − 0.5) × √86) × √(0.50 + 0.25 − 0.51)

std error = 1 ÷ ((0.5) × 9.27) × 0.49

std error = 0.22 × 0.49

std error = 0.106

Use the following formula to compute Zκ, the z score for Kappa, under the null hypothesis Ho: κ = 0:

zKappa = (kappa ÷ std error)

zKappa = (0.28 ÷ 0.106)

zKappa = 2.65

Since 2.65 is greater than 1.96, zKappa falls within the region of rejection for the null hypothesis Ho: κ = 0, and therefore we can say that there is agreement between the lab and field tests.
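The standard error under Ho: κ = 0 and the resulting z score can be checked numerically; this sketch simply re-evaluates the step 5 formula in Python (illustrative, not the chapter's SAS program):

```python
from math import sqrt

N = 86
pi_exp = 0.50   # expected chance agreement (step 2)
kappa = 0.28    # computed Kappa (step 3)
sum_p = 0.51    # sum of proportions (step 4)

# Standard error of Kappa under the null hypothesis Ho: kappa = 0
se0 = sqrt(pi_exp + pi_exp**2 - sum_p) / ((1 - pi_exp) * sqrt(N))
z_kappa = kappa / se0

print(round(se0, 3), round(z_kappa, 2))
# 0.106 2.65
```

Note that dividing by the unrounded standard error (0.1057) gives z = 2.65; dividing by the two-decimal display value 0.106 gives 2.64, which is why hand arithmetic can differ slightly in the last digit.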
Finally, we can also determine whether our Kappa estimate differs significantly from 0 by using the standard error of the estimate to compute the 95% confidence interval for the Kappa statistic, as follows:
<ol start="6">
 	<li>COMPUTE THE STANDARD ERROR AND 95% CONFIDENCE INTERVAL</li>
</ol>
Use the following measurement terms taken from the McNemar Chi-square table:
<div align="center">
<table>
<tbody>
<tr>
<td>p1. = 0.41</td>
<td>p11 = 23/86 = 0.27</td>
</tr>
<tr>
<td>p.1 = 0.49</td>
<td>p12 = 12/86 = 0.14</td>
</tr>
<tr>
<td>p2. = 0.59</td>
<td>p21 = 19/86 = 0.22</td>
</tr>
<tr>
<td>p.2 = 0.51</td>
<td>p22 = 32/86 = 0.37</td>
</tr>
</tbody>
</table>
</div>
Aterm = (p11*(1-(p1. + p.1)*(1-kappa))**2 + p22*(1-(p2. + p.2)*(1-kappa))**2);

Aterm = (0.27*(1-(0.41 + 0.49)*(1-0.28))**2 + 0.37*(1-(0.59 + 0.51)*(1-0.28))**2);

Aterm = (0.27*(1-(0.90)*(0.72))**2 + 0.37*(1-(1.10)*(0.72))**2);

Aterm = (0.27*(1-(0.648))**2 + 0.37*(1-(0.792))**2);

Aterm = (0.27*(0.352)**2 + 0.37*(0.208)**2);

Aterm = (0.27*(0.124) + 0.37*(0.043));

Aterm = (0.033 + 0.016);

Aterm = (0.049);

Bterm = ((p12*(p.1 + p2.)**2 + p21*(p.2 + p1.)**2)*(1-kappa)**2);

Bterm = ((0.14*(0.49 + 0.59)**2 + 0.22*(0.51 + 0.41)**2)*(1-0.28)**2);

Bterm = ((0.14*(1.08)**2 + 0.22*(0.92)**2)*(0.72)**2);

Bterm = ((0.14*(1.17) + 0.22*(0.85))*(0.52));

Bterm = ((0.16 + 0.19)*(0.52));

Bterm = (0.18);

Cterm = ((kappa − πexp*(1-kappa))**2);

Cterm = ((0.28 − 0.5*(0.72))**2);

Cterm = ((0.28 − 0.36)**2);

Cterm = (0.0064);

A + B − C = (Aterm + Bterm − Cterm);

A + B − C = (0.049 + 0.18 − 0.0064);

A + B − C = 0.22

Compute the standard error used in the computation of the confidence interval:

stderr = [latex]\frac{\sqrt{Aterm + Bterm - Cterm}}{(1 - {\pi}_{exp})\sqrt{N}}[/latex] = √0.22 ÷ ((0.5) × 9.27) = 0.47 ÷ 4.64 = 0.10

ci95LL = (kappa − 1.96*(stderr));

ci95LL = (0.28 − 1.96 * 0.10);

ci95LL = (0.28 − 0.20);

ci95LL = (0.08)

ci95UL = (kappa + 1.96*(stderr));

ci95UL = (0.28 + 1.96 * 0.10);

ci95UL = (0.28 + 0.20);

ci95UL = (0.48);

If the upper and lower limits of the 95% confidence interval do not include 0, then we can say that the Kappa value is significantly different from 0. Here the interval runs from 0.08 to 0.48 and does not include 0.

The SAS program to produce KAPPA in the 2 x 2 matrix was handled by the McNemar Chi-Square, where a = 23, b = 12, c = 19, d = 32. Since the data were entered as cell summary data and not strings of raw data, the WEIGHT &lt;dependent variable&gt; statement is used to read each cell value.
The essential option is /AGREE, which produces the Kappa measure of agreement.

PROC FREQ;

TABLES ROW*COL /AGREE;

WEIGHT OUTCOME;

RUN;

<strong>Statistics for Table of ROW BY COL</strong>
<div align="center">
<table>
<thead>
<tr>
<td><strong>Simple Kappa Coefficient</strong></td>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Kappa</strong></td>
<td>0.2759</td>
</tr>
<tr>
<td><strong>ASE</strong></td>
<td>0.1024</td>
</tr>
<tr>
<td><strong>95% Lower Conf Limit</strong></td>
<td>0.0752</td>
</tr>
<tr>
<td><strong>95% Upper Conf Limit</strong></td>
<td>0.4767</td>
</tr>
</tbody>
</table>
</div>
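The A, B, and C terms and the 95% confidence interval from step 6 can be recomputed with unrounded proportions, which reproduces the SAS output above almost exactly. The Python below is an illustrative check, assuming the Fleiss large-sample variance formula that step 6 uses; it is not the SAS procedure itself:

```python
from math import sqrt

a, b, c, d = 23, 12, 19, 32
N = a + b + c + d

p11, p12, p21, p22 = a / N, b / N, c / N, d / N   # cell proportions
p1, p2 = p11 + p12, p21 + p22                     # row proportions p1., p2.
q1, q2 = p11 + p21, p12 + p22                     # column proportions p.1, p.2

pi_exp = p1 * q1 + p2 * q2
kappa = ((p11 + p22) - pi_exp) / (1 - pi_exp)

# Fleiss large-sample variance terms for kappa (step 6)
A = p11 * (1 - (p1 + q1) * (1 - kappa))**2 + p22 * (1 - (p2 + q2) * (1 - kappa))**2
B = (p12 * (q1 + p2)**2 + p21 * (q2 + p1)**2) * (1 - kappa)**2
C = (kappa - pi_exp * (1 - kappa))**2

stderr = sqrt(A + B - C) / ((1 - pi_exp) * sqrt(N))
lo, hi = kappa - 1.96 * stderr, kappa + 1.96 * stderr
print(round(kappa, 4), round(stderr, 4), round(lo, 4), round(hi, 4))
# 0.2759 0.1024 0.0752 0.4767
```

The agreement with SAS (Kappa 0.2759, ASE 0.1024, limits 0.0752 and 0.4767) confirms that the hand computation's small discrepancies come only from two-decimal rounding.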