calcAUC                 package:puma                 R Documentation

_C_a_l_c_u_l_a_t_e _A_r_e_a _U_n_d_e_r _C_u_r_v_e (_A_U_C) _f_o_r _a _s_t_a_n_d_a_r_d _R_O_C _p_l_o_t.

_D_e_s_c_r_i_p_t_i_o_n:

     Calculates the AUC values for one or more ROC plots.

_U_s_a_g_e:

     calcAUC(scores, truthValues, includedProbesets = 1:length(truthValues))

_A_r_g_u_m_e_n_t_s:

  scores: A numeric vector of scores, e.g. one of the columns of the
          statistics of a 'DEResult' object.

truthValues: A logical (boolean) vector indicating which scores are
          True Positives.

includedProbesets: A vector of indices indicating which elements of
          'scores' (and 'truthValues') are to be used in the
          calculation. By default all elements are used; supply a
          subset if, for example, only some of the probesets that are
          not True Positives should be treated as False Positives.

_V_a_l_u_e:

     A single number which is the AUC value.

_A_u_t_h_o_r(_s):

     Richard D. Pearson

_S_e_e _A_l_s_o:

     Related methods 'plotROC' and 'numFP'.

_E_x_a_m_p_l_e_s:

     ## Simulate scores for 1000 true negatives and 1000 true positives
     ## under two hypothetical methods, a and b
     class1a <- rnorm(1000, 0.2, 0.1)
     class2a <- rnorm(1000, 0.6, 0.2)
     class1b <- rnorm(1000, 0.3, 0.1)
     class2b <- rnorm(1000, 0.5, 0.2)
     scores_a <- c(class1a, class2a)
     scores_b <- c(class1b, class2b)
     ## The first 1000 elements are not True Positives, the rest are
     classElts <- c(rep(FALSE, 1000), rep(TRUE, 1000))
     print(calcAUC(scores_a, classElts))
     print(calcAUC(scores_b, classElts))

     ## The function is currently defined as follows; prediction() and
     ## performance() are from the ROCR package
     function (scores, truthValues, includedProbesets = 1:length(truthValues))
     {
         predictions <- prediction(scores[includedProbesets],
             truthValues[includedProbesets])
         AUC <- performance(predictions, "auc")@y.values[[1]]
         return(AUC)
     }

