ldaCMA {CMA}                                                R Documentation

Linear Discriminant Analysis
Description

Performs a linear discriminant analysis under the assumption of a
multivariate normal distribution in each class (with equal, but generally
structured, covariance matrices). The function lda from the package MASS
is called for the computation.

For S4 method information, see ldaCMA-methods.
Usage

ldaCMA(X, y, f, learnind, models=FALSE, ...)
Arguments

X         Gene expression data. Can be one of the following:
          - A matrix. Rows correspond to observations, columns to variables.
          - A data.frame, when f is not missing (see below).
          - An object of class ExpressionSet.

y         Class labels. Can be one of the following:
          - A numeric vector.
          - A factor.
          - A character if X is an ExpressionSet that specifies the
            phenotype variable.
          - missing, if X is a data.frame and a proper formula f is provided.
          WARNING: The class labels will be re-coded to range from 0 to K-1,
          where K is the total number of different classes in the learning set.

f         A two-sided formula, if X is a data.frame. The left side specifies
          the class labels, the right side the variables.

learnind  An index vector specifying the observations that belong to the
          learning set. May be missing; in that case, the learning set
          consists of all observations and predictions are made on the
          learning set.

models    A logical value indicating whether the model object shall be
          returned.

...       Further arguments to be passed to lda from the package MASS;
          see the sketch below the argument list.
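The following minimal sketch is not part of the original help page; it
illustrates the models and ... arguments under the assumption that extra
arguments such as prior are handed through unchanged to lda from MASS.
The data preparation mirrors the Golub example further below.

## Hedged sketch: pass 'prior' through '...' to MASS::lda, keep the model
library(CMA)
data(golub)
golubY <- golub[,1]                      # class labels
golubX <- as.matrix(golub[,2:11])        # expression of the first 10 genes
set.seed(111)
learnind <- sample(length(golubY), size=floor(2/3*length(golubY)))
ldaresult <- ldaCMA(X=golubX, y=golubY, learnind=learnind,
                    models=TRUE, prior=c(0.5, 0.5))
ftable(ldaresult)                        # true vs. predicted class labels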
Value

An object of class cloutput.
Note

Excessive variable selection usually has to be performed before ldaCMA can
be applied in the p > n setting. Not reducing the number of variables can
result in an error message.
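As a hedged illustration of this note (not taken from the original page),
the sketch below reduces the gene set on the learning observations before
calling ldaCMA, using a simple per-gene t-statistic as the filter; within
CMA itself, the GeneSelection function provides a more complete
variable-selection workflow.

## Hedged sketch: simple t-test filter on the learning set, then LDA
library(CMA)
data(golub)
golubY <- golub[,1]
golubX <- as.matrix(golub[,-1])          # all genes: p >> n
set.seed(111)
learnind <- sample(length(golubY), size=floor(2/3*length(golubY)))
### rank genes by absolute two-sample t-statistic, learning set only
tstat <- apply(golubX[learnind,], 2,
               function(g) abs(t.test(g ~ golubY[learnind])$statistic))
keep <- order(tstat, decreasing=TRUE)[1:10]
### LDA on the reduced gene set
ldaresult <- ldaCMA(X=golubX[,keep], y=golubY, learnind=learnind)
ftable(ldaresult)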
Author(s)

Martin Slawski ms@cs.uni-sb.de

Anne-Laure Boulesteix boulesteix@ibe.med.uni-muenchen.de
References

McLachlan, G. J. (1992). Discriminant Analysis and Statistical Pattern
Recognition. Wiley, New York.
See Also

compBoostCMA, dldaCMA, ElasticNetCMA, fdaCMA, flexdaCMA, gbmCMA, knnCMA,
LassoCMA, nnetCMA, pknnCMA, plrCMA, pls_ldaCMA, pls_lrCMA, pls_rfCMA,
pnnCMA, qdaCMA, rfCMA, scdaCMA, shrinkldaCMA, svmCMA
Examples

## Not run: 
### load Golub AML/ALL data
data(golub)
### extract class labels
golubY <- golub[,1]
### extract gene expression from first 10 genes
golubX <- as.matrix(golub[,2:11])
### select learningset
ratio <- 2/3
set.seed(111)
learnind <- sample(length(golubY), size=floor(ratio*length(golubY)))
### run LDA
ldaresult <- ldaCMA(X=golubX, y=golubY, learnind=learnind)
### show results
show(ldaresult)
ftable(ldaresult)
plot(ldaresult)
### multiclass example:
### load Khan data
data(khan)
### extract class labels
khanY <- khan[,1]
### extract gene expression from first 10 genes
khanX <- as.matrix(khan[,2:11])
### select learningset
set.seed(111)
learnind <- sample(length(khanY), size=floor(ratio*length(khanY)))
### run LDA
ldaresult <- ldaCMA(X=khanX, y=khanY, learnind=learnind)
### show results
show(ldaresult)
ftable(ldaresult)
plot(ldaresult)

## End(Not run)