Minimum Volume Embedding (MVE) is a nonlinear dimension reduction algorithm that exploits semidefinite programming (SDP), like MVU/SDE. Whereas MVU stretches the embedding in all directions by maximizing \(\sum_i \lambda_i\), MVE unrolls only the top part of the eigenspectrum and shrinks the remaining spectral dimensions. For ease of use, unlike kernel PCA, only the Gaussian kernel is supported for MVE.
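As a brief sketch of this contrast (following Shaw and Jebara, 2007, cited below), let \(K\) denote the learned kernel matrix with eigenvalues \(\lambda_1 \ge \dots \ge \lambda_n\) and let \(d\) be the target dimension (ndim). MVU maximizes the full trace, while MVE rewards the leading \(d\) eigenvalues and penalizes the rest:

\[ \text{MVU:}\quad \max_{K}\ \sum_{i=1}^{n}\lambda_i, \qquad\qquad \text{MVE:}\quad \max_{K}\ \sum_{i=1}^{d}\lambda_i \;-\; \sum_{i=d+1}^{n}\lambda_i, \]

both subject to the usual SDP constraints: local (\(k\)-nn) distances are preserved, the embedding is centered, and \(K \succeq 0\).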
an \((n\times p)\) matrix or data frame whose rows are observations and columns represent independent variables.
an integer-valued target dimension.
size of \(k\)-nn neighborhood.
bandwidth for Gaussian kernel.
an additional option for preprocessing the data. The default is "null"; see aux.preprocess for more details.
stopping criterion for incremental change.
maximum number of iterations allowed.
a named list containing
an \((n\times ndim)\) matrix whose rows are embedded observations.
a list containing information for out-of-sample prediction.
Shaw B, Jebara T (2007). “Minimum Volume Embedding.” In Meila M, Shen X (eds.), Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, March 21-24, 2007, San Juan, Puerto Rico, 460--467.
if (FALSE) {
## use a small subset of iris data
set.seed(100)
id = sample(1:150, 50)
X = as.matrix(iris[id,1:4])
lab = as.factor(iris[id,5])
## try different connectivity levels
output1 <- do.mve(X, knn=5)
output2 <- do.mve(X, knn=10)
output3 <- do.mve(X, knn=20)
## Visualize the three embeddings side by side
opar <- par(no.readonly=TRUE)
par(mfrow=c(1,3))
plot(output1$Y, main="knn:k=5", pch=19, col=lab)
plot(output2$Y, main="knn:k=10", pch=19, col=lab)
plot(output3$Y, main="knn:k=20", pch=19, col=lab)
par(opar)
}
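Since the description contrasts MVE with MVU, a side-by-side comparison can also be illustrative. The sketch below assumes a companion do.mvu() routine with a similar interface (returning the embedding in $Y) is available in the same package; rename the call if your MVU implementation differs.
if (FALSE) {
## hedged sketch: compare MVE against MVU on the same iris subset
## (do.mvu() and its interface are assumed; adjust to your MVU routine)
set.seed(100)
id  = sample(1:150, 50)
X   = as.matrix(iris[id,1:4])
lab = as.factor(iris[id,5])
emb_mve <- do.mve(X, knn=10)   # MVE: shrink the non-leading spectrum
emb_mvu <- do.mvu(X)           # MVU: stretch all directions (assumed call)
## plot the two embeddings next to each other
opar <- par(no.readonly=TRUE)
par(mfrow=c(1,2))
plot(emb_mve$Y, main="MVE (knn=10)", pch=19, col=lab)
plot(emb_mvu$Y, main="MVU", pch=19, col=lab)
par(opar)
}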