Here are some useful codes I wrote. You may use them freely for noncommercial purposes, provided you comply with the copyright notice.

Download the C++ codes from the ScalaGAUSS website

This is an MPI program for computing matrix-vector products with the Matérn kernel. It is based on the tree code algorithm proposed in A Fast Summation Tree Code for Matern Kernel, and the implementation is documented in A Parallel Tree Code for Computing Matrix-Vector Products with the Matern Kernel.
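The distributed code is MPI C++; for reference, the dense O(n^2) product that the tree code accelerates can be sketched in a few lines. This Python/NumPy sketch uses the ν = 3/2 Matérn kernel with a length scale ℓ chosen for illustration; the function names are mine, not those of the actual code.

```python
import numpy as np

def matern32(r, ell=1.0):
    """Matérn kernel with smoothness nu = 3/2 (a common closed-form case)."""
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)

def matern_matvec(points, q, ell=1.0):
    """Dense O(n^2) product y = K q, where K[i, j] = matern32(||x_i - x_j||).

    This is the brute-force baseline; the tree code approximates the same
    product hierarchically at far lower cost for large n.
    """
    diff = points[:, None, :] - points[None, :, :]   # (n, n, d) pairwise differences
    r = np.linalg.norm(diff, axis=2)                 # pairwise distances
    K = matern32(r, ell)
    return K @ q
```

Since k(0) = 1, the diagonal of K is all ones, which gives a quick sanity check on any implementation.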

Download the Matlab and C++ codes from the ScalaGAUSS website

ScalaGAUSS is a set of scalable tools for analyzing large-scale spatiotemporal data modeled as a Gaussian process/random field. The project addresses the computational challenges of statistical modeling and analysis at very large scales. A number of publications arose from this project; for more information, please see the project website.

Download the Matlab codes (version: 2011.04.08)

Since I have not found a simple Matlab implementation of the BPCG solver on the Web, I am sharing mine. The program is suitable for prototyping and proofs of concept when developing algorithms based on this solver, and for educational purposes.

Download the Matlab codes (version: 2010.11.10)

This is a Matlab program for computing a function of a large sparse matrix times a vector. The underlying technique is a least squares polynomial approximation to the function, and the method applies to an arbitrary function. For more information, see my SISC paper Computing $f(A)b$ via Least Squares Polynomial Approximations.
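The basic idea can be illustrated independently of the Matlab code: fit a polynomial p to f by least squares on a scaled interval containing the spectrum, then apply p(A)b using only matrix-vector products. This Python/NumPy sketch uses a plain discrete monomial fit (the paper constructs the least squares polynomial more carefully); the function name and defaults are mine.

```python
import numpy as np

def poly_fAb(matvec, b, f, interval, deg=20, nsamples=400):
    """Approximate f(A) b with a least squares polynomial fit of f.

    matvec   : function x -> A x (A symmetric; only matvecs are needed)
    interval : (lo, hi), an interval enclosing the spectrum of A
    The fit is done in a variable scaled to [-1, 1] for conditioning.
    """
    lo, hi = interval
    c, h = (lo + hi) / 2.0, (hi - lo) / 2.0
    t = np.linspace(-1.0, 1.0, nsamples)
    coeffs = np.polyfit(t, f(c + h * t), deg)   # least squares fit, highest degree first
    # Horner's rule on the shifted matrix B = (A - c I) / h:
    # evaluating p(B) b costs deg matvecs and forms no matrix functions.
    y = coeffs[0] * b
    for ck in coeffs[1:]:
        y = (matvec(y) - c * y) / h + ck * b
    return y
```

For a smooth function such as exp, a modest degree already matches an eigendecomposition-based computation to high accuracy on small test matrices.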

Download the C++ codes (version: 2009.10.13)

This is a C++ program that efficiently computes an approximate kNN graph for high-dimensional data via divide and conquer. For more information, see my JMLR paper Fast Approximate $k$NN Graph Construction for High Dimensional Data via Recursive Lanczos Bisection.
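As a point of comparison, the exact graph that the divide-and-conquer method approximates can be computed by brute force. This Python/NumPy sketch (names mine) shows the O(n^2) baseline whose cost the recursive bisection reduces:

```python
import numpy as np

def knn_graph_brute(X, k):
    """Exact kNN graph by brute force: all pairwise distances, then sorting.

    Returns an (n, k) index array: row i lists the k nearest neighbors of
    point i (excluding i itself). Cost is O(n^2 d), which is what motivates
    approximate constructions for large n.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # squared distances
    np.fill_diagonal(d2, np.inf)                             # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]
```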

Download the Matlab codes (version: 2009.01.20)

This toolkit provides Matlab functions for computing the low-rank orthogonal approximation of a given tensor. For more information, see my SIMAX paper On the Tensor SVD and the Optimal Low Rank Orthogonal Approximation of Tensors.

Download the Matlab codes (version: 2008.12.10)

This toolkit provides a Matlab function with functionality similar to that of the standard svds() function, but it consumes much less memory and runs significantly faster than svds() on sparse matrices and large dense matrices. The computation of the SVD is based on the Lanczos algorithm. The toolkit also provides a similar function for low-rank-modified matrices. The provided functions are not meant to replace the standard svds() function, for two reasons: (1) they can compute only the largest/smallest singular values/vectors, not interior ones (although modifications to meet such requirements are easy); (2) they perform almost no error checking, so they must be used with caution.

Known issues: The codes were originally developed for large-scale data analysis tasks (such as dimensionality reduction), where the largest singular components are of interest. The underlying algorithm is known to be ill suited for computing the smallest components.
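The Lanczos approach behind the toolkit is Golub-Kahan bidiagonalization: build orthonormal bases with matvecs by A and A', project onto a small bidiagonal matrix, and take its singular values. This Python/NumPy sketch (names and defaults mine, not the toolkit's interface) illustrates the idea, with full reorthogonalization for numerical safety and, in the spirit of the toolkit's caveat, no error checks:

```python
import numpy as np

def lanczos_sv(A, k, iters=None, seed=0):
    """Largest k singular values of A via Golub-Kahan Lanczos bidiagonalization.

    A bare sketch: full reorthogonalization, no breakdown or error checks.
    When iters < min(A.shape), the returned values are Ritz approximations,
    which converge fastest for the largest singular values.
    """
    m, n = A.shape
    iters = iters or min(min(m, n), max(2 * k, 20))
    rng = np.random.default_rng(seed)
    U = np.zeros((m, iters)); V = np.zeros((n, iters))
    alphas, betas = [], []
    p = rng.standard_normal(n)
    u = np.zeros(m); beta = 0.0
    for j in range(iters):
        v = p / np.linalg.norm(p)
        V[:, j] = v
        r = A @ v - beta * u
        r -= U[:, :j] @ (U[:, :j].T @ r)            # reorthogonalize against U
        alpha = np.linalg.norm(r)
        u = r / alpha
        U[:, j] = u
        p = A.T @ u - alpha * v
        p -= V[:, :j + 1] @ (V[:, :j + 1].T @ p)    # reorthogonalize against V
        beta = np.linalg.norm(p)
        alphas.append(alpha); betas.append(beta)
    B = np.diag(alphas) + np.diag(betas[:-1], 1)    # upper bidiagonal projection
    return np.linalg.svd(B, compute_uv=False)[:k]
```

Only matvecs with A and A' are used, which is why such a routine can be far cheaper than a dense SVD on large sparse matrices.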

Download the Matlab codes (version: 2011.07.08)

This is a handy Matlab function that solves the quadratic program with box constraints

    minimize (1/2) x'Hx + c'x   subject to   l <= x <= u,

where H is positive definite. The function is implemented using the two-metric projection method, specifically tailored to the positive definite Hessian. It outperforms the general-purpose Matlab function quadprog(): it solves a system of 2000 variables in about 10 seconds on a P4 3.0GHz CPU. By duality, it *may* also be used to solve convex quadratic programs with linear constraints

    minimize (1/2) x'Hx + c'x   subject to   Ax <= b.

This code was the prototype implementation used in the paper Y. Sheng, T. Yapo, and B. Cutler, Global Illumination Compensation for Spatially Augmented Reality, Eurographics 2010.
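The two-metric projection method additionally rescales the gradient on the free variables by an inverse Hessian block; the plain gradient projection idea it accelerates fits in a few lines. This Python/NumPy sketch (names and the fixed 1/L step are my choices, not the Matlab function's interface) solves the same box-constrained problem:

```python
import numpy as np

def box_qp(H, c, lo, hi, iters=500):
    """Minimize 0.5 x'Hx + c'x subject to lo <= x <= hi,
    for symmetric positive definite H.

    Plain gradient projection with a fixed step 1/lambda_max(H); the
    two-metric projection method converges much faster by using
    second-order information on the free variables.
    """
    L = np.linalg.eigvalsh(H)[-1]           # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(c), lo, hi)   # feasible starting point
    for _ in range(iters):
        g = H @ x + c                       # gradient of the quadratic
        x = np.clip(x - g / L, lo, hi)      # project the gradient step onto the box
    return x
```

When H is diagonal, the solution is simply the unconstrained minimizer clipped to the box, which makes for an easy correctness check.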