LIBPMF -- A Library for Large-scale Parallel Matrix Factorization
Version 1.42 is released on Jan 03, 2020. The Python interface is made compatible with Python 3.
Version 1.41 is released on April 24, 2014. A small bug in the R interface is fixed.
Version 1.4 is released on Sep 23, 2013. A MATLAB interface is included.
Version 1.3 is released on Aug 28, 2013. The option to support nonnegative constraints is included.
Version 1.2 is released on July 18, 2013. We fix some bugs and include both Python and R interfaces.
Version 1.1 is released on April 27, 2013. We improve the efficiency, fix the compile issue on Mac machines, and support arbitrary input ordering of ratings.
The Program
LIBPMF implements the CCD++ algorithm, which solves large-scale matrix factorization problems such as the low-rank factorizations used in recommender systems.
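The idea behind CCD++ can be sketched in a few lines: the model A ≈ WHᵀ is updated one rank-one factor at a time, and each factor is refined by alternating closed-form one-dimensional coordinate updates. The NumPy sketch below is a minimal dense illustration only; the actual library operates on sparse observed ratings and parallelizes the updates with OpenMP, and the function name `ccdpp_dense` is ours, not part of LIBPMF.

```python
import numpy as np

def ccdpp_dense(A, k=5, lam=0.1, outer_iters=5, inner_iters=5, seed=0):
    """Dense CCD++ sketch: fit A ~ W @ H.T with L2 regularization lam.

    Illustrative only -- LIBPMF's CCD++ works on the observed entries of a
    sparse rating matrix and runs these updates in parallel with OpenMP.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.standard_normal((m, k)) * 0.1
    H = rng.standard_normal((n, k)) * 0.1
    R = A - W @ H.T                       # residual of the current model
    for _ in range(outer_iters):
        for t in range(k):                # sweep over rank-one factors
            w, h = W[:, t].copy(), H[:, t].copy()
            R += np.outer(w, h)           # add factor t back into the residual
            for _ in range(inner_iters):  # alternate closed-form 1-D updates
                w = R @ h / (lam + h @ h)
                h = R.T @ w / (lam + w @ w)
            R -= np.outer(w, h)           # subtract the refreshed factor
            W[:, t], H[:, t] = w, h
    return W, H
```

Keeping the residual R up to date is what makes each inner update a cheap closed-form least-squares step rather than a full re-fit.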
Download
LIBPMF implements CCD++ in C++ with OpenMP and provides command-line, Python, and R interfaces.
Download the tarball and extract the files. On a UNIX system with GCC 4.0 or above, compile the program using the provided Makefile:
> make
[Usage]: omp-pmf-train [options] data_dir [model_filename]
options:
-s type : set type of solver (default 0)
0 -- CCDR1 with a fundec stopping condition
-k rank : set the rank (default 10)
-n threads : set the number of threads (default 4)
-l lambda : set the regularization parameter lambda (default 0.1)
-t max_iter: set the number of iterations (default 5)
-T max_iter: set the number of inner iterations used in CCDR1 (default 5)
-e epsilon : set inner termination criterion epsilon of CCDR1 (default 1e-3)
-p do_predict: do prediction or not (default 0)
-q verbose: show information or not (default 0)
-N do_nmf: do nmf (default 0)
For example, to train on the data in toy-example/ with 4 threads, you can use
> ./omp-pmf-train -n 4 toy-example/
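The options above can be combined; as one hedged example (the flag values here are arbitrary choices, not recommended settings), a rank-40 nonnegative factorization with lambda = 0.05 on 8 threads, writing the result to the file model, would look like
> ./omp-pmf-train -k 40 -l 0.05 -N 1 -n 8 toy-example/ model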
Please see the README included in the tarball for more details.
Python and R users should refer to the detailed README in each
interface's subdirectory.
Please acknowledge the use of the code with a
citation.
Scalable Coordinate Descent Approaches to Parallel Matrix Factorization for Recommender Systems,
Hsiang-Fu Yu, Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon
IEEE International Conference on Data Mining (ICDM), 2012.
Download:
[pdf]
@inproceedings{hfy12a,
title ={Scalable Coordinate Descent Approaches to Parallel
Matrix Factorization for Recommender Systems},
author={Hsiang-Fu Yu and Cho-Jui Hsieh and Si Si and Inderjit S. Dhillon},
booktitle = {IEEE International Conference on Data Mining},
year = {2012}
}
Bug reports and comments are
always appreciated. We would like to know who is interested in our
work, so feel free to contact us.