Title: Accelerating Collaborative Filtering for Implicit Feedback Datasets using GPUs
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Gates, M., H. Anzt, J. Kurzak, and J. Dongarra
Conference Name: 2015 IEEE International Conference on Big Data (IEEE BigData 2015)
Conference Location: Santa Clara, CA
In this paper we accelerate the Alternating Least Squares (ALS) algorithm used for generating product recommendations from implicit feedback datasets. We approach the algorithm with concepts proven successful in High Performance Computing: formulating the algorithm as a mix of cache-optimized, algorithm-specific kernels and standard BLAS routines, accelerating it on graphics processing units (GPUs), using parallel batched kernels, and applying autotuning to identify the best-performing kernel configurations. For benchmark datasets, the multi-threaded CPU implementation we propose achieves more than a 10x speedup over the implementations available in the GraphLab and Spark MLlib software packages. For the GPU implementation, the parameters of an algorithm-specific kernel were optimized using a comprehensive autotuning sweep, yielding an additional 2x speedup over our CPU implementation.
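To make the abstract concrete, the sketch below shows the standard ALS formulation for implicit feedback (in the style of Hu, Koren, and Volinsky) that papers in this line of work build on: each observation r_ui is turned into a binary preference p_ui with confidence c_ui = 1 + alpha*r_ui, and user and item factors are solved in alternation via small dense linear systems. This is a minimal NumPy illustration of the algorithm being accelerated, not the paper's optimized CPU/GPU implementation; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def als_implicit(R, factors=8, reg=0.1, alpha=40.0, iters=10, seed=0):
    """Minimal implicit-feedback ALS sketch (illustrative, not the
    paper's implementation).

    R       : dense (users x items) matrix of raw implicit counts.
    Returns : (X, Y) user and item factor matrices.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    X = 0.1 * rng.standard_normal((m, factors))  # user factors
    Y = 0.1 * rng.standard_normal((n, factors))  # item factors
    P = (R > 0).astype(float)      # binary preferences p_ui
    C = 1.0 + alpha * R            # confidences c_ui
    I = np.eye(factors)

    for _ in range(iters):
        # Fix Y; solve (Y^T C^u Y + reg*I) x_u = Y^T C^u p_u per user.
        # Y^T Y is shared across users; only the (C^u - I) correction
        # is user-specific -- the classic cost-saving trick.
        YtY = Y.T @ Y
        for u in range(m):
            A = YtY + Y.T @ ((C[u] - 1.0)[:, None] * Y) + reg * I
            b = Y.T @ (C[u] * P[u])
            X[u] = np.linalg.solve(A, b)
        # Fix X; symmetric update for each item factor y_i.
        XtX = X.T @ X
        for i in range(n):
            A = XtX + X.T @ ((C[:, i] - 1.0)[:, None] * X) + reg * I
            b = X.T @ (C[:, i] * P[:, i])
            Y[i] = np.linalg.solve(A, b)
    return X, Y
```

The per-user and per-item solves are independent, which is what makes the batched-kernel GPU approach described in the abstract natural: many small symmetric systems can be factored and solved in parallel.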