Show simple item record

Author: Muravev A.
Author: Tran D.T.
Author: Iosifidis A.
Author: Kiranyaz S.
Author: Gabbouj M.
Available date: 2020-03-03T06:19:36Z
Publication Date: 2018
Publication Name: Proceedings - International Conference on Image Processing, ICIP
Resource: Scopus
ISSN: 1522-4880
URI: http://dx.doi.org/10.1109/ICIP.2018.8451082
URI: http://hdl.handle.net/10576/13220
Abstract: The massive size of the data that machine learning models must process nowadays poses new challenges related to their computational complexity and memory footprint. These challenges span all processing steps involved in applying such models, from the fundamental operations needed to evaluate distances between vectors, to the optimization of large-scale systems, e.g., for non-linear regression using kernels, or the speed-up of deep learning models formed by billions of parameters. To address these challenges, new approximate solutions have recently been proposed based on matrix/tensor decompositions, randomization, and quantization strategies. This paper provides a comprehensive review of the related methodologies and discusses their connections.
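As a minimal illustration of the matrix-decomposition strategies the abstract mentions, the sketch below computes a low-rank approximation of a matrix via truncated SVD. This is a standard textbook technique, not code from the reviewed paper; the matrix sizes and rank are arbitrary choices for the example.

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young).

    Keeping only the k largest singular values reduces storage from
    m*n to k*(m + n + 1) numbers -- the kind of complexity/memory
    trade-off the survey discusses.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Toy example: approximate a 100 x 50 random matrix with a rank-10 factorization.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
A_k = low_rank_approx(A, 10)

# Approximation error equals the energy in the discarded singular values.
err = np.linalg.norm(A - A_k)
```

The same idea underlies many of the acceleration approaches surveyed: replacing an exact, dense operator with a cheaper factored form at a controlled loss in accuracy.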
Language: en
Publisher: IEEE Computer Society
Subject: Approximate kernel-based learning; Approximate Nearest Neighbor Search; Hashing; Low-rank Approximation; Neural Network Acceleration; Vector Quantization
Title: Acceleration Approaches for Big Data Analysis
Type: Conference Paper
Pagination: 311-315

