Learning and Storing the Parts of Objects: IMF

Ruairí de Fréin

Research output: Contribution to conference › Paper › peer-review


Abstract

A central concern for many learning algorithms is how to store what has been learned efficiently. An algorithm for compressing Nonnegative Matrix Factorizations is presented: compression is achieved by embedding the factorization in an encoding routine. Its performance is evaluated on two standard test images, Peppers and Barbara. The proposed factorization achieves a compression ratio of 18:1, making Nonnegative Matrix Factorizations considerably cheaper to store without significantly degrading accuracy (≈ 1–3 dB of degradation is introduced). We learn as before, but storage is cheaper.
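The paper's IMF encoding routine itself is not reproduced on this page. As a rough illustration of the idea the abstract describes, the following is a minimal sketch assuming standard multiplicative-update NMF followed by uniform quantization of the learned factors; the function names, the 4-bit code width, and the toy data are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factorize nonnegative V ~ W @ H with Lee-Seung
    multiplicative updates for the Euclidean cost."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def quantize(A, n_bits=4):
    """Uniformly quantize factor entries; storing the small integer
    codes (plus one scale per factor) is where the saving comes from."""
    scale = A.max() / (2 ** n_bits - 1)
    codes = np.round(A / scale).astype(np.uint8)
    return codes, scale

# Toy usage on a random nonnegative "image" patch.
V = np.random.default_rng(1).random((64, 64))
W, H = nmf(V, r=8)
(Wq, sw), (Hq, sh) = quantize(W), quantize(H)
V_hat = (Wq * sw) @ (Hq * sh)  # reconstruction from quantized factors
psnr = 10 * np.log10(V.max() ** 2 / np.mean((V - V_hat) ** 2))
print(f"PSNR after quantizing the factors: {psnr:.1f} dB")
```

Rank and code width trade storage against reconstruction quality: at rank 8 with 4-bit codes the two factors occupy a small fraction of the original matrix's footprint, which is the storage argument the abstract makes for factorization-based compression.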
Original language: English
DOIs
Publication status: Published - 2014
Event: IEEE International Workshop on Machine Learning for Signal Processing - Reims, France
Duration: 01 Jan 2014 → …
http://mlsp2014.conwiz.dk/home.htm#.VDE3necjEbI

Conference

Conference: IEEE International Workshop on Machine Learning for Signal Processing
City: Reims, France
Period: 01/01/2014 → …
Internet address: http://mlsp2014.conwiz.dk/home.htm#.VDE3necjEbI

Keywords

  • Adaptive algorithms
  • Data-driven adaptive systems and models
  • Learning theory and techniques
  • Compression
  • Matrix factorization
