# Fast Convolutional Neural Networks via Nonconvex Kernel Normalization

In this work, we propose a new framework for learning deep CNNs from raw image patches. As a case study, we present a scalable method for training deep CNNs with compressed convolutional kernels. We first show that the compressed networks achieve state-of-the-art performance on many tasks while using a compact representation of the image patches, and then show that they generalize readily to unseen patches. Experiments on several benchmark datasets confirm that our approach matches or exceeds other state-of-the-art methods.
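The abstract does not spell out the normalization itself; as a minimal sketch, one plausible reading of kernel normalization is rescaling each convolution filter to unit norm before use. The function name, tensor layout, and per-filter norm below are illustrative assumptions, not the paper's API:

```python
import numpy as np

def normalize_kernels(w, eps=1e-8):
    """Rescale each output filter of a conv kernel to unit norm.

    Assumes layout (out_channels, in_channels, kh, kw); both the
    layout and the per-filter normalization are assumptions made
    for illustration, not details taken from the paper.
    """
    norms = np.sqrt((w ** 2).sum(axis=(1, 2, 3), keepdims=True))
    return w / (norms + eps)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))   # 8 filters over 3-channel 3x3 patches
w_n = normalize_kernels(w)
```

After this step every filter has (approximately) unit norm, so the subsequent convolution responses are scale-invariant with respect to the learned kernels.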

This paper presents an algorithm for learning a sparse nonnegative matrix. We first describe the algorithm, then present two empirical results that characterize it in terms of the number of parameters and the quality of the recovered matrix. A theoretical analysis indicates that the algorithm outperforms previous nonnegative matrix sparsity approaches; in particular, we show that it converges to a stable solution when the objective is to learn a nonnegative matrix. An empirical evaluation on a set of problems shows that the algorithm performs well on many real-world tasks.
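The abstract does not give the update rule, so as a hedged sketch, one standard way to learn a sparse nonnegative matrix is projected iterative soft-thresholding on a simple denoising objective, min over W >= 0 of 0.5*||W - X||^2 + lam*||W||_1. The objective, names, and step size below are my assumptions, not the paper's method:

```python
import numpy as np

def sparse_nonneg(X, lam=0.1, step=0.5, n_iter=100):
    """Projected ISTA for min_W 0.5*||W - X||_F^2 + lam*||W||_1, W >= 0.

    A standard illustrative algorithm, not the paper's exact method.
    """
    W = np.zeros_like(X)
    for _ in range(n_iter):
        grad = W - X                                   # gradient of the smooth term
        W = np.maximum(W - step * (grad + lam), 0.0)   # shrink, then project onto W >= 0
    return W

X = np.array([[0.5, -0.2],
              [0.05, 1.0]])
W = sparse_nonneg(X, lam=0.1)
```

For this separable objective the iterates converge to the closed-form solution `np.maximum(X - lam, 0)`: entries below the threshold are zeroed out, which is what makes the learned matrix both nonnegative and sparse.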

Paying More Attention to Proposals via Modal Attention and Action Units

A Deep Recurrent Convolutional Neural Network for Texture Recognition


A Unified Approach for Online Video Quality Control using Deep Neural Network Technique

Theory of a Divergence Annealing for Inferred State-Space Models
