Abstract. Over the last decade, sparse representation, dictionary learning, and deep artificial neural networks have dramatically impacted the fields of signal processing and machine learning, yielding state-of-the-art results in a variety of tasks, including image enhancement and reconstruction, pattern recognition and classification, and automatic speech recognition. In this talk, we present a brief introduction to these subjects and introduce new algorithms and perspectives. Specifically, we will introduce efficient algorithms for sparse recovery and dictionary learning, based largely on proximal methods from optimization. Furthermore, we will present a new algorithm for systematically designing large artificial neural networks using a progression property: a greedy algorithm that progressively adds nodes and layers to the network. We will also discuss an effective method, inspired by existing dictionary learning techniques, for reducing the number of training parameters in neural networks, thereby facilitating their use in applications with limited memory and computational resources. Further connections among sparse representation, dictionary learning, and deep neural networks will also be discussed.
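
The abstract does not spell out the algorithms themselves; as a point of reference, the sketch below shows a minimal proximal-gradient (ISTA) iteration for the standard l1-regularized sparse recovery problem, one of the best-known instances of the proximal methods mentioned above. The problem sizes, step size, and regularization weight are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # gradient of the smooth least-squares term
        x = soft_threshold(x - grad / L, lam / L)  # gradient step followed by the prox
    return x

# Toy sparse-recovery example: recover a sparse signal from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, b, lam=0.1)
```

The soft-thresholding step is what induces sparsity in the iterates; the same proximal-splitting template underlies many of the sparse recovery and dictionary learning solvers referred to in the abstract.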