# New release: Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure
- interactive & dynamic
- step-by-step implementation
- incredible performance, yet no C++ hell (!)
- Intel & AMD CPUs (MKL-DNN)
- Nvidia GPUs (CUDA and cuDNN)
- AMD GPUs (yes, OpenCL too!)
- Clojure (it’s magic!)
- Java Virtual Machine (without Java boilerplate!)
- complete source code
- beautiful typesetting (see the sample chapters)
## Table of Contents
### Part 1: Getting Started
4-6 chapters (TO BE DETERMINED)
### Part 2: Inference ([AVAILABLE])
#### Representing layers and connections ([AVAILABLE])
#### Bias and activation function ([AVAILABLE])
#### Fully connected inference layers ([AVAILABLE])
#### Increasing performance with batch processing ([AVAILABLE])
#### Sharing memory ([AVAILABLE])
#### GPU computing with CUDA and OpenCL ([AVAILABLE])
### Part 3: Learning ([AVAILABLE])
#### Gradient descent and backpropagation ([AVAILABLE])
#### The forward pass ([AVAILABLE])
#### The activation and its derivative ([AVAILABLE])
#### The backward pass ([AVAILABLE])
### Part 4: A simple neural network API ([AVAILABLE])
#### Inference API ([AVAILABLE])
#### Training API ([AVAILABLE])
#### Initializing weights ([AVAILABLE])
#### Regression: learning a known function ([AVAILABLE])
### Part 5: Training optimizations (IN PROGRESS)
#### Weight decay ([AVAILABLE])
#### Momentum and Nesterov momentum ([AVAILABLE])
#### Adaptive learning rates ([AVAILABLE])
#### Regression: Boston housing prices (SOON)
#### Dropout (SOON)
#### Stochastic gradient descent (SOON)
#### Classification: IMDB sentiments (SOON)
### Part 6: Tensors (TO BE DETERMINED, BUT SOON ENOUGH)
#### Tensors, Matrices, and ND-arrays (TBD)
#### Tensors on the CPU with MKL-DNN (TBD)
#### Tensors on the GPU with cuDNN (TBD)
#### Tensor API (TBD)
### Part 7: Convolutional layers (TBD)
4-6 chapters (TBD)
### Part 8: Recurrent networks (TBD)
4-6 chapters (TBD)