321. Convolutional Neural Networks in One Dimension
1. Introduction
1.1 1D convolution for neural networks, part 1: Sliding dot product
1.2 1D convolution for neural networks, part 2: Convolution copies the kernel
1.3 1D convolution for neural networks, part 3: Sliding dot product equations longhand
1.4 1D convolution for neural networks, part 4: Convolution equation
1.5 1D convolution for neural networks, part 5: Backpropagation
1.6 1D convolution for neural networks, part 6: Input gradient
1.7 1D convolution for neural networks, part 7: Weight gradient
1.8 1D convolution for neural networks, part 8: Padding
1.9 1D convolution for neural networks, part 9: Stride
Article: 1D convolution for neural networks
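The sliding-dot-product view of 1D convolution developed in this section can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the course's own code:

```python
import numpy as np

def conv1d(signal, kernel):
    """1D convolution as a sliding dot product (no padding, stride 1).

    The kernel is flipped first, matching the mathematical definition of
    convolution; sliding without the flip would be cross-correlation.
    """
    k = kernel[::-1]
    n_out = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(k)], k) for i in range(n_out)])

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 0.0, -1.0])
print(conv1d(x, w))  # matches np.convolve(x, w, mode="valid")
```

Padding and stride (lessons 1.8 and 1.9) modify only the range of positions the kernel visits; the dot product at each position is unchanged.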
2. Coding a convolution block
2.1 Convolution in Python from scratch (5:44)
2.2 Comparison with NumPy convolve() (5:57)
2.3 Create the convolution block Conv1D (6:54)
2.4 Initialize the convolution block (3:29)
2.5 Write the forward and backward pass (3:27)
2.6 Write the multichannel, multikernel convolutions (7:28)
2.7 Write the weight gradient and input gradient calculations (8:26)
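The multichannel, multikernel forward pass built up in this section can be sketched as nested loops over kernels and input channels. The array shapes and function name here are assumptions for illustration, not Cottonwood's actual API:

```python
import numpy as np

def conv1d_forward(x, kernels):
    """Multichannel, multikernel 1D convolution forward pass.

    x: (n_channels, n_samples) input signal
    kernels: (n_kernels, n_channels, kernel_len) weights
    Returns an array of shape (n_kernels, n_samples - kernel_len + 1):
    one output channel per kernel, summed across input channels.
    """
    n_kernels, n_channels, k_len = kernels.shape
    n_out = x.shape[1] - k_len + 1
    out = np.zeros((n_kernels, n_out))
    for k in range(n_kernels):
        for c in range(n_channels):
            flipped = kernels[k, c, ::-1]
            for i in range(n_out):
                out[k, i] += np.dot(x[c, i:i + k_len], flipped)
    return out

x = np.random.default_rng(0).normal(size=(2, 16))
w = np.random.default_rng(1).normal(size=(3, 2, 5))
print(conv1d_forward(x, w).shape)  # (3, 12)
```

Each output channel is the sum of one single-channel convolution per input channel, which is why the loop accumulates into `out[k]` rather than overwriting it.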
3. Build a small convolutional neural network
3.1 Create the blips data set (10:38)
3.2 Collect all the blocks we'll need (4:33)
3.3 Connect the blocks into a network structure (4:18)
3.4 Training, evaluation, and reporting (3:49)
3.5 OneHot and Flatten Blocks and Logging (4:43)
3.6 Inspect text summary and loss history (4:09)
3.7 Inspect convolution layers and evaluate model (6:27)
4. Add the finishing touches: ReLU, Pooling, Batch Normalization
4.1 The ReLU block (3:31)
4.2 One dimensional max pooling block (6:02)
4.3 One dimensional max pooling computations (5:57)
4.4 How Batch Normalization works (4:00)
4.5 Online Batch Normalization, initialization (5:38)
4.6 Online Batch Normalization, forward pass (6:17)
4.7 Online Batch Normalization, backward pass (9:50)
4.8 All together now (5:37)
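The three finishing-touch blocks in this section each have a compact core computation. A minimal sketch of each (single-feature, no learned scale/shift for the normalization, which the course's online version adds):

```python
import numpy as np

def relu(x):
    # ReLU zeroes out negative activations
    return np.maximum(x, 0.0)

def max_pool_1d(x, window=2):
    # Non-overlapping 1D max pooling; any leftover tail is trimmed
    n = len(x) // window
    return x[:n * window].reshape(n, window).max(axis=1)

def batch_norm(x, eps=1e-5):
    # Normalize to zero mean and unit variance; eps guards against
    # division by zero for a constant input
    return (x - x.mean()) / np.sqrt(x.var() + eps)

a = np.array([-1.0, 2.0, -3.0, 4.0])
print(relu(a))         # [0. 2. 0. 4.]
print(max_pool_1d(a))  # [2. 4.]
```

Note this `batch_norm` normalizes over a full array at once; the online variant in lessons 4.5 to 4.7 instead maintains running estimates of the mean and variance so it can normalize one batch at a time.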
5. Prepare the electrocardiography data
5.1 Get the data (4:09)
5.2 How the data was collected (3:00)
5.3 Electrocardiography signals (5:14)
5.4 Ancillary information (3:38)
5.5 Explore the ECG signals (5:37)
5.6 Select records to include (4:58)
5.7 Select labeled classes of heartbeat to include (6:57)
5.8 Create data loader module (6:56)
5.9 Decide sizes of training, tuning, testing sets (4:22)
5.10 Populate training, tuning, testing sets (3:38)
5.11 Prepare the data for use in Cottonwood (7:07)
6. Build a convolutional neural network for heartbeat classification
6.1 Create a baseline model (6:50)
6.2 Model repeatability (4:20)
6.3 Repeatability results (6:12)
6.4 Tuning learning rates, optimization (8:04)
6.5 Tuning learning rates, results (3:31)
6.6 Tuning minibatch size (10:06)
6.7 Tuning activation function (9:51)
6.8 Batch normalization (6:53)
6.9 Tuning kernel size and number (3:56)
7. Advanced model tuning
7.1 Hard max operator and classification (4:20)
7.2 Confusion matrices, precision, and recall (6:28)
7.3 Cottonwood code for HardMax and ConfusionLogger (9:32)
7.4 Tune classifier using mean precision (3:22)
7.5 Add a second convolution layer (3:17)
7.6 Layer-by-layer learning rates (4:55)
7.7 Add a second data channel (6:24)
7.8 Augment the data with its derivatives (8:34)
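The per-class precision and recall used for tuning in this section (lessons 7.2 and 7.4) fall directly out of a confusion matrix. A small sketch, with a made-up two-class matrix; the row/column convention is an assumption:

```python
import numpy as np

def precision_recall(confusion):
    """Per-class precision and recall from a confusion matrix.

    Rows are assumed to be true classes, columns predicted classes.
    """
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)                 # correct predictions per class
    precision = tp / confusion.sum(axis=0)  # of all predicted as class k...
    recall = tp / confusion.sum(axis=1)     # of all truly class k...
    return precision, recall

# Hypothetical 2-class confusion matrix (not course data)
cm = [[8, 2],
      [1, 9]]
p, r = precision_recall(cm)
print(p.mean())  # mean precision, the tuning metric of lesson 7.4
```

Averaging precision across classes, as in lesson 7.4, keeps a rare class from being swamped by an abundant one, which matters for imbalanced heartbeat labels.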
8. Putting it all together
8.1 Fail loudly (7:52)
8.2 Train and evaluate the finished model (3:46)
8.3 Visualize the results (v1) (4:42)
8.4 Inspect the results (v1) (5:00)
8.5 Augment the input (v2) (4:35)
8.6 Filter the classification results (v3) (4:17)
8.7 Polish the visualization (v4) (4:54)
8.8 Weaknesses and strengths of our classifier (8:27)
Wrap up (2:00)