Booking options
£125.99
On-Demand course
5 hours
All levels
This course is designed around three main activities for getting better results with deep learning models: better or faster learning, better generalization to new data, and better predictions when using final models. Take this course if you're passionate about deep learning, have a solid foundation in this space, and want to learn how to squeeze the best performance out of your deep learning models.
Deep learning neural networks have become easy to create. However, tuning these models for maximum performance remains a challenge for most modelers. This course will teach you how to get results as a machine learning practitioner.

The course starts with an introduction to the problem of overfitting and a tour of regularization techniques. You'll learn to improve learning through better-configured stochastic gradient descent, covering batch size, loss functions, and learning rates, and to avoid exploding gradients with gradient clipping. Next, you'll reduce overfitting by updating the loss function with techniques such as weight regularization, weight constraints, and activation regularization, and then effectively apply dropout, the addition of noise, and early stopping. You'll also look at ensemble learning techniques, diagnose poor model training and problems such as premature convergence, and accelerate the model training process. Then, you'll combine the predictions from multiple models saved during a single training run with techniques such as horizontal ensembles and snapshot ensembles. Finally, you'll diagnose high variance in a final model and improve its average predictive skill.

By the end of this course, you'll know a range of techniques for getting better results with deep learning models. All the resource files are uploaded to the GitHub repository at https://github.com/PacktPublishing/Performance-Tuning-Deep-Learning-Models-Master-Class
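As a taste of the gradient-clipping topic mentioned above, here is a minimal pure-Python sketch of clipping a gradient vector by its L2 norm. This is an illustrative assumption, not the course's own Keras code; the function name and the `max_norm` default are invented for the example.

```python
import math

def clip_gradient_by_norm(grad, max_norm=1.0):
    """Rescale a gradient vector when its L2 norm exceeds max_norm.

    This is the core idea of gradient clipping: cap the size of an
    update so one exploding gradient cannot destabilize training.
    """
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        scale = max_norm / norm
        grad = [g * scale for g in grad]
    return grad

# A large gradient is scaled down to the norm cap; a small one passes through.
large = clip_gradient_by_norm([3.0, 4.0], max_norm=1.0)  # rescaled to norm 1.0
small = clip_gradient_by_norm([0.1, 0.2], max_norm=1.0)  # unchanged
```

In Keras itself, the same effect is available through the `clipnorm` (or `clipvalue`) argument accepted by optimizers such as `SGD`.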
Introduction to the problem of overfitting and regularization techniques
Configure stochastic gradient descent, including batch size and other key settings
Combat overfitting with an introduction to regularization techniques
Reduce overfitting by updating the loss function with techniques such as weight regularization
Effectively apply dropout, the addition of noise, and early stopping
Diagnose high variance in a final model and improve average predictive skill
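The last point above, reducing the variance of a final model by averaging predictions, can be sketched without any framework: the simplest model-averaging ensemble just averages the per-class probabilities produced by several saved models. This toy function is an illustrative assumption, not code from the course.

```python
def average_predictions(prediction_sets):
    """Average class probabilities across an ensemble of models.

    prediction_sets: one entry per model, each a list of probability
    vectors (one vector per sample). Averaging smooths out the variance
    introduced by any single model's final weights.
    """
    n_models = len(prediction_sets)
    averaged = []
    for i in range(len(prediction_sets[0])):
        n_classes = len(prediction_sets[0][i])
        vec = [sum(model[i][j] for model in prediction_sets) / n_models
               for j in range(n_classes)]
        averaged.append(vec)
    return averaged

# Two models disagree on a sample; the ensemble splits the difference.
preds_a = [[0.9, 0.1]]
preds_b = [[0.5, 0.5]]
ensemble = average_predictions([preds_a, preds_b])  # roughly [[0.7, 0.3]]
```

The horizontal and snapshot ensembles covered in the course apply this same averaging step to checkpoints collected during a single training run.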
This course is for developers, machine learning engineers, and data scientists who want to enhance the performance of their deep learning models. This is an intermediate-to-advanced-level course. It's highly recommended that learners be proficient in Python, Keras, and machine learning.
A solid foundation in machine learning, deep learning, and Python is required to get better results from this course. You should also have the core machine learning libraries for Python installed.
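As a starting point for those libraries, a typical environment for a Keras course can be set up as below; the exact package list is an assumption, since the course does not enumerate its dependencies.

```shell
# Illustrative setup; TensorFlow bundles the Keras API.
python -m pip install tensorflow scikit-learn numpy pandas matplotlib
```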
This is a hands-on, comprehensive course. It is a playbook and a workbook intended for you to learn by doing and then apply your new understanding to your own deep learning Keras models. To get the most out of the course, work through all the examples in each tutorial. If you watch this course like a movie, you'll get little out of it.
In the applied space, machine learning is programming and programming is a hands-on sport.
A hands-on and comprehensive course for getting better results with deep learning models
Resource files to reinforce learning from an industry expert
Understand how to combine the predictions from multiple models saved during a single training run
https://github.com/PacktPublishing/Performance-Tuning-Deep-Learning-Models-Master-Class
Mike West is the founder of LogikBot. He has worked with databases for over two decades, serving over 50 different companies as a full-time employee or consultant. These include Fortune 500 firms as well as several small to mid-size companies, among them Georgia Pacific, SunTrust, Reed Construction Data, Building Systems Design, NetCertainty, The Home Shopping Network, SwingVote, Atlanta Gas and Light, and Northrop Grumman. Over the last five years, Mike has transitioned to the exciting world of applied machine learning. He is excited to show you what he has learned and to help you move into one of the single most important fields in this space.
1. Introduction
2. Optimal Learning
3. Optimal Generalization
4. Optimal Predictions