**Discover deep learning with Python and TensorFlow**

It can be hard to get started with machine learning, particularly as new frameworks such as TensorFlow gain traction across enterprise companies. If you have had no prior exposure to one of the most important trends shaping data science over the next few years, this path will get you up to speed. It focuses on getting you up and running with TensorFlow, after first covering Python fundamentals and deep learning in Python with Theano.

**About the Author**

*Daniel Arbuckle*

- Daniel Arbuckle holds a Doctorate in Computer Science from the University of Southern California, where he specialized in robotics and was a member of the nanotechnology lab. He now has more than ten years behind him as a consultant, during which time he’s been using Python to help an assortment of businesses, from clothing manufacturers to crowdsourcing platforms. Python has been his primary development language since he was in high school. He’s also an award-winning teacher of programming and computer science.

*Saimadhu Polamuri*

- Saimadhu Polamuri is a data science educator and the founder of Data Aspirant, a data science portal for beginners. He has 3 years of experience in data mining and 5 years of experience in Python. He is also interested in big data technologies such as Hadoop, Pig, and Spark. He has a good command of the R programming language and MATLAB, and a rudimentary understanding of OpenCV, the C++ computer vision library.

*Eder Santana*

- Eder Santana is a PhD candidate in Electrical and Computer Engineering. His thesis topic is deep and recurrent neural networks. After working for 3 years with kernel machines (SVMs, information-theoretic learning, and so on), Eder moved to the field of deep learning 2.5 years ago, when he started learning Theano, Caffe, and other machine learning frameworks. Now, Eder contributes to Keras, the deep learning library for Python. Besides deep learning, he also likes data visualization and teaching machine learning, either on online forums or as a teaching assistant.

*Dan Van Boxel*

- Dan Van Boxel is a data scientist and machine learning engineer with over 10 years of experience. He is most well-known for Dan Does Data, a YouTube livestream demonstrating the power and pitfalls of neural networks. He has developed and applied novel statistical models of machine learning to topics such as accounting for truck traffic on highways, travel time outlier detection, and other areas. Dan has also published research articles and presented findings at the Transportation Research Board and in other academic venues.

- Gain a firm understanding of Python and the Python ecosystem

- Build Python packages to efficiently create reusable code
- Become proficient at creating tools and utility programs in Python
- Use the Git version control system to protect your development environment from unwanted changes
- Harness the power of Python to automate other software
- Distribute computation tasks across multiple processors
- Handle high I/O loads with asynchronous I/O for smoother performance
- Take advantage of Python's metaprogramming and programmable syntax features
- Get to grips with unit testing to write better code, faster
- Understand the basic data mining concepts to implement efficient models using Python
- Know how to use Python libraries and mathematical toolkits such as NumPy, pandas, matplotlib, and scikit-learn
- Build your first application that makes predictions from data and see how to evaluate the regression model
- Analyze and implement Logistic Regression and the KNN model
- Dive into the most effective data cleaning process to get accurate results
- Master the classification concepts and implement the various classification algorithms
- Get a quick brief about backpropagation
- Understand automatic differentiation with Theano
- Exhibit the powerful mechanism of seamless CPU and GPU usage with Theano
- Understand the usage and innards of Keras to beautify your neural network designs
- Apply convolutional neural networks for image analysis
- Discover the methods of image classification and harness object recognition using deep learning
- Get to know recurrent neural networks for building a text sentiment analysis model
- Set up your computing environment and install TensorFlow
- Build simple TensorFlow graphs for everyday computations
- Apply logistic regression for classification with TensorFlow
- Design and train a multilayer neural network with TensorFlow
- Gain an intuitive understanding of convolutional neural networks for image recognition
- Bootstrap a neural network from simple to more accurate models
- See how to use TensorFlow with other types of networks
- Program networks with skflow (Scikit Flow), a high-level interface to TensorFlow

Get a high-level view of what this course will do for you.

- A quick overview of each section
- A preview of the results

Get a functional Python development environment.

- Picking a suitable Python version to work with
- Setting up the environment variables
- Making sure everything works as expected

Learn how to perform quick experiments and access documentation.

- Getting to know the operating system prompt
- Accessing the Python prompt
- Accessing the documentation with the help function

Learn how to easily download and install third-party packages.

- Running through the basic usage of packages
- Installing packages in the home directory
- Managing and removing installed packages

Discover available resources so that you don't have to reinvent the wheel.

- Using the web interface
- Using pip's search command
- About licenses and legalities

Learn the filesystem structure that defines a Python package.

- Creating the package folder
- Creating the __init__.py file
- Importing the new package
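The steps above can be sketched in a few lines: create the package folder, add an `__init__.py`, and import the result. This is a minimal illustration; the package name `greetings` and its contents are hypothetical.

```python
# A minimal sketch: create a package folder with an __init__.py file,
# then import it. The package name "greetings" is hypothetical.
import importlib
import os
import sys
import tempfile

workdir = tempfile.mkdtemp()

# Create the package folder and its __init__.py file
pkg_dir = os.path.join(workdir, "greetings")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("MESSAGE = 'hello from the greetings package'\n")

# Make the folder importable and import the new package
sys.path.insert(0, workdir)
greetings = importlib.import_module("greetings")
print(greetings.MESSAGE)
```

In a real project the folder would live in your source tree rather than a temporary directory; the mechanism is the same.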

Add code files to the package.

- Selecting filenames
- The namespace packages
- Package structure versus package API

Combine code from multiple modules.

- Import syntax
- Dealing with import cycles
- Differences between Python 2 and Python 3

Include data alongside the modules in your package.

- Where to store the files
- Using the pkgutil.get_data function
- Transforming the data into text
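A short sketch of the workflow above, under the assumption of a filesystem package: store a data file inside a package, load it with `pkgutil.get_data`, and decode the bytes to text. The package name `mypkg` and the data file are invented for illustration.

```python
# Store a data file inside a package, then load it with pkgutil.get_data.
# Package name "mypkg" and the file contents are illustrative.
import os
import pkgutil
import sys
import tempfile

workdir = tempfile.mkdtemp()
pkg_dir = os.path.join(workdir, "mypkg")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "config.txt"), "w") as f:
    f.write("colour=blue\n")

sys.path.insert(0, workdir)

# get_data() returns raw bytes; decode to transform them into text
raw = pkgutil.get_data("mypkg", "config.txt")
text = raw.decode("utf-8")
print(text.strip())
```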

Make your code more readable for yourself and others using Python's communal coding standard.

- Spaces versus tabs
- Understanding the code layout
- Using naming conventions to perfection

Manage changes, versions, and branches in your source code.

- Undoing changes you've made to the code
- Working with branches
- Understanding merging

Create a development area that remains stable for the duration of your development process.

- Advantages of development in a virtual environment
- Setting up a virtual environment
- Activating and using a virtual environment

Format your docstrings to maximize their usefulness.

- Understanding the basic layout
- Using reStructuredText markup
- Exporting documentation to HTML

Run the examples in your docstrings as tests.

- Benefits of executing examples from docstrings
- How to write the examples
- How to run the examples

Create a package entry point to make the package executable.

- Using __main__.py
- Using if __name__ == '__main__'
- An interactive software pipeline – the first step

Create full-featured command-line parsers.

- Understanding the basic usage of command-line arguments
- Adding command-line switches and arguments
- An interactive software pipeline – the second step
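A hedged sketch of a command-line parser built with `argparse`, showing one positional argument and two switches; the argument names are invented for illustration.

```python
# A small argparse sketch: one positional argument and two switches.
import argparse

parser = argparse.ArgumentParser(description="Process an input file.")
parser.add_argument("filename", help="file to process")           # positional
parser.add_argument("-v", "--verbose", action="store_true",       # flag switch
                    help="print extra detail")
parser.add_argument("-n", "--count", type=int, default=1,
                    help="number of passes")

# Normally argparse reads sys.argv; here we pass a list explicitly
args = parser.parse_args(["data.txt", "--verbose", "-n", "3"])
print(args.filename, args.verbose, args.count)
```

Calling `parse_args()` with no arguments makes the parser read `sys.argv` instead, which is what a real command-line tool would do.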

Get the input and provide the output while the program is running.

- Using the print(), input(), getpass, and pprint functions
- Using the cmd module
- An interactive software pipeline – the third step

Execute and interact with other programs.

- Using the call(), check_call(), and check_output() functions
- Understanding the Popen class
- An interactive software pipeline – the fourth step
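The three subprocess entry points above can be sketched briefly. To stay self-contained, the child program here is the current Python interpreter running tiny one-liners.

```python
# Running other programs: check_call(), check_output(), and Popen.
import subprocess
import sys

# check_call() raises CalledProcessError if the command fails
subprocess.check_call([sys.executable, "-c", "pass"])

# check_output() captures the child's stdout as bytes
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])
print(out.decode().strip())

# Popen gives finer-grained control over the child process
proc = subprocess.Popen([sys.executable, "-c", "print(6 * 7)"],
                        stdout=subprocess.PIPE)
stdout, _ = proc.communicate()
print(stdout.decode().strip())
```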

Reduce the effort needed to run executable packages.

- Launching via shell script
- Launching via a batch file
- An interactive software pipeline – the last step

Use a high-level interface to distribute computational tasks to worker processes and collect the results.

- Understanding the strengths and weaknesses of multiprocess computation in Python
- Using the ProcessPoolExecutor and Future objects
- Using the wait and as_completed functions

Use a mid-level interface to create cooperative parallel processes.

- Launching processes
- Sending data between processes
- Keeping processes synchronized

The API looks a lot like concurrent.futures, but it's doing something very different.

- What cooperative multitasking is
- What yield from means
- What all this means for I/O bound programs

How to get the asyncio scheduler running and add asynchronous tasks.

- Creating coroutines
- Creating an event loop, adding tasks, running the loop, and shutting it down
- Checking out an example skeleton by running several tasks until you decide to end the program

How asyncio futures behave and what to do with them.

- Learning the normal usage pattern
- Understanding iteration, coroutines, and Futures
- Coroutines versus functions that return Futures

How to use synchronization, waiting, and communication channels.

- What Lock and Semaphore are
- Using the as_completed, gather, wait, and wait_for functions
- Learning the use of Queue, LifoQueue, PriorityQueue, and JoinableQueue
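The communication channels above can be sketched with a producer and a consumer sharing an `asyncio.Queue`, with `gather()` waiting for both coroutines. This uses the modern `asyncio.run` entry point rather than manually managing the event loop.

```python
# An asyncio.Queue carries work from a producer coroutine to a consumer;
# gather() waits for both to finish.
import asyncio

async def producer(queue):
    for n in range(3):
        await queue.put(n)
    await queue.put(None)              # sentinel: no more work

async def consumer(queue, results):
    while True:
        item = await queue.get()
        if item is None:
            break
        results.append(item * 10)

async def main():
    queue = asyncio.Queue()
    results = []
    await asyncio.gather(producer(queue), consumer(queue, results))
    return results

results = asyncio.run(main())
print(results)
```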

Easily use stream sockets to communicate.

- Creating a client-side connection
- Creating a server-side connection
- Running an example ping-pong client and server

Automatically post-process functions.

- Adding attributes to a function
- Wrapping a function
- Knowing more about decorators that accept parameters
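The three bullets above can be combined in one sketch: a parameterized decorator that wraps a function and attaches an attribute to the wrapper. The `repeat` decorator is invented for illustration.

```python
# A decorator that accepts a parameter: repeat(times) returns the real
# decorator, which wraps the function.
import functools

def repeat(times):
    def decorator(func):
        @functools.wraps(func)            # preserve the wrapped function's name
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        wrapper.call_count = times        # attach an attribute to the function
        return wrapper
    return decorator

@repeat(times=3)
def greet(name):
    return "Hello, " + name

print(greet("world"), greet.__name__, greet.call_count)
```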

Add new semantics to functions.

- Adding annotations to a function
- How to access the annotations
- Using annotations in decorators

Automatically post-process classes.

- Manipulating a class
- Wrapping a class
- Using a class as declarative data

Change what it means to be a class.

- Classes that are not instances of “type”
- Altering the class's namespace
- Inheritable special behavior

Set up special rules for regions of code.

- Running code when execution enters and leaves a block
- Using the @contextlib.contextmanager decorator
- Writing context managers as classes
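Both styles named above can be sketched side by side: a generator decorated with `@contextlib.contextmanager`, and an equivalent class with `__enter__`/`__exit__`. The `events` list just records when each hook runs.

```python
# Running code when execution enters and leaves a block, two ways.
import contextlib

events = []

@contextlib.contextmanager
def tracked(name):
    events.append("enter " + name)       # runs when execution enters the block
    try:
        yield name
    finally:
        events.append("exit " + name)    # runs when execution leaves the block

class Tracked:
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        events.append("enter " + self.name)
        return self.name
    def __exit__(self, exc_type, exc, tb):
        events.append("exit " + self.name)
        return False                     # don't suppress exceptions

with tracked("a"):
    pass
with Tracked("b"):
    pass
print(events)
```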

Control what happens when attributes are accessed.

- Running code when an attribute is accessed
- Using @property
- Writing descriptors as classes
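A small sketch of both mechanisms above: `@property` computes a value on read, and a hypothetical descriptor class (`ReadLogger`, invented here) runs code on every attribute access.

```python
# Running code when an attribute is accessed: a descriptor and @property.
class ReadLogger:
    """Descriptor that counts attribute reads."""
    def __set_name__(self, owner, name):
        self.name = "_" + name
    def __get__(self, obj, objtype=None):
        obj.reads += 1                     # code runs on every read
        return getattr(obj, self.name)
    def __set__(self, obj, value):
        setattr(obj, self.name, value)

class Circle:
    radius = ReadLogger()

    def __init__(self, radius):
        self.reads = 0
        self.radius = radius

    @property
    def diameter(self):
        # computed on access, not stored
        return self.radius * 2

c = Circle(5)
print(c.diameter, c.reads)
```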

Get more benefits out of testing with less work.

- Letting the computer do the work
- Keeping tests localized
- Letting the tests tell us what we need to work on

Write tests, run them, and understand the results.

- Running some basic tests
- Using the assertion methods
- Checking out the test fixtures

Use mock objects to keep tests local.

- Simple mock objects
- Checking for proper behavior
- Using patch
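A sketch of the three bullets above, assuming a made-up `fetch_user` function and its "database" collaborator: a `Mock` stands in for the collaborator, we verify the call it received, and `patch()` temporarily replaces a real attribute.

```python
# Mock objects, behavior checks, and patch() from unittest.mock.
import os
from unittest import mock

def fetch_user(db, user_id):
    # Hypothetical code under test: asks a database for a record
    record = db.lookup(user_id)
    return record["name"]

# A simple mock object with a configured return value
db = mock.Mock()
db.lookup.return_value = {"name": "Ada"}
name = fetch_user(db, 42)

# Check for proper behavior: the right call was made, exactly once
db.lookup.assert_called_once_with(42)

# patch() swaps os.getcwd out for the duration of the with block only
with mock.patch("os.getcwd", return_value="/fake/dir"):
    inside = os.getcwd()
outside = os.getcwd()
print(name, inside, inside == outside)
```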

Run tests even more easily with test discovery.

- Letting unittest find the tests
- Controlling how tests are found
- Modules are imported when they are searched for tests

Take advantage of more complete test discovery and integrated reporting.

- Letting Nose find even more tests
- Code coverage
- Running tests in multiple processes

We need to lay the groundwork for the course, and for this, we need a strong understanding of the concepts of data mining.

- Understand what data mining is and learn about it in simple terms
- Know why learning data mining is important and how it can help us
- Take a look at the processes involved and associated with data mining

It's time to deep-dive into the core concepts of data mining. For that, we need a breakdown of the important topics.

- Go through the basic keywords or commonly used terms in data mining
- Take a look at the concepts of supervised and unsupervised learning
- Get to know the different supervised and unsupervised algorithms and data mining applications

There are plenty of programming languages available. However, there is a reason Python is a good choice; understand just that with the help of this video.

- Take a look at the great features of the Python programming language
- Understand why we should use the Python programming language for our course
- Walk through the Python installation setup in the Linux operating system

To boost your expertise in Python programming, we need to cover the fruitful basics of the language, which will help in the upcoming sections.

- Take a look at Python's conditional statements
- Dive into Python loops
- See how to write functions in Python

We will get introduced to IPython, which is an important step in our journey.

- Get an introduction to IPython
- Take a look at the IPython installation commands

Solving real-life problems using data mining algorithms requires a lot of scientific computing. So, there is a need to learn about the NumPy package, as it is specially built for scientific computing.

- Get an introduction to the Python NumPy library
- Run the installation commands and scripts to check the NumPy installation
- Learn to use the Python NumPy library
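A tiny NumPy sketch of the point above: arrays support vectorized arithmetic, so scientific computations avoid explicit Python loops. The numbers are arbitrary.

```python
# Vectorized arithmetic with NumPy arrays.
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.arange(4)                 # array([0, 1, 2, 3])

total = (a * b).sum()            # elementwise multiply, then sum: a dot product
mean = a.mean()
print(total, mean)
```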

Working with tabular data is a painful process until we get some hands-on experience with a tabular data analysis library. pandas is specially built for tabular data analysis. So let's get introduced to the pandas data analysis library.

- Learn a few things about pandas
- Take a look at the pandas installation commands
- Step into the basics of the Python data analysis library, pandas
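A small pandas sketch of the basics mentioned above: build a DataFrame from a dict, filter rows with a boolean condition, and compute a column statistic. The data is made up for illustration.

```python
# Tabular data analysis basics with pandas.
import pandas as pd

df = pd.DataFrame({
    "city": ["Oslo", "Lima", "Pune"],
    "temp": [4, 22, 31],
})

warm = df[df["temp"] > 20]       # boolean filtering keeps matching rows
print(len(warm), warm["temp"].mean())
```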

In the space of data science, visualization plays a key role as the results obtained after applying different data mining algorithms have to be visualized to understand them. And visualization of data gives a clear picture about the data we are working on. So let's take a look at the Python visualization library.

- Learn about the matplotlib library
- Execute the matplotlib installation commands
- Generate a line plot and histogram using matplotlib

Introducing scikit-learn, an extraordinary and widely used Python library that contains many built-in data mining algorithms.

- Dive into scikit-learn and see all that it can do
- Check out the various available models in scikit-learn
- Execute the scikit-learn installation commands

Data preprocessing comprises data cleaning and several further transformation stages. Let's take a look at data cleaning and its importance.

- Understand the need for preprocessing data
- Take a look at the various stages in data preprocessing
- Dive into the first stage of data preprocessing, which is data cleaning.

Take a look at other data preprocessing techniques, such as data integration, data reduction, and data transformation.

- Understand the need for data integration and the challenge faced during data integration
- Get introduced to data reduction with the help of an example
- Dive into data transformation

Get insights into linear regression, extend the areas where linear regression is efficient, and finally visualize a clear picture of the linear regression model.

- Understand where one can use the linear regression algorithm
- Take a look at the simple linear regression model
- Extend the understanding of simple linear regression to learn about the linear regression model

You'll probably face challenging problems while fitting a linear regression model. This video will give you a clear idea of those model-fitting problems, show you how to evaluate different regression models, and suggest ways to pick the best one.

- Understand the underfitting problem and the methods to overcome it
- Take a look at the overfitting problem and peek into overcoming it
- Learn about residual plots and how to find the best regression models

Let's take a look at the usage of Python data mining libraries. We'll extend the usage of Python data mining algorithms by implementing a simple linear regression model in Python to predict house prices.

- Create data to build the linear regression model
- Build a linear regression model with the data we created
- Predict new house prices using the linear regression model
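The three steps above can be sketched with scikit-learn. The house data is synthetic and exactly linear, so this is an illustration of the workflow rather than a realistic model.

```python
# Create data, fit a linear regression model, predict a new house price.
from sklearn.linear_model import LinearRegression

# Create data: house area in square feet -> price (invented numbers)
areas = [[1000], [1500], [2000], [2500]]
prices = [200000, 300000, 400000, 500000]   # exactly 200 per square foot

model = LinearRegression()
model.fit(areas, prices)

# Predict the price of a new house
predicted = model.predict([[1750]])[0]
print(round(predicted))
```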

scikit-learn is a data mining algorithm library that we can use to implement a multi-regression model to predict television show viewership.

- Take a look at a television viewers dataset to understand the different features in it
- Build a multi-linear regression model with the viewership data
- Predict viewers of new episodes using the multi-linear regression model

Introducing the use of the logistic regression algorithm to solve classification problems. Extend your knowledge with a clear understanding of the basic concepts needed to build a logistic regression model.

- Get an introduction to logistic regression and understand the differences between linear regression and logistic regression
- Take a look at the logistic function
- Know how the logistic function is used in the logistic regression model to predict the target value
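The logistic function itself is simple enough to sketch in a few lines: it squashes any real-valued score into the (0, 1) range so it can be read as a probability.

```python
# The logistic (sigmoid) function used by logistic regression.
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Large positive scores approach 1, large negative scores approach 0
print(logistic(0), logistic(6), logistic(-6))
```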

Introducing the K-nearest neighbors classification algorithm, extended by an understanding of special cases in the K-nearest neighbors classifier model, and followed by an introduction to different distance measure metrics.

- Understand the basic concept of K-nearest neighbors classifier
- Take a look at the special cases in K-nearest neighbors classifier model
- Know the different distance measure metrics and how to choose the K value

Introducing the support vector machine algorithm by explaining its key concepts, such as hyperplanes, support vectors, and margins.

- Understand the key concepts of support vector machines, such as hyperplanes, support vectors, and margins
- Learn how to find the optimal hyperplane
- Discuss the advantages and disadvantages of using the support vector machine algorithm

Implement the logistic regression model using Python data mining libraries. This is extended by understanding the ANES 1996 dataset and using the implemented logistic regression model to predict whom a voter will vote for.

- Understand the dataset features and targets
- Implement two different logistic regression models
- Compare the two fitted models' accuracy on a test dataset

Implement a k-NN classifier using the Python scikit-learn library, get introduced to the iris data, and finally predict the iris category using the implemented k-NN classifier.

- Get introduced to the famous iris classification dataset and understand its features and target labels
- Fit the k-NN classifier on the iris dataset
- Compute the k-NN classifier's accuracy over the test dataset

What is Deep Learning, and when is it the way to go?

- Describe what Deep Learning is
- Define an optimal deep learning problem setup: supervised learning
- Wrap up with all that it takes to train a successful Deep Learning model

How to avoid programming Deep Learning from scratch? Let’s take a look at it in this video.

- Define what it takes to train a large neural network: large datasets and seamless GPU programming.
- Take an overview of the open source deep learning framework options
- Choose a Python library that is both powerful and simple to use

How to get our first deep neural network trained?

- Use MNIST, a dataset that is both simple and large enough for educational purposes.
- Write the Python code
- Train the network and visualize the results

How are neural networks trained?

- Define a neural network
- Present the backpropagation algorithm
- Show how a backpropagation-powered network layer would look in Python
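A minimal NumPy sketch of what a backpropagation-powered layer can look like: one dense layer with a sigmoid activation, trained on random toy data with a squared-error loss and hand-written gradients. It is a toy illustration of the algorithm, not production training code, and the data and hyperparameters are arbitrary.

```python
# One dense layer trained by hand-coded backpropagation (toy example).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))          # 8 samples, 3 features
y = rng.uniform(size=(8, 2))         # targets in (0, 1)

w = rng.normal(scale=0.1, size=(3, 2))
b = np.zeros(2)

losses = []
for _ in range(200):
    # Forward pass
    out = sigmoid(x @ w + b)
    losses.append(((out - y) ** 2).mean())
    # Backward pass: chain rule through the loss and the sigmoid
    grad_out = 2 * (out - y) / y.size
    grad_z = grad_out * out * (1 - out)   # sigmoid'(z) = s(z) * (1 - s(z))
    # Gradient descent updates
    w -= 0.5 * (x.T @ grad_z)
    b -= 0.5 * grad_z.sum(axis=0)

print(losses[0], losses[-1])
```

The point of the exercise is that the loss falls over iterations purely from these two hand-derived gradient expressions; Theano automates exactly this derivation.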

How can we avoid making a differentiation of functions and make backpropagation easier?

- Learn what Theano is
- Write the Theano functions
- Calculate the gradients automatically with Theano

How do Keras and other libraries that use Theano work behind the scenes?

- Define a simple optimization problem
- Write the Theano functions
- Optimize the model using only Theano

How does Keras work?

- Show a Keras basic model
- Understand how layers are connected in Keras
- Show how a model is compiled and optimized

How does Keras work? How does one write a basic, fully connected neural network layer in Keras?

- Show Keras’ base layer class
- Initialize a fully connected layer
- Understand how to get outputs from a dense layer

Understand what convolutional neural networks are and how to use them. How can we write convolutional layers with Python?

- Understand what convolutional and pooling layers are
- Take a look at how to use convolutional and pooling layers in Theano
- Take a look at how to use convolutional and pooling layers in Keras

How can we solve complex image datasets (for example, cats versus dogs) without training a full model from scratch?

- Define a problem and dataset representing the problem as an input-output mapping
- Get a pretrained deep neural network and load it in Python
- Extract features from the input dataset with the deep network and classify these features

How can we load pretrained weights and extract features with sklearn-theano?

- Load the values into Theano
- Use sklearn-theano, which is a ready-made solution.
- Extract features with sklearn-theano

We will solve complete image datasets with pretrained models: classifying cats versus dogs.

- Download the dataset
- Convert raw images into features
- Train a Keras model on the features

How to write a loop in Theano?

- Define the "step" function
- Iterate the step with "scan"
- Compile and test the for/scan output

How can one define neural network layers with internal states?

- Define what recurrent neural network equations are
- Implement the simple RNN with Theano
- Compile and test the RNN

Recurrent or convolutional: how can one know which layer to use?

- Define the main uses of convolutional layers
- Define the main uses of recurrent layers
- Define the rules of thumb for choosing each layer

How can we classify sentiments from text?

- Load the words as scalars
- Embed scalars into a vector space
- Train recurrent neural network on phrase-sentiment pairs using the embedded vectors as input

How can we automatically describe an image in English?

- Extract image features using deep neural networks
- Use the features as input to an RNN that outputs text
- Train RNN on proposed captions for each image

This video gives an overview of the entire course.

TensorFlow is a new machine learning library that is probably not installed on your operating system by default. This video will guide you through installing TensorFlow locally or remotely.

- Meet TensorFlow, a new machine learning library with thorough documentation
- Install TensorFlow locally via pip
- Discuss how SageMathCloud supports TensorFlow

Before we can use TensorFlow for deep learning, we need to understand how TensorFlow handles basic objects and operations. This video will walk you through a few computations.

- Understand TensorFlow objects as multidimensional “tensors”
- Describe TensorFlow computation as specifying a graph
- Execute all or part of a graph using a “session”

Learning any library from documentation can be challenging, so we're going to build a practical machine learning classifier with TensorFlow. We'll start with a simple logistic regression classifier and build up from there.

- We will classify images of characters by font
- Logistic regression is a simple way to “score” possible fonts
- We will quickly code the entire classifier in TensorFlow

Though we have a classifier, we need to compute weights so that our model is accurate. For this, we can use TensorFlow to specify and optimize a loss function. TensorFlow will then use this to find good weights.

- Build a TensorFlow “categorical cross-entropy” function to optimize weights
- Train the model with gradient descent over many epochs
- Evaluate model accuracy and weights
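The loss function named above is framework-independent, so it can be sketched in NumPy: categorical cross-entropy is `-sum(true * log(predicted))`, averaged over samples. The probabilities here are invented.

```python
# Categorical cross-entropy, the loss TensorFlow optimizes here.
import numpy as np

# One-hot true labels and predicted class probabilities for 2 samples
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])

loss = -np.sum(y_true * np.log(y_pred), axis=1).mean()

# A confident correct prediction has a much lower loss
confident = -np.log(0.99)
print(loss, confident)
```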

Using single pixels as features limits us to model essentially linear phenomena. To model non-linear things such as font styles involving several pixels, we will use neural networks to transform our inputs into non-linear combinations for use in a logistic regression classifier.

- Explain simple non-linear transformations of values with activation functions
- Implement a single neuron in TensorFlow
- Describe the structural nature of a neural network

Now that we understand neural networks, we can see how TensorFlow makes them easy to implement and train.

- Implement the neural net in TensorFlow
- Briefly describe how backpropagation trains the weights
- Use TensorFlow to train the neural net

Now that we've trained a neural network, we should inspect it closely to understand the accuracy and weights.

- Let's examine the test accuracy over many epochs of training
- Create a confusion matrix to understand model errors
- Inspect and visualize the learned weights

A single hidden layer is good, but you may find the number of neurons needed to model very complex features growing prohibitive. To combine features more easily, we expand the network in depth rather than width: true deep learning with multiple hidden layers.

- Instead of adding more neurons, let's add a new layer.
- Deciding the number of neurons and layers is a hard experiment!
- Train the deep neural network. This may take some time.

With our deep neural net trained, we should take time to check its accuracy and understand the features it's extracting.

- Verify model accuracy on training and testing data
- Visualize and understand the weights of the input layer
- Visualize and understand the weights of the output layer

Particularly in images, the features that we want to find can occur anywhere among the pixels. Convolutional neural nets allow us to train one set of weights to search small windows of an image for a feature.

- Understand convolution as sliding windows with one set of weights
- Sets of weights allow pulling of many features from one window
- Handle multiple pixel channels (like RGB colors)
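The sliding-window idea above can be sketched directly in NumPy: one 2x2 set of weights slides over a 4x4 single-channel image, producing a 3x3 map of feature responses. The weights are arbitrary.

```python
# Convolution as a sliding window with one shared set of weights.
import numpy as np

image = np.arange(16.0).reshape(4, 4)
weights = np.array([[1.0, 0.0],
                    [0.0, -1.0]])      # one set of weights, reused everywhere

out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        window = image[i:i + 2, j:j + 2]
        out[i, j] = (window * weights).sum()

print(out)
```

A real convolutional layer does the same sliding multiply-and-sum, but with many weight sets (filters) and across multiple input channels at once.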

Understanding the theory of convolutional layers is useless without learning the tools to actually use them. This video walks through a simple example with TensorFlow.

- Describe an example setup and unusual shapes of variables
- Implement weights and call TensorFlow convolution code
- Evaluate sample node and observe the effect of convolution

Convolutions can find a feature anywhere in an image, but with all the overlap, we need to make sure we don't find the same feature in the same place multiple times. A pooling layer reduces the size of our input, taking only relevant information.

- Max pooling is a sliding window without overlap
- Pooling usually doesn't consider partial windows
- Combine convolutional and pooling layers for a maximum effect

Having learned how Max Pooling works in theory, it's time to put it into practice by adding it to our simple example in TensorFlow.

- tf.nn.max_pool is the core code call
- For a final pooling layer, we should flatten the output.
- Confirm that 2x2 max pooling has cut the shape in half
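The shape-halving behavior can be checked with a small NumPy sketch: each non-overlapping 2x2 window collapses to its maximum, halving both spatial dimensions.

```python
# 2x2 max pooling with no overlap, implemented via reshaping.
import numpy as np

feature_map = np.arange(16.0).reshape(4, 4)

# Group the array into non-overlapping 2x2 blocks, then take each block's max
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(feature_map.shape, "->", pooled.shape)
print(pooled)
```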

We've learned about convolutional layers and used them in an example, but now let's use them for real by adding a convolutional layer to the font classification model.

- Set up the window size and weights with the correct shape
- Implement convolution and pooling layers. Flatten the parameters.
- Evaluate the model accuracy

Convolutional layers often work well when chained together. Let's add another to our font classification model.

- Set up a new convolutional layer
- Carefully transition from convolutional to dense layer
- Describe dropout training to avoid overfitting

After our deep model has trained for a while, it's time to see how well it performs.

- Note that there is no dropout during testing
- Observe the model accuracy
- Understand convolutional weights as small image features

Some problems have time-based inputs. Features from the recent past might matter to the current prediction. To address these, researchers have developed recurrent neural networks. TensorFlow natively supports these.

- Explain the background of recurrent neural networks
- Describe the example problem of predicting the season from the weather
- Implement a season predictor in TensorFlow with a recurrent neural network

TensorFlow models can be cumbersome to specify, yet follow a common pattern. skflow provides a simple interface for typical models.

- Introduce skflow as sklearn for TensorFlow
- Build a simple font classification model with skflow
- Build a custom font classification model with skflow

RNNs can be hard to specify, but skflow will let us quickly build a model.

- Reshape data for skflow RNN
- One-line model specification
- Train and evaluate

After learning so many methods and building all these models, it's helpful to look back and see how far we've come.

- Revisit font image data
- Review simple and densely connected models
- Review convolutional models

TensorFlow is changing very quickly and is being adopted by more researchers and professionals. But at its core are the contributions submitted by new users.

- Note the fast pace of development in TensorFlow
- Describe the generality of TensorFlow
- Encourage viewers to contribute to and improve TensorFlow