Course: From 0 to 1: Machine Learning, NLP & Python-Cut to the Chase

  • Lifetime Access
  • Certificate on Completion
  • Access on Android and iOS App
About this Course

Prerequisites: None. Knowledge of some undergraduate-level mathematics would help but is not mandatory. Working knowledge of Python would be helpful if you want to run the source code that is provided.

Taught by a Stanford-educated ex-Googler and an IIT- and IIM-educated ex-Flipkart lead analyst. This team has decades of practical experience in quant trading, analytics and e-commerce.

This course is a down-to-earth, shy but confident take on machine learning techniques that you can put to work today.

Let’s parse that.

The course is down-to-earth: it makes everything as simple as possible - but not simpler.

The course is shy but confident: it is authoritative, drawn from decades of practical experience - but shies away from needlessly complicating stuff.

You can put ML to work today: if Machine Learning is a car, this car will have you driving today. It won't tell you what the carburetor is.

The course is very visual: most of the techniques are explained with the help of animations to help you understand better.

This course is practical as well: there are hundreds of lines of commented source code that you can use directly to implement natural language processing and machine learning for text summarization and text classification in Python.

The course is also quirky. The examples are irreverent. Lots of little touches: repetition, zooming out so we remember the big picture, active learning with plenty of quizzes. There’s also a peppy soundtrack, and art - all shown by studies to improve cognition and recall.

What's Covered:

Machine Learning: 

Supervised/Unsupervised learning, Classification, Clustering, Association Detection, Anomaly Detection, Dimensionality Reduction, Regression.

Naive Bayes, K-nearest neighbours, Support Vector Machines, Artificial Neural Networks, K-means, Hierarchical clustering, Principal Components Analysis, Linear regression, Logistic regression, Random variables, Bayes theorem, Bias-variance tradeoff

Natural Language Processing with Python: 

Corpora, stopwords, sentence and word parsing, auto-summarization, sentiment analysis (as a special case of classification), TF-IDF, Document Distance, Text summarization, Text classification with Naive Bayes and K-Nearest Neighbours and Clustering with K-Means

Sentiment Analysis: 

Why it's useful, Approaches to solving - Rule-Based, ML-Based, Training, Feature Extraction, Sentiment Lexicons, Regular Expressions, Twitter API, Sentiment Analysis of Tweets with Python

Mitigating Overfitting with Ensemble Learning:

Decision trees and decision tree learning, Overfitting in decision trees, Techniques to mitigate overfitting (cross validation, regularization), Ensemble learning and Random forests

Recommendations: Content based filtering, Collaborative filtering and Association Rules learning

Get started with Deep learning: Apply Multi-layer perceptrons to the MNIST Digit recognition problem

A Note on Python: The code-alongs in this class all use Python 2.7. Source code (with copious amounts of comments) is attached as a resource with all the code-alongs. The source code has been provided for both Python 2 and Python 3 wherever possible.

Who is the target audience?

  • Yep! Analytics professionals, modelers, big data professionals who haven't had exposure to machine learning
  • Yep! Engineers who want to understand or learn machine learning and apply it to problems they are solving
  • Yep! Product managers who want to have intelligent conversations with data scientists and engineers about machine learning
  • Yep! Tech executives and investors who are interested in big data, machine learning or natural language processing
  • Yep! MBA graduates or business professionals who are looking to move to a heavily quantitative role
Basic knowledge
  • No prerequisites. Knowledge of some undergraduate-level mathematics would help but is not mandatory. Working knowledge of Python would be helpful if you want to run the source code that is provided.
What you will learn
  • Identify situations that call for the use of Machine Learning
  • Understand which type of Machine learning problem you are solving and choose the appropriate solution
  • Use Machine Learning and Natural Language processing to solve problems like text classification, text summarization in Python
Number of lectures: 93
Total duration: 19:50:46
  • You, This Course and Us  

    We - the course instructors - start with introductions. We are a team that has studied at Stanford, IIT Madras, IIM Ahmedabad and spent several years working in top tech companies, including Google and Flipkart.

    Next, we talk about the target audience for this course: Analytics professionals, modelers and big data professionals certainly, but also Engineers, Product managers, Tech Executives and Investors, or anyone who has some curiosity about machine learning.

    If Machine Learning is a car, this class will teach you how to drive. By the end of this class, students will be able to spot situations where machine learning can be used and deploy the appropriate solutions. Product managers and executives will learn enough of the 'how' to converse intelligently with their data science counterparts, without getting lost in the details.

    This course is practical as well: there are hundreds of lines of commented source code that you can use directly to implement natural language processing and machine learning for text summarization and text classification in Python.

  • A sneak peek at what's coming up  

    This course is both broad and deep. It covers several different types of machine learning problems, their solutions and shows you how to practically apply them using Python. 

Jump right in : Machine learning for Spam detection
  • Solving problems with computers  

    There are different approaches to using computers to solve problems. We'll compare and contrast those approaches in this section.

  • Machine Learning: Why should you jump on the bandwagon?  

    Machine learning is quite the buzzword these days. While it's been around for a long time, today its applications are wide and far-reaching - from computer science to social science, quant trading and even genetics. From the outside, it seems like a very abstract science that is heavy on the math and tough to visualize. But it is not at all rocket science. Machine learning is like any other science - if you approach it from first principles and visualize what is happening, you will find that it is not that hard. So let's get right into it: we'll take an example and see what Machine Learning is and why it is so useful.

  • Plunging In - Machine Learning Approaches to Spam Detection  

    Machine learning usually involves a lot of terms that sound really obscure. We'll see a real-life implementation of a machine learning algorithm (Naive Bayes), and by the end of it you should be able to speak some of the language of ML with confidence.

  • Spam Detection with Machine Learning Continued  

    We have gotten our feet wet and seen the implementation of one ML solution to spam detection - let's venture a little further and see some other ways to solve the same problem. We'll see how K-Nearest Neighbors and Support Vector machines can be used to solve spam detection.

  • Get the Lay of the Land : Types of Machine Learning Problems  

    So far we have been slowly getting comfortable with machine learning - we took one example and saw a few different approaches. That was just the tip of the iceberg - this class is an aerial maneuver: we will scout ahead and see the different classes of problems that Machine Learning can solve and that we will cover in this class.

Solving Classification Problems
  • Solving Classification Problems  

    We've described how to identify classification problems. This section covers some of the most popular classification algorithms such as the Naive Bayes classifier, K-Nearest neighbors, Support Vector machines and Artificial Neural Networks

  • Random Variables  

    Many popular machine learning techniques are probabilistic in nature and having some working knowledge helps. We'll cover random variables, probability distributions and the normal distribution.

  • Bayes Theorem  

    We have been learning some fundamentals that will help us with probabilistic concepts in Machine Learning. In this class, we will learn about conditional probability and Bayes theorem which is the foundation of many ML techniques.
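
    To make the theorem concrete, here's a tiny worked example (Python 3; all the numbers are made up for illustration): what is the probability that an email is spam, given that it contains the word "free"?

        # Bayes theorem: P(spam | "free") = P("free" | spam) * P(spam) / P("free")
        p_spam = 0.4                # assumed prior: 40% of all email is spam
        p_free_given_spam = 0.25    # assumed: "free" appears in 25% of spam
        p_free_given_ham = 0.02     # assumed: "free" appears in 2% of legitimate mail

        # total probability of seeing the word "free" at all
        p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)

        p_spam_given_free = p_free_given_spam * p_spam / p_free
        print(round(p_spam_given_free, 3))   # 0.893 - "free" is strong evidence of spam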

  • Naive Bayes Classifier  

    Naive Bayes Classifier is a probabilistic classifier. We have built the foundation to understand what goes on under the hood - let's understand how the Naive Bayes classifier uses Bayes theorem.

  • Naive Bayes Classifier : An example  

    We will see how the Naive Bayes classifier can be used with an example.
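
    If you want to see the idea in code, here's a minimal sketch (Python 3 with scikit-learn, not the course's attached source) that trains a Naive Bayes classifier on a made-up toy corpus:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # hypothetical toy training data
        messages = ["win a free prize now", "free cash offer",
                    "meeting at noon", "lunch tomorrow?"]
        labels = ["spam", "spam", "ham", "ham"]

        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(messages)    # bag-of-words counts

        clf = MultinomialNB().fit(X, labels)
        print(clf.predict(vectorizer.transform(["claim your free prize"])))  # ['spam']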

  • K-Nearest Neighbors  

    Let's understand the k-Nearest Neighbors setup with a visual representation of how the algorithm works.

  • K-Nearest Neighbors : A few wrinkles  

    There are a few wrinkles in k-Nearest Neighbors. These are just things to keep in mind if and when you decide to implement it.
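
    Here's a hedged sketch of two of those wrinkles in practice (Python 3 with scikit-learn, on the classic iris dataset rather than the course's data): distance-based methods care about feature scales, and the choice of k matters.

        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.preprocessing import StandardScaler

        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # wrinkle 1: k-NN is distance-based, so features should be on comparable scales
        scaler = StandardScaler().fit(X_train)

        # wrinkle 2: k is a knob you must choose - odd values avoid ties in binary problems
        knn = KNeighborsClassifier(n_neighbors=5)
        knn.fit(scaler.transform(X_train), y_train)
        print(knn.score(scaler.transform(X_test), y_test))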

  • Support Vector Machines Introduced  

    We have been talking about different classifier algorithms. We'll learn about Support Vector Machines which are linear classifiers.

  • Support Vector Machines : Maximum Margin Hyperplane and Kernel Trick  

    The Support Vector Machine algorithm can be framed as an optimization problem. The kernel trick can be used along with SVM to perform non-linear classification.
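
    To see the kernel trick pay off, here's a small sketch (Python 3 with scikit-learn on synthetic data - our illustration, not the course's code): concentric circles defeat a linear SVM but not an RBF-kernel SVM.

        from sklearn.datasets import make_circles
        from sklearn.svm import SVC

        # concentric circles: not separable by any straight line
        X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

        linear_svm = SVC(kernel="linear").fit(X, y)
        rbf_svm = SVC(kernel="rbf").fit(X, y)       # the kernel trick in action

        print("linear:", linear_svm.score(X, y))    # struggles, roughly coin-flip accuracy
        print("rbf:   ", rbf_svm.score(X, y))       # near-perfect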

  • Artificial Neural Networks: Perceptrons Introduced  

    Artificial Neural Networks are much misunderstood because of their name. We will see the Perceptron (a prototypical example of ANNs) and how it is analogous to a Support Vector Machine.

Clustering as a form of Unsupervised learning
  • Clustering : Introduction  

    Clustering helps us find the patterns in a large set of data that we don't know much about. It is a form of unsupervised learning.

  • Clustering : K-Means and DBSCAN  

    K-Means and DBSCAN are two very popular clustering algorithms. How do they work, and what are the key considerations?
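
    A quick side-by-side sketch (Python 3 with scikit-learn on synthetic blobs - an illustration, not the course's code): K-Means needs the cluster count up front, while DBSCAN needs density parameters instead.

        from sklearn.cluster import DBSCAN, KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

        # K-Means: you choose the number of clusters up front
        kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

        # DBSCAN: no cluster count needed, but you tune density parameters instead
        dbscan_labels = DBSCAN(eps=0.9, min_samples=5).fit_predict(X)

        print(set(kmeans_labels), set(dbscan_labels))   # -1 in DBSCAN output marks noise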

Association Detection
  • Association Rules Learning  

    It is all about finding relationships in the data - sometimes there are relationships that you would not intuitively expect to find. It is pretty powerful - so let's take a peek at what it does.

Dimensionality Reduction
  • Dimensionality Reduction  

    Data that you are working with can be noisy, garbled or difficult to make sense of. It can be so complicated that it's difficult to process efficiently. Dimensionality reduction to the rescue - it cleans up the noise and shows you a clear picture. Getting rid of unnecessary features makes the computation simpler.

  • Principal Component Analysis  

    PCA is one of the most famous Dimensionality Reduction techniques. When you have data with a lot of variables and confusing interactions, PCA clears the air and finds the underlying causes.
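
    Here's a minimal sketch of PCA (Python 3 with scikit-learn on the iris data - an illustration, not the course's code): four correlated measurements get squeezed into two components.

        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA

        X, _ = load_iris(return_X_y=True)

        pca = PCA(n_components=2)            # keep the 2 strongest directions
        X_reduced = pca.fit_transform(X)     # 4 features -> 2 components

        print(X_reduced.shape)                   # (150, 2)
        print(pca.explained_variance_ratio_)     # variance captured by each component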

Regression as a form of supervised learning
  • Regression Introduced : Linear and Logistic Regression  

    Regression can be used to predict the value of a variable, given some predictor variables. We'll see an example to understand its use and cover two popular methods: Linear and Logistic regression.
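
    A tiny sketch of both (Python 3 with scikit-learn on made-up numbers): linear regression predicts a quantity, while logistic regression - the name notwithstanding - predicts a class.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        X = np.array([[1], [2], [3], [4]])

        # linear regression: predict a continuous value
        y_continuous = np.array([2.1, 3.9, 6.2, 8.1])
        lin = LinearRegression().fit(X, y_continuous)
        print(lin.predict([[5]]))        # roughly 10

        # logistic regression: predict a class and its probability
        y_class = np.array([0, 0, 1, 1])
        log = LogisticRegression().fit(X, y_class)
        print(log.predict([[5]]), log.predict_proba([[5]]))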

  • Bias Variance Trade-off  

    In this class, we will talk about some trade-offs which we have to be aware of when we choose our training data and model.

Natural Language Processing and Python
  • Applying ML to Natural Language Processing  

    This section will help you put all your hard earned knowledge to practice! Here's a quick overview of what's coming up. 

  • Installing Python - Anaconda and Pip  

    Anaconda is a Python distribution that ships with IPython, an enhanced interactive shell. The best part about it is the ease with which one can install packages from within IPython - one line is virtually always enough. Just say '!pip install <package>'.

  • Natural Language Processing with NLTK  

    Natural Language Processing is a serious application for all the Machine Learning techniques we have been using. Let's get our feet wet by understanding a few of the common NLP problems and tasks. We'll get familiar with NLTK - an awesome Python toolkit for NLP.

  • Natural Language Processing with NLTK - See it in action  

    We'll continue exploring NLTK and all the cool functionality it brings out of the box - tokenization, parts-of-speech tagging, stemming, stopword removal, etc.
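
    For a quick taste, here's a minimal sketch (Python 3; the download calls fetch NLTK data packages, whose names can vary slightly across NLTK versions):

        import nltk
        nltk.download("punkt", quiet=True)
        nltk.download("stopwords", quiet=True)
        nltk.download("averaged_perceptron_tagger", quiet=True)

        from nltk.corpus import stopwords
        from nltk.stem import PorterStemmer

        text = "Machine learning is fun, and NLTK makes text processing easy."

        tokens = nltk.word_tokenize(text)                  # tokenization
        tagged = nltk.pos_tag(tokens)                      # parts-of-speech tagging
        stems = [PorterStemmer().stem(t) for t in tokens]  # stemming
        content = [t for t in tokens                       # stopword removal
                   if t.lower() not in stopwords.words("english")]

        print(tagged[:3], stems[:3], content[:3])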

  • Web Scraping with BeautifulSoup  

    Web Scraping is an integral part of NLP - it's how you prepare the text data that you will actually process. Web Scraping can be a headache, but Beautiful Soup makes it elegant and intuitive.
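
    Here's how elegant it can be - a minimal sketch (Python 3; the URL is a placeholder, substitute the page you actually want to scrape):

        import urllib.request
        from bs4 import BeautifulSoup

        url = "https://example.com/"   # placeholder URL
        html = urllib.request.urlopen(url).read()

        soup = BeautifulSoup(html, "html.parser")
        title = soup.find("title").get_text()
        paragraphs = [p.get_text() for p in soup.find_all("p")]

        print(title)
        print(" ".join(paragraphs)[:200])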

  • A Serious NLP Application : Text Auto Summarization using Python  

    Auto-summarize newspaper articles from a website (Washington Post). We'll use NLP techniques to remove stopwords, tokenize text and sentences and compute term frequencies. The Python source code (with many comments) is attached as a resource.

  • Python Drill : Autosummarize News Articles I  

    Code along with us in Python - we'll use NLTK to compute the frequencies of words in an article.

  • Python Drill : Autosummarize News Articles II  

    Code along with us in Python - we'll use NLTK to compute the frequencies of words in an article and the importance of sentences in an article.

  • Python Drill : Autosummarize News Articles III  

    Code along with us in Python - we'll use Beautiful Soup to parse an article downloaded from the Washington Post and then summarize it using the class we set up earlier.

  • Put it to work : News Article Classification using K-Nearest Neighbors  

    Classify newspaper articles into tech and non-tech. We'll see how to scrape websites to build a corpus of articles. Use NLP techniques to do feature extraction and selection. Finally, apply the K-Nearest Neighbours algorithm to classify a test instance as Tech/NonTech. The Python source code (with many comments) is attached as a resource.

  • Put it to work : News Article Classification using Naive Bayes Classifier  

    Classify newspaper articles into tech and non-tech. We'll see how to scrape websites to build a corpus of articles. Use NLP techniques to do feature extraction and selection. Finally, apply the Naive Bayes Classification algorithm to classify a test instance as Tech/NonTech. The Python source code (with many comments) is attached as a resource.

  • Python Drill : Scraping News Websites  

    Code along with us in Python - we'll use BeautifulSoup to build a corpus of news articles.

  • Python Drill : Feature Extraction with NLTK  

    Code along with us in Python - we'll use NLTK to extract features from articles.

  • Python Drill : Classification with KNN  

    Code along with us in Python - we'll use the KNN algorithm to classify articles into Tech/NonTech.

  • Python Drill : Classification with Naive Bayes  

    Code along with us in Python - we'll use a Naive Bayes Classifier to classify articles into Tech/NonTech.

  • Document Distance using TF-IDF  

    See how search engines compute the similarity between documents. We'll represent a document as a vector, weight it with TF-IDF and see how cosine similarity or euclidean distance can be used to compute the distance between two documents.
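
    Here's the whole pipeline in miniature (Python 3 with scikit-learn on three made-up documents - the course's own code works from its attached sources):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        docs = [
            "the stock market fell sharply today",
            "stocks tumbled in today's market",
            "the recipe calls for two cups of flour",
        ]

        tfidf = TfidfVectorizer().fit_transform(docs)   # each row is a TF-IDF vector
        sims = cosine_similarity(tfidf)

        print(sims.round(2))   # the two market stories score far higher with each other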

  • Put it to work : News Article Clustering with K-Means and TF-IDF  

    Create clusters of similar articles within a large corpus of articles. We'll scrape a blog to download all the blog posts, use TF-IDF to represent them as vectors. Finally, we'll perform K-Means clustering to identify 5 clusters of articles. The Python source code (with many comments) is attached as a resource.

  • Python Drill : Clustering with K Means  

    Code along with us in Python - We'll cluster articles downloaded from a blog using the KMeans algorithm.

Sentiment Analysis
  • Solve Sentiment Analysis using Machine Learning  

    Lots of new stuff coming up in the next few classes. Sentiment Analysis (or Opinion Mining) is a field of NLP that deals with extracting subjective information (positive/negative, like/dislike, emotions). Learn why it's useful and how to approach the problem. There are Rule-Based and ML-Based approaches. The details are really important - training data and feature extraction are critical. Sentiment Lexicons provide us with lists of words in different sentiment categories that we can use for building our feature set. All this is in the run-up to a serious project to perform Twitter Sentiment Analysis. We'll spend some time on Regular Expressions, which are pretty handy to know, as we'll see in our code-along.

  • Sentiment Analysis - What's all the fuss about?  

    As people spend more and more time on the internet and the influence of social media explodes, knowing what your customers are saying about you online becomes crucial. Sentiment Analysis comes in handy here - this is an NLP problem that can be approached in multiple ways. We examine a couple of rule-based approaches, one of which has become standard fare (VADER).

  • ML Solutions for Sentiment Analysis - the devil is in the details  

    SVM and Naive Bayes are popular ML approaches to Sentiment Analysis. But the devil really is in the details. What do you use for training data? What features should you use? Getting these right is critical.

  • Sentiment Lexicons ( with an introduction to WordNet and SentiWordNet)  

    Sentiment Lexicons are a great help in solving problems where the subjectivity/emotion expressed by a word is important. SentiWordNet stands apart even among the popular sentiment lexicons (General Inquirer, LIWC, MPQA etc.), all of which are touched upon.

  • Regular Expressions  

    Regular expressions are a handy tool to have when you deal with text processing. They are a bit arcane, but pretty useful in the right situation. Understanding the operators from the basics helps you build up to constructing complex regexes.

  • Regular Expressions in Python  

    re is the module in Python for dealing with regular expressions. It has functions to find a pattern, substitute a pattern, etc., within a string.
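
    A few one-liners to show the flavor (Python 3; the tweet text is made up):

        import re

        tweet = "Loving the new phone!!! Visit https://t.co/abc123 @techfan #gadgets"

        print(re.findall(r"#\w+", tweet))           # find a pattern: every hashtag
        print(re.sub(r"https?://\S+", "", tweet))   # substitute a pattern: strip URLs
        print(re.split(r"\s+", tweet))              # split a string on whitespace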

  • Put it to work : Twitter Sentiment Analysis  

    A serious project - Accept a search term from a user and output the prevailing sentiment on Twitter for that search term. We'll use the Twitter API, Sentiwordnet, SVM, NLTK, Regular Expressions - really work that coding muscle :)

  • Twitter Sentiment Analysis - Work the API  

    We'll accept a search term from a user and download 100 tweets containing that term. You'll need a corpus to train a classifier that can classify these tweets. The corpus has only tweet_ids, so we connect to the Twitter API and fetch the text for the tweets.

  • Twitter Sentiment Analysis - Regular Expressions for Preprocessing  

    The tweets that we downloaded contain a lot of garbage; we'll clean them up using regular expressions and NLTK and get a nice list of words to represent each tweet.

  • Twitter Sentiment Analysis - Naive Bayes, SVM and Sentiwordnet  

    We'll train two different classifiers on our training data: Naive Bayes and SVM. The SVM will use SentiWordNet to assign weights to the elements of the feature vector.

Decision Trees
  • Using Tree Based Models for Classification  

    Tree based models are very useful to solve a variety of classification problems. The next few sections will introduce you to decision trees, problems inherent to tree learning such as overfitting and how to use ensemble learning techniques to solve these problems. 

  • Planting the seed - What are Decision Trees?  

    What are Decision Trees and how are they useful? Decision Trees are a visual and intuitive way of predicting what the outcome will be given some inputs. They assign an order of importance to the input variables that helps you see clearly what really influences your outcome.

  • Growing the Tree - Decision Tree Learning  

    Recursive Partitioning is the most common strategy for growing Decision Trees from a training set.

    Learn what makes one attribute be higher up in a Decision Tree compared to others.

  • Branching out - Information Gain  

    We'll take a small detour into Information Theory to understand the concept of Information Gain. This concept forms the basis of how popular Decision Tree Learning algorithms work.
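
    If you like seeing the math run, here's a small sketch of entropy and information gain (plain Python 3; the spam/ham labels are made up):

        import math
        from collections import Counter

        def entropy(labels):
            """Shannon entropy of a list of class labels, in bits."""
            counts, total = Counter(labels), len(labels)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        def information_gain(parent, children):
            """Parent entropy minus the weighted entropy of the child splits."""
            total = len(parent)
            weighted = sum(len(ch) / total * entropy(ch) for ch in children)
            return entropy(parent) - weighted

        # splitting 4 spam / 4 ham emails on some attribute:
        parent = ["spam"] * 4 + ["ham"] * 4
        split = [["spam", "spam", "spam", "ham"], ["spam", "ham", "ham", "ham"]]
        print(information_gain(parent, split))   # ~0.19 bits gained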

  • Decision Tree Algorithms  

    ID3, C4.5, CART and CHAID are commonly used Decision Tree Learning algorithms. Learn what makes them different from each other. Pruning is a mechanism to avoid one of the risks inherent in Decision Trees, i.e., overfitting.

  • Titanic : Decision Trees predict Survival (Kaggle) - I  

    Build a decision tree to predict the survival of a passenger on the Titanic. This is a challenge posed by Kaggle (a competitive online data science community). We'll start off by exploring the data and transforming the data into feature vectors that can be fed to a Decision Tree Classifier.

  • Titanic : Decision Trees predict Survival (Kaggle) - II  

    We continue with the Kaggle challenge. Let's feed the training set to a Decision Tree Classifier and then parse the results.

  • Titanic : Decision Trees predict Survival (Kaggle) - III  

    We'll use our Decision Tree Classifier to predict the results on Kaggle's test data set. Submit the results to Kaggle and see where you stand!

A Few Useful Things to Know About Overfitting
  • Overfitting - the bane of Machine Learning  

    Overfitting is one of the biggest problems with Machine Learning - it's a trap that's easy to fall into and important to be aware of.

  • Overfitting Continued  

    Overfitting is a difficult problem to solve - there is no way to avoid it completely, and in correcting for it we can fall into the opposite error of underfitting.

  • Cross Validation  

    Cross Validation is a popular way to choose between models. There are a few different variants - K-Fold Cross validation is the most well known.
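
    In code, K-Fold cross validation is nearly a one-liner - here's a sketch (Python 3 with scikit-learn on the iris data, just for illustration):

        from sklearn.datasets import load_iris
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)

        # 5-fold CV: train on 4 folds, test on the 5th, rotate, then average
        scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
        print(scores, scores.mean())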

  • Simplicity is a virtue - Regularization  

    Overfitting occurs when the model becomes too complex. Regularization helps maintain the balance between accuracy and complexity of the model.

  • The Wisdom of Crowds - Ensemble Learning  

    The crowd is indeed wiser than the individual - at least with ensemble learning. The Netflix competition showed that ensemble learning helps achieve tremendous improvements in accuracy - many learners perform better than just one.

  • Ensemble Learning continued - Bagging, Boosting and Stacking  

    Bagging, Boosting and Stacking are different techniques to help build an ensemble that rocks!

Random Forests
  • Random Forests - Much more than trees  

    Decision trees are cool but painstaking to build - because they really tend to overfit. Random Forests to the rescue! Use an ensemble of decision trees - all the benefits of decision trees, few of the pains!

  • Back on the Titanic - Cross Validation and Random Forests  

    Machine learning is not a one-shot process. You'll need to iterate and test multiple models to see what works better. Let's use cross validation to compare the accuracy of different models - Decision Trees vs Random Forests.

Recommendation Systems
  • Solving Recommendation Problems  

    Recommendations are some of the most cutting-edge and exciting problems you can solve using Machine learning. Here's a quick overview of what's coming up in this section. 

  • What do Amazon and Netflix have in common?  

    Recommendations - good quality, personalized recommendations - are the holy grail for many online stores. What is the driving force behind this quest?

  • Recommendation Engines - A look inside  

    Recommendation Engines perform a variety of tasks - but the most important one is to find products that are most relevant to the user. Content based filtering, collaborative filtering and Association rules are common approaches to do so.

  • What are you made of? - Content-Based Filtering  

    Content based filtering finds products relevant to a user - based on the content of the product (attributes, description, words etc).

  • With a little help from friends - Collaborative Filtering  

    Collaborative Filtering is a general term for the idea that users can help each other find the products they like. Today this is by far the most popular approach to recommendations.

  • A Neighbourhood Model for Collaborative Filtering  

    Neighbourhood models - also known as Memory based approaches - rely on finding users similar to the active user. Similarity can be measured in many ways - Euclidean Distance, Pearson Correlation and Cosine similarity being a few popular ones.
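
    Here's a tiny sketch of two of those similarity measures (Python 3 with NumPy and SciPy; the ratings matrix is made up):

        import numpy as np
        from scipy.stats import pearsonr

        # hypothetical ratings of 4 movies by 3 users (rows = users)
        ratings = np.array([
            [5.0, 4.0, 1.0, 1.0],   # active user
            [4.0, 5.0, 2.0, 1.0],   # similar taste
            [1.0, 2.0, 5.0, 4.0],   # opposite taste
        ])

        def cosine(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        print(cosine(ratings[0], ratings[1]), pearsonr(ratings[0], ratings[1])[0])
        print(cosine(ratings[0], ratings[2]), pearsonr(ratings[0], ratings[2])[0])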

  • Top Picks for You! - Recommendations with Neighbourhood Models  

    We continue with Neighbourhood models and see how to predict the rating of a user for a new product. Use this to find the top picks for a user.

  • Discover the Underlying Truth - Latent Factor Collaborative Filtering  

    Latent factor methods identify hidden factors that influence users, from user history. Matrix Factorization is used to find these factors. This method was first used and then popularized for recommendations by the Netflix Prize winners. Many modern recommendation systems, including Netflix's, use some form of matrix factorization.

  • Latent Factor Collaborative Filtering contd.  

    Matrix Factorization for Recommendations can be expressed as an optimization problem. Stochastic Gradient Descent or Alternating least squares can then be used to solve that problem.

  • Gray Sheep and Shillings - Challenges with Collaborative Filtering  

    Gray Sheep, Synonymy, Data Sparsity, Shilling Attacks etc are a few challenges that people face with Collaborative Filtering.

  • The Apriori Algorithm for Association Rules  

    Association rules help you find recommendations for products that might complement the user's choices. The seminal paper on association rules introduced an efficient technique for finding these rules - the Apriori Algorithm.

Recommendation Systems in Python
  • Back to Basics : Numpy in Python  

    Numpy arrays are pretty cool for performing mathematical computations on your data.

  • Back to Basics : Numpy and Scipy in Python  

    We continue with a basic tutorial on Numpy and Scipy
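
    A few of the basics we lean on later (Python 3; a quick sketch, not the full tutorial):

        import numpy as np

        a = np.array([[1.0, 2.0], [3.0, 4.0]])

        print(a * 2)           # elementwise arithmetic, no loops needed
        print(a.mean(axis=0))  # column means: [2. 3.]
        print(a @ a)           # matrix multiplication
        print(a[a > 2])        # boolean indexing: [3. 4.]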

  • Movielens and Pandas  

    Movielens is a famous dataset with movie ratings. Use Pandas to read and play around with the data.
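
    A starter sketch (Python 3 with Pandas; assumes you've downloaded the MovieLens 100K dataset, whose u.data file is tab-separated user id, item id, rating, timestamp):

        import pandas as pd

        ratings = pd.read_csv("ml-100k/u.data", sep="\t",
                              names=["user_id", "item_id", "rating", "timestamp"])

        print(ratings.head())
        # highest average rating per movie
        print(ratings.groupby("item_id")["rating"].mean()
                     .sort_values(ascending=False).head())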

  • Code Along - What's my favorite movie? - Data Analysis with Pandas  

    We continue playing with Movielens data - let's find the top-n rated movies for a user.

  • Code Along - Movie Recommendation with Nearest Neighbour CF  

    Let's find some recommendations now. We'll use neighbourhood-based collaborative filtering to find the users most similar to a given user and then predict that user's rating for a movie.

  • Code Along - Top Movie Picks (Nearest Neighbour CF)  

    We've predicted the user's rating for all movies. Let's pick the top recommendations for a user.

  • Code Along - Movie Recommendations with Matrix Factorization  

    Matrix Factorization was first used for recommendations during the Netflix challenge. Let's implement this on the Movielens data and find some recommendations!

  • Code Along - Association Rules with the Apriori Algorithm  

    The Apriori algorithm was introduced in a seminal paper that described how to mine large datasets for association rules efficiently. Let's work through the algorithm in Python.

A Taste of Deep Learning and Computer Vision
  • Computer Vision - An Introduction  

    A quick intro to Computer Vision, and one of the most popular starter problems - identifying handwritten digits using the MNIST database. We also talk about feature extraction from images.

  • Perceptron Revisited  

    Deep Learning Networks are the cutting-edge solution for the handwritten digit recognition problem and many others in computer vision. These are often large artificial neural networks. The perceptron is the simplest of artificial neural networks - it becomes a building block for more complex networks.

  • Deep Learning Networks Introduced  

    Multilayer perceptrons build upon the idea of a perceptron. They have layers of perceptrons that process the input and feed it forward to other layers.
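
    As a warm-up sketch (Python 3 with scikit-learn, using its small built-in 8x8 digits set as a stand-in for full MNIST - the course's code-alongs do this differently):

        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

        # one hidden layer of 64 perceptron-like units feeding forward to the output
        mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
        mlp.fit(X_train, y_train)
        print(mlp.score(X_test, y_test))   # typically around 0.95 or better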

  • Code Along - Handwritten Digit Recognition -I  

    Train a neural network to classify handwritten digits in Python. First start by downloading and unzipping the MNIST database images to create some training and test datasets.

  • Code Along - Handwritten Digit Recognition - II  

    Continuing on with the handwritten digit recognition problem, we build a neural network and specify the training process.

  • Code Along - Handwritten Digit Recognition - III  

    We have a trained neural network; let's feed it some test data and check the accuracy.
