From 0 to 1 : Spark for Data Science with Python

  • Lifetime Access
  • Certificate on Completion
  • Access on Android and iOS App
About this Course

Taught by a 4-person team including 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of practical experience working with Java and with billions of rows of data. 

Get your data to fly using Spark for analytics, machine learning and data science 

Let’s parse that.

  • What's Spark? If you are an analyst or a data scientist, you're used to having multiple systems for working with data. SQL, Python, R, Java, etc. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms and then use the same system to productionize your code.
  • Analytics: Using Spark and Python you can analyze and explore your data in an interactive environment with fast feedback. The course will show how to leverage the power of RDDs and Dataframes to manipulate data with ease. 
  • Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We'll cover a variety of datasets and algorithms, from PageRank and MapReduce to graph datasets. 

What's Covered:

Lots of cool stuff...

  • Music Recommendations using Alternating Least Squares and the Audioscrobbler dataset
  • Dataframes and Spark SQL to work with Twitter data
  • Using the PageRank algorithm with the Google web graph dataset
  • Using Spark Streaming for stream processing 
  • Working with graph data using the Marvel Social network dataset 

...and of course all the basic and advanced Spark features: 

  • Resilient Distributed Datasets, Transformations (map, filter, flatMap), Actions (reduce, aggregate) 
  • Pair RDDs, reduceByKey, combineByKey 
  • Broadcast and Accumulator variables 
  • Spark for MapReduce 
  • The Java API for Spark 
  • Spark SQL, Spark Streaming, MLlib and GraphFrames (GraphX for Python) 

Using discussion forums

Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(

We're super small and self-funded with only 2 people developing technical video content. Our mission is to make high-quality courses available at super low prices.

The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.

We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.

It is a hard trade-off.

Thank you for your patience and understanding!

Who is the target audience?

  • Yep! Analysts who want to leverage Spark for analyzing interesting datasets
  • Yep! Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
  • Yep! Engineers who want to use a distributed computing engine for batch or stream processing or both
Basic knowledge
  • The course assumes knowledge of Python. You can write Python code directly in the PySpark shell. If you already have IPython Notebook installed, we'll show you how to configure it for Spark
  • For the Java section, we assume basic knowledge of Java. An IDE which supports Maven, like IntelliJ IDEA/Eclipse would be helpful
  • All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you'll need to have Hadoop installed (either in pseudo-distributed or cluster mode).
What you will learn
  • Use Spark for a variety of analytics and Machine Learning tasks
  • Implement complex algorithms like PageRank or Music Recommendations
  • Work with a variety of datasets from Airline delays to Twitter, Web graphs, Social networks and Product Ratings
  • Use all the different features and libraries of Spark: RDDs, Dataframes, Spark SQL, MLlib, Spark Streaming and GraphX
Curriculum
Number of lectures: 53
Total duration: 08:08:55
You, This Course and Us
  • You, This Course and Us  


    You, This Course and Us

Introduction to Spark
  • What does Donald Rumsfeld have to do with data analysis?  

    He has a great categorization for insights in data, really!

    There is a profound truth in here which data scientists and analysts have known for years.

  • Why is Spark so cool?  

    Explore, investigate and find patterns in data. Build fully fledged, scalable production systems. All using the same environment.

  • An introduction to RDDs - Resilient Distributed Datasets  

    RDDs are pretty magical; they are the core programming abstraction in Spark.

  • Built-in libraries for Spark  

    Spark is even more powerful because of the packages that come with it. Spark SQL, Spark Streaming, MLlib and GraphX.

  • Installing Spark  

    Let's get started by installing Spark. We'll also configure Spark to work with IPython Notebook

  • The PySpark Shell  

    Start munging data using the PySpark REPL environment.

  • Transformations and Actions  

    Operations on data: transform data to extract information, then retrieve the results. (A short PySpark sketch of transformations and actions appears at the end of this section.)


  • See it in Action : Munging Airlines Data with PySpark - I  

    We've learnt a little bit about how Spark and RDDs work. Let's see it in action! 

  • [For Linux/Mac OS Shell Newbies] Path and other Environment Variables  

    If you are unfamiliar with software that requires working with a shell/command-line environment, this video will be helpful for you. It explains how to update the PATH environment variable, which is needed to set up most Linux/Mac shell-based software. 
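
    To give a flavour of where this section is heading, here is a minimal sketch of transformations and actions in PySpark. It assumes a local Spark installation; the numbers are throwaway stand-in data, not the course's datasets.

        from pyspark import SparkContext

        # In the PySpark shell `sc` already exists; in a standalone script we create it.
        sc = SparkContext("local[*]", "intro-sketch")

        # Transformations (parallelize, map, filter) build new RDDs lazily...
        numbers = sc.parallelize(range(1, 11))
        squares = numbers.map(lambda n: n * n)        # nothing has executed yet
        evens = squares.filter(lambda n: n % 2 == 0)  # still nothing

        # ...while actions (reduce, collect) actually trigger the computation.
        total = evens.reduce(lambda a, b: a + b)
        print(evens.collect(), total)

        sc.stop()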

Resilient Distributed Datasets
  • RDD Characteristics: Partitions and Immutability  

    RDDs are very intuitive to use. What are some of the characteristics that make RDDs performant, resilient and efficient? 

  • RDD Characteristics: Lineage, RDDs know where they came from  

    Lazy evaluation of RDDs is possible because RDDs can reconstruct themselves. They know where they came from.

  • What can you do with RDDs?  

    A quick overview of all operations and transformations on RDDs

  • Create your first RDD from a file  

    Parse a CSV file, transform it using the map() operation, and create Flight objects on the fly.

  • Average distance travelled by a flight using map() and reduce() operations  

    Use the flights dataset to get interesting insights.

  • Get delayed flights using filter(), cache data using persist()  

    Cache RDDs in memory to optimize operations using persist()

  • Average flight delay in one-step using aggregate()  

    Use the aggregate() operation to calculate average flight delays in one step. Much more compact than map() and reduce().

  • Frequency histogram of delays using countByValue()  

    This is surprisingly simple!

  • See it in Action : Analyzing Airlines Data with PySpark - II  

    See all of the RDD operations in action, using map, reduce and aggregate to analyze airline data. A minimal PySpark sketch of these operations follows below.
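
    As a rough illustration of the operations above, here is a hedged PySpark sketch. The handful of (origin, distance, delay) tuples below are hypothetical stand-ins for the parsed airline dataset used in the lectures.

        from pyspark import SparkContext

        sc = SparkContext("local[*]", "rdd-sketch")

        # Hypothetical (origin, distance_miles, delay_minutes) records.
        flights = sc.parallelize([
            ("SFO", 2565, 10), ("JFK", 2475, -3), ("SFO", 337, 45), ("ORD", 733, 0),
        ])

        # Average distance with map() and reduce()
        avg_distance = flights.map(lambda f: f[1]).reduce(lambda a, b: a + b) / flights.count()

        # Delayed flights with filter(), cached for reuse with persist()
        delayed = flights.filter(lambda f: f[2] > 0).persist()

        # Average delay in one step with aggregate(): carry (sum, count) together
        total, count = flights.aggregate(
            (0, 0),
            lambda acc, f: (acc[0] + f[2], acc[1] + 1),   # fold each record in
            lambda a, b: (a[0] + b[0], a[1] + b[1]),      # merge partition results
        )

        # Frequency histogram of delays with countByValue()
        histogram = flights.map(lambda f: f[2]).countByValue()

        print(avg_distance, delayed.count(), total / count, dict(histogram))
        sc.stop()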

Advanced RDDs: Pair Resilient Distributed Datasets
  • Special Transformations and Actions  

    Pair RDDs are special types of RDDs where every record is a key-value pair. All normal actions and transformations apply to these, in addition to some special ones.

  • Average delay per airport, use reduceByKey(), mapValues() and join()  

    Pair RDDs are useful to get information on a per-key basis. Sales per city, delays per airport etc.

  • Average delay per airport in one step using combineByKey()  

    Instead of three steps, use just one to get the average delay per airport.

  • Get the top airports by delay using sortBy()  

    Sort RDDs easily

  • Lookup airport descriptions using lookup(), collectAsMap(), broadcast()  

    Looking up airport descriptions in a pair RDD can be done in many ways; understand how each works.

  • See it in Action : Analyzing Airlines Data with PySpark - III  

    Analyze airlines data with the help of Pair RDDs. A short sketch of these pair RDD operations follows below. 
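
    A hedged sketch of the pair RDD operations covered here; the (airport, delay) pairs and descriptions are hypothetical stand-ins for the airline data.

        from pyspark import SparkContext

        sc = SparkContext("local[*]", "pair-rdd-sketch")

        delays = sc.parallelize([("SFO", 10), ("JFK", 5), ("SFO", 30), ("ORD", 0)])

        # Average delay per airport in three steps: reduceByKey(), then join() + mapValues()
        sums = delays.reduceByKey(lambda a, b: a + b)
        counts = delays.mapValues(lambda d: 1).reduceByKey(lambda a, b: a + b)
        avg_three_step = sums.join(counts).mapValues(lambda p: p[0] / p[1])

        # The same thing in one step with combineByKey(): carry (sum, count) per key
        avg_one_step = delays.combineByKey(
            lambda d: (d, 1),                              # create combiner
            lambda acc, d: (acc[0] + d, acc[1] + 1),       # merge a value into it
            lambda a, b: (a[0] + b[0], a[1] + b[1]),       # merge combiners
        ).mapValues(lambda p: p[0] / p[1])

        # Top airports by average delay with sortBy()
        top = avg_one_step.sortBy(lambda kv: kv[1], ascending=False).take(3)

        # Look up airport descriptions from a small dictionary shipped via broadcast()
        names = sc.broadcast({"SFO": "San Francisco", "JFK": "New York", "ORD": "Chicago"})
        print([(names.value.get(k, k), v) for k, v in top])
        sc.stop()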

Advanced Spark: Accumulators, Spark Submit, MapReduce , Behind The Scenes
  • Get information from individual processing nodes using accumulators  

    Accumulators are special variables which allow the main driver program to collect information from nodes on which the actual processing takes place.

  • Using an Accumulator variable  


    See it in Action : Using an Accumulator variable

  • Long running programs using spark-submit  

    Spark is more than just the Read-Evaluate-Print Loop (REPL) environment; it can run long-running programs as well.

  • Running a Python script with Spark-Submit  


    See it in Action : Running a Python script with Spark-Submit

  • Behind the scenes: What happens when a Spark script runs?  

    How does Spark submit jobs for distributed processing? How does the scheduler work? What does the cluster manager do? All this and more in this behind-the-scenes look.

  • Running MapReduce operations  

    MapReduce is a powerful paradigm for distributed processing. Many tasks lend themselves well to this model, and Spark has transformations which handle it beautifully. (A small spark-submit sketch appears at the end of this section.)

  • MapReduce with Spark  

    See it in Action : MapReduce with Spark
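
    As a rough sketch of how these pieces fit together, here is a hypothetical script (the file name and input path are placeholders) that you would launch with something like spark-submit wordcount.py input.txt. It counts words MapReduce-style and uses an accumulator to report blank lines back to the driver.

        # wordcount.py -- launched with: spark-submit wordcount.py <input-file>
        import sys
        from pyspark import SparkContext

        if __name__ == "__main__":
            sc = SparkContext(appName="wordcount-sketch")

            # Accumulator: worker nodes add to it, the driver reads the total.
            blank_lines = sc.accumulator(0)

            def extract_words(line):
                if not line.strip():
                    blank_lines.add(1)
                return line.split()

            # Classic MapReduce with Spark: flatMap -> map to (word, 1) -> reduceByKey
            counts = (sc.textFile(sys.argv[1])
                        .flatMap(extract_words)
                        .map(lambda w: (w, 1))
                        .reduceByKey(lambda a, b: a + b))

            for word, count in counts.take(10):
                print(word, count)
            print("blank lines:", blank_lines.value)

            sc.stop()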

Java and Spark
  • The Java API and Function objects  

    Spark works with Java as well. If that is your language of choice then you have reason to rejoice.

  • Pair RDDs in Java  

    Pair RDDs in Java have to be created explicitly; a tuple RDD is not automatically a Pair RDD.

  • Running Java code  

    Using spark-submit with Java code.

  • Installing Maven  

    Maven is a prerequisite to compiling and building your Java JARs for Spark. 

  • Running a Spark Job with Java  


    See it in Action : Running a Spark Job with Java

PageRank: Ranking Search Results
  • What is PageRank?  

    What is PageRank?

  • The PageRank algorithm  


    The PageRank algorithm

  • Implement PageRank in Spark  

    This will be way simpler than the explanation.


  • Join optimization in PageRank using Custom Partitioning  

    Optimize the algorithm by making joins more performant

  • The PageRank algorithm using Spark  

    See it in Action : The PageRank algorithm using Spark. A minimal PySpark sketch of the iteration follows below.
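
    A minimal, hedged sketch of the PageRank iteration in PySpark; the four-page link graph below is a hand-made stand-in for the Google web graph dataset used in the lectures.

        from pyspark import SparkContext

        sc = SparkContext("local[*]", "pagerank-sketch")

        # (page, [pages it links to]); partitionBy() keeps the repeated join cheap.
        links = sc.parallelize([
            ("a", ["b", "c"]), ("b", ["c"]), ("c", ["a"]), ("d", ["c"]),
        ]).partitionBy(2).persist()

        ranks = links.mapValues(lambda _: 1.0)

        for _ in range(10):
            # Each page sends its rank, split evenly, to the pages it links to...
            contributions = links.join(ranks).flatMap(
                lambda kv: [(dest, kv[1][1] / len(kv[1][0])) for dest in kv[1][0]]
            )
            # ...and ranks are recomputed with the usual damping factor.
            ranks = contributions.reduceByKey(lambda a, b: a + b) \
                                 .mapValues(lambda r: 0.15 + 0.85 * r)

        print(sorted(ranks.collect(), key=lambda kv: -kv[1]))
        sc.stop()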

Spark SQL
  • Dataframes: RDDs + Tables  

    Pretend your data is in a relational database using Dataframes. Dataframes are built on RDDs, so you get the best of both worlds! (A short Dataframe and Spark SQL sketch appears at the end of this section.)


  • Dataframes and Spark SQL  

    See it in Action : Dataframes and Spark SQL
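
    A short, hedged sketch of the Dataframe and Spark SQL APIs, assuming Spark 2.x or later (older lectures may use SQLContext instead of SparkSession). The tweet records are hypothetical stand-ins for the Twitter data.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

        tweets = spark.createDataFrame(
            [("alice", 12, "pyspark"), ("bob", 3, "scala"), ("alice", 7, "spark")],
            ["user", "retweets", "topic"],
        )

        # The Dataframe API and plain SQL are two views on the same engine.
        tweets.groupBy("user").sum("retweets").show()

        tweets.createOrReplaceTempView("tweets")
        spark.sql("SELECT topic, COUNT(*) AS n FROM tweets "
                  "GROUP BY topic ORDER BY n DESC").show()

        spark.stop()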

MLlib in Spark: Build a recommendations engine
  • Collaborative filtering algorithms  

    This is a family of algorithms which give recommendations based on user data and user preferences.


  • Latent Factor Analysis with the Alternating Least Squares method  

    One type of collaborative filtering algorithm is latent factor analysis. There is some math here but don't worry, MLlib takes care of all this for you.

  • Music recommendations using the Audioscrobbler dataset  

    Let's write a recommendation engine for music services.

  • Implement code in Spark using MLlib  

    The code in Spark is surprisingly simple; a minimal MLlib sketch follows below.
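
    A hedged sketch of Alternating Least Squares with MLlib's RDD-based API; the (user, artist, play count) triples are hypothetical stand-ins for the Audioscrobbler dataset.

        from pyspark import SparkContext
        from pyspark.mllib.recommendation import ALS, Rating

        sc = SparkContext("local[*]", "als-sketch")

        # Rating(user, artist, play_count) -- implicit feedback, not explicit ratings.
        plays = sc.parallelize([
            Rating(1, 100, 40), Rating(1, 200, 5), Rating(2, 100, 3),
            Rating(2, 300, 55), Rating(3, 200, 20), Rating(3, 300, 10),
        ])

        # trainImplicit() suits play counts better than the explicit-rating train().
        model = ALS.trainImplicit(plays, rank=10, iterations=5)

        # Recommend 2 artists for user 1.
        print(model.recommendProducts(1, 2))

        sc.stop()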

Spark Streaming
  • Introduction to streaming  

    Spark can process streaming data in near real time using DStreams. 

  • Implement stream processing in Spark using Dstreams  

    A script to parse logs in real time

  • Stateful transformations using sliding windows  

    Stateful transformations allow cumulative results across a stream using a sliding window. (A short DStream sketch appears at the end of this section.)

  • Spark Streaming  

    See it in Action : Spark Streaming
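
    A hedged sketch of a DStream with a sliding window; the host, port and checkpoint directory are placeholders. You would feed it lines on the socket (for example with netcat) while it runs.

        from pyspark import SparkContext
        from pyspark.streaming import StreamingContext

        sc = SparkContext("local[2]", "streaming-sketch")   # streaming needs >= 2 local threads
        ssc = StreamingContext(sc, batchDuration=5)         # 5-second micro-batches
        ssc.checkpoint("/tmp/streaming-checkpoint")         # required for windowed/stateful ops

        lines = ssc.socketTextStream("localhost", 9999)

        # Word counts over a 60-second window, recomputed every 10 seconds.
        counts = (lines.flatMap(lambda line: line.split())
                       .map(lambda w: (w, 1))
                       .reduceByKeyAndWindow(lambda a, b: a + b, lambda a, b: a - b,
                                             windowDuration=60, slideDuration=10))
        counts.pprint()

        ssc.start()
        ssc.awaitTermination()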

Graph Libraries
  • The Marvel social network using Graphs  

    Find the most well-connected Marvel character using GraphFrames with Spark. A tiny GraphFrames sketch follows below.
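
    A tiny, hedged GraphFrames sketch; the three-character graph is a hand-made stand-in for the Marvel social network, and it assumes the shell was launched with the graphframes package (for example via --packages graphframes:graphframes:<version>).

        from pyspark.sql import SparkSession
        from graphframes import GraphFrame

        spark = SparkSession.builder.appName("graph-sketch").getOrCreate()

        vertices = spark.createDataFrame(
            [("1", "SPIDER-MAN"), ("2", "HULK"), ("3", "THOR")], ["id", "name"])
        edges = spark.createDataFrame(
            [("1", "2"), ("1", "3"), ("2", "3")], ["src", "dst"])

        g = GraphFrame(vertices, edges)

        # The best-connected character is simply the vertex with the highest degree.
        g.degrees.join(vertices, "id").orderBy("degree", ascending=False).show(1)

        spark.stop()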

