Massive amounts of data are being generated every day, everywhere, and a growing number of organizations are focusing on big data processing. In this course, we'll help you understand how Hadoop, as an ecosystem, helps us store, process, and analyze data. We will then move on to developing large-scale distributed data processing applications using Apache Spark 2.
About the Authors
Randal Scott King
Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specializing in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele, from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children. You can visit his blog at http://www.randalscottking.com.
Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based in the UK. His experience includes architecting, designing, and developing software applications. He has worked with a range of technologies, including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly with Java-related technologies, doing heavy-duty server-side programming in Java and Scala. He has worked on highly concurrent, highly distributed, high-transaction-volume systems. He is currently building a next-generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.
Raj holds a master's degree in Mathematics, a master's degree in Computer Information Systems, and many certifications in ITIL and cloud computing. He is the author of Cassandra Design Patterns - Second Edition, published by Packt.
When not working on the assignments his day job demands, Raj is an avid listener of classical music and watches a lot of tennis.
Who This Course Is For
- Data scientists and big data architects interested in combining the data processing power of Hadoop and Apache Spark; prior knowledge of these technologies is helpful
What You Will Learn
- Install and configure a Hadoop instance of your own
- Navigate Hue, the GUI for common tasks in Hadoop
- Import data manually, and automatically from a database using Sqoop-style workflows
- Build scripts with Pig to perform common ETL tasks
- Write and run a simple MapReduce program
- Structure and query data effectively with Hive, Hadoop’s built-in data warehousing component
- Get to know the fundamentals of Spark 2.0 and the Spark programming model using Scala and Python
- Use Spark SQL and DataFrames with Scala and Python
- Get an introduction to Spark programming using R
- Perform Spark data processing, charting, and plotting using Python
- Get acquainted with Spark stream processing using Scala and Python
- Be introduced to machine learning with Spark using Scala and Python
- Get started with graph processing with Spark using Scala
- Develop a complete Spark application