PySpark for Data Science – Intermediate

Description

This module of the PySpark tutorials covers intermediate concepts: the SparkSession entry point used in later Spark versions, the SparkConf and SparkContext pair used in earlier versions, and how the Spark environment is set up. It also introduces broadcast variables and accumulators, together with optimization techniques such as parallelism, Tungsten, and the Catalyst optimizer. You will also learn about compression codecs such as Snappy and Zlib, Big Data ecosystem concepts such as HDFS and block storage, and the main Spark components: Spark Core, MLlib, GraphX, SparkR, Spark Streaming, and Spark SQL. Finally, the module reviews the Python basics relevant to working with Apache Spark, which together make up PySpark. We will learn the following in this course (minimal sketches of the environment setup and of a regression workflow appear after this list):

  • Regression
  • Linear Regression
  • Output Column
  • Test Data
  • Prediction
  • Generalized Linear Regression
  • Random Forest Regression
  • Classification
  • Binomial Logistic Regression
  • Multinomial Logistic Regression
  • Decision Tree
  • Random Forest
  • Clustering
  • K-Means Model
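
To ground the environment-setup, broadcasting, and accumulator topics described above, here is a minimal sketch; the app name "IntermediateDemo", the local[*] master, and the sample data are illustrative assumptions rather than course material:

```python
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

# Later Spark versions (2.x+): SparkSession is the single entry point.
spark = (SparkSession.builder
         .appName("IntermediateDemo")   # illustrative app name
         .master("local[*]")            # illustrative local master
         .getOrCreate())

# Earlier versions: configure explicitly with SparkConf and SparkContext.
# Because a SparkSession already exists, getOrCreate reuses its context.
conf = SparkConf().setAppName("IntermediateDemo").setMaster("local[*]")
sc = SparkContext.getOrCreate(conf)

# Broadcast variable: a read-only value shipped once to every executor.
lookup = sc.broadcast({"a": 1, "b": 2})

# Accumulator: executors add to it; only the driver can read its value.
seen = sc.accumulator(0)

def score(key):
    seen.add(1)                      # updated on the executor
    return lookup.value.get(key, 0)  # broadcast copy read locally

print(sc.parallelize(["a", "b", "c"]).map(score).collect())  # [1, 2, 0]
print(seen.value)  # 3, readable on the driver after the action runs
```

Note that the broadcast value is consumed inside the task, while the accumulator's total only becomes visible back on the driver once an action such as collect() has executed.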
