A Big Data Hadoop and Spark project for absolute beginners
Free Udemy Coupon - A Big Data Hadoop and Spark project for absolute beginners: Data Engineering, Spark, Hive, Python, PySpark, Scala, Coding framework, Testing, IntelliJ, Maven, Glue, Streaming
- Created by FutureX Skill
- English
What you'll learn
- Big Data, Hadoop, and Spark from scratch by solving a real-world use case using Python and Scala
- Spark Scala and PySpark real-world coding framework
- Real-world coding best practices: logging, error handling, and configuration management using both Scala and Python
- Serverless big data solution using AWS Glue, Athena and S3
Description
This course will prepare you for a real-world Data Engineer role!
Get started with Big Data quickly by leveraging a free cloud cluster and solving a real-world use case! Learn Hadoop, Hive, and Spark (both Python and Scala) from scratch!
Learn to code in Spark Scala and PySpark like a real-world developer. Understand real-world coding best practices: logging, error handling, and configuration management using both Scala and Python.
Project
A bank is launching a new credit card and wants to identify prospects it can target in its marketing campaign.
It has received prospect data from various internal and third-party sources. The data has issues such as missing or unknown values in certain fields, and it needs to be cleansed before any kind of analysis can be done.
Since the data volume is huge, with billions of records, the bank has asked you to use Big Data Hadoop and Spark technology to cleanse, transform, and analyze it.
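To give a feel for the kind of cleansing the project involves, here is a minimal PySpark sketch. The file path, column names, and placeholder markers ("unknown", "NA") are illustrative assumptions, not the actual course dataset.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ProspectCleansing").getOrCreate()

# Hypothetical input path and schema -- the actual course dataset differs
prospects = spark.read.csv("/data/prospects.csv", header=True, inferSchema=True)

# Normalise placeholder markers to nulls, then drop prospects that are
# missing the fields needed for the campaign
cleansed = (prospects
            .replace(["", "unknown", "NA"], None)
            .dropna(subset=["name", "email", "occupation"]))

cleansed.write.mode("overwrite").parquet("/data/prospects_cleansed")
```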
What you will learn:
Big Data, Hadoop concepts
How to create a free Hadoop and Spark cluster using Google Dataproc
Hadoop hands-on - HDFS, Hive
Python basics
PySpark RDD - hands-on
PySpark SQL, DataFrame - hands-on
Project work using PySpark and Hive
Scala basics
Spark Scala DataFrame
Project work using Spark Scala
Spark Scala real-world coding framework and development using Winutils, Maven, and IntelliJ
Python Spark Hadoop Hive coding framework and development using PyCharm
Building a data pipeline using Hive, PostgreSQL, and Spark (a minimal pipeline sketch follows this list)
Logging, error handling, and unit testing of PySpark and Spark Scala applications (see the logging sketch after this list)
Spark Scala Structured Streaming
Applying Spark transformations to data stored in AWS S3 using Glue, and viewing the data using Athena
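As a rough sketch of the Hive-to-PostgreSQL pipeline step mentioned above, the following PySpark snippet reads a Hive table and writes it to PostgreSQL over JDBC. The table names, JDBC URL, and credentials are placeholders, and the PostgreSQL JDBC driver is assumed to be available on the classpath.

```python
from pyspark.sql import SparkSession

# enableHiveSupport() lets Spark query tables in the Hive warehouse
spark = (SparkSession.builder
         .appName("ProspectPipeline")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical Hive table holding the cleansed prospect data
prospects = spark.sql("SELECT * FROM prospects_cleansed")

# Write the result to PostgreSQL over JDBC (placeholder URL and credentials)
(prospects.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/bankdb")
    .option("dbtable", "prospects")
    .option("user", "spark_user")
    .option("password", "spark_password")
    .option("driver", "org.postgresql.Driver")
    .mode("overwrite")
    .save())
```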
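And here is a minimal sketch of the logging and error-handling structure the coding-framework sections build up. The application and function names are hypothetical; the real framework also covers configuration files and unit tests.

```python
import logging
import sys

from pyspark.sql import SparkSession

# Driver-side logging; a real framework would read the log level and
# other settings (paths, table names) from a configuration file
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logger = logging.getLogger("prospect_app")


def run(spark):
    # Placeholder for the actual transformation logic
    df = spark.range(10)
    logger.info("Processed %d rows", df.count())


if __name__ == "__main__":
    spark = SparkSession.builder.appName("ProspectApp").getOrCreate()
    try:
        run(spark)
        logger.info("Job completed successfully")
    except Exception:
        logger.exception("Job failed")
        sys.exit(1)
    finally:
        spark.stop()
```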
Prerequisites:
Some basic programming skills
Some knowledge of SQL queries
Who this course is for:
Beginners who want to learn Big Data or experienced people who want to transition to a Big Data role
Big data beginners who want to learn how to code in the real world