Price
$199
Course Type
Online
Duration
17 hours
Date
Various dates throughout the year
Entry Requirements
All Levels

About this course

Companies estimate that only about 12% of the data they accumulate is ever analyzed, so IT professionals who can work with the rest are increasingly valuable. Requests for big data talent are also up 40% over the past year.

Simply put, there is too much data and not enough professionals to manage and analyze it. This course aims to close that gap by covering MapReduce and its most popular implementation: Apache Hadoop. We will also cover the Hadoop ecosystem and the practical concepts involved in handling very large data sets.

Learn and Master the Most Popular Big Data Technologies in this Comprehensive Course.

  • Apache Hadoop and MapReduce on Amazon EMR
  • Hadoop Distributed File System vs. Google File System
  • Data Types, Readers, Writers and Splitters
  • Data Mining and Filtering
  • Shell Commands and HDFS
  • Cloudera, Hortonworks and Apache Bigtop Virtual Machines

Mastering Big Data for IT Professionals Worldwide
At its core, Hadoop is an implementation of the MapReduce algorithm, which Big Data systems use to scale computations. A MapReduce job loads a block of data into RAM, performs some calculation, loads the next block, and keeps going until all of the data has been processed, turning unstructured input into structured output.
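
The canonical illustration of this pattern is word count. As a rough sketch (the class names are ours, not taken from the course), a mapper and reducer written against Hadoop's Java MapReduce API might look like this:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map step: emit (word, 1) for every word in this mapper's input split.
    class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce step: after the shuffle, sum the counts collected for each word.
    class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

The framework handles splitting the input, moving map output to the reducers, and writing the final output; the programmer supplies only these two small functions.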

IT managers and Big Data professionals who can program in Java, are familiar with Linux, have access to an Amazon EMR account, and have Oracle VirtualBox or VMware installed and working will be able to follow the key lessons and concepts in this course and learn to write Hadoop jobs and MapReduce programs.

This course is perfect for anyone in a data-focused IT role who wants to learn new ways to work with large amounts of data.

Contents and Overview
In 17 hours of content across 76 lectures, this course covers essential Big Data terminology and the use of Hadoop and MapReduce.

The course covers the importance of Big Data, how to set up a single-node Hadoop pseudo-cluster, how to work with cluster architecture, how to run multi-node clusters on Amazon EMR, and how to work with distributed file systems and their operations, including running Hadoop on the Hortonworks Sandbox and Cloudera.
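
As a small taste of those file system operations, the sketch below uses Hadoop's Java FileSystem API to list an HDFS directory, roughly what the hdfs dfs -ls shell command does. The localhost URI and path are placeholders assuming a single-node pseudo-cluster, not values from the course:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Lists the contents of an HDFS directory from Java.
    public class ListHdfsDir {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // On a real cluster, fs.defaultFS comes from core-site.xml;
            // this URI assumes a local pseudo-cluster.
            conf.set("fs.defaultFS", "hdfs://localhost:9000");
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/user/hadoop"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }
            fs.close();
        }
    }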

Students will also study advanced Hadoop development, MapReduce concepts, and the use of MapReduce with Hive and Pig, and will get to know the Hadoop ecosystem, among other important lessons.

Upon completion, students will be literate in Big Data terminology, understand how Hadoop can be used to overcome challenging Big Data scenarios, be able to analyze and implement MapReduce workflows, and be able to use virtual machines for code development, testing, and job configuration.

What are the requirements?

  • Familiarity with programming in Java
  • Familiarity with Linux
  • Oracle VirtualBox or VMware installed and functioning

What am I going to get from this course?

  • Over 76 lectures and 17 hours of content!
  • Become literate in Big Data terminology and Hadoop
  • Understand distributed file system architecture and implementations such as the Hadoop Distributed File System (HDFS) or the Google File System
  • Use the HDFS shell
  • Use the Cloudera, Hortonworks and Apache Bigtop virtual machines for Hadoop code development and testing
  • Configure, execute and monitor a Hadoop job (a minimal driver sketch follows this list)
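
To make that last point concrete, here is a minimal, illustrative driver that configures, submits, and monitors a job. It reuses the WordCountMapper and WordCountReducer classes sketched earlier, and the input and output paths come from the command line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Configures, submits, and monitors a word-count job.
    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCountMapper.class);   // mapper sketched earlier
            job.setReducerClass(WordCountReducer.class); // reducer sketched earlier
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // waitForCompletion(true) prints progress and counters as the job runs.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }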

What is the target audience?

  • Big Data professionals who want to master MapReduce and Hadoop
  • IT professionals and managers who want to understand and learn this hot new technology
Enquire now