Apache Flume and Apache Sqoop tutorial for Ingesting Big Data
This Flume and Sqoop tutorial, created by a team of Stanford alumni, teaches how to import data into HDFS and Hive from a variety of sources, including MySQL and Twitter.
Apache Flume and Apache Sqoop are two tools used to pull data from different sources and load it into Hadoop. They are mainly used to transport data into stores like HBase, HDFS, and Hive. Flume agents move data produced by streaming applications into stores such as HDFS and HBase, while Sqoop bulk-transfers data from traditional RDBMSs into the Hadoop storage layers, Hive and HDFS. This Flume and Sqoop tutorial for ingesting Big Data covers the basics of data transport in Hadoop.
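To give a flavor of what a Flume agent looks like, here is a minimal configuration sketch that tails a log file and writes the events to HDFS. The file path, HDFS URL, and agent/component names (a1, r1, k1, c1) are illustrative assumptions, not values from the course:

```
# flume-hdfs.conf -- a minimal single-agent sketch (names and paths are assumptions)
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Source: tail a log file produced by a streaming application
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log

# Sink: write events into HDFS as plain text
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:9000/flume/events
a1.sinks.k1.hdfs.fileType = DataStream

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```

You would launch this agent with something like `flume-ng agent --conf conf --conf-file flume-hdfs.conf --name a1`, assuming Flume is installed and HDFS is running at the URL above.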
By the end of this course you will:
- Be able to use Flume to ingest data into HBase and HDFS
- Know how to use Sqoop to import data from MySQL into Hive and HDFS (a sample import command follows this list)
- Learn how to ingest data from a variety of sources like Twitter, HTTP, and MySQL
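As a taste of the Sqoop side, here is a sketch of a bulk import from MySQL into HDFS. The database name, table, credentials, and target directory are hypothetical placeholders:

```
# Import a MySQL table into HDFS (all names below are assumptions)
sqoop import \
  --connect jdbc:mysql://localhost:3306/retail_db \
  --username sqoop_user -P \
  --table customers \
  --target-dir /user/hadoop/customers \
  --num-mappers 1
```

Adding `--hive-import` to the same command would load the table into Hive instead of a plain HDFS directory.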
Prerequisites and Target Audience
To take up this online course on Flume and Sqoop for ingesting Big Data, you will need working knowledge of the HDFS, HBase, and Hive shells. To run most of the examples, you will also need a working HDFS installation.
If you are missing any of these prerequisites, you can subscribe to our courses Learn Hadoop, MapReduce for Big Data problems by Example and HBase Tutorial: Learn by Examples.
This course is designed for engineers, specifically those who wish to port data from legacy data stores to HDFS or who want to build an application using HDFS, HBase, or Hive as the data store.