Spark In-Memory Persistence and Memory Management

Spark in-memory persistence and memory management are topics every engineering team running Spark should understand. Spark’s performance advantage over MapReduce is greatest in use cases involving repeated computations. Much of this performance increase is due to Spark’s use of in-memory persistence. Rather than writing to disk between each pass through the data, Spark has the option of keeping the data on the executors … read the rest
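
As a quick illustration of what in-memory persistence looks like in practice, here is a minimal Scala sketch; the application name, the input path, and the filter predicate are placeholders of our own, and StorageLevel.MEMORY_ONLY is only one of several available storage levels.

  // Minimal sketch: persist a dataset in executor memory so that repeated
  // actions reuse the cached copy instead of re-reading it from the source.
  // The path and the "ERROR" filter below are placeholders for illustration.
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.storage.StorageLevel

  object PersistSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("persist-sketch").getOrCreate()

      val errors = spark.read.textFile("hdfs:///data/app/logs")  // placeholder path
        .filter(_.contains("ERROR"))
        .persist(StorageLevel.MEMORY_ONLY)                       // keep on executors

      // Both actions below reuse the in-memory copy rather than re-scanning the source.
      println(s"error lines   : ${errors.count()}")
      println(s"distinct lines: ${errors.distinct().count()}")

      errors.unpersist()   // release executor memory when finished
      spark.stop()
    }
  }

Other storage levels, such as MEMORY_AND_DISK, trade executor memory pressure against the cost of recomputing or spilling partitions.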

Apache Sqoop Introduction

In this article, Apache Sqoop Introduction, we will primarily discuss why this tool exists. Apache Sqoop is not part of the Hadoop core project; it belongs to the broader Hadoop ecosystem.

Sqoop is the big data tool we use for transferring data between Hadoop and relational database servers. The name Sqoop is short for “SQL-to-Hadoop.”

In addition, there are … read the rest

Beginners Impala Tutorial

The Beginners Impala Tutorial covers the key concepts of the in-memory computation technology called Impala, developed by Cloudera. MapReduce-based frameworks like Hive are slow due to excessive I/O operations, so Cloudera offers a separate tool, which we call Apache Impala. This Beginners Impala Tutorial will cover the whole concept of Cloudera Impala and how this Massive … read the rest

Hadoop 3.0 Interview Question

Hadoop 3.0 and big data jobs are in demand, and this Hadoop 3.0 Interview Question article covers almost all the important topics, including reference links to other tutorials.

Hadoop 3.0 New Features Questions

What are the new features in Hadoop 3.0?

  1. Java 8 (JDK 1.8) as the runtime for Hadoop 3.0
  2. Erasure coding to reduce storage cost
  3. YARN Timeline Service
read the rest

Compare Unix Kernel Shells

This short article compares UNIX shells, which many technical folks find confusing. The original Unix operating system used a shell program called the Bourne shell. Then, over time, many other shells were developed for different flavors of the UNIX operating system. The following is some brief information about the different shells:

  • sh—Bourne shell
  • csh—C shell
  • ksh
read the rest

Hadoop 3.0 GPU

Hadoop 3.0 GPU : Hadoop still lags behind in high-performance computing capacity because of the limited parallelism of CPUs. GPU (Graphics Processing Unit) accelerated computing uses a GPU together with a CPU to accelerate data processing on a GPU cluster for higher efficiency. However, a GPU cluster has relatively low data storage capacity.

Leveraging Hadoop 3.0 GPU Computing

MapReduce … read the rest

Hadoop 3.0 Erasure Coding Explained

This deep-dive article on “Hadoop 3.0 Erasure Coding Explained” will highlight how erasure coding helps cut storage overhead cost by about 50%. The storage component of Hadoop 3.0 (HDFS) by default replicates each block 3 times (and this can be set higher via configuration). Replication provides a simple and robust form of redundancy to protect against failure … read the rest
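
To see where that roughly 50% saving comes from, here is a small back-of-the-envelope sketch in Scala; the 6-data-block file and the RS(6,3) policy are illustrative assumptions (RS-6-3 is the commonly cited default erasure coding policy in Hadoop 3.0), not figures taken from the article itself.

  // Back-of-the-envelope sketch: raw blocks stored under 3x replication
  // versus Reed-Solomon RS(6,3) erasure coding, for a hypothetical file
  // that occupies 6 HDFS data blocks. Policy and file size are assumptions.
  object StorageOverheadSketch {
    def main(args: Array[String]): Unit = {
      val dataBlocks  = 6   // logical data blocks in the file (assumed)
      val replication = 3   // classic HDFS replication factor
      val parityUnits = 3   // parity blocks per RS(6,3) stripe

      val replicated = dataBlocks * replication   // 18 blocks on disk
      val erasure    = dataBlocks + parityUnits   //  9 blocks on disk

      def overheadPct(stored: Int): Double =
        (stored - dataBlocks).toDouble / dataBlocks * 100

      println(f"3x replication : $replicated%2d blocks, ${overheadPct(replicated)}%.0f%% overhead")
      println(f"RS(6,3) coding : $erasure%2d blocks, ${overheadPct(erasure)}%.0f%% overhead")
      // 18 blocks vs. 9 blocks: erasure coding halves the raw storage bill.
    }
  }

In other words, the overhead drops from 200% of the data size to 50%, halving the total footprint, while an RS(6,3) stripe still tolerates the loss of any 3 of its 9 blocks.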