Tech Mahindra– Hadoop Interview Questions
Here is a list of Hadoop interview questions recently asked at Tech Mahindra. The questions cover both freshers and experienced professionals.
1. What are the differences between Hadoop and Spark?
The key difference between Hadoop MapReduce and Spark lies in the approach to processing: Spark can do it in-memory, while Hadoop MapReduce has to read from and write to disk between stages. As a result, processing speed differs significantly: Spark may be up to 100 times faster.
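A toy Python sketch of why this matters for iterative jobs (this is an illustration only, not the real Hadoop or Spark APIs): the MapReduce-style loop round-trips its working set through disk on every pass, while the Spark-style loop keeps it in memory.

```python
import json
import os
import tempfile

def iterate_with_disk(data, iterations):
    """MapReduce-style: persist intermediate results to disk each pass,
    then re-read them as input for the next pass."""
    path = os.path.join(tempfile.mkdtemp(), "intermediate.json")
    for _ in range(iterations):
        data = [x * 2 for x in data]      # the "job" for this pass
        with open(path, "w") as f:        # write intermediate output
            json.dump(data, f)
        with open(path) as f:             # next pass re-reads from disk
            data = json.load(f)
    return data

def iterate_in_memory(data, iterations):
    """Spark-style: keep the working set in memory across passes."""
    for _ in range(iterations):
        data = [x * 2 for x in data]
    return data

print(iterate_with_disk([1, 2, 3], 3))   # [8, 16, 24]
print(iterate_in_memory([1, 2, 3], 3))   # [8, 16, 24]
```

Both produce identical results; the disk version simply pays I/O and serialization costs on every iteration, which is where the large speed gap comes from.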
2. What are the real-time industry applications of Hadoop?
- Finance sectors.
- Security and Law Enforcement.
- Companies use Hadoop for understanding customer requirements.
- Hadoop Applications in Retail industry.
- Real-time analysis of customer data.
- Uses of Hadoop in Government sectors.
- Hadoop Uses in Advertisements Targeting Platforms.
3. How is Hadoop different from other parallel computing systems?
Hadoop provides a distributed file system, which lets you store and handle massive amounts of data on a cluster of machines while taking care of data redundancy. Each node can process the data stored on it instead of spending time moving data over the network.
4. In what modes can Hadoop be run?
Hadoop mainly works in 3 different modes:
- Standalone Mode.
- Pseudo-distributed Mode.
- Fully-Distributed Mode.
5. Explain the major difference between HDFS block and InputSplit.
An HDFS block is the physical representation of data in Hadoop, while a MapReduce InputSplit is the logical representation of the data present in the block. The InputSplit is used during data processing in a MapReduce program or other processing techniques.
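The distinction can be sketched in Python (illustration only; the 128 MB block size is the HDFS default, and the line-oriented splitting rule mirrors what text input handling does, not actual Hadoop code): physical blocks are fixed byte ranges that ignore record boundaries, while logical splits extend past the nominal boundary so no record is cut in half.

```python
BLOCK_SIZE = 128 * 1024 * 1024  # HDFS default block size: 128 MB

def block_boundaries(file_size, block_size=BLOCK_SIZE):
    """Physical HDFS blocks: fixed-size byte ranges, blind to records."""
    blocks, offset = [], 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

def logical_splits(text, split_size):
    """Logical InputSplits over line-oriented data: each split ends at
    the next newline after the nominal boundary, so every record stays
    whole inside exactly one split."""
    splits, start = [], 0
    while start < len(text):
        end = min(start + split_size, len(text))
        if end < len(text):
            nl = text.find("\n", end)
            end = len(text) if nl == -1 else nl + 1
        splits.append(text[start:end])
        start = end
    return splits

# A 300 MB file occupies three physical blocks: 128 + 128 + 44 MB.
print(block_boundaries(300 * 1024 * 1024))
print(logical_splits("aaaa\nbbbb\ncccc\n", 6))  # ['aaaa\nbbbb\n', 'cccc\n']
```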
6. What is distributed cache? What are its benefits?
In Hadoop, the Distributed Cache is a facility provided by the MapReduce framework to cache read-only files (text files, archives, jars) needed by a job. Before any task of the job starts, the framework copies the cached files to every node that will run a task, so each task can read them from the local disk. Its benefits are fast local access to shared side data and no repeated network reads of the same file by many tasks.
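A classic use of the distributed cache is a map-side join against a small lookup file. A toy Python sketch (the data and names are hypothetical, and a plain dict stands in for the side file each task would load locally):

```python
# Stand-in for the side file that the distributed cache would copy to
# every task node; each map task loads it once in its setup phase.
CACHED_LOOKUP = {"US": "United States", "IN": "India", "DE": "Germany"}

def map_task(records, lookup):
    """Map-side join: enrich each streaming record from the locally
    cached lookup table -- no shuffle or reduce phase is needed."""
    return [(user, lookup.get(code, "unknown")) for user, code in records]

print(map_task([("alice", "US"), ("bob", "DE")], CACHED_LOOKUP))
# [('alice', 'United States'), ('bob', 'Germany')]
```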
7. Explain the difference between NameNode, Checkpoint NameNode, and Backup Node.
The NameNode is the master daemon that manages the HDFS namespace and metadata, persisted as an fsimage plus an edit log of recent changes. A Checkpoint NameNode periodically downloads the fsimage and edit log from the active NameNode, merges them into a new fsimage, and uploads it back. The Backup Node also provides the checkpointing functionality of the Checkpoint node, but it additionally maintains an up-to-date in-memory copy of the file system namespace that is always in sync with the active NameNode.
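The checkpointing step can be sketched in Python (a simplified model, not the real fsimage/edit-log formats): replaying the edit log onto the last snapshot yields the up-to-date namespace that a Checkpoint node produces periodically and a Backup node keeps continuously in memory.

```python
def apply_edits(fsimage, edit_log):
    """Checkpointing sketch: replay the edit log onto the last fsimage
    to produce an up-to-date namespace snapshot. Only create/delete
    operations are modeled here."""
    namespace = dict(fsimage)
    for op, path in edit_log:
        if op == "create":
            namespace[path] = {}
        elif op == "delete":
            namespace.pop(path, None)
    return namespace

fsimage = {"/data": {}}
edits = [("create", "/data/logs"), ("delete", "/data"), ("create", "/tmp")]
print(apply_edits(fsimage, edits))  # {'/data/logs': {}, '/tmp': {}}
```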
8. What are the most common input formats in Hadoop?
The most common input formats in Hadoop are:
- FileInputFormat – the base class for all file-based input formats.
- TextInputFormat – the default input format of MapReduce; the key is the byte offset of each line and the value is the line itself.
- KeyValueTextInputFormat – similar to TextInputFormat, but it splits each line into a key and a value at the first separator (a tab by default).
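The difference between the two text formats can be mimicked in a few lines of Python (an illustration of the record shapes, not the actual Hadoop classes):

```python
def text_input_format(lines):
    """TextInputFormat: key = byte offset of the line, value = the line."""
    records, offset = [], 0
    for line in lines:
        records.append((offset, line))
        offset += len(line) + 1  # +1 for the newline terminator
    return records

def key_value_text_input_format(lines, sep="\t"):
    """KeyValueTextInputFormat: key/value split on the first separator;
    a line with no separator becomes (line, "")."""
    return [tuple(line.split(sep, 1)) if sep in line else (line, "")
            for line in lines]

print(text_input_format(["hello world", "foo"]))
# [(0, 'hello world'), (12, 'foo')]
print(key_value_text_input_format(["id1\tAlice", "id2\tBob"]))
# [('id1', 'Alice'), ('id2', 'Bob')]
```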
9. Define DataNode. How does NameNode tackle DataNode failures?
A DataNode stores data in the Hadoop File System (HDFS). A functional filesystem has more than one DataNode, with data replicated across them, and each DataNode responds to requests from the NameNode for filesystem operations. The NameNode periodically receives a heartbeat and a block report from each DataNode in the cluster. If a DataNode stops sending heartbeats, the NameNode marks it dead; the blocks it held become under-replicated, and the NameNode starts re-replicating them from the surviving DataNodes, using the block information from their block reports.
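The failure-handling logic can be sketched in Python (a simplified model; the 630-second timeout is roughly the stock Hadoop default of about 10.5 minutes, and the node/block names are made up):

```python
def detect_dead_nodes(last_heartbeat, now, timeout=630):
    """NameNode marks a DataNode dead if no heartbeat has arrived
    within the timeout window."""
    return {n for n, t in last_heartbeat.items() if now - t > timeout}

def under_replicated(block_locations, dead, target=3):
    """Blocks whose live replica count fell below the replication
    factor; the NameNode would re-replicate these from survivors."""
    return {b: locs - dead for b, locs in block_locations.items()
            if len(locs - dead) < target}

heartbeats = {"dn1": 1000, "dn2": 1000, "dn3": 200, "dn4": 1000}
dead = detect_dead_nodes(heartbeats, now=1000)      # dn3 went silent
blocks = {"blk_1": {"dn1", "dn2", "dn3"},
          "blk_2": {"dn1", "dn2", "dn4"}}
print(sorted(dead))                                 # ['dn3']
print(under_replicated(blocks, dead))               # only blk_1 lost a replica
```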
10. What are the core methods of a Reducer?
- setup() – called once per task before any reduce calls; used for configuring various parameters such as the input data size, distributed cache, and heap size.
- reduce() – the heart of the reducer, called once per key with the list of values associated with that key.
- cleanup() – called once at the end of the task to release resources and finalize state.
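The lifecycle can be mimicked in Python (a toy model of the call order, not the Hadoop Reducer API; the summing logic is just an example):

```python
class SumReducer:
    """Mimics the Reducer lifecycle: setup() once, then reduce() once
    per key, then cleanup() once."""
    def setup(self):
        self.total_keys = 0                 # one-time initialization

    def reduce(self, key, values):
        self.total_keys += 1
        return key, sum(values)             # one call per key, all values

    def cleanup(self):
        return f"processed {self.total_keys} keys"   # one-time teardown

def run_reduce_task(reducer, grouped):
    """Drives a reducer through its lifecycle over grouped (key, values)."""
    reducer.setup()
    out = [reducer.reduce(k, v) for k, v in grouped]
    summary = reducer.cleanup()
    return out, summary

out, summary = run_reduce_task(SumReducer(), [("a", [1, 2]), ("b", [3])])
print(out)       # [('a', 3), ('b', 3)]
print(summary)   # processed 2 keys
```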
11. What is a SequenceFile in Hadoop?
SequenceFile is a flat file consisting of binary key/value pairs. It is extensively used in MapReduce as an input/output format. It is also worth noting that, internally, the temporary outputs of maps are stored using the SequenceFile format.
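The "flat file of binary key/value pairs" idea can be sketched with length-prefixed records in Python (this is only in the spirit of a SequenceFile; the real on-disk format additionally has a header, sync markers, and optional compression):

```python
import io
import struct

def write_records(buf, pairs):
    """Write length-prefixed binary key/value records to a buffer."""
    for key, value in pairs:
        k, v = key.encode(), value.encode()
        buf.write(struct.pack(">II", len(k), len(v)))  # big-endian lengths
        buf.write(k)
        buf.write(v)

def read_records(buf):
    """Read the records back until the buffer is exhausted."""
    pairs = []
    while True:
        header = buf.read(8)
        if not header:
            break
        klen, vlen = struct.unpack(">II", header)
        pairs.append((buf.read(klen).decode(), buf.read(vlen).decode()))
    return pairs

buf = io.BytesIO()
write_records(buf, [("k1", "hello"), ("k2", "world")])
buf.seek(0)
print(read_records(buf))  # [('k1', 'hello'), ('k2', 'world')]
```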
12. What is the role of a JobTracker in Hadoop?
JobTracker is the service within Hadoop that is responsible for accepting client job requests. It assigns tasks to TaskTrackers on the DataNodes where the required data is locally present. If that is not possible, JobTracker tries to assign the tasks to TaskTrackers within the same rack as the data.
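That preference order (data-local, then rack-local, then anywhere) can be sketched in Python (a toy scheduler, not JobTracker code; node and rack names are made up):

```python
def assign_task(block_replicas, free_slots, racks):
    """Pick a node for a task in the locality order described above:
    1) a node holding the data, 2) a node in the same rack as a
    replica, 3) any node with a free slot."""
    for node in block_replicas:                  # data-local
        if node in free_slots:
            return node, "data-local"
    replica_racks = {racks[n] for n in block_replicas}
    for node in sorted(free_slots):              # rack-local
        if racks[node] in replica_racks:
            return node, "rack-local"
    return next(iter(free_slots)), "remote"      # last resort

racks = {"dn1": "r1", "dn2": "r1", "dn3": "r2"}
print(assign_task(["dn1"], {"dn1", "dn3"}, racks))  # ('dn1', 'data-local')
print(assign_task(["dn1"], {"dn2", "dn3"}, racks))  # ('dn2', 'rack-local')
```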
13. What is the use of RecordReader in Hadoop?
A RecordReader converts the byte-oriented view of an InputSplit into the record-oriented key/value view that the mapper consumes. For example, with TextInputFormat the RecordReader emits the byte offset of each line as the key and the line contents as the value.
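A toy line-oriented record reader in Python (an illustration of the byte-range-to-records conversion, not Hadoop's LineRecordReader class):

```python
def line_record_reader(data: bytes, start: int, length: int):
    """Turn a byte range of a split into (byte offset, line) records --
    the key/value pairs a text-format mapper receives."""
    end = start + length
    pos = start
    while pos < end:
        nl = data.find(b"\n", pos)
        if nl == -1:                 # final line without a newline
            nl = len(data)
        yield pos, data[pos:nl].decode()
        pos = nl + 1                 # skip past the newline

data = b"first\nsecond\nthird\n"
print(list(line_record_reader(data, 0, len(data))))
# [(0, 'first'), (6, 'second'), (13, 'third')]
```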
14. What is Speculative Execution in Hadoop?
Speculative execution means that Hadoop does not try to diagnose and fix slow tasks, since the cause (misconfiguration, hardware issues, etc.) is hard to detect. Instead, for each task that is running slower than expected, it launches a parallel backup copy on another, faster node; whichever copy finishes first is used, and the other is killed.
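The straggler-detection idea can be sketched in Python (a simplified model; real Hadoop speculates based on task progress scores rather than a plain runtime threshold, and the numbers here are made up):

```python
def speculative_candidates(task_durations, threshold=1.5):
    """Flag tasks running longer than `threshold` x the average runtime;
    the scheduler would launch a backup copy of each flagged task and
    keep whichever copy finishes first."""
    avg = sum(task_durations.values()) / len(task_durations)
    return {t for t, d in task_durations.items() if d > threshold * avg}

durations = {"task_0": 10, "task_1": 11, "task_2": 40}  # task_2 straggles
print(speculative_candidates(durations))  # {'task_2'}
```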
15. How can you debug Hadoop code?
- Add the hadoop-mapreduce-client-jobclient Maven dependency. This is the very first step to debugging Hadoop MapReduce code locally.
- Set the local file system and local job runner: point fs.defaultFS at file:/// and set mapreduce.framework.name to local so the job runs in a single JVM against local files.
- Set the number of mappers and reducers to small values so the whole job is easy to step through in a debugger.
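A configuration sketch for the second step (property names are the standard Hadoop keys; whether you put them in core-site.xml, mapred-site.xml, or set them programmatically depends on your setup):

```xml
<!-- Local-debugging sketch: run the job in-process, on local files -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>   <!-- in-JVM local job runner -->
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>   <!-- local file system instead of HDFS -->
  </property>
</configuration>
```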
TOP MNC's HADOOP INTERVIEW QUESTIONS & ANSWERS
Here we have listed Hadoop interview questions and answers asked in top MNCs. We update this page periodically with recently asked questions, so please visit often to stay up to date on Hadoop.