Hadoop Daemons: Core Components
An overview of the Hadoop Distributed File System (HDFS) daemons
HDFS consists of three daemons:
- Namenode
- Datanode
- Secondary Namenode
All the nodes follow a master-slave architecture.
The Namenode is the master node, while the Datanodes are the slave nodes. Within HDFS there is only a single Namenode but multiple Datanodes.
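The split between the two roles can be illustrated with a toy Python sketch: one namenode object holds only metadata (which datanodes hold each file's blocks), while the datanodes hold the actual bytes. All class and method names here are invented for illustration; this is not the real Hadoop API.

```python
# Toy model of the HDFS master-slave split (illustrative, not Hadoop's API).

class DataNode:
    """Slave: stores actual block data."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.blocks = {}          # block_id -> bytes

    def store(self, block_id, data):
        self.blocks[block_id] = data

class NameNode:
    """Master: stores only metadata, never file contents."""
    def __init__(self):
        self.metadata = {}        # filename -> list of (block_id, node_id)

    def add_file(self, filename, placements):
        self.metadata[filename] = placements

# One master, many slaves:
namenode = NameNode()
datanodes = {i: DataNode(i) for i in range(3)}

datanodes[0].store("blk_1", b"hello ")
datanodes[1].store("blk_2", b"world")
namenode.add_file("/demo.txt", [("blk_1", 0), ("blk_2", 1)])

# A read first consults the namenode's metadata, then fetches
# each block from the datanode that holds it:
content = b"".join(datanodes[n].blocks[b]
                   for b, n in namenode.metadata["/demo.txt"])
print(content.decode())  # hello world
```

Note that the namenode never sees the bytes `b"hello "` or `b"world"`; clients talk to it only to learn where blocks live, then read from the datanodes directly. Real HDFS follows the same pattern.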
Functionality of Nodes
The Namenode stores the metadata of HDFS: information about every file in the file system, such as its name, permissions, and the locations of its blocks. This metadata is held in RAM; as a rule of thumb, the Namenode needs roughly 1 GB of memory for every 1 million files.
The information held in RAM is known as the file system metadata. For durability, it is also persisted to disk as an fsimage snapshot together with an edit log of recent changes; the Secondary Namenode's role is to periodically merge the edit log into the fsimage (checkpointing), not to act as a standby Namenode.
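The 1 GB per million files figure works out to roughly 1 KB of Namenode heap per file, which is enough for back-of-envelope capacity planning. A rough sketch, assuming that ratio holds (the helper function name and default are invented for illustration):

```python
# Back-of-envelope namenode heap estimate from the rule of thumb above:
# ~1 GB of RAM per ~1 million files.

def namenode_heap_gb(num_files, gb_per_million=1.0):
    """Estimated namenode heap (GB) for a given file count."""
    return num_files / 1_000_000 * gb_per_million

print(namenode_heap_gb(1_000_000))   # 1.0  -> 1 GB for 1 million files
print(namenode_heap_gb(50_000_000))  # 50.0 -> 50 GB for 50 million files
```

This is also why HDFS prefers a smaller number of large files over millions of tiny ones: every file costs Namenode memory regardless of its size.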
The Datanodes are responsible for storing and retrieving data blocks as instructed by the Namenode. They periodically report back to the Namenode through heartbeats and block reports, telling it that they are alive and which blocks they hold. HDFS keeps multiple replicas of each block (three by default), spread across different Datanodes, so that data survives the loss of any single node.
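The heartbeat mechanism can be sketched as a simple liveness tracker: the Namenode records when each Datanode last checked in and treats nodes that miss the deadline as dead, triggering re-replication of their blocks. This is a toy simulation with invented names and simplified timings; in real HDFS, Datanodes heartbeat every 3 seconds by default and are declared dead only after roughly 10 minutes of silence.

```python
# Toy heartbeat tracker: the namenode side records the last time each
# datanode checked in, and marks nodes dead after a timeout.

class HeartbeatMonitor:
    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.last_seen = {}       # node_id -> timestamp of last heartbeat

    def heartbeat(self, node_id, now):
        """Called whenever a datanode checks in."""
        self.last_seen[node_id] = now

    def live_nodes(self, now):
        """Nodes whose last heartbeat is within the timeout window."""
        return [n for n, t in self.last_seen.items()
                if now - t <= self.timeout_s]

monitor = HeartbeatMonitor(timeout_s=30.0)
monitor.heartbeat("dn1", now=0.0)
monitor.heartbeat("dn2", now=0.0)
monitor.heartbeat("dn1", now=25.0)   # dn2 never checks in again

print(monitor.live_nodes(now=40.0))  # ['dn1']  (dn2 has timed out)
```

When a node drops out of the live set, the real Namenode schedules new replicas of that node's blocks on the surviving Datanodes, which is how the replication factor is maintained.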