What is the concept of storing data in HDFS?

Posted by  Samuel Fernandes
Hadoop 
  1. What is the concept of storing data in HDFS?

    The concept of storing a file as a set of blocks is consistent with how normal file systems work. What is different about HDFS is the scale: a typical block size in a file system under Linux is 4 KB, whereas a typical block size in Hadoop is 128 MB. This value is configurable, and it can be customized both as a new system-wide default and as a custom value for individual files.
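As a hedged sketch of how that customization is done: the cluster-wide default block size is controlled by the `dfs.blocksize` property in `hdfs-site.xml`, and a different value can be supplied for an individual file at write time. The paths and the 256 MB per-file value below are illustrative examples, not recommendations.

```xml
<!-- hdfs-site.xml: set the cluster-wide default block size to 128 MB -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value> <!-- 128 * 1024 * 1024 bytes -->
</property>
```

```shell
# Override the block size for a single file at write time (256 MB here),
# using the generic -D option accepted by the hdfs dfs command
hdfs dfs -D dfs.blocksize=268435456 -put localfile /user/data/localfile
```

Because the block size is recorded per file, files written before a default change keep their original block size.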

      Modified On Mar-28-2018 06:44:22 AM

Answer