What is the concept of storing data in HDFS?

Posted on    August-29-2016 1:42 AM


Samuel Fernandes

Total Post:28

Points:140

The concept of storing a file as a set of blocks is consistent with how normal file systems work. What is different about HDFS is the scale. A typical block size in a file system under Linux is 4 KB, whereas a typical block size in Hadoop is 128 MB. This value is configurable: it can be changed both as the cluster-wide default and as a custom value for individual files.
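As a sketch of how the block size is configured in practice (property names per standard Hadoop 2.x+ configuration; the 256 MB value is just an illustration), the cluster-wide default can be set in `hdfs-site.xml`:

```xml
<!-- hdfs-site.xml: cluster-wide default block size (here 256 MB, in bytes) -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
```

A per-file block size can be supplied when writing a single file, by overriding the same property on the command line:

```shell
# Upload one file with a custom 64 MB block size, overriding the default
hdfs dfs -D dfs.blocksize=67108864 -put largefile.dat /data/largefile.dat
```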

