What is the concept of storing data in HDFS?
29-Aug-2016
Samuel Fernandes
This concept of storing a file as a set of blocks is consistent with how normal file systems work. What is different about HDFS is the scale: a typical block size in a Linux file system is 4 KB, whereas the default block size in Hadoop is 128 MB. This value is configurable, both as a new cluster-wide default and as a custom value for individual files.
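
As a minimal sketch of the per-file option, the snippet below uses Hadoop's FileSystem API to write one file with a 64 MB block size instead of the cluster default. The path and the 64 MB figure are illustrative assumptions; the cluster-wide default is normally set via the dfs.blocksize property in hdfs-site.xml.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CustomBlockSizeExample {
    public static void main(String[] args) throws Exception {
        // Picks up the cluster defaults, including dfs.blocksize
        // (128 MB = 134217728 bytes on a stock installation).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Illustrative path, not from the original post.
        Path file = new Path("/tmp/custom-block-file.txt");

        // FileSystem.create accepts a per-file block size that
        // overrides the cluster default for just this file.
        long blockSize = 64L * 1024 * 1024; // 64 MB, for illustration
        short replication = 3;
        int bufferSize = 4096;
        boolean overwrite = true;

        try (FSDataOutputStream out =
                 fs.create(file, overwrite, bufferSize, replication, blockSize)) {
            out.write("stored with a 64 MB block size"
                .getBytes(StandardCharsets.UTF_8));
        }

        // Confirm the block size recorded in the file's metadata.
        System.out.println("Block size: "
            + fs.getFileStatus(file).getBlockSize());
    }
}
```

The same override is also possible from the command line, e.g. passing -Ddfs.blocksize=67108864 to hdfs dfs -put, which is handy when you cannot change application code.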