What is the concept of storing data in HDFS?

Last updated: 9/20/2020 4:55:47 PM

1 Answer

Samuel Fernandes

This concept of storing a file as a set of blocks is consistent with how normal file systems work. What is different about HDFS is the scale: a typical block size in a Linux file system is 4 KB, whereas a typical block size in Hadoop is 128 MB. This value is configurable; it can be changed both as the cluster-wide default and as a custom value for individual files.
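As an illustration, here is a minimal Java sketch of the per-file override using the standard Hadoop FileSystem API. The cluster-wide default is read from the dfs.blocksize property in hdfs-site.xml, while create() accepts an explicit block size for a single file. The path /tmp/large-file.dat and the 256 MB value are hypothetical choices for the example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CustomBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The cluster-wide default comes from dfs.blocksize in
        // hdfs-site.xml; 128 MB (134217728 bytes) is the stock
        // default in Hadoop 2.x and later.
        FileSystem fs = FileSystem.get(conf);

        // Per-file override: this create() overload takes an explicit
        // block size, here 256 MB instead of the configured default.
        long blockSize = 256L * 1024 * 1024;
        short replication = fs.getDefaultReplication(new Path("/"));
        int bufferSize = conf.getInt("io.file.buffer.size", 4096);

        try (FSDataOutputStream out = fs.create(
                new Path("/tmp/large-file.dat"), true,
                bufferSize, replication, blockSize)) {
            out.writeUTF("example payload");
        }
    }
}
```

The large block size is a deliberate design choice: with 128 MB blocks, a multi-gigabyte file needs only a few dozen block entries in the NameNode's metadata, and each map task gets a substantial chunk of sequential data to process.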
