HDFS - Hadoop Distributed File System
HDFS is built around the idea that the most efficient data processing pattern is write once, read many times.
Can someone give a real-world example of how HDFS is "write once, read many times"? I want to understand the core concepts in depth.
HDFS applications need a write-once-read-many access model for files. A file, once created, written, and closed, need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web crawler application fits this model. (source: HDFS design)
HDFS is built around the idea that files are rarely updated. Rather, data is read for some calculation, and possibly additional data is appended to files from time to time. For example, an airline reservation system would not be suitable for a DFS, even if the data were very large, because the data is changed frequently. (source: Mining of Massive Datasets)
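To make the access model concrete, here is a minimal sketch in plain Python on a local file, not the HDFS API: a log file is created once, optionally appended to (HDFS supports append), and then scanned many times by readers. In-place edits never happen, which is exactly the simplification HDFS assumes. The file name `events.log` is just an illustration.

```python
import tempfile
from pathlib import Path

path = Path(tempfile.mkdtemp()) / "events.log"

# Write once: create the file, write it, close it.
with open(path, "w") as f:
    f.write("event-1\nevent-2\n")

# Append-only updates are allowed (like HDFS append);
# rewriting existing bytes in place is not part of the model.
with open(path, "a") as f:
    f.write("event-3\n")

# Read many times: several consumers each scan the whole file.
for _ in range(3):
    with open(path) as f:
        events = f.read().splitlines()

print(events)  # ['event-1', 'event-2', 'event-3']
```

This is why MapReduce jobs and web crawlers fit HDFS well (they stream whole files sequentially), while a reservation system (constant small in-place updates) does not.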