hdfs - Hadoop Distributed File System -


HDFS is built around the idea that the most efficient data processing pattern is a write-once, read-many-times pattern.

Can someone give a real-world example of how HDFS is "write once, read many times"? I want to understand the core concept in depth.

HDFS applications need a write-once-read-many access model for files. A file, once created, written, and closed, need not be changed. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web crawler application fits this model perfectly. (source: HDFS design)

HDFS is built around the idea that files are rarely updated. Rather, data is read for some calculation, and possibly additional data is appended to files from time to time. For example, an airline reservation system would not be suitable for a DFS, even if the data were very large, because the data is changed so frequently. (source: Mining of Massive Datasets)
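The access model described above can be sketched as a toy Python class (illustrative only; real HDFS enforces these semantics on the NameNode/DataNode side, and the class name and methods here are invented for the example):

```python
class WriteOnceFile:
    """Toy model of HDFS write-once-read-many semantics:
    a file may be written only until it is closed, then read any number of times."""

    def __init__(self):
        self._data = bytearray()
        self._closed = False

    def write(self, chunk: bytes) -> None:
        # Writes are only allowed before the file is closed.
        if self._closed:
            raise PermissionError("file is closed: cannot rewrite a closed file")
        self._data.extend(chunk)

    def close(self) -> None:
        self._closed = True

    def read(self) -> bytes:
        # Reads are allowed any number of times after close.
        return bytes(self._data)


f = WriteOnceFile()
f.write(b"crawl results")
f.close()
print(f.read())            # repeated reads all succeed
print(f.read())
try:
    f.write(b"update")     # rewriting a closed file is rejected
except PermissionError as e:
    print("rejected:", e)
```

This mirrors why a web crawler fits HDFS (pages are written once, then read by many analysis jobs) while a reservation system does not (records are updated constantly).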

See also: Why is HDFS write once and read multiple times?

