hadoop - Is a star schema still necessary for a big data warehouse?


I'm designing a new Hadoop-based data warehouse using Hive, and I'm wondering whether the classic star/snowflake schemas are still the "standard" in this context.

Big data systems embrace redundancy, and normalized schemas often have poor performance (for example, in NoSQL databases such as HBase or Cassandra).

Is it still best practice to build star-schema data warehouses in Hive?

Or is it better to design wide, redundant (denormalized) tables that exploit the new columnar file formats?
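For concreteness, here is a minimal HiveQL sketch of both options; the table and column names (`fact_sales`, `dim_customer`, `sales_wide`) are hypothetical:

```sql
-- Option 1: classic star schema, a fact table plus dimension tables
CREATE TABLE dim_customer (
  customer_id BIGINT,
  name        STRING,
  country     STRING
) STORED AS ORC;

CREATE TABLE fact_sales (
  sale_id     BIGINT,
  customer_id BIGINT,      -- foreign key into dim_customer
  amount      DECIMAL(10,2)
) PARTITIONED BY (sale_date STRING)
  STORED AS ORC;

-- Option 2: one wide, denormalized table; dimension attributes are
-- copied into every row, trading storage for join-free scans
CREATE TABLE sales_wide (
  sale_id          BIGINT,
  customer_name    STRING,
  customer_country STRING,
  amount           DECIMAL(10,2)
) PARTITIONED BY (sale_date STRING)
  STORED AS ORC;
```

With a columnar format such as ORC, queries over `sales_wide` read only the columns they touch, so the redundancy costs less than it would in a row-oriented format.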

When designing for NoSQL databases, you tend to optimize for specific queries: you preprocess parts of the query ahead of time and store a denormalized copy of the data (albeit denormalized in a query-specific way).

The star schema, on the other hand, is an all-purpose denormalization that's appropriate for many different queries.

When you're planning on using Hive, you're not optimizing for one query but relying on the general-purposefulness of SQL, and as such, I'd imagine the star schema is still appropriate. For a NoSQL database with a non-SQL interface, however, I'd suggest using a more query-specific design.
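The contrast could be sketched in HiveQL as follows; the table names are hypothetical, and each query assumes a schema like those described above:

```sql
-- General-purpose star schema: an ad-hoc question is answered with a join
SELECT d.country, SUM(f.amount) AS total
FROM fact_sales f
JOIN dim_customer d ON f.customer_id = d.customer_id
GROUP BY d.country;

-- Query-specific denormalized table: the same question is answered
-- with a single scan, because the country was copied into every row
SELECT customer_country, SUM(amount) AS total
FROM sales_wide
GROUP BY customer_country;
```

The star schema can answer questions that weren't anticipated at design time, while the denormalized table only serves queries whose attributes were baked in when it was built.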

