Hadoop Data Source for Cubes and Reports



shujamughal
07-29-2010, 06:38 PM
Hi
I am checking out the new ETL which provides support for Hadoop. My question is this: suppose I get my data from Hadoop, clean it, and put it back into Hadoop (instead of MySQL, which I am using now). Can we then use Hadoop as the data source for cube or report generation?

Regards
Shuja

Taqua
07-29-2010, 07:43 PM
Hive provides a JDBC driver that sits on top of Hadoop. Just use it as you would use any other JDBC driver.

http://wiki.apache.org/hadoop/Hive/HiveClient
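For illustration, a minimal sketch of that JDBC approach. The URL, the table name (cleaned_logs), and the query are assumptions, not anything from your setup; the driver class name is the one shipped with 2010-era Hive distributions, and its jar must be on the classpath. With no arguments the program just prints the statement it would send, so it can be tried without a running HiveServer:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    // Assumed defaults: HiveServer on localhost:10000, "default" database.
    static final String URL = "jdbc:hive://localhost:10000/default";
    // Hypothetical table and query, purely for illustration.
    static final String QUERY =
            "SELECT page, COUNT(1) AS hits FROM cleaned_logs GROUP BY page";

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            // No server URL given: just show the statement that would be sent.
            System.out.println(QUERY);
            return;
        }
        // Load the Hive JDBC driver (only needed when actually connecting).
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(args[0], "", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(QUERY)) {
            while (rs.next()) {
                System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
            }
        }
    }
}
```

From the reporting tool's point of view this looks like any other JDBC source; only the URL and driver class differ.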

However, if you want to do data warehousing or let users run reports interactively, then Hive is not for you.



What Hive is NOT

Hadoop is a batch processing system, and Hadoop jobs tend to have high latency and incur substantial overheads in job submission and scheduling. As a result, latency for Hive queries is generally very high (minutes) even when the data sets involved are very small (say, a few hundred megabytes). It therefore cannot be compared with systems such as Oracle, where analyses are conducted on a significantly smaller amount of data but proceed much more iteratively, with response times between iterations of less than a few minutes. Hive aims to provide acceptable (but not optimal) latency for interactive data browsing, queries over small data sets, or test queries. Hive also does not provide any sort of data or query caching to make repeated queries over the same data set faster.


So in most cases, you will still be better off with a classical relational data warehouse holding your aggregated data, ready for reporting and OLAP.
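The "aggregate first, then load the warehouse" idea can be sketched as follows. This is a toy in-memory rollup; the field names (day, page) are made up, and in practice the same group-by would run as a Hadoop/Hive job so that only the much smaller aggregates land in the relational fact table:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LogRollup {
    // Hypothetical cleaned log row; real rows would carry more fields.
    record LogRow(String day, String page) {}

    // Collapse raw rows into (day, page) -> count aggregates.
    static Map<String, Long> rollup(List<LogRow> rows) {
        return rows.stream()
                .collect(Collectors.groupingBy(
                        r -> r.day() + "|" + r.page(),
                        Collectors.counting()));
    }

    public static void main(String[] args) {
        List<LogRow> raw = List.of(
                new LogRow("2010-07-29", "/home"),
                new LogRow("2010-07-29", "/home"),
                new LogRow("2010-07-29", "/about"),
                new LogRow("2010-07-30", "/home"));
        // Millions of raw rows per day shrink to one row per (day, page).
        rollup(raw).forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}
```

The point of the design is volume: the fact table grows by the number of distinct aggregate keys per day, not by the number of raw log lines.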

shujamughal
07-30-2010, 01:48 PM
Thanks Taqua.
Actually, my scenario is that I receive thousands of log files daily, which need to be processed to clean the data and extract useful information, resulting in millions of rows per day. If I use classical relational data warehousing, then after some days I will have billions of rows in the fact table, and I think a classical relational database will fail as the days pass. That is why I am thinking of putting all the data into Hadoop clusters and generating the cubes and reports from there.

What do you think: in this scenario, which approach would be better?

Thanks
Shuja