I am asking this because I am wondering whether it could be efficient to run MapReduce queries over a database or a shared key-value store.
For example, to implement a web crawler that indexes the internet and counts all the terms on different web pages, could this be done efficiently with a database as the backend?
Sure. HBase and other NoSQL stores are well suited to this task.
See this article for a general overview of using HBase with MapReduce.
HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
HBase is an open-source, distributed, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop. HBase includes:
•Convenient base classes for backing Hadoop MapReduce jobs with HBase tables
•Query predicate push down via server side scan and get filters
•Optimizations for real time queries
•A high performance Thrift gateway
•A REST-ful Web service gateway that supports XML, Protobuf, and binary data encoding options
•Cascading source and sink modules
•Extensible jruby-based (JIRB) shell
•Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
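To make the MapReduce-over-HBase idea concrete for the term-counting use case, here is a minimal sketch of a Hadoop job that uses an HBase table as its input source via TableMapReduceUtil. The table name "webpages", the column family "content", and the qualifier "html" are made up for illustration; they are not from the linked article, and a real crawler schema would differ.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class TermCount {

  // Each map() call receives one HBase row (one crawled page) and emits (term, 1).
  static class PageMapper extends TableMapper<Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text term = new Text();

    @Override
    protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
        throws IOException, InterruptedException {
      // Hypothetical schema: family "content", qualifier "html" holds the page body.
      byte[] html = row.getValue(Bytes.toBytes("content"), Bytes.toBytes("html"));
      if (html == null) {
        return;
      }
      // Naive tokenization; a real crawler would strip markup first.
      for (String token : Bytes.toString(html).toLowerCase().split("\\W+")) {
        if (!token.isEmpty()) {
          term.set(token);
          context.write(term, ONE);
        }
      }
    }
  }

  // Standard word-count reducer: sums the 1s emitted for each term.
  static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text term, Iterable<IntWritable> counts, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) {
        sum += c.get();
      }
      context.write(term, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "term-count-over-hbase");
    job.setJarByClass(TermCount.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // fetch rows in larger batches for a full-table scan
    scan.setCacheBlocks(false);  // avoid polluting the block cache from a batch job

    // Wire the HBase table in as the job's input source.
    TableMapReduceUtil.initTableMapperJob(
        "webpages", scan, PageMapper.class, Text.class, IntWritable.class, job);

    job.setReducerClass(SumReducer.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileOutputFormat.setOutputPath(job, new Path(args[0]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same pattern works in reverse: with TableMapReduceUtil.initTableReducerJob you can write the reduced counts back into another HBase table instead of HDFS, which is handy if the index itself needs random, realtime lookups afterwards.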