
Using Mahout and Hadoop


I am a newbie trying to understand how Mahout and Hadoop can be used for collaborative filtering. I have a single-node Cassandra setup, and I want to fetch data from Cassandra.

Where can I find clear installation steps for Hadoop first, and then Mahout, so they work with Cassandra?


(I think this is the same question you just asked on user@mahout.apache.org? Copying my answer.)

You may not need Hadoop at all, and if you don't, I'd suggest you not use it, for simplicity's sake. It's a "necessary evil" once you need to scale past a certain point.

You can keep the data in Cassandra, but you will want to be able to read it into memory. If you can dump it as a file, you can use FileDataModel. Or, you can emulate the code in FileDataModel to create one backed by Cassandra.
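For example, assuming you can dump Cassandra into a comma-delimited file of userID,itemID,preference lines (the path below is made up), a minimal sketch of the FileDataModel route would be:

import java.io.File;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.model.DataModel;

public class CassandraDumpModel {
  public static void main(String[] args) throws Exception {
    // Each line of the dump is assumed to look like: userID,itemID,preference
    DataModel model = new FileDataModel(new File("/tmp/cassandra-dump.csv"));
    System.out.println("users: " + model.getNumUsers() + ", items: " + model.getNumItems());
  }
}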

Then, your two needs are easily answered:

  1. This is not even a recommendation problem. Just pick an implementation of UserSimilarity, use it to compare a user to all others, and pick the ones with the highest similarity. (Wrapping it with CachingUserSimilarity will help a lot.)

  2. This is just a recommender problem. Use a GenericUserBasedRecommender with your UserSimilarity and DataModel and you're done (see the sketch below).
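To make both concrete, here is a hedged sketch using Mahout's in-memory (Taste) API; the file path, neighborhood size, and user IDs are made-up values, and Pearson correlation is just one possible choice of UserSimilarity:

import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.CachingUserSimilarity;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class SimpleRecommenderSketch {
  public static void main(String[] args) throws Exception {
    DataModel model = new FileDataModel(new File("/tmp/cassandra-dump.csv"));

    // 1. "Most similar users": pick a UserSimilarity and compare one user to the rest.
    UserSimilarity similarity =
        new CachingUserSimilarity(new PearsonCorrelationSimilarity(model), model);
    double score = similarity.userSimilarity(1L, 2L); // similarity between users 1 and 2

    // 2. Recommendations: wire the similarity into a GenericUserBasedRecommender.
    UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
    Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
    List<RecommendedItem> topItems = recommender.recommend(1L, 5); // top 5 items for user 1

    System.out.println("similarity(1,2) = " + score);
    for (RecommendedItem item : topItems) {
      System.out.println(item);
    }
  }
}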

It can of course get much more complex than this, but this is a fine starting point.

If you later use Hadoop, yes, you have to set up Hadoop according to its own instructions; there is no separate Mahout "setup". For recommenders, you would look at one of the RecommenderJob classes, which invoke the necessary jobs on your Hadoop cluster. You would run it with the "hadoop" command -- again, this is where you'd just need to understand Hadoop.
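As an illustration only: RecommenderJob is a standard Hadoop Tool, so besides the "hadoop" command line it can also be launched programmatically with ToolRunner. The paths below are made up, and the available options vary between Mahout versions, so treat this as a sketch rather than the definitive invocation:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.cf.taste.hadoop.item.RecommenderJob;

public class RunRecommenderJob {
  public static void main(String[] args) throws Exception {
    // Input is assumed to be a userID,itemID,preference file already copied into HDFS.
    ToolRunner.run(new Configuration(), new RecommenderJob(), new String[] {
        "--input", "/user/danbri/prefs.csv",
        "--output", "/user/danbri/recommendations",
        "--numRecommendations", "5"
    });
  }
}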

The book Mahout in Action writes up most of the Mahout Hadoop jobs in some detail.


The book Mahout in Action did indeed just save me from a frustrating lack of docs.

I was following https://issues.apache.org/jira/browse/MAHOUT-180 ... which suggests a 'hadoop -jar' syntax that only gave me errors. The book has 'jar' instead, and with that fix my test job is happily running.

Here's what I did:

  1. Used the utility at http://bickson.blogspot.com/2011/02/mahout-svd-matrix-factorization.html?showComment=1298565709376#c3501116664672385942 to convert a CSV representation of my matrix to Mahout's file format (a rough sketch of that conversion is at the end of this answer), then copied it into the Hadoop filesystem.

  2. Uploaded mahout-examples-0.5-SNAPSHOT-job.jar from a freshly built Mahout on my laptop onto the Hadoop cluster's control box. No other Mahout components were installed there.

  3. Ran this (assuming Hadoop is configured, which I confirmed with hadoop dfs -ls /user/danbri):

hadoop jar ./mahout-examples-0.5-SNAPSHOT-job.jar \
  org.apache.mahout.math.hadoop.decomposer.DistributedLanczosSolver \
  --input svdoutput.mht --output outpath --numRows 0 --numCols 4 --rank 50

...now whether I got this right is quite another matter, but it seems to be doing something!
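Regarding step 1 above: the target format is a Hadoop SequenceFile mapping an IntWritable row index to a Mahout VectorWritable. The following is a hedged sketch of that kind of conversion (not the linked utility itself; the input and output paths and the comma delimiter are assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.VectorWritable;

public class CsvToMahoutMatrix {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path("svdoutput.mht"), IntWritable.class, VectorWritable.class);

    BufferedReader reader = new BufferedReader(new FileReader("matrix.csv"));
    String line;
    int row = 0;
    while ((line = reader.readLine()) != null) {
      String[] tokens = line.split(",");
      double[] values = new double[tokens.length];
      for (int i = 0; i < tokens.length; i++) {
        values[i] = Double.parseDouble(tokens[i]);
      }
      // Each CSV row becomes one matrix row: key = row index, value = dense vector.
      writer.append(new IntWritable(row++), new VectorWritable(new DenseVector(values)));
    }
    reader.close();
    writer.close();
  }
}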


You can follow the tutorial below to learn; it is easy to understand and clearly covers the basics of Hadoop:

http://developer.yahoo.com/hadoop/tutorial/
