How to use Cassandra's MapReduce with or without Pig?

Can someone explain how MapReduce works with Cassandra 0.6? I've read through the word count example, but I don't quite follow what's happening on the Cassandra end vs. the "client" end.

https://svn.apache.org/repos/asf/cassandra/trunk/contrib/word_count/

For instance, let's say I'm using Python and Pycassa: how would I load in a new MapReduce function, and then call it? Does my MapReduce function have to be Java that's installed on the Cassandra server? If so, how do I call it from Pycassa?

There's also mention of Pig making this all easier, but I'm a complete Hadoop noob, so that didn't really help.

Your answer can use Thrift or whatever, I just mentioned Pycassa to denote the client side. I'm just trying to understand the difference between what runs in the Cassandra cluster vs. the actual server making the requests.

Best answer

From what I've heard (and from here), the way a developer writes a MapReduce program that uses Cassandra as the data source is as follows. You write a regular MapReduce program (the example you linked to is the pure-Java version), and the jars that are now available provide a custom InputFormat that allows the input source to be Cassandra (instead of the default, which is HDFS).
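To make that concrete, below is a rough sketch of the kind of driver the linked word_count example contains: an ordinary Hadoop job, except that the input format is Cassandra's ColumnFamilyInputFormat and ConfigHelper tells it which keyspace, column family, and columns to scan. Treat the details as assumptions on my part: the keyspace/column family names ("Keyspace1", "Standard1", the "text" column), the output path, and the mapper's input types are only illustrative, and the ConfigHelper method names shown here follow the 0.6-era API (they were renamed in later releases).

    import java.util.Arrays;
    import java.util.SortedMap;

    import org.apache.cassandra.db.IColumn;
    import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CassandraWordCount {

        // In the 0.6-era example the input key is the row key and the value is the
        // map of columns selected by the slice predicate below (assumed types).
        public static class TokenizerMapper
                extends Mapper<String, SortedMap<byte[], IColumn>, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(String key, SortedMap<byte[], IColumn> columns, Context context)
                    throws java.io.IOException, InterruptedException {
                for (IColumn column : columns.values()) {
                    for (String token : new String(column.value()).split("\\s+")) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "cassandra-wordcount");
            job.setJarByClass(CassandraWordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            // This is the key line: Cassandra, not HDFS, is the input source.
            job.setInputFormatClass(ColumnFamilyInputFormat.class);

            // Tell the input format which keyspace/column family to scan and which
            // column(s) to pull for each row. Depending on the release you may also
            // need to point the job at a Cassandra Thrift address.
            ConfigHelper.setColumnFamily(job.getConfiguration(), "Keyspace1", "Standard1");
            SlicePredicate predicate = new SlicePredicate()
                    .setColumn_names(Arrays.asList("text".getBytes()));
            ConfigHelper.setSlicePredicate(job.getConfiguration(), predicate);

            // Plain text output to HDFS; writing results back to Cassandra is up to you.
            FileOutputFormat.setOutputPath(job, new Path("/tmp/word_count_output"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The job itself runs on your Hadoop cluster like any other MapReduce job; nothing gets installed into the Cassandra server, the input format just reads rows out of it.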

If you're using Pycassa I'd say you're out of luck until either (1) the maintainer of that project adds support for MapReduce or (2) you throw some Python functions together that write up a Java MapReduce program and run it. The latter is definitely a bit of a hack but would get you up and going.

Other answers

It does know about locality: the Cassandra InputFormat overrides getLocations() to preserve data locality.

The win of using a direct InputFormat from Cassandra is that it streams the data efficiently, which is a very big win. Each input split covers a range of tokens and rolls off the disk at its full bandwidth: no seeking, no complex querying. I don't think it knows about locality, though; that is, it doesn't have each TaskTracker prefer input splits from a Cassandra process on the same node.
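For what it's worth, the locality question above comes down to the standard Hadoop InputSplit contract: a split reports, via getLocations(), the hosts that hold its data, and the scheduler tries to run the map task on (or near) one of them. The class below is a hypothetical sketch of such a split, not Cassandra's actual ColumnFamilySplit, but it illustrates the getLocations() override mentioned above: each split carries a token range plus the replica endpoints that own it.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.InputSplit;

    // Hypothetical split over one Cassandra token range (illustrative only).
    public class TokenRangeSplit extends InputSplit implements Writable {
        private String startToken;
        private String endToken;
        private String[] replicaHosts;   // Cassandra nodes owning this token range

        public TokenRangeSplit() {}      // required for Writable deserialization

        public TokenRangeSplit(String startToken, String endToken, String[] replicaHosts) {
            this.startToken = startToken;
            this.endToken = endToken;
            this.replicaHosts = replicaHosts;
        }

        @Override
        public long getLength() {
            // Row counts per range are not cheap to compute; a constant is good
            // enough for scheduling purposes in this sketch.
            return 0;
        }

        @Override
        public String[] getLocations() {
            // The locality hook: the scheduler tries to place the map task on one
            // of these hosts, so a TaskTracker co-located with a Cassandra node
            // reads its own data.
            return replicaHosts;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            Text.writeString(out, startToken);
            Text.writeString(out, endToken);
            out.writeInt(replicaHosts.length);
            for (String host : replicaHosts) {
                Text.writeString(out, host);
            }
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            startToken = Text.readString(in);
            endToken = Text.readString(in);
            replicaHosts = new String[in.readInt()];
            for (int i = 0; i < replicaHosts.length; i++) {
                replicaHosts[i] = Text.readString(in);
            }
        }
    }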

You can try using Pig with the STREAM operator as a hack until more direct Hadoop streaming support is in place.





Related questions
Error in Hadoop MapReduce

When I run a MapReduce program using Hadoop, I get the following error. 10/01/18 10:52:48 INFO mapred.JobClient: Task Id : attempt_201001181020_0002_m_000014_0, Status : FAILED java.io.IOException:...

Error in using Hadoop MapReduce in Eclipse

When I executed a MapReduce program in Eclipse using Hadoop, I got the below error. It has to be some change in path, but I'm not able to figure it out. Any idea? 16:35:39 INFO mapred.JobClient: Task ...

Is MapReduce right for me?

I am working on a project that deals with analyzing a very large amount of data, so I discovered MapReduce fairly recently, and before I dive any further into it, I would like to make sure my ...

Hadoop or Hadoop Streaming for MapReduce on AWS

I'm about to start a MapReduce project which will run on AWS and I am presented with a choice, to either use Java or C++. I understand that writing the project in Java would make more functionality ...

What default reducers are available in Elastic MapReduce?

I hope I'm asking this in the right way. I'm learning my way around Elastic MapReduce and I've seen numerous references to the "Aggregate" reducer that can be used with "Streaming" job flows. In ...

Displaying access log analysis

I'm doing some work to analyse the access logs from a Catalyst web application. The data is from the load balancers in front of the web farm and totals about 35Gb per day. It's stored in a Hadoop HDFS ...
