pig puzzle: re-writing an involved reducer as a simple pig script?

I have account ids, each with a creation timestamp, grouped by username. For each of these username groups I want all pairs of (oldest account, other account).

I have an involved reducer that does this; can I re-write it as a simple Pig script?

Schema:

{group: username, A: {(id, creat_dt)}}

Input:

(batman,{(id1,100), (id2,200), (id3,50)})
(lulu  ,{(id7,100), (id9,50)})

Expected output:

(batman,{(id3,id1), (id3,id2)})
(lulu  ,{(id9,id7)})
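
For reference, a grouped relation of this shape could be produced along these lines (a hedged sketch; the input file name and raw field layout are assumptions, not from the question). Here the loaded relation is named A so the grouped bag carries that name, with id and creat_dt in the first two positions:

A        = LOAD 'accounts.tsv' AS (id:chararray, creat_dt:long, username:chararray);
my_input = GROUP A BY username;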
Best answer

Not that anyone seems to care, but here goes. You have to build a UDF:

desired =   foreach my_input generate group as n, FIND_PAIRS(A) as pairs_bag;
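
For that line to resolve FIND_PAIRS, the UDF jar has to be registered and the alias defined first; a minimal sketch (the jar name and package are assumptions):

REGISTER find-pairs.jar;
DEFINE FIND_PAIRS com.example.pig.FindPairs();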

The UDF:

import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.impl.logicalLayer.schema.Schema;
import org.apache.pig.impl.logicalLayer.schema.Schema.FieldSchema;

public class FindPairs extends EvalFunc<DataBag> {

    @Override
    public DataBag exec(Tuple input) throws IOException {
        Long pivotCreatedDate = Long.MAX_VALUE;
        Long pivot = null;

        // First pass: find the pivot, i.e. the account with the minimal creat_dt.
        // Each tuple in the bag is expected to be (id, creat_dt); ids are assumed numeric.
        DataBag accountsBag = (DataBag) input.get(0);
        for (Tuple account : accountsBag) {
            Long accountId = Long.parseLong(account.get(0).toString());
            Long creationDate = Long.parseLong(account.get(1).toString());
            if (creationDate < pivotCreatedDate) {
                pivot = accountId;
                pivotCreatedDate = creationDate;
            }
        }

        // Second pass: pair every other account with the pivot.
        DataBag allPairs = BagFactory.getInstance().newDefaultBag();
        if (pivot != null) {
            for (Tuple account : accountsBag) {
                Long accountId = Long.parseLong(account.get(0).toString());
                if (!accountId.equals(pivot)) {
                    // we don't want any self-pairs
                    Tuple output = TupleFactory.getInstance().newTuple(2);
                    if (pivot < accountId) {
                        output.set(0, pivot.toString());
                        output.set(1, accountId.toString());
                    } else {
                        output.set(0, accountId.toString());
                        output.set(1, pivot.toString());
                    }
                    allPairs.add(output);
                }
            }
        }
        return allPairs;
    }

and if you wanna play real nicely, add this:

    /**
     * Letting Pig know that we emit a bag with tuples, each representing a pair of accounts
     */
    @Override
    public Schema outputSchema(Schema input) {
        try {
            Schema pairSchema = new Schema();
            pairSchema.add(new FieldSchema(null, DataType.BYTEARRAY));
            pairSchema.add(new FieldSchema(null, DataType.BYTEARRAY));
            return new Schema(
                    new FieldSchema(null, new Schema(pairSchema), DataType.BAG));
        } catch (Exception e) {
            return null;
        }
    }
}
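
With the output schema declared, downstream statements know what pairs_bag contains; for instance, the pairs can be flattened into one row per pair (a hedged usage sketch):

pairs_flat = FOREACH desired GENERATE n, FLATTEN(pairs_bag);
DUMP pairs_flat;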

Other answers

No answers yet



