Hadoop Combiners


In the last post and in the preceding one we saw how to write a MapReduce program for finding the top-n items of a data set. The difference between the two was that the first program (which we call basic) emitted to the reducers every single item read from the input, while the second (which we call enhanced) made a partial computation and emitted only a subset of the input. The enhanced top-n optimizes network transmissions (the fewer key-value pairs emitted, the less network bandwidth is used to transmit them from mapper to reducer) and reduces the number of keys shuffled and sorted; but this comes at the cost of rewriting the mapper.

If we look at the code of the mapper of the enhanced top-n, we can see that it implements the idea behind the reducer: it uses a Map to make a partial count of the words and emits every word only once; looking at the reducer's code, we see that it implements the same idea. If we could execute the code of the reducer of the basic top-n after the mapper has run on every machine (with its subset of data), we would obtain exactly the same result as rewriting the mapper as in the enhanced version. This is exactly what Hadoop combiners do: they're executed just after the mapper on every machine to improve performance. To tell Hadoop which class to use as a combiner, we can use the Job.setCombinerClass() method.
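As a rough sketch of the job setup (the class names TopNDriver, TopNMapper and TopNReducer are placeholders of mine, not the actual classes from the previous posts), registering the reducer as the combiner looks like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TopNDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "top-n with combiner");
        job.setJarByClass(TopNDriver.class);
        job.setMapperClass(TopNMapper.class);
        // the combiner runs on each node right after the mapper, before the shuffle
        job.setCombinerClass(TopNReducer.class);
        job.setReducerClass(TopNReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}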

Caution: using the reducer as a combiner works only if the function we're computing is both commutative (a + b = b + a) and associative (a + (b + c) = (a + b) + c).
Let's make an example. Suppose we're analyzing the traffic of a website, and we have an input file with the number of visits per day, where each line holds a date in YYYYMMDD format followed by the number of visits:

20140401 100
20140331 1000
20140330 1300
20140329 5100
20140328 1200

We want to find the day with the highest number of visits.
Let's say that we have two mappers; the first one receives the first three lines and the second one receives the last two. If we write the mapper to emit every line, the reducer will evaluate something like this:

max(100, 1000, 1300, 5100, 1200) -> 5100

and the max is 5100.
If we use the reducer as a combiner, the reducer will evaluate something like this:

max( max(100, 1000, 1300), max(5100, 1200)) -> max( 1300, 5100) -> 5100

because each of the two mappers will evaluate the max function locally. In this case the result will be 5100 as well, since the function we're evaluating (the max function) is both commutative and associative.
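To make the example concrete, here is a minimal sketch (the class names, the single constant key and the Writable types are assumptions of mine, not code from the original post) of a mapper that emits every visit count and a reducer that computes the maximum; since max is commutative and associative, the very same reducer class can be registered with job.setCombinerClass(). Tracking which day produced the maximum is omitted to keep the sketch short.

// VisitsMapper.java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class VisitsMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final Text MAX_KEY = new Text("max");

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // each line looks like "20140401 100": a date and the number of visits
        String[] fields = line.toString().trim().split("\\s+");
        context.write(MAX_KEY, new IntWritable(Integer.parseInt(fields[1])));
    }
}

// MaxVisitsReducer.java - usable both as reducer and as combiner
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxVisitsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        context.write(key, new IntWritable(max));
    }
}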

Let's say that now we need to compute the average number of visits per day. If we write the mapper to emit every line of the input file, the reducer will evaluate this:

mean(100, 1000, 1300, 5100, 1200) -> 1740

which is 1740.
If we use the reducer as a combiner, the reducer will evaluate something like this:

mean( mean(100, 1000, 1300), mean(5100, 1200)) -> mean( 800, 3150) -> 1975

because each of the two mappers will evaluate the mean function locally. In this case the result will be 1975, which is obviously wrong.

So, if we're computing a commutative and associative function and we want to improve the performance of our job, we can use our reducer as a combiner; if we want to improve performance but we're computing a function that is not commutative and associative, we have to rewrite the mapper or write a new combiner from scratch.
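One common way to write such a combiner for the mean (sketched below under the assumption that the mapper emits, for every input line, the Text value "visits,1"; none of this is code from the original post) is to carry partial sums and counts through the combiner and divide only in the reducer:

// MeanCombiner.java - merges partial "sum,count" pairs without dividing
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MeanCombiner extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        long count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split(",");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new Text(sum + "," + count));
    }
}

// MeanReducer.java - merges the partial pairs and divides only at the very end
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MeanReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        long count = 0;
        for (Text value : values) {
            String[] parts = value.toString().split(",");
            sum += Long.parseLong(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        context.write(key, new DoubleWritable((double) sum / count));
    }
}

Merging partial sums and counts is itself commutative and associative, so the combiner is safe to run zero, one or many times: on the sample data the two mappers would produce "2400,3" and "6300,2", and the reducer would compute 8700 / 5 = 1740, the correct mean.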

From: http://andreaiacono.blogspot.com/2014/03/hadoop-combiners.html
