
【Hadoop】The Number of Map and Reduce Tasks - 创新互联

In Hadoop, when a job does not configure these values explicitly, the number of map tasks it runs is determined by the size of the job's input data (the exact calculation is explained below), while the number of reduce tasks defaults to 1. Why 1? Because the number of output files a job produces is determined by the number of reduce tasks, and by default a job's result is written to a single file, so the reduce count is set to 1. So how can we adjust the number of maps and reduces to speed up job execution?


Before going into the details, let's look at what the official Hadoop documentation says.

Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
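The split-size rule described above (a lower bound from mapred.min.split.size, the DFS block size as an upper bound, and mapred.map.tasks acting only as a hint via the goal size) can be sketched as follows. This standalone helper mirrors the shape of the old FileInputFormat's split-size computation, but it is an illustration, not Hadoop's actual source:

```java
public class SplitSizeSketch {
    // goalSize = total input bytes / (map count hint from mapred.map.tasks);
    // minSize comes from mapred.min.split.size; blockSize caps the split
    // from above, as the documentation describes.
    public static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }
}
```

With 10 TB of input and a small map-count hint, the goal size is enormous, so the split is capped at the 128 MB block size, yielding roughly 82,000 maps, matching the documentation's example.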

Number of Reduces
The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.
Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.
The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.
The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).

The text above explains how the numbers of maps and reduces are determined. The number of maps comes from dividing the input data size by the block size (64 MB by default), while reduces default to 1, with a recommended range determined by the number of nodes: typically nodes × the maximum number of reduce tasks per TaskTracker (default 2) × a factor between 0.95 and 1.75. Note that this figure is only an upper bound from the configuration; the actual counts at runtime also depend on the specific job settings.
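The two estimates above can be worked through numerically. The sketch below uses hypothetical cluster and input sizes (10 GB of input, the old 64 MB default block size, 10 nodes with 2 reduce slots each):

```java
public class TaskCountEstimate {
    // One map per DFS block of input: ceiling division of input size by block size.
    public static long estimateMaps(long inputBytes, long blockSize) {
        return (inputBytes + blockSize - 1) / blockSize;
    }

    // Recommended reduces: nodes * max reduce slots per node * factor (0.95 or 1.75).
    public static int recommendedReduces(int nodes, int maxSlotsPerNode, double factor) {
        return (int) (nodes * maxSlotsPerNode * factor);
    }

    public static void main(String[] args) {
        // 10 GB input / 64 MB blocks -> 160 maps
        System.out.println(estimateMaps(10L * 1024 * 1024 * 1024, 64L * 1024 * 1024));
        // 10 nodes * 2 slots * 0.95 -> 19 reduces
        System.out.println(recommendedReduces(10, 2, 0.95));
    }
}
```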

If you want to set the number of map and reduce tasks for a job, you can use the following methods.

map: To change the number of maps, you can modify the block size in the configuration file to increase or decrease the map count, or call JobConf's conf.setNumMapTasks(int num). However, even if you set a number this way, the actual count at runtime will never be smaller than the number of splits the input actually produces. That is, if you set the map count to 2 in your program but the input data is split into 3 pieces, the job will run with 3 maps, not the 2 you set.
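The rule just described can be captured in one line. This hypothetical helper (not a Hadoop API) illustrates that the requested map count is only a lower-bound hint and is never allowed below the number of input splits:

```java
public class MapCountHint {
    // The framework uses the larger of the requested map count and the
    // number of input splits; requesting fewer maps than splits has no effect.
    public static int effectiveMapCount(int requested, int numSplits) {
        return Math.max(requested, numSplits);
    }
}
```

For instance, requesting 2 maps when the input splits into 3 pieces still yields 3 maps, matching the example in the text.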

reduce: To change the number of reduces, use one of the following approaches:

In code, declare a job object and call job.setNumReduceTasks(tasks), or set it on the configuration with conf.setStrings("mapred.reduce.tasks", values);
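The two programmatic routes above can be sketched in a driver fragment. This is a minimal configuration sketch, assuming the org.apache.hadoop.mapreduce API; the job name is a placeholder, and "mapred.reduce.tasks" is the old property name used in the text (later Hadoop versions deprecate it in favor of mapreduce.job.reduces):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReduceCountConfig {
    // Configure the reduce count both ways mentioned in the text.
    public static Job configure(int reduces) throws IOException {
        Configuration conf = new Configuration();
        // Old-style property name, set directly on the configuration.
        conf.setStrings("mapred.reduce.tasks", Integer.toString(reduces));

        Job job = Job.getInstance(conf, "example-job"); // placeholder job name
        job.setNumReduceTasks(reduces); // explicit API call; takes precedence
        return job;
    }
}
```

Unlike the map count, the value set here is used as-is: the framework runs exactly this many reduce tasks.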

When submitting the job from the command line, you can instead pass runtime parameters:

bin/hadoop jar examples.jar job_name -Dmapred.map.tasks=nums -Dmapred.reduce.tasks=nums INPUT OUTPUT


