MapReduce Framework Principles


3.1.1 Splits and the Mechanism That Determines MapTask Parallelism

  1. Problem statement

  The parallelism of MapTasks determines the concurrency of the Map phase, which in turn affects the processing speed of the whole Job.

  Think about it: for 1 GB of data, starting 8 MapTasks improves the cluster's concurrent processing capability. But for 1 KB of data, does starting 8 MapTasks also improve performance? Is more MapTask parallelism always better? Which factors determine MapTask parallelism?

  2. What determines MapTask parallelism

Data block: a Block is how HDFS physically divides data into chunks.

Data split: a split only divides the input logically; the data is not physically cut up and stored in pieces on disk.
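Although the split is logical, its default size is tied to the HDFS block size. Below is a minimal sketch (not the exact Hadoop source) of how FileInputFormat in Hadoop 2.x picks the split size; minSize and maxSize correspond to mapreduce.input.fileinputformat.split.minsize and .maxsize, and with the defaults the split size equals the block size.

// sketch of the Hadoop 2.x split-size rule:
// with the default minSize = 1 and maxSize = Long.MAX_VALUE, splitSize equals blockSize
public static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
}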

3.1.2 Job Submission Flow and Split Planning Source Code Walkthrough

1. Job submission flow source-code walkthrough, as shown in the figure below

waitForCompletion()

submit();

// 1 establish the connection
connect();
// 1) create the proxy that submits the Job
new Cluster(getConfiguration());
// (1) decide whether this runs locally or on remote YARN
initialize(jobTrackAddr, conf);

// 2 submit the job
submitter.submitJobInternal(Job.this, cluster)
// 1) create the staging path used to submit data to the cluster
Path jobStagingArea = JobSubmissionFiles.getStagingDir(cluster, conf);

// 2) get the job id and create the Job path
JobID jobId = submitClient.getNewJobID();

// 3) copy the jar to the cluster

copyAndConfigureFiles(job, submitJobDir);
rUploader.uploadFiles(job, jobSubmitDir);

// 4) compute the splits and generate the split plan files
writeSplits(job, submitJobDir);
maps = writeNewSplits(job, jobSubmitDir);
input.getSplits(job);

// 5) write the XML configuration file to the staging path
writeConf(conf, submitJobFile);
conf.writeXml(out);

// 6) submit the Job and return its submission status
status = submitClient.submitJob(jobId, submitJobDir.toString(), job.getCredentials());

2. FileInputFormat split planning source code (input.getSplits(job))

3.1.3 FileInputFormat Split Mechanism

3.1.4 CombineTextInputFormat Split Mechanism

  The framework's default TextInputFormat plans splits per file: no matter how small a file is, it becomes its own split and is handed to its own MapTask. With a large number of small files this produces a large number of MapTasks, which is extremely inefficient.

  1. Use case:

  CombineTextInputFormat is meant for scenarios with many small files. It logically groups multiple small files into a single split, so that they can all be processed by one MapTask.

  2. Setting the maximum virtual-storage split size

  CombineTextInputFormat.setMaxInputSplitSize(job, 4194304);// 4m

  Note: the maximum virtual-storage split size is best chosen according to the actual sizes of the small files.

  3. Split mechanism

  Generating splits consists of two phases: the virtual-storage phase and the split phase.

(1) Virtual-storage phase:

  Compare the size of every file in the input directory with the configured setMaxInputSplitSize value. If a file is no larger than the maximum, it logically forms one block. If a file is larger than twice the maximum, cut off a block of exactly the maximum size; once the remaining size is larger than the maximum but no more than twice the maximum, split the remainder evenly into two virtual-storage blocks (to avoid producing overly small splits).

  For example, with setMaxInputSplitSize = 4M and an input file of 8.02M, a 4M block is logically carved off first. The remaining 4.02M, if divided at 4M again, would leave a tiny 0.02M virtual-storage file, so the remaining 4.02M is instead split into two files of 2.01M each.

(2) Split phase:

  (a) Check whether a virtual-storage file is at least as large as the setMaxInputSplitSize value; if so, it forms a split on its own.

  (b) If it is smaller, it is merged with the next virtual-storage file, and together they form one split.

  (c) Example: given 4 small files of 1.7M, 5.1M, 3.4M and 6.8M, virtual storage produces 6 file blocks with the following sizes:

  1.7M, (2.55M, 2.55M), 3.4M, and (3.4M, 3.4M)

  which are finally combined into 3 splits with sizes:

  (1.7+2.55)M,(2.55+3.4)M,(3.4+3.4)M
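The planning described above can be sketched in a few lines of plain Java. This is only a rough illustration of the rule, not the Hadoop source; sizes are in MB and maxSize is the value passed to setMaxInputSplitSize.

import java.util.ArrayList;
import java.util.List;

public class CombineSplitSketch {
    public static void main(String[] args) {
        double maxSize = 4.0;                          // MB, value of setMaxInputSplitSize
        double[] fileSizes = {1.7, 5.1, 3.4, 6.8};     // MB

        // Phase 1: virtual storage
        List<Double> blocks = new ArrayList<>();
        for (double size : fileSizes) {
            double remaining = size;
            while (remaining > 2 * maxSize) {          // carve off maxSize-sized blocks
                blocks.add(maxSize);
                remaining -= maxSize;
            }
            if (remaining > maxSize) {                 // split the rest in half to avoid tiny blocks
                blocks.add(remaining / 2);
                blocks.add(remaining / 2);
            } else {
                blocks.add(remaining);
            }
        }

        // Phase 2: merge consecutive virtual blocks until a split reaches maxSize
        List<Double> splits = new ArrayList<>();
        double current = 0;
        for (double b : blocks) {
            current += b;
            if (current >= maxSize) {
                splits.add(current);
                current = 0;
            }
        }
        if (current > 0) splits.add(current);          // trailing blocks form the last split

        System.out.println("virtual blocks: " + blocks); // expected: 1.7, 2.55, 2.55, 3.4, 3.4, 3.4
        System.out.println("splits: " + splits);         // expected: 4.25, 5.95, 6.8 (3 splits)
    }
}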

3.1.5 CombineTextInputFormat Hands-On Example

1. Requirement

  Merge a large number of small input files into one split so they are processed together.

  (1) Input data

  Prepare 4 small files

  (2) Expectation

  One split should handle all 4 files

2. Implementation

(1) Without any changes, run the WordCount program from 1.6 and observe that the number of splits is 4

(2) Add the following code to WordcountDriver, run the program, and observe that the number of splits is 3

  (a) Add the following code to the driver class:

// If no InputFormat is set, TextInputFormat.class is used by default
job.setInputFormatClass(CombineTextInputFormat.class);

// set the maximum virtual-storage split size to 4m
CombineTextInputFormat.setMaxInputSplitSize(job, 4194304);

  (b) The run produces 3 splits.

  

(3) Add the following code to WordcountDriver, run the program, and observe that the number of splits is 1

  (a) Add the following code to the driver:

// If no InputFormat is set, TextInputFormat.class is used by default
job.setInputFormatClass(CombineTextInputFormat.class);

// set the maximum virtual-storage split size to 20m
CombineTextInputFormat.setMaxInputSplitSize(job, 20971520);

  (b) The run produces 1 split.

  

3.1.6 FileInputFormat Implementations

3.1.7 KeyValueTextInputFormat Usage Example

1. Requirement

Count, for each distinct first word, how many lines in the input file start with that word.

(1) Input data

banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang

(2) Expected output

banzhang 2
xihuan 2

2. Requirement analysis
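KeyValueTextInputFormat hands the Mapper each line already split at the first separator (configured in the driver below as a single space), so the key is the first word and the value is the rest of the line. A minimal illustrative sketch of that behaviour (not the actual KeyValueLineRecordReader code):

String line = "banzhang ni hao";
int pos = line.indexOf(' ');                                   // only the first separator counts
String key   = (pos == -1) ? line : line.substring(0, pos);    // "banzhang"
String value = (pos == -1) ? ""   : line.substring(pos + 1);   // "ni hao"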

3. Code

(1) Mapper class

package com.atguigu.mapreduce.KeyValueTextInputFormat;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class KVTextMapper extends Mapper<Text, Text, Text, LongWritable> {

// 1 the value: each matching line counts as 1
LongWritable v = new LongWritable(1);

@Override  
protected void map(Text key, Text value, Context context)  
        throws IOException, InterruptedException {

// banzhang ni hao

    // 2 write out  
    context.write(key, v);  
}  

}

(2) Reducer class

package com.atguigu.mapreduce.KeyValueTextInputFormat;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class KVTextReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

LongWritable v = new LongWritable();  

@Override  
protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {

     long sum = 0L;  

     // 1 sum up  
    for (LongWritable value : values) {  
        sum += value.get();  
    }

    v.set(sum);  

    // 2 write out  
    context.write(key, v);  
}  

}

(3) Driver class

package com.atguigu.mapreduce.KeyValueTextInputFormat;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class KVTextDriver {

public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

    Configuration conf = new Configuration();
    // set the key-value separator
    conf.set(KeyValueLineRecordReader.KEY_VALUE_SEPERATOR, " ");
    // 1 get the job object
    Job job = Job.getInstance(conf);

    // 2 set the jar location and wire up the mapper and reducer
    job.setJarByClass(KVTextDriver.class);
    job.setMapperClass(KVTextMapper.class);
    job.setReducerClass(KVTextReducer.class);

    // 3 set the map output key/value types
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(LongWritable.class);

    // 4 set the final output key/value types
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);

    // 5 set the input path
    FileInputFormat.setInputPaths(job, new Path(args[0]));

    // set the input format
    job.setInputFormatClass(KeyValueTextInputFormat.class);

    // 6 set the output path
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // 7 submit the job
    job.waitForCompletion(true);
}  

}

3.1.8 NLineInputFormat Usage Example

1. Requirement

Count the occurrences of each word, and let the number of splits be determined by the number of lines in each input file. In this case every three lines go into one split.

(1) Input data

banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang banzhang ni hao
xihuan hadoop banzhang

(2) Expected output

Number of splits:4  (the input has 11 lines; 11 / 3, rounded up, gives 4 splits)

2. Requirement analysis

3. Code

(1) Mapper class

package com.atguigu.mapreduce.nline;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class NLineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

private Text k = new Text();  
private LongWritable v = new LongWritable(1);

@Override  
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

    // 1 read one line  
    String line = value.toString();

    // 2 split it into words  
    String[] splited = line.split(" ");

    // 3 write out each word  
    for (int i = 0; i < splited.length; i++) {

        k.set(splited[i]);

        context.write(k, v);  
    }  
}  

}

(2) Reducer class

package com.atguigu.mapreduce.nline;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class NLineReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

LongWritable v = new LongWritable();

@Override  
protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {

    long sum = 0L;

    // 1 sum up  
    for (LongWritable value : values) {  
        sum += value.get();  
    }  

    v.set(sum);

    // 2 write out  
    context.write(key, v);  
}  

}

(3) Driver class

package com.atguigu.mapreduce.nline;
import java.io.IOException;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NLineDriver {

public static void main(String[] args) throws IOException, URISyntaxException, ClassNotFoundException, InterruptedException {

    // adjust the input and output paths to the actual paths on your machine
    args = new String[] { "e:/input/inputword", "e:/output1" };

    // 1 get the job object
    Configuration configuration = new Configuration();
    Job job = Job.getInstance(configuration);

    // 7 put three records into each InputSplit
    NLineInputFormat.setNumLinesPerSplit(job, 3);

    // 8 use NLineInputFormat to process the records
    job.setInputFormatClass(NLineInputFormat.class);

    // 2 set the jar location and wire up the mapper and reducer
    job.setJarByClass(NLineDriver.class);
    job.setMapperClass(NLineMapper.class);
    job.setReducerClass(NLineReducer.class);

    // 3 set the map output key/value types
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(LongWritable.class);

    // 4 set the final output key/value types
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);

    // 5 set the input and output paths
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // 6 submit the job
    job.waitForCompletion(true);
}  

}

4. Testing

(1) Input data

banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang banzhang ni hao
xihuan hadoop banzhang

(2) Number of splits in the output, as shown in the figure:

3.1.9 Custom InputFormat

3.1.10 Custom InputFormat Hands-On Example

Both HDFS and MapReduce are very inefficient at handling small files, yet scenarios with large numbers of small files are hard to avoid, so a solution is needed. One option is a custom InputFormat that merges small files.

1. Requirement

Merge multiple small files into one SequenceFile (a Hadoop file format for storing key-value pairs in binary form). The SequenceFile holds several files, with the file path plus name as the key and the file contents as the value.

(1) Input data

yongpeng weidong weinan
sanfeng luozong xiaoming

longlong fanfan
mazong kailun yuhang yixin
longlong fanfan
mazong kailun yuhang yixin

shuaige changmo zhenqiang
dongli lingu xuanxuan

(2) Expected output file format

2. Requirement analysis

3. Implementation

(1) Custom InputFormat

package com.atguigu.mapreduce.inputformat;
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// extend FileInputFormat; each whole file becomes one record
public class WholeFileInputformat extends FileInputFormat<Text, BytesWritable> {

@Override  
protected boolean isSplitable(JobContext context, Path filename) {  
    return false;  
}

@Override  
public RecordReader<Text, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context)    throws IOException, InterruptedException {

    WholeRecordReader recordReader = new WholeRecordReader();  
    recordReader.initialize(split, context);

    return recordReader;  
}  

}

(2) Custom RecordReader class

package com.atguigu.mapreduce.inputformat;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WholeRecordReader extends RecordReader<Text, BytesWritable> {

private Configuration configuration;  
private FileSplit split;

private boolean isProgress= true;  
private BytesWritable value = new BytesWritable();  
private Text k = new Text();

@Override  
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {

    this.split = (FileSplit)split;  
    configuration = context.getConfiguration();  
}

@Override  
public boolean nextKeyValue() throws IOException, InterruptedException {

    if (isProgress) {

        // 1 allocate a buffer for the whole file  
        byte[] contents = new byte[(int) split.getLength()];

        FileSystem fs = null;  
        FSDataInputStream fis = null;

        try {  
            // 2 get the file system  
            Path path = split.getPath();  
            fs = path.getFileSystem(configuration);

            // 3 open the file  
            fis = fs.open(path);

            // 4 read the whole file content  
            IOUtils.readFully(fis, contents, 0, contents.length);

            // 5 copy the content into the value  
            value.set(contents, 0, contents.length);

            // 6 get the file path and name
            String name = split.getPath().toString();

            // 7 use it as the output key
            k.set(name);

        } catch (Exception e) {
            e.printStackTrace();
        } finally {  
            IOUtils.closeStream(fis);  
        }

        isProgress = false;

        return true;  
    }

    return false;  
}

@Override  
public Text getCurrentKey() throws IOException, InterruptedException {  
    return k;  
}

@Override  
public BytesWritable getCurrentValue() throws IOException, InterruptedException {  
    return value;  
}

@Override  
public float getProgress() throws IOException, InterruptedException {  
    return 0;  
}

@Override  
public void close() throws IOException {  
}  

}

(3) SequenceFileMapper class

package com.atguigu.mapreduce.inputformat;
import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SequenceFileMapper extends Mapper<Text, BytesWritable, Text, BytesWritable> {

@Override  
protected void map(Text key, BytesWritable value, Context context) throws IOException, InterruptedException {

    context.write(key, value);  
}  

}

(4) SequenceFileReducer class

package com.atguigu.mapreduce.inputformat;
import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SequenceFileReducer extends Reducer<Text, BytesWritable, Text, BytesWritable> {

@Override  
protected void reduce(Text key, Iterable<BytesWritable> values, Context context) throws IOException, InterruptedException {

    context.write(key, values.iterator().next());  
}  

}

(5) SequenceFileDriver class

package com.atguigu.mapreduce.inputformat;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SequenceFileDriver {

public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

    // adjust the input and output paths to the actual paths on your machine
    args = new String[] { "e:/input/inputinputformat", "e:/output1" };

    // 1 get the job object
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf);

    // 2 set the jar location and wire up the custom mapper and reducer
    job.setJarByClass(SequenceFileDriver.class);
    job.setMapperClass(SequenceFileMapper.class);
    job.setReducerClass(SequenceFileReducer.class);

    // 7 set the input format
    job.setInputFormatClass(WholeFileInputformat.class);

    // 8 set the output format
    job.setOutputFormatClass(SequenceFileOutputFormat.class);

    // 3 set the map output key/value types
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(BytesWritable.class);

    // 4 set the final output key/value types
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(BytesWritable.class);

    // 5 set the input and output paths
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // 6 submit the job
    boolean result = job.waitForCompletion(true);
    System.exit(result ? 0 : 1);
}  

}
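To check the result, the SequenceFile that the job wrote can be read back with the standard SequenceFile.Reader API. This is a hedged sketch: the class name SequenceFileCheck and the output path are assumptions; point it at the part file your job actually produced.

package com.atguigu.mapreduce.inputformat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileCheck {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // assumed output location; adjust to the part file the job wrote
        Path path = new Path("e:/output1/part-r-00000");

        try (SequenceFile.Reader reader =
                 new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            Text key = new Text();
            BytesWritable value = new BytesWritable();
            // each record is (file path + name, file contents)
            while (reader.next(key, value)) {
                System.out.println(key + " -> " + value.getLength() + " bytes");
            }
        }
    }
}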

1. Flow diagram, as shown in the figure

2. Detailed flow

  The flow above is the complete MapReduce workflow, but the Shuffle process only covers steps 7 through 16. The Shuffle process in detail:

  1) The MapTask collects the kv pairs emitted by our map() method and puts them into an in-memory buffer

  2) The in-memory buffer keeps spilling to local disk files; multiple spill files may be produced

  3) The multiple spill files are merged into one large spill file

  4) During both the spill and the merge, the Partitioner is invoked to partition the data and the keys are sorted

  5) Each ReduceTask fetches, from every MapTask machine, the result data belonging to its own partition

  6) A ReduceTask pulls the result files of the same partition from different MapTasks and merges them again (merge sort)

  7) Once the merge into a large file is done, the Shuffle process is over; what follows is the ReduceTask's own logic: take key-value groups out of the file one by one and call the user-defined reduce() method

3. Note

  The buffer size used in Shuffle affects the efficiency of a MapReduce program: in principle, the larger the buffer, the fewer disk I/O operations and the faster the execution.

  The buffer size can be tuned through a parameter: io.sort.mb, which defaults to 100 MB.
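For example, the buffer could be raised in the driver before the Job is created. A hedged sketch: in Hadoop 2.x the current property name is mapreduce.task.io.sort.mb (io.sort.mb is the older, deprecated name), and the value 200 is only an illustration.

Configuration conf = new Configuration();
// mapreduce.task.io.sort.mb is the Hadoop 2.x name of the legacy io.sort.mb parameter
conf.setInt("mapreduce.task.io.sort.mb", 200);   // 200 MB sort buffer instead of the default 100 MB
Job job = Job.getInstance(conf);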

4. Source-code call trace

context.write(k, NullWritable.get());
  output.write(key, value);
    collector.collect(key, value,partitioner.getPartition(key, value, partitions));
    HashPartitioner();
    collect()
     close()
       collect.flush()
          sortAndSpill()
           sort() QuickSort
          mergeParts();
        

        collector.close();

3.3.1 The Shuffle Mechanism

  The data processing that happens after the Map method and before the Reduce method is called Shuffle, as shown in the figure.

  

3.3.2 Partitioning

3.3.3 Partitioning Hands-On Example

1. Requirement

Output the statistics into different files (partitions) according to the province each phone number belongs to

(1) Input data

1 13736230513 192.196.100.1 www.atguigu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.atguigu.com 1527 2106 200
6 84188413 192.168.100.3 www.atguigu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.atguigu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200

(2) Expected output

Phone numbers starting with 136, 137, 138 and 139 each go into their own file (4 separate files), and numbers with any other prefix go into a fifth file.

2. Requirement analysis

3. Building on case 2.4, add a partitioner class

package com.atguigu.mapreduce.flowsum;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ProvincePartitioner extends Partitioner<Text, FlowBean> {

@Override  
public int getPartition(Text key, FlowBean value, int numPartitions) {

    // 1 take the first three digits of the phone number  
    String preNum = key.toString().substring(0, 3);

    int partition = 4;

    // 2 decide which province (partition) it belongs to  
    if ("136".equals(preNum)) {  
        partition = 0;  
    }else if ("137".equals(preNum)) {  
        partition = 1;  
    }else if ("138".equals(preNum)) {  
        partition = 2;  
    }else if ("139".equals(preNum)) {  
        partition = 3;  
    }

    return partition;  
}  

}

4. In the driver, register the custom partitioner and set the number of ReduceTasks

package com.atguigu.mapreduce.flowsum;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowsumDriver {

public static void main(String[] args) throws IllegalArgumentException, IOException, ClassNotFoundException, InterruptedException {

    // adjust the input and output paths to the actual paths on your machine  
    args = new String[]{"e:/output1","e:/output2"};

    // 1 get the configuration and the job instance  
    Configuration configuration = new Configuration();  
    Job job = Job.getInstance(configuration);

    // 2 specify the local path of this program's jar  
    job.setJarByClass(FlowsumDriver.class);

    // 3 specify the mapper/reducer classes this job uses  
    job.setMapperClass(FlowCountMapper.class);  
    job.setReducerClass(FlowCountReducer.class);

    // 4 specify the mapper output key/value types  
    job.setMapOutputKeyClass(Text.class);  
    job.setMapOutputValueClass(FlowBean.class);

    // 5 specify the final output key/value types  
    job.setOutputKeyClass(Text.class);  
    job.setOutputValueClass(FlowBean.class);

    // 8 register the custom partitioner  
    job.setPartitionerClass(ProvincePartitioner.class);

    // 9 and set a matching number of reduce tasks (one per partition 0-4)  
    job.setNumReduceTasks(5);

    // 6 specify the directory of the job's input files  
    FileInputFormat.setInputPaths(job, new Path(args[0]));  
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // 7 submit the job's configuration and the jar containing its classes to YARN  
    boolean result = job.waitForCompletion(true);  
    System.exit(result ? 0 : 1);  
}  

}

3.3.4 WritableComparable Sorting

1. Types of sorting

2. Custom sorting with WritableComparable

(1) How it works

When a bean object is passed as the key, it must implement the WritableComparable interface and override compareTo; that is all it takes to get it sorted.

@Override
public int compareTo(FlowBean o) {

int result;

// sort by total traffic, in descending order  
if (sumFlow > o.getSumFlow()) {  
    result = -1;  
} else if (sumFlow < o.getSumFlow()) {  
    result = 1;  
} else {  
    result = 0;  
}

return result;  

}

3.3.5 WritableComparable Sorting Hands-On Example (Total Order Sort)

1. Requirement

Sort the output produced in case 2.3 again, this time by total traffic.

(1) Input data

    Raw data                                Data after the first pass

phone_data.txt
1 13736230513 192.196.100.1 www.atguigu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.atguigu.com 1527 2106 200
6 84188413 192.168.100.3 www.atguigu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.atguigu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200

part-r-00000
13470253144 180 180 360
13509468723 7335 110349 117684
13560439638 918 4938 5856
13568436656 3597 25635 29232
13590439668 1116 954 2070
13630577991 6960 690 7650
13682846555 1938 2910 4848
13729199489 240 0 240
13736230513 2481 24681 27162
13768778790 120 120 240
13846544121 264 0 264
13956435636 132 1512 1644
13966251146 240 0 240
13975057813 11058 48243 59301
13992314666 3008 3720 6728
15043685818 3659 3538 7197
15910133277 3156 2936 6092
15959002129 1938 180 2118
18271575951 1527 2106 3633
18390173782 9531 2412 11943
84188413 4116 1432 5548

(2) Expected output

13509468723   7335  110349  117684

13736230513   2481  24681    27162

13956435636   132    1512      1644

13846544121   264    0       264

... ...

2. Requirement analysis

3. Code

(1) The FlowBean object adds comparison support on top of requirement 1

package com.atguigu.mapreduce.sort;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class FlowBean implements WritableComparable<FlowBean> {

private long upFlow;  
private long downFlow;  
private long sumFlow;

// deserialization uses reflection to call the no-arg constructor, so it must exist  
public FlowBean() {  
    super();  
}

public FlowBean(long upFlow, long downFlow) {  
    super();  
    this.upFlow = upFlow;  
    this.downFlow = downFlow;  
    this.sumFlow = upFlow + downFlow;  
}

public void set(long upFlow, long downFlow) {  
    this.upFlow = upFlow;  
    this.downFlow = downFlow;  
    this.sumFlow = upFlow + downFlow;  
}

public long getSumFlow() {  
    return sumFlow;  
}

public void setSumFlow(long sumFlow) {  
    this.sumFlow = sumFlow;  
}    

public long getUpFlow() {  
    return upFlow;  
}

public void setUpFlow(long upFlow) {  
    this.upFlow = upFlow;  
}

public long getDownFlow() {  
    return downFlow;  
}

public void setDownFlow(long downFlow) {  
    this.downFlow = downFlow;  
}

/**  
 * serialization method  
 * @param out  
 * @throws IOException  
 */  
@Override  
public void write(DataOutput out) throws IOException {  
    out.writeLong(upFlow);  
    out.writeLong(downFlow);  
    out.writeLong(sumFlow);  
}

/**  
 * deserialization method; note that the field order must exactly match the serialization order  
 * @param in  
 * @throws IOException  
 */  
@Override  
public void readFields(DataInput in) throws IOException {  
    upFlow = in.readLong();  
    downFlow = in.readLong();  
    sumFlow = in.readLong();  
}

@Override  
public String toString() {  
    return upFlow + "\t" + downFlow + "\t" + sumFlow;  
}

@Override  
public int compareTo(FlowBean o) {

    int result;

    // sort by total traffic, in descending order  
    if (sumFlow > o.getSumFlow()) {  
        result = -1;  
    } else if (sumFlow < o.getSumFlow()) {  
        result = 1;  
    } else {  
        result = 0;  
    }

    return result;  
}  

}

(2) Mapper class

package com.atguigu.mapreduce.sort;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FlowCountSortMapper extends Mapper<LongWritable, Text, FlowBean, Text> {

FlowBean bean = new FlowBean();  
Text v = new Text();

@Override  
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

    // 1 read one line  
    String line = value.toString();

    // 2 split the fields  
    String[] fields = line.split("\t");

    // 3 populate the objects  
    String phoneNbr = fields[0];  
    long upFlow = Long.parseLong(fields[1]);  
    long downFlow = Long.parseLong(fields[2]);

    bean.set(upFlow, downFlow);  
    v.set(phoneNbr);

    // 4 write out  
    context.write(bean, v);  
}  

}

(3) Reducer class

package com.atguigu.mapreduce.sort;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class FlowCountSortReducer extends Reducer<FlowBean, Text, Text, FlowBean> {

@Override  
protected void reduce(FlowBean key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

    // loop over the values to handle phones with identical total traffic  
    for (Text text : values) {  
        context.write(text, key);  
    }  
}  

}

(4) Driver class

package com.atguigu.mapreduce.sort;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowCountSortDriver {

public static void main(String[] args) throws ClassNotFoundException, IOException, InterruptedException {

    // adjust the input and output paths to the actual paths on your machine  
    args = new String[]{"e:/output1","e:/output2"};

    // 1 get the configuration and the job instance  
    Configuration configuration = new Configuration();  
    Job job = Job.getInstance(configuration);

    // 2 specify the local path of this program's jar  
    job.setJarByClass(FlowCountSortDriver.class);

    // 3 specify the mapper/reducer classes this job uses  
    job.setMapperClass(FlowCountSortMapper.class);  
    job.setReducerClass(FlowCountSortReducer.class);

    // 4 specify the mapper output key/value types  
    job.setMapOutputKeyClass(FlowBean.class);  
    job.setMapOutputValueClass(Text.class);

    // 5 specify the final output key/value types  
    job.setOutputKeyClass(Text.class);  
    job.setOutputValueClass(FlowBean.class);

    // 6 specify the directory of the job's input files  
    FileInputFormat.setInputPaths(job, new Path(args[0]));  
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // 7 submit the job's configuration and the jar containing its classes to YARN  
    boolean result = job.waitForCompletion(true);  
    System.exit(result ? 0 : 1);  
}  

}
