| Package | Description |
|---|---|
| org.apache.hadoop.contrib.index.example | |
| org.apache.hadoop.mapred | A software framework for easily writing applications that process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
| org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
| org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |
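Hadoop Streaming's contract is simply line-oriented standard input and output: the framework feeds input records to the child process one line at a time and parses each line the process writes back as a tab-separated key/value pair. A minimal word-count mapper in that protocol might look like the following (an illustrative sketch, not part of Hadoop; any executable in any language would serve):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Illustrative word-count mapper following the Streaming protocol:
// one input record per stdin line, one "key<TAB>value" pair per stdout line.
public class StreamingMapperSketch {

    // Turn one input line into zero or more "word\t1" output records.
    static List<String> map(String line) {
        List<String> out = new ArrayList<>();
        for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty()) {
                out.add(word + "\t1");
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String record : map(line)) {
                System.out.println(record);
            }
        }
    }
}
```

Streaming would then pair this executable with a reducer that receives the sorted "word<TAB>1" lines on its own stdin and sums the counts per word.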
| Constructor and Description |
|---|
| LineDocRecordReader(Configuration job, FileSplit split) Constructor |
| Constructor and Description |
|---|
| KeyValueLineRecordReader(Configuration job, FileSplit split) |
| LineRecordReader(Configuration job, FileSplit split) |
| SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader(Configuration conf, FileSplit split) |
| SequenceFileAsTextRecordReader(Configuration conf, FileSplit split) |
| SequenceFileRecordReader(Configuration conf, FileSplit split) |
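Each record reader above takes a FileSplit naming a file plus a byte range, and a line-oriented reader must cope with split boundaries that fall mid-line. A plain-Java stand-in (illustrative only; this is not Hadoop's LineRecordReader and uses no Hadoop classes) shows the usual convention: skip the partial first line unless the split starts at offset 0, and keep reading while a line *starts* at or before the split's end, so the line that crosses the boundary belongs to exactly one split:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a line-oriented record reader over a FileSplit's
// byte range [start, start + length). Not Hadoop code.
public class LineSplitReaderSketch {

    static List<String> readSplit(java.nio.file.Path file, long start, long length) throws IOException {
        List<String> lines = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(start);
            if (start != 0) {
                raf.readLine(); // discard the (possibly partial) first line; the previous split reads it
            }
            long end = start + length;
            // A line is owned by this split if it starts at or before 'end',
            // so the reader may consume characters past the upper boundary.
            while (raf.getFilePointer() <= end) {
                String line = raf.readLine();
                if (line == null) break; // end of file
                lines.add(line);
            }
        }
        return lines;
    }
}
```

With a 16-byte file of four 4-byte lines, splitting it as [0, 8) and [8, 16) yields the first three lines from the first split and only the last line from the second: no line is lost or duplicated even though the byte boundary cuts "ccc" away from the second split's nominal start.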
| Modifier and Type | Method and Description |
|---|---|
| protected static FileSplit | NLineInputFormat.createFileSplit(Path fileName, long begin, long length) NLineInputFormat uses LineRecordReader, which always reads (and consumes) at least one character beyond its upper split boundary. |
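The note above is the reason createFileSplit exists: because LineRecordReader consumes past a split's upper boundary and discards the first partial line of any split that does not start at offset 0, NLineInputFormat shifts each non-initial split's start back so every line lands in exactly one split. The adjustment can be sketched as follows (illustrative plain Java under that assumption; the class and method names are not Hadoop's, and Hadoop's actual arithmetic may differ in detail):

```java
// Illustrative sketch of the one-byte boundary adjustment (not Hadoop source).
public class NLineSplitSketch {

    // Given the logical byte range [begin, begin + length) covering N lines,
    // return {adjustedBegin, adjustedLength} for the FileSplit to create.
    static long[] adjust(long begin, long length) {
        if (begin == 0) {
            return new long[] { begin, length };      // first split: nothing to back over
        }
        return new long[] { begin - 1, length + 1 };  // later splits: start one byte early
    }
}
```

Starting one byte early guarantees the reader's skip-first-partial-line step lands exactly at the split's intended first line, rather than silently dropping it.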
| Modifier and Type | Method and Description |
|---|---|
| static FileSplit | StreamUtil.getCurrentSplit(JobConf job) |
| Constructor and Description |
|---|
| StreamBaseRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
| StreamXmlRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
Copyright © 2009 The Apache Software Foundation