| Package | Description |
|---|---|
| org.apache.hadoop.contrib.index.example | |
| org.apache.hadoop.contrib.index.mapred | |
| org.apache.hadoop.contrib.utils.join | |
| org.apache.hadoop.examples | Hadoop example code. |
| org.apache.hadoop.examples.dancing | This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
| org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
| org.apache.hadoop.filecache | |
| org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. A minimal driver sketch follows this table. |
| org.apache.hadoop.mapred.jobcontrol | Utilities for managing dependent jobs. |
| org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
| org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
| org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
| org.apache.hadoop.mapred.lib.db | This package contains a library to read records from a database as input to a MapReduce job, and to write the output records back to the database. |
| org.apache.hadoop.mapred.pipes | Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce. |
| org.apache.hadoop.mapreduce | |
| org.apache.hadoop.mapreduce.server.jobtracker | |
| org.apache.hadoop.mapreduce.server.tasktracker | |
| org.apache.hadoop.mapreduce.server.tasktracker.userlogs | |
| org.apache.hadoop.mapreduce.split | |
| org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer. |
| org.apache.hadoop.tools | |
| org.apache.hadoop.tools.rumen | Rumen is a data extraction and analysis tool built for Apache Hadoop. |
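The org.apache.hadoop.mapred entry above describes the classic Java MapReduce API to which most of the classes listed below belong. The following driver is a minimal, hedged sketch of how those classes (JobConf, Mapper, Reducer, FileInputFormat, FileOutputFormat, JobClient) are typically wired together; the WordCount class, its nested mapper and reducer, and the path arguments are illustrative names, not part of this listing.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// Illustrative sketch of a classic-API (org.apache.hadoop.mapred) job.
public class WordCount {

  // Mapper: splits each input line into tokens and emits (token, 1).
  public static class TokenMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class SumReducer extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(TokenMapper.class);
    conf.setCombinerClass(SumReducer.class);
    conf.setReducerClass(SumReducer.class);
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Submit the job and block until it completes.
    JobClient.runJob(conf);
  }
}
```

The JobConf is also where the partitioner, the number of reduce tasks, and other per-job settings would be configured; JobClient.runJob blocks until the job finishes.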
| Class | Description |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. A mapper sketch follows this table. |
| OutputCollector | |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
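As a hedged illustration of the Mapper, OutputCollector and Reporter entries above, the sketch below keeps only input lines containing a fixed marker and reports a custom counter; the class name ErrorLineMapper and its counter names are hypothetical.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical mapper: keeps only lines containing "ERROR" and counts the
// lines it has examined via the Reporter facility.
public class ErrorLineMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  private static final LongWritable ONE = new LongWritable(1);

  public void map(LongWritable offset, Text line,
                  OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    // Custom counters are reported through the Reporter.
    reporter.incrCounter("ErrorLineMapper", "LINES_SEEN", 1);
    if (line.toString().contains("ERROR")) {
      // Intermediate pairs are handed to the framework via the OutputCollector.
      output.collect(line, ONE);
    }
  }
}
```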
| Class | Description |
|---|---|
| FileOutputFormat | A base class for OutputFormat. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | |
| OutputCollector | |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. A reducer sketch follows this table. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
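The Reducer and OutputCollector entries above can be illustrated with the following hedged sketch, which emits the largest value seen for each key; the class name MaxValueReducer is hypothetical.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical reducer: for each key, emits the largest of the intermediate
// values that share that key (the "smaller set of values" described above).
public class MaxValueReducer extends MapReduceBase
    implements Reducer<Text, LongWritable, Text, LongWritable> {

  public void reduce(Text key, Iterator<LongWritable> values,
                     OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    long max = Long.MIN_VALUE;
    // In the classic API the grouped values arrive as an Iterator, not an Iterable.
    while (values.hasNext()) {
      max = Math.max(max, values.next().get());
    }
    output.collect(key, new LongWritable(max));
  }
}
```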
| Class | Description |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCollector | |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | |
| MultiFileInputFormat | Deprecated. Use CombineFileInputFormat instead. |
| MultiFileSplit | Deprecated. Use CombineFileSplit instead. |
| OutputCollector | |
| Partitioner | Partitions the key space. A partitioner sketch follows this table. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |
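As a hedged illustration of the Partitioner entry above, the following sketch assigns each intermediate key to a reduce task by hashing the key; the class name HashKeyPartitioner is hypothetical.

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Hypothetical partitioner: routes each intermediate key to a reduce task by
// hashing the key, so all values for a key land in the same partition.
public class HashKeyPartitioner implements Partitioner<Text, LongWritable> {

  @Override
  public int getPartition(Text key, LongWritable value, int numPartitions) {
    // Mask off the sign bit so the result is always a valid partition index.
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }

  @Override
  public void configure(JobConf job) {
    // Partitioner extends JobConfigurable; nothing to configure in this sketch.
  }
}
```

Such a class would be registered on the job with conf.setPartitionerClass(HashKeyPartitioner.class); the framework already uses a hash-based partitioner by default, so a custom one is only needed when keys must be routed differently.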
| Class | Description |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | |
| OutputCollector | |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileOutputFormat | A base class for OutputFormat. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | |
| OutputCollector | |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| TextOutputFormat | An OutputFormat that writes plain text files. |
| Class | Description |
|---|---|
| InvalidJobConfException | This exception is thrown when the jobconf misses some mandatory attributes or the value of some attributes is invalid. |
| TaskController | Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs. |
| Class | Description |
|---|---|
| AdminOperationsProtocol | Protocol for admin operations. |
| CleanupQueue | |
| ClusterStatus | Status information on the current state of the Map-Reduce cluster. |
| Counters | A set of named counters. |
| Counters.Counter | A counter record, comprising its name and value. |
| Counters.Group | Group of counters, comprising counters from a particular counter Enum class. |
| FileAlreadyExistsException | Used when the target file already exists for any operation and is not configured to be overwritten. |
| FileInputFormat | A base class for file-based InputFormat. |
| FileInputFormat.Counter | |
| FileOutputFormat | A base class for OutputFormat. |
| FileOutputFormat.Counter | |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| InvalidJobConfException | This exception is thrown when the jobconf misses some mandatory attributes or the value of some attributes is invalid. |
| JobClient.TaskStatusFilter | |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| JobContext | |
| JobHistory.JobInfo | Helper class for logging or reading back events related to job start, finish or failure. |
| JobHistory.Keys | Job history files contain key="value" pairs, where keys belong to this enum. |
| JobHistory.Listener | Callback interface for reading back log events from JobHistory. |
| JobHistory.RecordTypes | Record types are identifiers for each line of log in history files. |
| JobHistory.Task | Helper class for logging or reading back events related to a Task's start, finish or failure. |
| JobHistory.TaskAttempt | Base class for Map and Reduce TaskAttempts. |
| JobHistory.Values | This enum contains some of the values commonly used by history log events. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| JobInProgress | JobInProgress maintains all the info for keeping a Job on the straight and narrow. |
| JobInProgress.Counter | |
| JobPriority | Used to describe the priority of the running job. |
| JobProfile | A JobProfile is a MapReduce primitive. |
| JobQueueInfo | Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework. |
| JobStatus | Describes the current status of a job. |
| JobTracker | JobTracker is the central location for submitting and tracking MR jobs in a network environment. |
| JobTracker.SafeModeAction | JobTracker SafeMode. |
| JobTracker.State | |
| JobTrackerMXBean | The MXBean interface for JobTrackerInfo. |
| JvmTask | |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapRunnable | Expert: Generic interface for Mappers. |
| MapTaskCompletionEventsUpdate | A class that represents the communication between the tasktracker and child tasks w.r.t. the map task completion events. |
| Operation | Generic operation that maps to the dependent set of ACLs that drive the authorization of the operation. |
| OutputCollector | |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| QueueAclsInfo | Class to encapsulate Queue ACLs for a particular user. |
| RawKeyValueIterator | RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. A submission and polling sketch follows this table. |
| SequenceFileInputFilter.Filter | Filter interface. |
| SequenceFileInputFilter.FilterBase | Base class for Filters. |
| SequenceFileInputFormat | An InputFormat for SequenceFiles. |
| SequenceFileOutputFormat | An OutputFormat that writes SequenceFiles. |
| Task | Base class for tasks. |
| Task.CombinerRunner | |
| Task.Counter | |
| Task.TaskReporter | |
| TaskAttemptContext | |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| TaskCompletionEvent.Status | |
| TaskController | Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs. |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskLog.LogName | The filter for userlogs. |
| TaskReport | A report on the state of a task. |
| TaskStatus | Describes the current status of a task. |
| TaskStatus.Phase | |
| TaskStatus.State | |
| TaskTracker | TaskTracker is a process that starts and tracks MR Tasks in a networked environment. |
| TaskTrackerMXBean | MXBean interface for the TaskTracker. |
| TaskTrackerStatus | A TaskTrackerStatus is a MapReduce primitive. |
| TaskUmbilicalProtocol | Protocol that the task child process uses to contact its parent process. |
| TIPStatus | The states of a TaskInProgress as seen by the JobTracker. |
| Utils.OutputFileUtils.OutputLogFilter | This class filters log files from a given directory; it does not accept paths containing _logs. |
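The RunningJob, Counters and JobStatus entries above cover job submission and monitoring. The following is a hedged sketch of asynchronous submission and progress polling; the class name SubmitAndPoll and the five-second polling interval are illustrative choices, not prescribed by the API.

```java
import java.io.IOException;

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

// Hypothetical driver fragment: submits an already-populated JobConf
// asynchronously and polls the RunningJob handle for progress and counters.
public class SubmitAndPoll {

  public static void runAndReport(JobConf conf)
      throws IOException, InterruptedException {
    JobClient client = new JobClient(conf);
    RunningJob job = client.submitJob(conf);   // returns immediately

    while (!job.isComplete()) {
      System.out.printf("map %3.0f%%  reduce %3.0f%%%n",
          job.mapProgress() * 100, job.reduceProgress() * 100);
      Thread.sleep(5000);
    }

    if (job.isSuccessful()) {
      // Counters aggregates the named counters reported by all tasks.
      Counters counters = job.getCounters();
      System.out.println(counters);
    } else {
      System.err.println("Job failed: " + job.getID());
    }
  }
}
```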
| Class | Description |
|---|---|
| JobClient | JobClient is the primary interface for the user-job to interact with the JobTracker. |
| JobConf | A map/reduce job configuration. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileOutputFormat | A base class for OutputFormat. |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | |
| MapRunnable | Expert: Generic interface for Mappers. |
| OutputCollector | |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCollector | |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| JobConf | A map/reduce job configuration. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |
| Class | Description |
|---|---|
| ID | A general identifier, which internally stores the id as an integer. |
| JobClient | JobClient is the primary interface for the user-job to interact with the JobTracker. |
| RawKeyValueIterator | RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| Class | Description |
|---|---|
| JobInProgress | JobInProgress maintains all the info for keeping a Job on the straight and narrow. |
| TaskTrackerStatus | A TaskTrackerStatus is a MapReduce primitive. |
| Class | Description |
|---|---|
| Task | Base class for tasks. |
| Class | Description |
|---|---|
| TaskController | Controls initialization, finalization and clean up of tasks, and also the launching and killing of task JVMs. |
| UserLogCleaner | This is used only in UserLogManager, to manage cleanup of user logs. |
| Class | Description |
|---|---|
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Class | Description |
|---|---|
| FileInputFormat | A base class for file-based InputFormat. |
| FileSplit | A section of an input file. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobClient | JobClient is the primary interface for the user-job to interact with the JobTracker. |
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| KeyValueTextInputFormat | An InputFormat for plain text files. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapRunnable | Expert: Generic interface for Mappers. A MapRunnable sketch follows this table. |
| MapRunner | Default MapRunnable implementation. |
| OutputCollector | |
| RecordReader | RecordReader reads <key, value> pairs from an InputSplit. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. |
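The MapRunnable and MapRunner entries above describe the expert hook that drives a Mapper over a RecordReader. The following is a hedged sketch of that pattern, not the shipped MapRunner implementation; the class name SimpleMapRunner is hypothetical.

```java
import java.io.IOException;

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapRunnable;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical MapRunnable: drives the configured Mapper by hand, pulling
// <key, value> pairs from the RecordReader for the task's InputSplit.
public class SimpleMapRunner<K1, V1, K2, V2> implements MapRunnable<K1, V1, K2, V2> {

  private Mapper<K1, V1, K2, V2> mapper;

  @Override
  @SuppressWarnings("unchecked")
  public void configure(JobConf job) {
    // Instantiate whatever Mapper class the JobConf names.
    mapper = ReflectionUtils.newInstance(job.getMapperClass(), job);
  }

  @Override
  public void run(RecordReader<K1, V1> input, OutputCollector<K2, V2> output,
                  Reporter reporter) throws IOException {
    K1 key = input.createKey();
    V1 value = input.createValue();
    try {
      while (input.next(key, value)) {
        mapper.map(key, value, output, reporter);
        reporter.progress();   // let the framework know the task is still alive
      }
    } finally {
      mapper.close();
    }
  }
}
```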
| Class | Description |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobConfigurable | Something that may be configured. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| MapReduceBase | |
| OutputCollector | |
| Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. |
| Class | Description |
|---|---|
| JobConf | A map/reduce job configuration. |
| JobPriority | Used to describe the priority of the running job. |
| TaskStatus.State | |
Copyright © 2009 The Apache Software Foundation