org.apache.pig.impl.io
Class ReadToEndLoader
java.lang.Object
org.apache.pig.LoadFunc
org.apache.pig.impl.io.ReadToEndLoader
- All Implemented Interfaces:
- LoadMetadata
public class ReadToEndLoader
extends LoadFunc
implements LoadMetadata
This is a wrapper loader: it wraps a real LoadFunc underneath and allows a file
to be read completely, starting at a given split (indicated by a split index,
which is used to look up the List returned by the underlying InputFormat's
getSplits() method). If the supplied split index is 0, this loader reads the
entire file; if it is non-zero, it reads the partial file beginning at that
split and continuing through the last split.
The call sequence to use this is:
1) Construct an object using one of the constructors.
2) Call getNext() in a loop until it returns null, as in the sketch below.
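A minimal usage sketch (the class name, input path, and the choice of PigStorage as the wrapped LoadFunc are illustrative, not part of this API):

import org.apache.hadoop.conf.Configuration;
import org.apache.pig.builtin.PigStorage;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.io.ReadToEndLoader;

public class DumpInput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Split index 0 => read the whole file, from the first split to the last.
        ReadToEndLoader loader = new ReadToEndLoader(
                new PigStorage(), conf, "/tmp/input.txt", 0);
        Tuple t;
        while ((t = loader.getNext()) != null) { // null signals end of input
            System.out.println(t);
        }
    }
}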
Constructor Summary
ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int splitIndex)

ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int[] toReadSplitIdxs)
This constructor takes an array (toReadSplitIdxs) of the indexes of the splits to be read.

ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int splitIndex,
PigContext pigContext)

ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int splitIndex,
String signature)

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
ReadToEndLoader
public ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int splitIndex)
throws IOException
- Parameters:
wrappedLoadFunc
conf
inputLocation
splitIndex
- Throws:
IOException
ReadToEndLoader
public ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int splitIndex,
PigContext pigContext)
throws IOException
- Throws:
IOException
ReadToEndLoader
public ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int splitIndex,
String signature)
throws IOException
- Throws:
IOException
ReadToEndLoader
public ReadToEndLoader(LoadFunc wrappedLoadFunc,
org.apache.hadoop.conf.Configuration conf,
String inputLocation,
int[] toReadSplitIdxs)
throws IOException
- This constructor takes an array (toReadSplitIdxs) of the indexes of the splits to be read.
- Parameters:
wrappedLoadFunc
conf
inputLocation
toReadSplitIdxs
- Throws:
IOException
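For illustration, a sketch of reading only selected splits with this variant (the indexes and path are made up, and the input must have enough splits for them to exist; imports as in the earlier sketch):

int[] toReadSplitIdxs = new int[] { 0, 2 };
ReadToEndLoader loader = new ReadToEndLoader(
        new PigStorage(), new Configuration(), "/tmp/input.txt", toReadSplitIdxs);
Tuple t;
while ((t = loader.getNext()) != null) {
    // only tuples from splits 0 and 2 are returned
}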
getNext
public Tuple getNext()
throws IOException
- Description copied from class:
LoadFunc
- Retrieves the next tuple to be processed. Implementations should NOT reuse
tuple objects (or inner member objects) they return across calls and
should return a different tuple object in each call.
- Specified by: getNext in class LoadFunc
- Returns:
- the next tuple to be processed, or null if there are no more tuples to be processed.
- Throws:
IOException - if there is an exception while retrieving the next tuple
getInputFormat
public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
throws IOException
- Description copied from class:
LoadFunc
- This will be called during planning on the front end. This is the
instance of InputFormat (rather than the class name) because the
load function may need to instantiate the InputFormat in order
to control how it is constructed.
- Specified by: getInputFormat in class LoadFunc
- Returns:
- the InputFormat associated with this loader.
- Throws:
IOException - if there is an exception during InputFormat construction
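As a hedged illustration of this contract, a custom LoadFunc might construct and return the InputFormat instance itself (TextInputFormat is just an example choice, not what this class returns):

@Override
public org.apache.hadoop.mapreduce.InputFormat getInputFormat() throws IOException {
    // An instance is returned (not a class name) so the loader can
    // configure the InputFormat before Hadoop computes splits with it.
    return new org.apache.hadoop.mapreduce.lib.input.TextInputFormat();
}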
getLoadCaster
public LoadCaster getLoadCaster()
throws IOException
- Description copied from class:
LoadFunc
- This will be called on the front end during planning and not on the back
end during execution.
- Overrides: getLoadCaster in class LoadFunc
- Returns:
- the LoadCaster associated with this loader. Returning null indicates that casts from byte array are not supported for this loader.
- Throws:
IOException - if there is an exception during LoadCaster construction
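For illustration only: a loader whose on-disk data is UTF-8 text could return Pig's stock caster, while a loader that cannot cast byte arrays would return null (this is a sketch of the general contract, not this class's implementation):

@Override
public LoadCaster getLoadCaster() throws IOException {
    // Utf8StorageConverter casts UTF-8 encoded byte arrays to Pig types.
    return new org.apache.pig.builtin.Utf8StorageConverter();
}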
prepareToRead
public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
PigSplit split)
- Description copied from class:
LoadFunc
- Initializes LoadFunc for reading data. This will be called during execution
before any calls to getNext. The RecordReader needs to be passed here because
it has been instantiated for a particular InputSplit.
- Specified by: prepareToRead in class LoadFunc
- Parameters:
reader - the RecordReader to be used by this instance of the LoadFunc
split - the input PigSplit to process
setLocation
public void setLocation(String location,
org.apache.hadoop.mapreduce.Job job)
throws IOException
- Description copied from class:
LoadFunc
- Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to the underlying InputFormat through the Job object. This method will be called in the frontend and backend multiple times; implementations should bear this in mind and ensure there are no inconsistent side effects due to the multiple calls.
- Specified by: setLocation in class LoadFunc
- Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; it can be used to store or retrieve earlier stored information from the UDFContext
- Throws:
IOException - if the location is not valid.
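A sketch of a typical file-based implementation, which simply forwards the location to the InputFormat via the Job (this mirrors what common loaders do, but is an assumption here, not this class's code):

@Override
public void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
        throws IOException {
    // Called multiple times on front end and back end; setInputPaths is
    // safe to repeat because it overwrites rather than accumulates state.
    org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(job, location);
}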
getSchema
public ResourceSchema getSchema(String location,
org.apache.hadoop.mapreduce.Job job)
throws IOException
- Description copied from interface:
LoadMetadata
- Get a schema for the data to be loaded.
- Specified by: getSchema in interface LoadMetadata
- Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object; this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
- Returns:
- schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection, i.e. getSchema should always return the original schema even after pushProjection.
- Throws:
IOException - if an exception occurs while determining the schema
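A sketch of an implementation for a loader with a fixed, known schema (the field names are invented; Utils.getSchemaFromString is Pig's helper for parsing a schema string, used here as an assumption about available helpers):

@Override
public ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job)
        throws IOException {
    try {
        // The same schema is returned before and after any pushProjection.
        return new ResourceSchema(
                org.apache.pig.impl.util.Utils.getSchemaFromString("name:chararray, age:int"));
    } catch (Exception e) {
        throw new IOException(e);
    }
}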
getStatistics
public ResourceStatistics getStatistics(String location,
org.apache.hadoop.mapreduce.Job job)
throws IOException
- Description copied from interface:
LoadMetadata
- Get statistics about the data to be loaded. If no statistics are available, then null should be returned. If the implementing class also extends LoadFunc, then LoadFunc.setLocation(String, org.apache.hadoop.mapreduce.Job) is guaranteed to be called before this method.
- Specified by: getStatistics in interface LoadMetadata
- Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object; this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
- Returns:
- statistics about the data to be loaded. If no statistics are available, then null should be returned.
- Throws:
IOException - if an exception occurs while retrieving statistics
getPartitionKeys
public String[] getPartitionKeys(String location,
org.apache.hadoop.mapreduce.Job job)
throws IOException
- Description copied from interface:
LoadMetadata
- Find what columns are partition keys for this input.
- Specified by: getPartitionKeys in interface LoadMetadata
- Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object; this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
- Returns:
- array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys.
- Throws:
IOException - if an exception occurs while retrieving partition keys
setPartitionFilter
public void setPartitionFilter(Expression partitionFilter)
throws IOException
- Description copied from interface:
LoadMetadata
- Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. So if the implementation returns null from LoadMetadata.getPartitionKeys(String, Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.
- Specified by: setPartitionFilter in interface LoadMetadata
- Parameters:
partitionFilter - the filter that describes the partitioning
- Throws:
IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.
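To illustrate how the two partition-related methods cooperate, a sketch from a hypothetical LoadMetadata implementation (the "date" key and the stored field are invented for the example):

@Override
public String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job)
        throws IOException {
    // Returning null here would tell Pig there are no partition keys,
    // and setPartitionFilter would then never be invoked.
    return new String[] { "date" };
}

private Expression partitionFilter; // hypothetical field to remember the filter

@Override
public void setPartitionFilter(Expression partitionFilter) throws IOException {
    // The filter references only partition keys, e.g. (date == '2013-01-01');
    // a real loader would use it later to prune the input paths it reads.
    this.partitionFilter = partitionFilter;
}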
setUDFContextSignature
public void setUDFContextSignature(String signature)
- Description copied from class:
LoadFunc
- This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store the LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.
- Overrides: setUDFContextSignature in class LoadFunc
- Parameters:
signature - a unique signature to identify this LoadFunc
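A common pattern, sketched under the assumption of a loader that needs to carry state from the front end to the back end (the helper method name is illustrative):

private String signature;

@Override
public void setUDFContextSignature(String signature) {
    // Called before the other LoadFunc methods, on both front end and back end.
    this.signature = signature;
}

private java.util.Properties udfProps() {
    // Entries stored here on the front end are serialized into the job
    // and become visible to the same loader on the back end.
    return org.apache.pig.impl.util.UDFContext.getUDFContext()
            .getUDFProperties(this.getClass(), new String[] { signature });
}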