public class SparkPartitionPruningSinkOperator
extends Operator<SparkPartitionPruningSinkDesc>

Nested classes/interfaces inherited from class org.apache.hadoop.hive.ql.exec.Operator:
Operator.Counter, Operator.OperatorFunc, Operator.State

**Field Summary**

| Modifier and Type | Field and Description |
|---|---|
| protected org.apache.hadoop.io.DataOutputBuffer | buffer |
| protected static org.slf4j.Logger | LOG |
| protected Serializer | serializer |
Fields inherited from class org.apache.hadoop.hive.ql.exec.Operator:
abortOp, alias, asyncInitOperations, bucketingVersion, cContext, childOperators, childOperatorsArray, childOperatorsTag, conf, CONTEXT_NAME_KEY, done, groupKeyObject, HIVE_COUNTER_CREATED_DYNAMIC_PARTITIONS, HIVE_COUNTER_CREATED_FILES, HIVE_COUNTER_FATAL, id, indexForTezUnion, inputObjInspectors, numRows, operatorId, out, outputObjInspector, parentOperators, reporter, runTimeNumRows, state, statsMap

**Constructor Summary**

| Constructor and Description |
|---|
| SparkPartitionPruningSinkOperator() Kryo ctor. |
| SparkPartitionPruningSinkOperator(CompilationOpContext ctx) |
**Method Summary**

| Modifier and Type | Method and Description |
|---|---|
| void | addAsSourceEvent(MapWork mapWork, ExprNodeDesc partKey, String columnName, String columnType) Add this DPP sink as a pruning source for the target MapWork. |
| void | closeOp(boolean abort) Operator specific close routine. |
| Operator<?> | getBranchingOp() |
| String | getName() Gets the name of the node. |
| static String | getOperatorName() |
| org.apache.hadoop.hive.ql.plan.api.OperatorType | getType() Return the type of the specific operator among the types in OperatorType. |
| String | getUniqueId() |
| void | initializeOp(org.apache.hadoop.conf.Configuration hconf) Operator specific initialization. |
| boolean | isWithMapjoin() |
| void | process(Object row, int tag) Process the row. |
| void | removeFromSourceEvent(MapWork mapWork, ExprNodeDesc partKey, String columnName, String columnType) Remove this DPP sink from the target MapWork's pruning source. |
| void | setUniqueId(String uniqueId) |
Methods inherited from class org.apache.hadoop.hive.ql.exec.Operator:
abort, acceptLimitPushdown, allInitializedParentsAreClosed, areAllParentsInitialized, augmentPlan, cleanUpInputFileChanged, cleanUpInputFileChangedOp, clone, cloneOp, cloneRecursiveChildren, close, columnNamesRowResolvedCanBeObtained, completeInitializationOp, createDummy, defaultEndGroup, defaultStartGroup, dump, endGroup, flush, flushRecursive, forward, getAdditionalCounters, getBucketingVersion, getChildOperators, getChildren, getColumnExprMap, getCompilationOpContext, getConf, getConfiguration, getCounterName, getDone, getExecContext, getGroupKeyObject, getIdentifier, getIndexForTezUnion, getInputObjInspectors, getIsReduceSink, getMarker, getNextCntr, getNumChild, getNumParent, getOperatorId, getOpTraits, getOutputObjInspector, getParentOperators, getReduceOutputName, getSchema, getStatistics, getStats, initEvaluators, initEvaluatorsAndReturnStruct, initialize, initializeChildren, initializeLocalWork, initOperatorId, isUseBucketizedHiveInputFormat, jobClose, jobCloseOp, logicalEquals, logicalEqualsTree, logStats, opAllowedAfterMapJoin, opAllowedBeforeMapJoin, opAllowedBeforeSortMergeJoin, opAllowedConvertMapJoin, passExecContext, preorderMap, processGroup, removeChild, removeChildAndAdoptItsChildren, removeParent, removeParents, replaceChild, replaceParent, reset, setAlias, setBucketingVersion, setChildOperators, setColumnExprMap, setCompilationOpContext, setConf, setDone, setExecContext, setGroupKeyObject, setIndexForTezUnion, setInputContext, setInputObjInspectors, setMarker, setNextVectorBatchGroupStatus, setOpTraits, setOutputCollector, setParentOperators, setReporter, setSchema, setStatistics, setUseBucketizedHiveInputFormat, startGroup, supportAutomaticSortMergeJoin, supportSkewJoinOptimization, supportUnionRemoveOptimization, toString

**Field Detail**

protected transient Serializer serializer

protected transient org.apache.hadoop.io.DataOutputBuffer buffer

protected static final org.slf4j.Logger LOG
**Constructor Detail**

public SparkPartitionPruningSinkOperator()
Kryo ctor.

public SparkPartitionPruningSinkOperator(CompilationOpContext ctx)
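For orientation, a minimal construction sketch follows. It assumes the class's usual package in the Hive source tree (org.apache.hadoop.hive.ql.parse.spark) and that CompilationOpContext exposes a public no-arg constructor; in practice this operator is created by the query compiler, not by hand.

```java
import org.apache.hadoop.hive.ql.CompilationOpContext;
import org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator;

class DppSinkConstructionSketch {
  public static void main(String[] args) {
    // Planner-style construction: the compilation context hands out operator ids.
    CompilationOpContext ctx = new CompilationOpContext();
    SparkPartitionPruningSinkOperator sink = new SparkPartitionPruningSinkOperator(ctx);
    System.out.println(sink.getName()); // the operator's node name

    // The no-arg constructor exists solely so Kryo can instantiate the
    // operator when deserializing a plan; avoid calling it directly.
  }
}
```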
**Method Detail**

public void initializeOp(org.apache.hadoop.conf.Configuration hconf) throws HiveException
Operator specific initialization.
Overrides: initializeOp in class Operator<SparkPartitionPruningSinkDesc>
Throws: HiveException

public void process(Object row, int tag) throws HiveException
Process the row.
Specified by: process in class Operator<SparkPartitionPruningSinkDesc>
Parameters:
row - The object representing the row.
tag - The tag of the row, which usually indicates which parent the row comes from. Rows with the same tag should have exactly the same rowInspector all the time.
Throws: HiveException

public void closeOp(boolean abort) throws HiveException
Operator specific close routine.
Overrides: closeOp in class Operator<SparkPartitionPruningSinkDesc>
Throws: HiveException

public boolean isWithMapjoin()
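initializeOp, process, and closeOp are the operator's lifecycle hooks: one-time setup, per-row work, and end-of-input cleanup. The sketch below shows only the calling order; in real execution the Hive runtime drives these hooks through Operator.initialize and Operator.close rather than invoking them directly (import paths assumed from the Hive source tree):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator;

class OperatorLifecycleSketch {
  // Schematic hook order; 'sink' would come from the compiled plan.
  static void drive(SparkPartitionPruningSinkOperator sink, Iterable<Object> rows)
      throws HiveException {
    sink.initializeOp(new Configuration()); // one-time, operator-specific setup
    for (Object row : rows) {
      sink.process(row, 0); // tag 0: rows arriving from the sink's parent
    }
    sink.closeOp(false); // abort == false signals a normal end-of-input close
  }
}
```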
public Operator<?> getBranchingOp()

public org.apache.hadoop.hive.ql.plan.api.OperatorType getType()
Return the type of the specific operator among the types in OperatorType.
Specified by: getType in class Operator<SparkPartitionPruningSinkDesc>

public String getName()
Gets the name of the node.
Specified by: getName in interface Node
Overrides: getName in class Operator<SparkPartitionPruningSinkDesc>

public static String getOperatorName()
public String getUniqueId()

public void setUniqueId(String uniqueId)

public void addAsSourceEvent(MapWork mapWork, ExprNodeDesc partKey, String columnName, String columnType)
Add this DPP sink as a pruning source for the target MapWork.

public void removeFromSourceEvent(MapWork mapWork, ExprNodeDesc partKey, String columnName, String columnType)
Remove this DPP sink from the target MapWork's pruning source.
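addAsSourceEvent and removeFromSourceEvent are mirror operations for wiring dynamic partition pruning: the first registers this sink as a pruning source on the target MapWork for the given partition key and column, the second undoes that registration. A hedged sketch, assuming Hive's ExprNodeColumnDesc and TypeInfoFactory and an illustrative partition column named "ds":

```java
import org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator;
import org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc;
import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
import org.apache.hadoop.hive.ql.plan.MapWork;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;

class DppWiringSketch {
  // Register (and later deregister) the sink as a pruning source for the
  // hypothetical string partition column "ds" of the target MapWork.
  static void wire(SparkPartitionPruningSinkOperator sink, MapWork mapWork) {
    ExprNodeDesc partKey = new ExprNodeColumnDesc(
        TypeInfoFactory.stringTypeInfo, // partition column type
        "ds",                           // column name (illustrative)
        null,                           // table alias; not needed for the sketch
        true);                          // true: this is a partition column
    sink.addAsSourceEvent(mapWork, partKey, "ds", "string");

    // Mirror call: detach the sink if the pruning branch is later removed.
    sink.removeFromSourceEvent(mapWork, partKey, "ds", "string");
  }
}
```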