| Constructor and Description |
|---|
CliSessionState(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static void |
ServerUtils.cleanUpScratchDir(HiveConf hiveConf) |
static boolean |
FileUtils.copy(org.apache.hadoop.fs.FileSystem srcFS,
org.apache.hadoop.fs.Path src,
org.apache.hadoop.fs.FileSystem dstFS,
org.apache.hadoop.fs.Path dst,
boolean deleteSource,
boolean overwrite,
HiveConf conf)
Copies files between filesystems.
|
static boolean |
FileUtils.distCp(org.apache.hadoop.fs.FileSystem srcFS,
List<org.apache.hadoop.fs.Path> srcPaths,
org.apache.hadoop.fs.Path dst,
boolean deleteSource,
String doAsUser,
HiveConf conf,
HadoopShims shims) |
static String |
LogUtils.initHiveLog4jCommon(HiveConf conf,
HiveConf.ConfVars confVarName) |
static boolean |
FileUtils.isLocalFile(HiveConf conf,
String fileName)
A best-effort attempt to determine if the file is a local file
|
static boolean |
FileUtils.isLocalFile(HiveConf conf,
URI fileUri)
A best-effort attempt to determine if the file is a local file
|
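For orientation, a minimal sketch of the best-effort local-file check listed above; the file path is a hypothetical example and the imports assume the usual Hive packages:

```java
import org.apache.hadoop.hive.common.FileUtils;
import org.apache.hadoop.hive.conf.HiveConf;

public class LocalFileCheck {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Best-effort check of whether the path resolves to the local filesystem.
    boolean local = FileUtils.isLocalFile(conf, "file:///tmp/sample.txt"); // hypothetical path
    System.out.println("local = " + local);
  }
}
```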
| Modifier and Type | Method and Description |
|---|---|
static JsonParser |
JsonParserFactory.getParser(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static boolean |
InPlaceUpdate.canRenderInPlace(HiveConf conf) |
| Constructor and Description |
|---|
LegacyMetrics(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static void |
MetricsFactory.init(HiveConf conf)
Initializes static Metrics instance.
|
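A minimal sketch of initializing the static Metrics instance from a HiveConf; the MetricsFactory.getInstance() accessor is an assumption not shown in the listing above:

```java
import org.apache.hadoop.hive.common.metrics.common.Metrics;
import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;
import org.apache.hadoop.hive.conf.HiveConf;

public class MetricsBootstrap {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    MetricsFactory.init(conf);                      // initializes the static Metrics instance
    Metrics metrics = MetricsFactory.getInstance(); // assumed accessor for the initialized instance
    System.out.println("metrics initialized: " + (metrics != null));
  }
}
```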
| Constructor and Description |
|---|
CodahaleMetrics(HiveConf conf) |
ConsoleMetricsReporter(com.codahale.metrics.MetricRegistry registry,
HiveConf conf) |
JmxMetricsReporter(com.codahale.metrics.MetricRegistry registry,
HiveConf conf) |
JsonFileMetricsReporter(com.codahale.metrics.MetricRegistry registry,
HiveConf conf) |
Metrics2Reporter(com.codahale.metrics.MetricRegistry registry,
HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static StringBuilder |
HiveConfUtil.dumpConfig(HiveConf conf)
Dumps all HiveConf for debugging.
|
String |
VariableSubstitution.substitute(HiveConf conf,
String expr) |
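A minimal sketch of dumping a HiveConf for debugging, using only the dumpConfig signature listed above:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.conf.HiveConfUtil;

public class ConfDump {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Dumps all HiveConf entries into a StringBuilder for debugging.
    StringBuilder dump = HiveConfUtil.dumpConfig(conf);
    System.out.println(dump);
  }
}
```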
| Constructor and Description |
|---|
HiveConf(HiveConf other)
Copy constructor
|
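A minimal sketch of the copy constructor: overrides applied to the copy do not leak back into the shared configuration (hive.exec.scratchdir is just an illustrative key):

```java
import org.apache.hadoop.hive.conf.HiveConf;

public class ConfCopy {
  public static void main(String[] args) {
    HiveConf base = new HiveConf();
    // Copy constructor: produces an independent configuration object.
    HiveConf perQuery = new HiveConf(base);
    perQuery.set("hive.exec.scratchdir", "/tmp/hive-demo-scratch"); // illustrative override
    System.out.println(base.get("hive.exec.scratchdir"));     // unchanged
    System.out.println(perQuery.get("hive.exec.scratchdir"));  // overridden
  }
}
```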
| Modifier and Type | Method and Description |
|---|---|
static void |
LlapCoordinator.initializeInstance(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static IMetaStoreClient |
HiveMetaStoreUtils.getHiveMetastoreClient(HiveConf hiveConf)
Get or create a Hive client depending on whether it exists in the cache or not
|
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
QueryState.getConf() |
HiveConf |
IDriver.getConf() |
HiveConf |
Driver.getConf() |
| Modifier and Type | Method and Description |
|---|---|
static IDriver |
DriverFactory.newDriver(HiveConf conf) |
static void |
DriverUtils.runOnDriver(HiveConf conf,
String user,
SessionState sessionState,
String query,
org.apache.hadoop.hive.common.ValidWriteIdList writeIds) |
void |
Context.setConf(HiveConf conf) |
static SessionState |
DriverUtils.setUpSessionState(HiveConf conf,
String user,
boolean doStart) |
QueryState.Builder |
QueryState.Builder.withHiveConf(HiveConf hiveConf)
The source HiveConf object used to create the QueryState.
|
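A minimal sketch of obtaining an IDriver from DriverFactory after starting a session; run(String) comes from the CommandProcessor contract rather than this listing, and the query is purely illustrative:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.DriverFactory;
import org.apache.hadoop.hive.ql.IDriver;
import org.apache.hadoop.hive.ql.session.SessionState;

public class DriverDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    SessionState.start(conf);                       // a session must be current before running a driver
    IDriver driver = DriverFactory.newDriver(conf); // driver bound to this configuration
    driver.run("SELECT 1");                         // illustrative query; run(String) is inherited from CommandProcessor
  }
}
```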
| Constructor and Description |
|---|
Driver(HiveConf conf) |
Driver(HiveConf conf,
Context ctx,
LineageState lineageState) |
Driver(HiveConf conf,
LineageState lineageState) |
Driver(HiveConf conf,
String userName,
LineageState lineageState) |
| Modifier and Type | Method and Description |
|---|---|
static void |
QueryResultsCache.initialize(HiveConf conf) |
| Modifier and Type | Field and Description |
|---|---|
protected HiveConf |
Task.conf |
| Modifier and Type | Method and Description |
|---|---|
static void |
Utilities.ensurePathIsWritable(org.apache.hadoop.fs.Path rootHDFSDirPath,
HiveConf conf)
Checks if path passed in exists and has writable permissions.
|
static int |
Utilities.estimateNumberOfReducers(HiveConf conf,
org.apache.hadoop.fs.ContentSummary inputSummary,
MapWork work,
boolean finalMapRed)
Estimate the number of reducers needed for this job, based on the job input and configuration parameters.
|
static <T extends Serializable> |
TaskFactory.get(T work,
HiveConf conf) |
static <T extends Serializable> |
TaskFactory.getAndMakeChild(T work,
HiveConf conf,
Task<? extends Serializable>... tasklist) |
static Collection<Class<?>> |
Utilities.getClassNamesFromConfig(HiveConf hiveConf,
HiveConf.ConfVars confVar) |
static Task<?> |
ReplCopyTask.getLoadCopyTask(ReplicationSpec replicationSpec,
org.apache.hadoop.fs.Path srcPath,
org.apache.hadoop.fs.Path dstPath,
HiveConf conf) |
static String |
Utilities.getQualifiedPath(HiveConf conf,
org.apache.hadoop.fs.Path path)
Convert path to qualified path.
|
static boolean |
Utilities.isPerfOrAboveLogging(HiveConf conf)
Checks if the current HiveServer2 logging operation level is >= PERFORMANCE.
|
static void |
DDLTask.makeLocationQualified(String databaseName,
Table table,
HiveConf conf)
Make location in specified sd qualified.
|
static ExecutorService |
StatsTask.newThreadPool(HiveConf conf) |
static void |
Utilities.reworkMapRedWork(Task<? extends Serializable> task,
boolean reworkMapredWork,
HiveConf conf)
The check performed here is not entirely clean.
|
void |
Task.setConf(HiveConf conf) |
static void |
DDLTask.validateSerDe(String serdeName,
HiveConf conf)
Check if the given serde is valid.
|
| Constructor and Description |
|---|
HarPathHelper(HiveConf hconf,
URI archive,
URI originalBase)
Creates helper for archive.
|
SecureCmdDoAs(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static String |
ExecDriver.generateCmdLine(HiveConf hconf,
Context ctx)
Given a Hive Configuration object - generate a command line fragment for passing such
configuration information to ExecDriver.
|
protected static String |
ExecDriver.getResource(HiveConf conf,
SessionState.ResourceType resType)
Retrieve the resources from the current session and configuration for the given type.
|
static String |
MapRedTask.isEligibleForLocalMode(HiveConf conf,
int numReducers,
long inputLength,
long inputFileCount)
Find out if a job can be run in local mode based on its characteristics
|
| Modifier and Type | Method and Description |
|---|---|
static Task<?> |
ReplUtils.getTableCheckpointTask(ImportTableDesc tableDesc,
HashMap<String,String> partSpec,
String dumpRoot,
HiveConf conf) |
static Task<?> |
ReplUtils.getTableReplLogTask(ImportTableDesc tableDesc,
ReplLogger replLogger,
HiveConf conf) |
| Constructor and Description |
|---|
ReplLoadWork(HiveConf hiveConf,
String dumpDirectory,
String dbNameOrPattern,
LineageState lineageState) |
ReplLoadWork(HiveConf hiveConf,
String dumpDirectory,
String dbNameToLoadIn,
String tableNameToLoadIn,
LineageState lineageState) |
| Modifier and Type | Method and Description |
|---|---|
DatabaseEvent |
DatabaseEvent.State.toEvent(HiveConf hiveConf) |
| Constructor and Description |
|---|
BootstrapEventsIterator(String dumpDirectory,
String dbNameToLoadIn,
HiveConf hiveConf) |
ConstraintEventsIterator(String dumpDirectory,
HiveConf hiveConf) |
| Modifier and Type | Field and Description |
|---|---|
HiveConf |
Context.hiveConf |
| Constructor and Description |
|---|
Context(String dumpDirectory,
HiveConf hiveConf,
Hive hiveDb,
LineageState lineageState,
Context nestedContext) |
PathInfo(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static HiveSparkClient |
HiveSparkClientFactory.createHiveSparkClient(HiveConf hiveconf,
String sessionId) |
static SparkTask |
SparkUtilities.createSparkTask(HiveConf conf) |
static SparkTask |
SparkUtilities.createSparkTask(SparkWork work,
HiveConf conf) |
static LocalHiveSparkClient |
LocalHiveSparkClient.getInstance(org.apache.spark.SparkConf sparkConf,
HiveConf hiveConf) |
static SparkSession |
SparkUtilities.getSparkSession(HiveConf conf,
SparkSessionManager sparkSessionManager) |
static Map<String,String> |
HiveSparkClientFactory.initiateSparkConf(HiveConf hiveConf,
String sessionId) |
static URI |
SparkUtilities.uploadToHDFS(URI source,
HiveConf conf)
Uploads a local file to HDFS
|
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
SparkSession.getConf() |
HiveConf |
SparkSessionImpl.getConf() |
| Modifier and Type | Method and Description |
|---|---|
SparkSession |
SparkSessionManagerImpl.getSession(SparkSession existingSession,
HiveConf conf,
boolean doOpen)
If the existingSession can be reused return it.
|
SparkSession |
SparkSessionManager.getSession(SparkSession existingSession,
HiveConf conf,
boolean doOpen)
Get a valid SparkSession.
|
void |
SparkSession.open(HiveConf conf)
Initializes a Spark session for DAG execution.
|
void |
SparkSessionImpl.open(HiveConf conf) |
void |
SparkSessionManagerImpl.setup(HiveConf hiveConf) |
void |
SparkSessionManager.setup(HiveConf hiveConf)
Initialize based on given configuration.
|
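A heavily hedged sketch of acquiring a SparkSession through the session manager; getInstance(), closeSession() and shutdown() are assumptions not shown in the listing above, and a working Hive-on-Spark deployment is required for this to run:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.exec.spark.session.SparkSession;
import org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManager;
import org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl;

public class SparkSessionDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    conf.set("hive.execution.engine", "spark");
    SparkSessionManager manager = SparkSessionManagerImpl.getInstance(); // assumed singleton accessor
    manager.setup(conf);                                 // initialize based on the given configuration
    SparkSession session = manager.getSession(null, conf, true); // no existing session, so open a new one
    // ... submit Spark work through the session ...
    manager.closeSession(session);                       // assumed cleanup methods
    manager.shutdown();
  }
}
```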
| Constructor and Description |
|---|
LocalSparkJobMonitor(HiveConf hiveConf,
SparkJobStatus sparkJobStatus) |
RemoteSparkJobMonitor(HiveConf hiveConf,
RemoteSparkJobStatus sparkJobStatus) |
| Constructor and Description |
|---|
LocalSparkJobRef(String jobId,
HiveConf hiveConf,
LocalSparkJobStatus sparkJobStatus,
org.apache.spark.api.java.JavaSparkContext javaSparkContext) |
RemoteSparkJobRef(HiveConf hiveConf,
JobHandle<Serializable> jobHandler,
RemoteSparkJobStatus sparkJobStatus) |
| Modifier and Type | Method and Description |
|---|---|
protected HiveConf |
WorkloadManager.getConf() |
HiveConf |
TezSessionState.getConf() |
| Modifier and Type | Method and Description |
|---|---|
static WorkloadManager |
WorkloadManager.create(String yarnQueue,
HiveConf conf,
org.apache.hadoop.hive.metastore.api.WMFullResourcePlan plan)
Called once, when HS2 initializes.
|
org.apache.hadoop.mapred.JobConf |
DagUtils.createConfiguration(HiveConf hiveConf)
Creates and initializes a JobConf object that can be used to execute
the DAG.
|
protected org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession |
TezSessionPoolManager.createSession(String sessionId,
HiveConf conf) |
protected WmTezSession |
WorkloadManager.createSessionObject(String sessionId,
HiveConf conf) |
TezSessionState |
TezSessionPoolManager.getSession(TezSessionState session,
HiveConf conf,
boolean doOpen,
boolean llap) |
static TezSessionState |
WorkloadManagerFederation.getSession(TezSessionState session,
HiveConf conf,
UserPoolMapping.MappingInput input,
boolean isUnmanagedLlapMode,
WmContext wmContext) |
WmTezSession |
WorkloadManager.getSession(TezSessionState session,
UserPoolMapping.MappingInput input,
HiveConf conf) |
WmTezSession |
WorkloadManager.getSession(TezSessionState session,
UserPoolMapping.MappingInput input,
HiveConf conf,
WmContext wmContext) |
void |
TezSessionPoolManager.initTriggers(HiveConf conf) |
void |
TezSessionPoolManager.setupNonPool(HiveConf conf) |
void |
TezSessionPoolManager.setupPool(HiveConf conf) |
void |
TezSessionPoolManager.startPool(HiveConf conf,
org.apache.hadoop.hive.metastore.api.WMFullResourcePlan resourcePlan) |
| Constructor and Description |
|---|
TezSessionState(DagUtils utils,
HiveConf conf)
Constructor.
|
TezSessionState(String sessionId,
HiveConf conf)
Constructor.
|
WmTezSession(String sessionId,
WorkloadManager parent,
org.apache.hadoop.hive.ql.exec.tez.SessionExpirationTracker expiration,
HiveConf conf) |
YarnQueueHelper(HiveConf conf) |
| Constructor and Description |
|---|
TezJobMonitor(List<BaseWork> topSortedWorks,
org.apache.tez.dag.api.client.DAGClient dagClient,
HiveConf conf,
org.apache.tez.dag.api.DAG dag,
Context ctx) |
| Modifier and Type | Method and Description |
|---|---|
static VectorizationContext.HiveVectorAdaptorUsageMode |
VectorizationContext.HiveVectorAdaptorUsageMode.getHiveConfValue(HiveConf hiveConf) |
static VectorizationContext.HiveVectorIfStmtMode |
VectorizationContext.HiveVectorIfStmtMode.getHiveConfValue(HiveConf hiveConf) |
| Constructor and Description |
|---|
VectorizationContext(String contextName,
HiveConf hiveConf) |
VectorizationContext(String contextName,
List<String> initialColumnNames,
HiveConf hiveConf) |
VectorizationContext(String contextName,
List<String> initialColumnNames,
List<TypeInfo> initialTypeInfos,
List<org.apache.hadoop.hive.common.type.DataTypePhysicalVariation> initialDataTypePhysicalVariations,
HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
HookContext.getConf() |
HiveConf |
QueryLifeTimeHookContextImpl.getHiveConf() |
HiveConf |
QueryLifeTimeHookContext.getHiveConf()
Get the current Hive configuration
|
| Modifier and Type | Method and Description |
|---|---|
static <T extends Hook> |
HookUtils.readHooksFromConf(HiveConf conf,
HiveConf.ConfVars hookConfVar) |
static String |
HookUtils.redactLogString(HiveConf conf,
String logString) |
void |
HookContext.setConf(HiveConf conf) |
void |
QueryLifeTimeHookContextImpl.setHiveConf(HiveConf conf) |
void |
QueryLifeTimeHookContext.setHiveConf(HiveConf conf)
Set Hive configuration
|
QueryLifeTimeHookContextImpl.Builder |
QueryLifeTimeHookContextImpl.Builder.withHiveConf(HiveConf conf) |
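A minimal sketch of loading pre-execution hooks from a HiveConf; the hook class name is hypothetical and would need to be on the classpath for the call to succeed:

```java
import java.util.List;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookUtils;

public class HookLoading {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    conf.setVar(HiveConf.ConfVars.PREEXECHOOKS, "org.example.MyPreHook"); // hypothetical hook class
    // Instantiates every hook class named in hive.exec.pre.hooks.
    List<ExecuteWithHookContext> hooks =
        HookUtils.readHooksFromConf(conf, HiveConf.ConfVars.PREEXECHOOKS);
    System.out.println("loaded hooks: " + hooks.size());
  }
}
```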
| Modifier and Type | Method and Description |
|---|---|
static boolean |
HiveFileFormatUtils.checkInputFormat(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
Class<? extends org.apache.hadoop.mapred.InputFormat> inputFormatCls,
List<org.apache.hadoop.fs.FileStatus> files)
checks if files are in same format as the given input format.
|
static boolean |
AcidUtils.isAcidEnabled(HiveConf hiveConf) |
void |
SymbolicInputFormat.rework(HiveConf job,
MapredWork work) |
void |
ReworkMapredInputFormat.rework(HiveConf job,
MapredWork work) |
boolean |
InputFormatChecker.validateInput(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
List<org.apache.hadoop.fs.FileStatus> files)
This method is used to validate the input files.
|
boolean |
SequenceFileInputFormatChecker.validateInput(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
List<org.apache.hadoop.fs.FileStatus> files) |
boolean |
RCFileInputFormat.validateInput(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
List<org.apache.hadoop.fs.FileStatus> files) |
| Modifier and Type | Method and Description |
|---|---|
void |
MergeFileWork.resolveConcatenateMerge(HiveConf conf)
alter table ...
|
void |
MergeFileWork.resolveDynamicPartitionStoredAsSubDirsMerge(HiveConf conf,
org.apache.hadoop.fs.Path path,
TableDesc tblDesc,
ArrayList<String> aliases,
PartitionDesc partDesc) |
| Modifier and Type | Method and Description |
|---|---|
void |
ExternalCache.configure(HiveConf queryConfig) |
ExternalCache.ExternalFooterCachesByConf.Cache |
MetastoreExternalCachesByConf.getCache(HiveConf conf) |
ExternalCache.ExternalFooterCachesByConf.Cache |
ExternalCache.ExternalFooterCachesByConf.getCache(HiveConf conf) |
boolean |
VectorizedOrcInputFormat.validateInput(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
List<org.apache.hadoop.fs.FileStatus> files) |
boolean |
OrcInputFormat.validateInput(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
List<org.apache.hadoop.fs.FileStatus> files) |
| Modifier and Type | Method and Description |
|---|---|
boolean |
MapredParquetInputFormat.validateInput(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
List<org.apache.hadoop.fs.FileStatus> files) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
HiveLockManagerCtx.getConf() |
| Modifier and Type | Method and Description |
|---|---|
List<HiveLock> |
EmbeddedLockManager.getLocks(boolean verifyTablePartitions,
boolean fetchData,
HiveConf conf) |
List<HiveLock> |
EmbeddedLockManager.getLocks(HiveLockObject key,
boolean verifyTablePartitions,
boolean fetchData,
HiveConf conf) |
HiveTxnManager |
TxnManagerFactory.getTxnManager(HiveConf conf)
Create a new transaction manager.
|
void |
HiveLockManagerCtx.setConf(HiveConf conf) |
| Constructor and Description |
|---|
HiveLockManagerCtx(HiveConf conf) |
HiveLockObjectData(String queryId,
String lockTime,
String lockMode,
String queryStr,
HiveConf conf)
Constructor
Note: The parameters are used to uniquely identify a HiveLockObject.
|
| Modifier and Type | Method and Description |
|---|---|
static org.apache.curator.framework.CuratorFramework |
CuratorFrameworkSingleton.getInstance(HiveConf hiveConf) |
static void |
ZooKeeperHiveLockManager.releaseAllLocks(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static PerfLogger |
PerfLogger.getPerfLogger(HiveConf conf,
boolean resetPerfLogger) |
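A minimal sketch of retrieving the per-thread PerfLogger and timing a stage; the PerfLogBegin/PerfLogEnd calls follow the naming used elsewhere in Hive, and the caller and stage names are illustrative:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.log.PerfLogger;

public class PerfLoggerDemo {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    PerfLogger perfLogger = PerfLogger.getPerfLogger(conf, false); // reuse the thread's logger if present
    perfLogger.PerfLogBegin("PerfLoggerDemo", "example-stage");
    // ... timed work ...
    perfLogger.PerfLogEnd("PerfLoggerDemo", "example-stage");
  }
}
```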
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
Hive.getConf() |
| Modifier and Type | Method and Description |
|---|---|
static void |
Hive.clearDestForSubDirSrc(HiveConf conf,
org.apache.hadoop.fs.Path dest,
org.apache.hadoop.fs.Path src,
boolean isSrcLocal) |
static org.apache.hadoop.hive.metastore.api.Partition |
Hive.convertAddSpecToMetaPartition(Table tbl,
AddPartitionDesc.OnePartitionDesc addSpec,
HiveConf conf) |
protected static void |
Hive.copyFiles(HiveConf conf,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
org.apache.hadoop.fs.FileSystem fs,
boolean isSrcLocal,
boolean isAcidIUD,
boolean isOverwrite,
List<org.apache.hadoop.fs.Path> newFiles,
boolean isBucketed,
boolean isFullAcidTable,
boolean isManaged)
Copy files.
|
org.apache.calcite.plan.RelOptMaterialization |
HiveMaterializedViewsRegistry.createMaterializedView(HiveConf conf,
Table materializedViewTable)
Adds a newly created materialized view to the cache.
|
static Hive |
Hive.get(HiveConf c)
Gets hive object for the current thread.
|
static Hive |
Hive.get(HiveConf c,
boolean needsRefresh)
Get a connection to the metastore.
|
boolean |
Hive.getPartitionsByExpr(Table tbl,
ExprNodeGenericFuncDesc expr,
HiveConf conf,
List<Partition> result)
Get a list of Partitions by expr.
|
static Hive |
Hive.getWithFastCheck(HiveConf c)
Same as
Hive.get(HiveConf), except that it checks only the object identity of existing
MS client, assuming the relevant settings would be unchanged within the same conf object. |
static Hive |
Hive.getWithFastCheck(HiveConf c,
boolean doRegisterAllFns)
Same as
Hive.get(HiveConf), except that it checks only the object identity of existing
MS client, assuming the relevant settings would be unchanged within the same conf object. |
static boolean |
Table.hasMetastoreBasedSchema(HiveConf conf,
String serdeLib) |
static boolean |
Hive.moveFile(HiveConf conf,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
boolean replace,
boolean isSrcLocal,
boolean isManaged) |
protected void |
Hive.replaceFiles(org.apache.hadoop.fs.Path tablePath,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
org.apache.hadoop.fs.Path oldPath,
HiveConf conf,
boolean isSrcLocal,
boolean purge,
List<org.apache.hadoop.fs.Path> newFiles,
org.apache.hadoop.fs.PathFilter deletePathFilter,
boolean isNeedRecycle,
boolean isManaged)
Replaces files in the partition with new data set specified by srcf.
|
static boolean |
Table.shouldStoreFieldsInMetastore(HiveConf conf,
String serdeLib,
Map<String,String> tableParams) |
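A minimal sketch of obtaining the thread-local Hive object for a configuration; getTable() and getDataLocation() are not part of this listing, and default.src is a hypothetical table:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.metadata.Table;

public class HiveClientDemo {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    try {
      Hive db = Hive.get(conf);                // Hive object for the current thread
      Table t = db.getTable("default", "src"); // hypothetical table
      System.out.println(t.getDataLocation());
    } catch (HiveException e) {
      e.printStackTrace();
    }
  }
}
```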
| Modifier and Type | Method and Description |
|---|---|
static MetaDataFormatter |
MetaDataFormatUtils.getFormatter(HiveConf conf) |
void |
JsonMetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par) |
void |
MetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
Show the table status.
|
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
BucketJoinProcCtx.getConf() |
HiveConf |
GroupByOptimizer.GroupByOptimizerContext.getConf() |
HiveConf |
GenMRProcContext.getConf() |
| Modifier and Type | Method and Description |
|---|---|
static void |
GenMapRedUtils.addDependentMoveTasks(Task<MoveWork> mvTask,
HiveConf hconf,
Task<? extends Serializable> parentTask,
DependencyCollectionTask dependencyTask)
Adds the dependencyTaskForMultiInsert in ctx as a dependent of parentTask.
|
static void |
GenMapRedUtils.addStatsTask(FileSinkOperator nd,
MoveTask mvTask,
Task<? extends Serializable> currTask,
HiveConf hconf)
Add the StatsTask as a dependent task of the MoveTask
because StatsTask will change the Table/Partition metadata.
|
protected void |
GroupByOptimizer.SortGroupByProcessor.convertGroupByMapSideSortedGroupBy(HiveConf conf,
GroupByOperator groupByOp,
int depth) |
static MapJoinOperator |
MapJoinProcessor.convertJoinOpMapJoinOp(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin) |
static MapJoinOperator |
MapJoinProcessor.convertJoinOpMapJoinOp(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean adjustParentsChildren) |
MapJoinOperator |
MapJoinProcessor.convertMapJoin(HiveConf conf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
convert a regular join to a map-side join.
|
MapJoinOperator |
SparkMapJoinProcessor.convertMapJoin(HiveConf conf,
JoinOperator op,
boolean leftSrc,
String[] baseSrc,
List<String> mapAliases,
int bigTablePos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
convert a regular join to a map-side join.
|
static MapJoinOperator |
MapJoinProcessor.convertSMBJoinToMapJoin(HiveConf hconf,
SMBMapJoinOperator smbJoinOp,
int bigTablePos,
boolean noCheckOuterJoin)
convert a sort-merge join to a map-side join.
|
static org.apache.hadoop.fs.Path |
GenMapRedUtils.createMoveTask(Task<? extends Serializable> currTask,
boolean chDir,
FileSinkOperator fsOp,
ParseContext parseCtx,
List<Task<MoveWork>> mvTasks,
HiveConf hconf,
DependencyCollectionTask dependencyTask)
Create and add any dependent move tasks
|
static void |
GenMapRedUtils.createMRWorkForMergingFiles(FileSinkOperator fsInput,
org.apache.hadoop.fs.Path finalName,
DependencyCollectionTask dependencyTask,
List<Task<MoveWork>> mvTasks,
HiveConf conf,
Task<? extends Serializable> currTask,
LineageState lineageState) |
static void |
MapJoinProcessor.genMapJoinOpAndLocalWork(HiveConf conf,
MapredWork newWork,
JoinOperator op,
int mapJoinPos)
Convert the join to a map-join and also generate any local work needed.
|
static MapJoinDesc |
MapJoinProcessor.getMapJoinDesc(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin) |
static MapJoinDesc |
MapJoinProcessor.getMapJoinDesc(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean adjustParentsChildren) |
static MapredWork |
GenMapRedUtils.getMapRedWorkFromConf(HiveConf conf)
create a new plan and return.
|
MemoryMonitorInfo |
ConvertJoinMapJoin.getMemoryMonitorInfo(long maxSize,
HiveConf conf,
LlapClusterStateForCompile llapInfo) |
protected long |
SizeBasedBigTableSelectorForAutoSMJ.getSize(HiveConf conf,
Partition partition) |
protected long |
SizeBasedBigTableSelectorForAutoSMJ.getSize(HiveConf conf,
Table table) |
void |
Optimizer.initialize(HiveConf hiveConf)
Create the list of transformations.
|
static boolean |
GenMapRedUtils.isMergeRequired(List<Task<MoveWork>> mvTasks,
HiveConf hconf,
FileSinkOperator fsOp,
Task<? extends Serializable> currTask,
boolean isInsertTable)
Returns true iff the fsOp requires a merge
|
void |
GroupByOptimizer.GroupByOptimizerContext.setConf(HiveConf conf) |
void |
GenMRProcContext.setConf(HiveConf conf) |
static void |
GenMapRedUtils.setMapWork(MapWork plan,
ParseContext parseCtx,
Set<ReadEntity> inputs,
PrunedPartitionList partsList,
TableScanOperator tsOp,
String alias_id,
HiveConf conf,
boolean local)
initialize MapWork
|
protected static boolean |
GenMapRedUtils.shouldMergeMovePaths(HiveConf conf,
org.apache.hadoop.fs.Path condInputPath,
org.apache.hadoop.fs.Path condOutputPath,
MoveWork linkedMoveWork)
Checks whether the given input/output paths and a linked MoveWork should be merged into a single MoveWork.
|
| Constructor and Description |
|---|
BucketJoinProcCtx(HiveConf conf) |
GenMRProcContext(HiveConf conf,
HashMap<Operator<? extends OperatorDesc>,Task<? extends Serializable>> opTaskMap,
ParseContext parseCtx,
List<Task<MoveWork>> mvTask,
List<Task<? extends Serializable>> rootTasks,
LinkedHashMap<Operator<? extends OperatorDesc>,GenMRProcContext.GenMapRedCtx> mapCurrCtx,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
GroupByOptimizerContext(HiveConf conf) |
SortBucketJoinProcCtx(HiveConf conf) |
TezBucketJoinProcCtx(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
void |
RelOptHiveTable.computePartitionList(HiveConf conf,
org.apache.calcite.rex.RexNode pruneNode,
Set<Integer> partOrVirtualCols) |
| Constructor and Description |
|---|
HiveDefaultRelMetadataProvider(HiveConf hiveConf) |
RelOptHiveTable(org.apache.calcite.plan.RelOptSchema calciteSchema,
String qualifiedTblName,
org.apache.calcite.rel.type.RelDataType rowType,
Table hiveTblMetadata,
List<ColumnInfo> hiveNonPartitionCols,
List<ColumnInfo> hivePartitionCols,
List<VirtualColumn> hiveVirtualCols,
HiveConf hconf,
Map<String,PrunedPartitionList> partitionCache,
Map<String,ColumnStatsList> colStatsCache,
AtomicInteger noColsMissingStats) |
| Modifier and Type | Method and Description |
|---|---|
static HiveOnTezCostModel |
HiveOnTezCostModel.getCostModel(HiveConf conf) |
| Constructor and Description |
|---|
HivePartitionPruneRule(HiveConf conf) |
HiveSubQueryRemoveRule(HiveConf conf) |
| Constructor and Description |
|---|
HiveOpConverter(SemanticAnalyzer semanticAnalyzer,
HiveConf hiveConf,
UnparseTranslator unparseTranslator,
Map<String,TableScanOperator> topOps) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
AnnotateOpTraitsProcCtx.getConf() |
| Modifier and Type | Method and Description |
|---|---|
void |
AnnotateOpTraitsProcCtx.setConf(HiveConf conf) |
| Modifier and Type | Field and Description |
|---|---|
protected HiveConf |
PhysicalContext.conf |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
PhysicalContext.getConf() |
| Modifier and Type | Method and Description |
|---|---|
static List<Task> |
StageIDsRearranger.getExplainOrder(HiveConf conf,
List<Task<?>> tasks) |
void |
PhysicalContext.setConf(HiveConf conf) |
static boolean |
GenMRSkewJoinProcessor.skewJoinEnabled(HiveConf conf,
JoinOperator joinOp) |
| Constructor and Description |
|---|
PhysicalContext(HiveConf conf,
ParseContext parseContext,
Context context,
List<Task<? extends Serializable>> rootTasks,
Task<? extends Serializable> fetchTask) |
PhysicalOptimizer(PhysicalContext pctx,
HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static PrunedPartitionList |
PartitionPruner.prune(Table tab,
ExprNodeDesc prunerExpr,
HiveConf conf,
String alias,
Map<String,PrunedPartitionList> prunedPartitionsMap)
Get the partition list for the table that satisfies the partition pruner
condition.
|
| Constructor and Description |
|---|
SetSparkReducerParallelism(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
AnnotateStatsProcCtx.getConf() |
| Modifier and Type | Method and Description |
|---|---|
void |
AnnotateStatsProcCtx.setConf(HiveConf conf) |
| Modifier and Type | Field and Description |
|---|---|
protected HiveConf |
BaseSemanticAnalyzer.conf |
HiveConf |
GenTezProcContext.conf |
protected HiveConf |
TaskCompiler.conf |
HiveConf |
OptimizeTezProcContext.conf |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
ParseContext.getConf() |
HiveConf |
EximUtil.SemanticAnalyzerWrapperContext.getConf() |
| Modifier and Type | Method and Description |
|---|---|
protected ListBucketingCtx |
BaseSemanticAnalyzer.constructListBucketingCtx(List<String> skewedColNames,
List<List<String>> skewedValues,
Map<List<String>,String> skewedColValueLocationMaps,
boolean isStoredAsSubDirectories,
HiveConf conf)
Construct list bucketing context.
|
static void |
EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath,
Table tableHandle,
Iterable<Partition> partitions,
ReplicationSpec replicationSpec,
HiveConf hiveConf) |
protected static Hive |
BaseSemanticAnalyzer.createHiveDB(HiveConf conf) |
static org.apache.calcite.plan.RelOptPlanner |
CalcitePlanner.createPlanner(HiveConf conf) |
static TaskCompiler |
TaskCompilerFactory.getCompiler(HiveConf conf,
ParseContext parseContext)
Returns the appropriate compiler to translate the operator tree
into executable units.
|
static Map<String,String> |
AnalyzeCommandUtils.getPartKeyValuePairsFromAST(Table tbl,
ASTNode tree,
HiveConf hiveConf) |
static HashMap<String,String> |
DDLSemanticAnalyzer.getValidatedPartSpec(Table table,
ASTNode astNode,
HiveConf conf,
boolean shouldBeFull) |
static URI |
EximUtil.getValidatedURI(HiveConf conf,
String dcPath)
Initialize the URI where the exported data collection is to be created for export, or is present for import
|
WindowingSpec |
WindowingComponentizer.next(HiveConf hCfg,
SemanticAnalyzer semAly,
UnparseTranslator unparseT,
RowResolver inputRR) |
static String |
EximUtil.relativeToAbsolutePath(HiveConf conf,
String location) |
void |
ParseContext.setConf(HiveConf conf) |
static ReadEntity |
BaseSemanticAnalyzer.toReadEntity(org.apache.hadoop.fs.Path location,
HiveConf conf) |
static WriteEntity |
BaseSemanticAnalyzer.toWriteEntity(org.apache.hadoop.fs.Path location,
HiveConf conf) |
PTFDesc |
PTFTranslator.translate(PTFInvocationSpec qSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
UnparseTranslator unparseT) |
PTFDesc |
PTFTranslator.translate(WindowingSpec wdwSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
UnparseTranslator unparseT) |
static org.apache.hadoop.fs.Path |
BaseSemanticAnalyzer.tryQualifyPath(org.apache.hadoop.fs.Path path,
HiveConf conf) |
static void |
BaseSemanticAnalyzer.validatePartColumnType(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf) |
static void |
BaseSemanticAnalyzer.validatePartSpec(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf,
boolean shouldBeFull) |
| Constructor and Description |
|---|
ColumnStatsAutoGatherContext(SemanticAnalyzer sa,
HiveConf conf,
Operator<? extends OperatorDesc> op,
Table tbl,
Map<String,String> partSpec,
boolean isInsertInto,
Context ctx) |
GenTezProcContext(HiveConf conf,
ParseContext parseContext,
List<Task<MoveWork>> moveTask,
List<Task<? extends Serializable>> rootTasks,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
OptimizeTezProcContext(HiveConf conf,
ParseContext parseContext,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
SemanticAnalyzerWrapperContext(HiveConf conf,
Hive db,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs,
List<Task<? extends Serializable>> tasks,
org.slf4j.Logger LOG,
Context ctx) |
TableMask(SemanticAnalyzer analyzer,
HiveConf conf,
boolean skipTableMasking) |
TableSpec(Hive db,
HiveConf conf,
ASTNode ast) |
TableSpec(Hive db,
HiveConf conf,
ASTNode ast,
boolean allowDynamicPartitionsSpec,
boolean allowPartialPartitionsSpec) |
| Constructor and Description |
|---|
HiveAuthorizationTaskFactoryImpl(HiveConf conf,
Hive db) |
| Constructor and Description |
|---|
CopyUtils(String distCpDoAsUser,
HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static boolean |
Utils.shouldReplicate(org.apache.hadoop.hive.metastore.api.NotificationEvent tableForEvent,
ReplicationSpec replicationSpec,
Hive db,
HiveConf hiveConf) |
static Boolean |
Utils.shouldReplicate(ReplicationSpec replicationSpec,
Table tableHandle,
HiveConf hiveConf)
Validates whether a table can be exported; similar to EximUtil.shouldExport, with a few replication-specific checks.
|
static void |
Utils.writeOutput(List<String> values,
org.apache.hadoop.fs.Path outputFile,
HiveConf hiveConf) |
| Constructor and Description |
|---|
Paths(String astRepresentationForErrorMsg,
org.apache.hadoop.fs.Path dbRoot,
String tblName,
HiveConf conf,
boolean shouldWriteData) |
Paths(String astRepresentationForErrorMsg,
String path,
HiveConf conf,
boolean shouldWriteData) |
TableExport(TableExport.Paths paths,
BaseSemanticAnalyzer.TableSpec tableSpec,
ReplicationSpec replicationSpec,
Hive db,
String distCpDoAsUser,
HiveConf conf,
ExportWork.MmContext mmCtx) |
| Constructor and Description |
|---|
Context(org.apache.hadoop.fs.Path eventRoot,
org.apache.hadoop.fs.Path cmRoot,
Hive db,
HiveConf hiveConf,
ReplicationSpec replicationSpec) |
| Constructor and Description |
|---|
ConstraintsSerializer(List<org.apache.hadoop.hive.metastore.api.SQLPrimaryKey> pks,
List<org.apache.hadoop.hive.metastore.api.SQLForeignKey> fks,
List<org.apache.hadoop.hive.metastore.api.SQLUniqueConstraint> uks,
List<org.apache.hadoop.hive.metastore.api.SQLNotNullConstraint> nns,
HiveConf hiveConf) |
FileOperations(List<org.apache.hadoop.fs.Path> dataPathList,
org.apache.hadoop.fs.Path exportRootDataDir,
String distCpDoAsUser,
HiveConf hiveConf,
ExportWork.MmContext mmCtx) |
FunctionSerializer(org.apache.hadoop.hive.metastore.api.Function function,
HiveConf hiveConf) |
TableSerializer(Table tableHandle,
Iterable<Partition> partitions,
HiveConf hiveConf) |
| Constructor and Description |
|---|
DumpMetaData(org.apache.hadoop.fs.Path dumpRoot,
DumpType lvl,
Long eventFrom,
Long eventTo,
org.apache.hadoop.fs.Path cmRoot,
HiveConf hiveConf) |
DumpMetaData(org.apache.hadoop.fs.Path dumpRoot,
HiveConf hiveConf) |
| Constructor and Description |
|---|
Context(String dbName,
String tableName,
String location,
Task<? extends Serializable> precursor,
DumpMetaData dmd,
HiveConf hiveConf,
Hive db,
Context nestedContext,
org.slf4j.Logger log) |
| Modifier and Type | Field and Description |
|---|---|
HiveConf |
GenSparkProcContext.conf |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
OptimizeSparkProcContext.getConf() |
| Modifier and Type | Method and Description |
|---|---|
static org.apache.hadoop.fs.Path |
GenSparkUtils.createMoveTask(Task<? extends Serializable> currTask,
boolean chDir,
FileSinkOperator fsOp,
ParseContext parseCtx,
List<Task<MoveWork>> mvTasks,
HiveConf hconf,
DependencyCollectionTask dependencyTask)
Create and add any dependent move tasks.
|
static SparkEdgeProperty |
GenSparkUtils.getEdgeProperty(HiveConf conf,
ReduceSinkOperator reduceSink,
ReduceWork reduceWork) |
| Constructor and Description |
|---|
GenSparkProcContext(HiveConf conf,
ParseContext parseContext,
List<Task<MoveWork>> moveTask,
List<Task<? extends Serializable>> rootTasks,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs,
Map<String,TableScanOperator> topOps) |
OptimizeSparkProcContext(HiveConf conf,
ParseContext parseContext,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
TezEdgeProperty.getHiveConf() |
| Modifier and Type | Method and Description |
|---|---|
Task<? extends Serializable> |
ImportTableDesc.getCreateTableTask(HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs,
HiveConf conf) |
List<Task<? extends Serializable>> |
ConditionalResolverMergeFiles.getTasks(HiveConf conf,
Object objCtx) |
List<Task<? extends Serializable>> |
ConditionalResolverCommonJoin.getTasks(HiveConf conf,
Object objCtx) |
List<Task<? extends Serializable>> |
ConditionalResolver.getTasks(HiveConf conf,
Object ctx)
All conditional resolvers implement this interface.
|
List<Task<? extends Serializable>> |
ConditionalResolverSkewJoin.getTasks(HiveConf conf,
Object objCtx) |
void |
MapWork.resolveDynamicPartitionStoredAsSubDirsMerge(HiveConf conf,
org.apache.hadoop.fs.Path path,
TableDesc tblDesc,
ArrayList<String> aliases,
PartitionDesc partDesc) |
protected Task<? extends Serializable> |
ConditionalResolverCommonJoin.resolveMapJoinTask(ConditionalResolverCommonJoin.ConditionalResolverCommonJoinCtx ctx,
HiveConf conf) |
protected void |
ConditionalResolverCommonJoin.resolveUnknownSizes(ConditionalResolverCommonJoin.ConditionalResolverCommonJoinCtx ctx,
HiveConf conf) |
void |
TezEdgeProperty.setAutoReduce(HiveConf hiveConf,
boolean isAutoReduce,
int minReducer,
int maxReducer,
long bytesPerReducer) |
Table |
ImportTableDesc.toTable(HiveConf conf) |
Table |
CreateViewDesc.toTable(HiveConf conf) |
Table |
CreateTableDesc.toTable(HiveConf conf) |
void |
CreateTableDesc.validate(HiveConf conf) |
| Constructor and Description |
|---|
StatsWork(Table table,
BasicStatsWork basicStatsWork,
HiveConf hconf) |
StatsWork(Table table,
HiveConf hconf) |
TezEdgeProperty(HiveConf hiveConf,
TezEdgeProperty.EdgeType edgeType,
boolean isAutoReduce,
boolean isSlowStart,
int minReducer,
int maxReducer,
long bytesPerReducer) |
TezEdgeProperty(HiveConf hiveConf,
TezEdgeProperty.EdgeType edgeType,
int buckets) |
| Modifier and Type | Method and Description |
|---|---|
static StatsSource |
StatsSources.getStatsSource(HiveConf conf) |
static void |
StatsSources.initialize(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static CommandProcessor |
CommandProcessorFactory.get(String[] cmd,
HiveConf conf) |
static CommandProcessor |
CommandProcessorFactory.getForHiveCommand(String[] cmd,
HiveConf conf) |
static CommandProcessor |
CommandProcessorFactory.getForHiveCommandInternal(String[] cmd,
HiveConf conf,
boolean testOnly) |
| Constructor and Description |
|---|
CryptoProcessor(HadoopShims.HdfsEncryptionShim encryptionShim,
HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
ReExecDriver.getConf() |
| Constructor and Description |
|---|
PrivilegeSynchonizer(org.apache.curator.framework.recipes.leader.LeaderLatch privilegeSynchonizerLatch,
PolicyProviderContainer policyProviderContainer,
HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
void |
HiveAuthorizer.applyAuthorizationConfigPolicy(HiveConf hiveConf)
Modify the given HiveConf object to configure authorization-related parameters or other parameters related to Hive security
|
void |
HiveV1Authorizer.applyAuthorizationConfigPolicy(HiveConf hiveConf) |
void |
HiveAccessController.applyAuthorizationConfigPolicy(HiveConf hiveConf) |
void |
HiveAuthorizerImpl.applyAuthorizationConfigPolicy(HiveConf hiveConf) |
HiveAuthorizer |
HiveAuthorizerFactory.createHiveAuthorizer(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider hiveAuthenticator,
HiveAuthzSessionContext ctx)
Create a new instance of HiveAuthorizer, initialized with the given objects.
|
static void |
SettableConfigUpdater.setHiveConfWhiteList(HiveConf hiveConf) |
| Constructor and Description |
|---|
HiveV1Authorizer(HiveConf conf) |
HiveV1Authorizer(HiveConf conf,
Hive hive)
Deprecated.
|
| Modifier and Type | Method and Description |
|---|---|
void |
FallbackHiveAuthorizer.applyAuthorizationConfigPolicy(HiveConf hiveConf) |
HiveAuthorizer |
FallbackHiveAuthorizerFactory.createHiveAuthorizer(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider authenticator,
HiveAuthzSessionContext ctx) |
| Modifier and Type | Method and Description |
|---|---|
void |
SQLStdHiveAccessControllerWrapper.applyAuthorizationConfigPolicy(HiveConf hiveConf) |
void |
SQLStdHiveAccessController.applyAuthorizationConfigPolicy(HiveConf hiveConf) |
HiveAuthorizer |
SQLStdHiveAuthorizerFactory.createHiveAuthorizer(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider authenticator,
HiveAuthzSessionContext ctx) |
HiveAuthorizer |
SQLStdConfOnlyAuthorizerFactory.createHiveAuthorizer(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider authenticator,
HiveAuthzSessionContext ctx) |
static RequiredPrivileges |
SQLAuthorizationUtils.getPrivilegesFromFS(org.apache.hadoop.fs.Path filePath,
HiveConf conf,
String userName)
Map permissions for this URI to SQL Standard privileges
|
| Constructor and Description |
|---|
SQLStdHiveAccessController(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider authenticator,
HiveAuthzSessionContext ctx) |
SQLStdHiveAccessControllerWrapper(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider authenticator,
HiveAuthzSessionContext ctx) |
SQLStdHiveAuthorizationValidator(HiveMetastoreClientFactory metastoreClientFactory,
HiveConf conf,
HiveAuthenticationProvider authenticator,
SQLStdHiveAccessControllerWrapper privilegeManager,
HiveAuthzSessionContext ctx) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
SessionState.getConf() |
static HiveConf |
SessionState.getSessionConf() |
| Modifier and Type | Method and Description |
|---|---|
static CreateTableAutomaticGrant |
CreateTableAutomaticGrant.create(HiveConf conf) |
HiveTxnManager |
SessionState.initTxnMgr(HiveConf conf)
Initialize the transaction manager.
|
void |
SessionState.setConf(HiveConf conf) |
static SessionState |
SessionState.start(HiveConf conf)
start a new session and set it to current session.
|
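A minimal sketch of starting a session and reading its configuration back, using only the methods listed above:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.session.SessionState;

public class SessionDemo {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    SessionState ss = SessionState.start(conf);            // start a new session and make it current
    HiveConf sessionConf = SessionState.getSessionConf();  // configuration of the current thread's session
    System.out.println(sessionConf == ss.getConf());
  }
}
```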
| Constructor and Description |
|---|
ClearDanglingScratchDir(boolean dryRun,
boolean verbose,
boolean useConsole,
String rootHDFSDir,
HiveConf conf) |
OperationLog(String name,
File file,
HiveConf hiveConf) |
SessionState(HiveConf conf) |
SessionState(HiveConf conf,
String userName) |
| Modifier and Type | Method and Description |
|---|---|
static Statistics |
StatsUtils.collectStatistics(HiveConf conf,
PrunedPartitionList partList,
ColumnStatsList colStatsCache,
Table table,
TableScanOperator tableScanOperator)
Collect table, partition and column level statistics
|
static Statistics |
StatsUtils.collectStatistics(HiveConf conf,
PrunedPartitionList partList,
Table table,
List<ColumnInfo> schema,
List<String> neededColumns,
ColumnStatsList colStatsCache,
List<String> referencedColumns,
boolean fetchColStats) |
static int |
StatsUtils.estimateRowSizeFromSchema(HiveConf conf,
List<ColumnInfo> schema) |
static int |
StatsUtils.estimateRowSizeFromSchema(HiveConf conf,
List<ColumnInfo> schema,
List<String> neededColumns) |
static long |
StatsUtils.getAvgColLenOf(HiveConf conf,
ObjectInspector oi,
String colType)
Get the raw data size of variable length data types
|
static ColStatistics |
StatsUtils.getColStatisticsFromExpression(HiveConf conf,
Statistics parentStats,
ExprNodeDesc end)
Get column statistics expression nodes
|
static List<ColStatistics> |
StatsUtils.getColStatisticsFromExprMap(HiveConf conf,
Statistics parentStats,
Map<String,ExprNodeDesc> colExprMap,
RowSchema rowSchema)
Get column statistics from parent statistics.
|
static ColStatistics |
StatsUtils.getColStatsForPartCol(ColumnInfo ci,
PartitionIterable partList,
HiveConf conf) |
static List<Long> |
StatsUtils.getFileSizeForPartitions(HiveConf conf,
List<Partition> parts)
Find the bytes on disk occupied by a list of partitions
|
static long |
StatsUtils.getFileSizeForTable(HiveConf conf,
Table table)
Find the bytes on disk occupied by a table
|
static long |
StatsUtils.getNumRows(HiveConf conf,
List<ColumnInfo> schema,
Table table,
PrunedPartitionList partitionList,
AtomicInteger noColsMissingStats)
Returns number of rows if it exists.
|
static long |
StatsUtils.getSizeOfComplexTypes(HiveConf conf,
ObjectInspector oi)
Get the size of complex data types
|
static long |
StatsUtils.getSizeOfPrimitiveTypeArraysFromType(String colType,
int length,
HiveConf conf)
Get the size of arrays of primitive types
|
boolean |
StatsUpdaterThread.runOneWorkerIteration(SessionState ss,
String user,
HiveConf conf,
boolean doWait) |
| Constructor and Description |
|---|
BasicStatsNoJobTask(HiveConf conf,
BasicStatsNoJobWork work) |
BasicStatsTask(HiveConf conf,
BasicStatsWork work) |
ColStatsProcessor(ColumnStatsDesc colStats,
HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
void |
TableFunctionResolver.initialize(HiveConf cfg,
PTFDesc ptfDesc,
PartitionedTableFunctionDef tDef) |
| Modifier and Type | Method and Description |
|---|---|
static String |
ZooKeeperHiveHelper.getQuorumServers(HiveConf conf)
Get the ensemble server addresses from the configuration.
|
| Modifier and Type | Method and Description |
|---|---|
static void |
AvroSerdeUtils.handleAlterTableForAvro(HiveConf conf,
String serializationLib,
Map<String,String> parameters)
Called on specific alter table events; removes the schema URL and schema literal from the given tblproperties. After the change, HMS alone will be responsible for handling the schema
|
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
HiveSchemaTool.getHiveConf() |
HiveConf |
Commands.getHiveConf(boolean call)
This method should only be used in CLI mode.
|
HiveConf |
Commands.getHiveConfHelper(boolean call) |
| Modifier and Type | Method and Description |
|---|---|
String |
Commands.substituteVariables(HiveConf conf,
String line) |
| Constructor and Description |
|---|
HiveSchemaTool(String hiveHome,
HiveConf hiveConf,
String dbType,
String metaDbType) |
| Modifier and Type | Method and Description |
|---|---|
static HiveCompat.CompatLevel |
HiveCompat.getCompatLevel(HiveConf hconf)
Returns the configured compatibility level
|
| Constructor and Description |
|---|
HCatDriver(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static HiveConf |
HCatUtil.getHiveConf(org.apache.hadoop.conf.Configuration conf) |
static HiveConf |
HCatUtil.storePropertiesToHiveConf(Properties properties,
HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static HiveMetaStoreClient |
HCatUtil.getHiveClient(HiveConf hiveConf)
Deprecated.
|
static IMetaStoreClient |
HCatUtil.getHiveMetastoreClient(HiveConf hiveConf)
Get or create a Hive client depending on whether it exists in the cache or not
|
static HiveConf |
HCatUtil.storePropertiesToHiveConf(Properties properties,
HiveConf hiveConf) |
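A minimal sketch of deriving a HiveConf from a plain Hadoop Configuration and layering extra properties on top, using the two HCatUtil methods listed above; the metastore URI is a hypothetical value:

```java
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hive.hcatalog.common.HCatUtil;

public class HCatConfDemo {
  public static void main(String[] args) throws Exception {
    Configuration jobConf = new Configuration();
    HiveConf hiveConf = HCatUtil.getHiveConf(jobConf);   // build a HiveConf from a Hadoop conf

    Properties overrides = new Properties();
    overrides.setProperty("hive.metastore.uris", "thrift://metastore:9083"); // hypothetical URI
    HCatUtil.storePropertiesToHiveConf(overrides, hiveConf);
  }
}
```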
| Modifier and Type | Field and Description |
|---|---|
protected static HiveConf |
MessageFactory.hiveConf |
| Modifier and Type | Method and Description |
|---|---|
protected static LazySimpleSerDe |
DelimitedInputWriter.createSerde(org.apache.hadoop.hive.metastore.api.Table tbl,
HiveConf conf,
char serdeSeparator)
Deprecated.
Creates a LazySimpleSerDe
|
StreamingConnection |
HiveEndPoint.newConnection(boolean createPartIfNotExists,
HiveConf conf)
Deprecated.
As of release 1.3/2.1. Replaced by
HiveEndPoint.newConnection(boolean, HiveConf, String) |
StreamingConnection |
HiveEndPoint.newConnection(boolean createPartIfNotExists,
HiveConf conf,
String agentInfo)
Deprecated.
Acquire a new connection to MetaStore for streaming
|
StreamingConnection |
HiveEndPoint.newConnection(boolean createPartIfNotExists,
HiveConf conf,
org.apache.hadoop.security.UserGroupInformation authenticatedUser)
Deprecated.
As of release 1.3/2.1. Replaced by
HiveEndPoint.newConnection(boolean, HiveConf, UserGroupInformation, String) |
StreamingConnection |
HiveEndPoint.newConnection(boolean createPartIfNotExists,
HiveConf conf,
org.apache.hadoop.security.UserGroupInformation authenticatedUser,
String agentInfo)
Deprecated.
Acquire a new connection to MetaStore for streaming.
|
| Modifier and Type | Method and Description |
|---|---|
static HiveConf |
HiveConfFactory.newInstance(Class<?> clazz,
String metaStoreUri)
Deprecated.
|
static HiveConf |
HiveConfFactory.newInstance(org.apache.hadoop.conf.Configuration configuration,
Class<?> clazz,
String metaStoreUri)
Deprecated.
|
| Modifier and Type | Method and Description |
|---|---|
static void |
HiveConfFactory.overrideSettings(HiveConf conf)
Deprecated.
|
| Constructor and Description |
|---|
UgiMetaStoreClientFactory(String metaStoreUri,
HiveConf conf,
org.apache.hadoop.security.UserGroupInformation authenticatedUser,
String user,
boolean secureMode)
Deprecated.
|
| Modifier and Type | Method and Description |
|---|---|
MutatorClientBuilder |
MutatorClientBuilder.configuration(HiveConf conf)
Deprecated.
|
| Modifier and Type | Method and Description |
|---|---|
Lock.Options |
Lock.Options.configuration(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
MutatorCoordinatorBuilder |
MutatorCoordinatorBuilder.configuration(HiveConf configuration)
Deprecated.
|
| Modifier and Type | Method and Description |
|---|---|
HttpServer.Builder |
HttpServer.Builder.setConf(HiveConf origConf) |
| Constructor and Description |
|---|
PamAuthenticator(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
FilterService.getHiveConf() |
HiveConf |
AbstractService.getHiveConf() |
HiveConf |
Service.getHiveConf()
Get the configuration of this service.
|
| Modifier and Type | Method and Description |
|---|---|
static boolean |
ServiceUtils.canProvideProgressLog(HiveConf hiveConf) |
static void |
ServiceOperations.deploy(Service service,
HiveConf configuration)
Initialize then start a service.
|
void |
BreakableService.init(HiveConf conf) |
void |
FilterService.init(HiveConf config) |
void |
AbstractService.init(HiveConf hiveConf)
Initialize the service.
|
void |
Service.init(HiveConf conf)
Initialize the service.
|
void |
CompositeService.init(HiveConf hiveConf) |
static void |
ServiceOperations.init(Service service,
HiveConf configuration)
Initialize a service.
|
protected void |
AbstractService.setHiveConf(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static PasswdAuthenticationProvider |
AuthenticationProviderFactory.getAuthenticationProvider(AuthenticationProviderFactory.AuthMethods authMethod,
HiveConf conf) |
static void |
HiveAuthFactory.loginFromKeytab(HiveConf hiveConf) |
static org.apache.hadoop.security.UserGroupInformation |
HiveAuthFactory.loginFromSpnegoKeytabAndReturnUGI(HiveConf hiveConf) |
static void |
HiveAuthFactory.verifyProxyAccess(String realUser,
String proxyUser,
String ipAddress,
HiveConf hiveConf) |
| Constructor and Description |
|---|
HiveAuthFactory(HiveConf conf) |
LdapAuthenticationProviderImpl(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static List<String> |
LdapUtils.createCandidatePrincipals(HiveConf conf,
String user)
Creates a list of principals to be used for user authentication.
|
Filter |
UserSearchFilterFactory.getInstance(HiveConf conf)
Returns an instance of the corresponding filter.
|
Filter |
UserFilterFactory.getInstance(HiveConf conf)
Returns an instance of the corresponding filter.
|
Filter |
GroupFilterFactory.getInstance(HiveConf conf)
Returns an instance of the corresponding filter.
|
Filter |
FilterFactory.getInstance(HiveConf conf)
Returns an instance of the corresponding filter.
|
Filter |
CustomQueryFilterFactory.getInstance(HiveConf conf)
Returns an instance of the corresponding filter.
|
Filter |
ChainFilterFactory.getInstance(HiveConf conf)
Returns an instance of the corresponding filter.
|
DirSearch |
LdapSearchFactory.getInstance(HiveConf conf,
String principal,
String password)
Returns an instance of
DirSearch. |
DirSearch |
DirSearchFactory.getInstance(HiveConf conf,
String user,
String password)
Returns an instance of
DirSearch. |
static List<String> |
LdapUtils.parseDnPatterns(HiveConf conf,
HiveConf.ConfVars var)
Reads and parses DN patterns from Hive configuration.
|
| Constructor and Description |
|---|
LdapSearch(HiveConf conf,
DirContext ctx)
Construct an instance of
LdapSearch. |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
CLIService.getSessionConf(SessionHandle sessionHandle) |
| Modifier and Type | Method and Description |
|---|---|
void |
CLIService.init(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
RowSet |
OperationManager.getOperationLogRowSet(OperationHandle opHandle,
FetchOrientation orientation,
long maxRows,
HiveConf hConf) |
void |
OperationManager.init(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
HiveSessionImpl.getHiveConf() |
HiveConf |
HiveSessionBase.getHiveConf() |
HiveConf |
HiveSessionImpl.getSessionConf() |
HiveConf |
HiveSessionHookContextImpl.getSessionConf() |
HiveConf |
HiveSessionHookContext.getSessionConf()
Retrieve session conf
|
HiveConf |
HiveSession.getSessionConf() |
| Modifier and Type | Method and Description |
|---|---|
void |
SessionManager.init(HiveConf hiveConf) |
| Constructor and Description |
|---|
HiveSessionImpl(SessionHandle sessionHandle,
org.apache.hive.service.rpc.thrift.TProtocolVersion protocol,
String username,
String password,
HiveConf serverConf,
String ipAddress,
List<String> forwardedAddresses) |
HiveSessionImplwithUGI(SessionHandle sessionHandle,
org.apache.hive.service.rpc.thrift.TProtocolVersion protocol,
String username,
String password,
HiveConf hiveConf,
String ipAddress,
String delegationToken,
List<String> forwardedAddresses) |
| Modifier and Type | Field and Description |
|---|---|
protected HiveConf |
ThriftCLIService.hiveConf |
| Modifier and Type | Method and Description |
|---|---|
protected org.apache.thrift.transport.TTransport |
RetryingThriftCLIServiceClient.connect(HiveConf conf) |
void |
EmbeddedThriftBinaryCLIService.init(HiveConf hiveConf) |
void |
ThriftCLIService.init(HiveConf hiveConf) |
void |
EmbeddedThriftBinaryCLIService.init(HiveConf hiveConf,
Map<String,String> confOverlay) |
static RetryingThriftCLIServiceClient.CLIServiceClientWrapper |
RetryingThriftCLIServiceClient.newRetryingCLIServiceClient(HiveConf conf) |
| Constructor and Description |
|---|
CLIServiceClientWrapper(ICLIService icliService,
org.apache.thrift.transport.TTransport tTransport,
HiveConf conf) |
RetryingThriftCLIServiceClient(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
void |
HiveServer2.init(HiveConf hiveConf) |
static void |
HiveServer2.scheduleClearDanglingScratchDir(HiveConf hiveConf,
int initialWaitInSec) |
void |
HiveServer2.startPrivilegeSynchonizer(HiveConf hiveConf) |
| Modifier and Type | Method and Description |
|---|---|
static SparkClient |
SparkClientFactory.createClient(Map<String,String> sparkConf,
HiveConf hiveConf,
String sessionId)
Instantiates a new Spark client.
|
static String |
SparkClientUtilities.findKryoRegistratorJar(HiveConf conf) |
| Modifier and Type | Method and Description |
|---|---|
static String |
RpcConfiguration.getValue(HiveConf conf,
String key)
Utility method that, for a given RpcConfiguration key, converts the value to milliseconds if it is a time value, and returns it as a string in either case.
|
| Modifier and Type | Field and Description |
|---|---|
protected HiveConf |
AbstractRecordWriter.conf |
| Modifier and Type | Method and Description |
|---|---|
HiveConf |
StreamingConnection.getHiveConf()
Returns hive configuration object used during connection creation.
|
HiveConf |
HiveStreamingConnection.getHiveConf() |
| Modifier and Type | Method and Description |
|---|---|
HiveStreamingConnection.Builder |
HiveStreamingConnection.Builder.withHiveConf(HiveConf hiveConf)
Specify hive configuration object to use for streaming connection.
|
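A minimal sketch of passing a HiveConf to the streaming connection builder; aside from withHiveConf, the builder and writer calls follow the org.apache.hive.streaming API as commonly documented, and the database, table, and record content are hypothetical:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hive.streaming.HiveStreamingConnection;
import org.apache.hive.streaming.StrictDelimitedInputWriter;

public class StreamingDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    StrictDelimitedInputWriter writer = StrictDelimitedInputWriter.newBuilder()
        .withFieldDelimiter(',')
        .build();
    HiveStreamingConnection connection = HiveStreamingConnection.newBuilder()
        .withDatabase("default")        // hypothetical target database
        .withTable("events")            // hypothetical target table
        .withRecordWriter(writer)
        .withHiveConf(conf)             // configuration used during connection creation
        .connect();
    connection.beginTransaction();
    connection.write("1,hello".getBytes());
    connection.commitTransaction();
    connection.close();
  }
}
```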