public class SessionHiveMetaStoreClient extends HiveMetaStoreClient implements IMetaStoreClient
Note on thread safety: various callers of SessionState.setCurrentSessionState(SessionState) pass the SessionState to forked threads. Currently it looks like those threads only read metadata, but this is fragile. Also, the maps in SessionState where temp table metadata is stored are concurrent, so any put/get crosses a memory barrier (as does using most of java.util.concurrent.*), and readers of the objects in these maps should therefore see the most recent view of each object. But again, this could be fragile.
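The note above concerns propagating SessionState to forked threads. Purely as an illustrative sketch (ForkedMetadataReader and metadataWork are hypothetical names, not part of Hive), a worker thread would hand the parent's SessionState to itself before touching metadata:

```java
// Sketch only: propagating the submitting thread's SessionState to a forked worker,
// as the thread-safety note above describes. Assumes hive-exec is on the classpath.
import org.apache.hadoop.hive.ql.session.SessionState;

public class ForkedMetadataReader {
  public static Thread fork(Runnable metadataWork) {
    final SessionState parentState = SessionState.get();   // state of the submitting thread
    Thread worker = new Thread(() -> {
      SessionState.setCurrentSessionState(parentState);    // make temp-table metadata visible here
      metadataWork.run();                                   // per the note, workers should only read metadata
    });
    worker.start();
    return worker;
  }
}
```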
| Modifier and Type | Class and Description |
|---|---|
| static class | SessionHiveMetaStoreClient.TempTable - This stores partition information for a temp table. |
Nested classes/interfaces inherited from class org.apache.hadoop.hive.metastore.HiveMetaStoreClient:
HiveMetaStoreClient.MetastoreMapIterable<K,V>

Nested classes/interfaces inherited from interface org.apache.hadoop.hive.metastore.IMetaStoreClient:
IMetaStoreClient.IncompatibleMetastoreException, IMetaStoreClient.NotificationFilter

Fields inherited from class org.apache.hadoop.hive.metastore.HiveMetaStoreClient:
conf, TEST_VERSION, VERSION

| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.hive.metastore.api.Partition | add_partition(org.apache.hadoop.hive.metastore.api.Partition partition) - Loading Dynamic Partitions calls this. |
| void | alter_table_with_environmentContext(String dbname, String tbl_name, org.apache.hadoop.hive.metastore.api.Table new_tbl, org.apache.hadoop.hive.metastore.api.EnvironmentContext envContext) - Alter a table. |
| void | alter_table(String dbname, String tbl_name, org.apache.hadoop.hive.metastore.api.Table new_tbl) - Alter a table. |
| void | alter_table(String dbname, String tbl_name, org.apache.hadoop.hive.metastore.api.Table new_tbl, boolean cascade) - Deprecated. |
| protected void | create_table_with_environment_context(org.apache.hadoop.hive.metastore.api.Table tbl, org.apache.hadoop.hive.metastore.api.EnvironmentContext envContext) |
| boolean | deleteTableColumnStatistics(String dbName, String tableName, String colName) - Delete table level column statistics given dbName, tableName and colName, or all columns in a table. |
| protected void | drop_table_with_environment_context(String catName, String dbname, String name, boolean deleteData, org.apache.hadoop.hive.metastore.api.EnvironmentContext envContext) |
| org.apache.hadoop.hive.metastore.api.PrincipalPrivilegeSet | get_privilege_set(org.apache.hadoop.hive.metastore.api.HiveObjectRef hiveObject, String userName, List<String> groupNames) - Return the privileges that the user and groups have directly and indirectly through roles on the given hiveObject. |
| List<String> | getAllTables(String dbName) - Get the names of all tables in the specified database. |
| List<org.apache.hadoop.hive.metastore.api.Partition> | getPartitionsByNames(String db_name, String tblName, List<String> partNames) - partNames are like "p=1/q=2" type strings. |
| List<org.apache.hadoop.hive.metastore.api.FieldSchema> | getSchema(String dbName, String tableName) - Get schema for a table, including the partition columns. |
| org.apache.hadoop.hive.metastore.api.Table | getTable(String dbname, String name) - Get a table object in the default catalog. |
| org.apache.hadoop.hive.metastore.api.Table | getTable(String catName, String dbName, String tableName) - Get a table object. |
| List<org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj> | getTableColumnStatistics(String dbName, String tableName, List<String> colNames) - Get the column statistics for a set of columns in a table. |
| List<org.apache.hadoop.hive.metastore.api.TableMeta> | getTableMeta(String dbPatterns, String tablePatterns, List<String> tableTypes) - Fetches just table name and comments. |
| List<org.apache.hadoop.hive.metastore.api.Table> | getTableObjectsByName(String dbName, List<String> tableNames) - Get tables as objects (rather than just fetching their names). |
| List<String> | getTables(String dbName, String tablePattern) - Get the names of all tables in the specified database that satisfy the supplied table name pattern. |
| static Map<String,Table> | getTempTablesForDatabase(String dbName, String tblName) |
| List<String> | listPartitionNames(String dbName, String tableName, short maxParts) - Returns a list of partition names. |
| List<org.apache.hadoop.hive.metastore.api.Partition> | listPartitionsWithAuthInfo(String dbName, String tableName, List<String> partialPvals, short maxParts, String userName, List<String> groupNames) - List partitions along with privilege information for a user or groups. |
| boolean | setPartitionColumnStatistics(org.apache.hadoop.hive.metastore.api.SetPartitionsStatsRequest request) - Set table or partition column statistics. |
| boolean | tableExists(String databaseName, String tableName) - Check whether a table exists in the default catalog. |
| void | truncateTable(String dbName, String tableName, List<String> partNames) - Truncate the table/partitions in the DEFAULT database. |
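The method details below include short usage sketches. They assume an IMetaStoreClient handle named msc and placeholder database/table names (sales_db, orders); inside a Hive session, one common (but not the only) way to obtain the session-aware client is through org.apache.hadoop.hive.ql.metadata.Hive, roughly as follows:

```java
// Sketch: obtaining the session-scoped metastore client inside a Hive session.
// MscBootstrap is a hypothetical helper; assumes hive-exec on the classpath.
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.session.SessionState;

public class MscBootstrap {
  public static IMetaStoreClient client() throws Exception {
    HiveConf conf = new HiveConf();
    SessionState.start(new SessionState(conf));   // a SessionState backs temp-table handling
    return Hive.get(conf).getMSC();               // typically a SessionHiveMetaStoreClient underneath
  }
}
```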
protected void create_table_with_environment_context(org.apache.hadoop.hive.metastore.api.Table tbl, org.apache.hadoop.hive.metastore.api.EnvironmentContext envContext) throws org.apache.hadoop.hive.metastore.api.AlreadyExistsException, org.apache.hadoop.hive.metastore.api.InvalidObjectException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.hadoop.hive.metastore.api.NoSuchObjectException, org.apache.thrift.TException
Overrides:
create_table_with_environment_context in class HiveMetaStoreClient
Throws:
org.apache.hadoop.hive.metastore.api.AlreadyExistsException
org.apache.hadoop.hive.metastore.api.InvalidObjectException
org.apache.hadoop.hive.metastore.api.MetaException
org.apache.hadoop.hive.metastore.api.NoSuchObjectException
org.apache.thrift.TException
protected void drop_table_with_environment_context(String catName, String dbname, String name, boolean deleteData, org.apache.hadoop.hive.metastore.api.EnvironmentContext envContext) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.NoSuchObjectException, UnsupportedOperationException

Overrides:
drop_table_with_environment_context in class HiveMetaStoreClient
Throws:
org.apache.hadoop.hive.metastore.api.MetaException
org.apache.thrift.TException
org.apache.hadoop.hive.metastore.api.NoSuchObjectException
UnsupportedOperationException
public void truncateTable(String dbName, String tableName, List<String> partNames) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
truncateTable in interface IMetaStoreClient
Overrides:
truncateTable in class HiveMetaStoreClient
Parameters:
dbName - The db to which the table to be truncated belongs
tableName - The table to truncate
partNames - List of partitions to truncate. NULL will truncate the whole table/all partitions
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - Failure in the RDBMS or storage
org.apache.thrift.TException - Thrift transport exception
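A minimal sketch of truncating first the whole table and then two specific partitions, assuming the msc handle from the bootstrap sketch above and placeholder names:

```java
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

public class TruncateExample {
  static void truncate(IMetaStoreClient msc) throws Exception {
    msc.truncateTable("sales_db", "orders", null);   // null partNames = whole table / all partitions
    msc.truncateTable("sales_db", "orders",
        Arrays.asList("dt=2019-01-01", "dt=2019-01-02"));   // only these partitions
  }
}
```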
public org.apache.hadoop.hive.metastore.api.Table getTable(String dbname, String name) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.NoSuchObjectException

Description copied from interface: IMetaStoreClient
Specified by:
getTable in interface IMetaStoreClient
Overrides:
getTable in class HiveMetaStoreClient
Parameters:
dbname - The database the table is located in.
name - Name of the table to fetch.
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - Could not fetch the table
org.apache.thrift.TException - A thrift communication error occurred
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - In case the table wasn't found.
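A minimal lookup sketch (placeholder names, msc as above); since this client layers session temp tables over the remote metastore, a temp table registered in the session is expected to be resolved here as well:

```java
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public class GetTableExample {
  static void show(IMetaStoreClient msc) throws Exception {
    // Default-catalog lookup by database and table name.
    Table t = msc.getTable("sales_db", "orders");
    System.out.println(t.getTableName() + " @ " + t.getSd().getLocation());
  }
}
```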
public org.apache.hadoop.hive.metastore.api.Table getTable(String catName, String dbName, String tableName) throws org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
getTable in interface IMetaStoreClient
Overrides:
getTable in class HiveMetaStoreClient
Parameters:
catName - catalog the table is in.
dbName - database the table is in.
tableName - table name.
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - Something went wrong, usually in the RDBMS.
org.apache.thrift.TException - general thrift error.
public List<String> getAllTables(String dbName) throws org.apache.hadoop.hive.metastore.api.MetaException

Description copied from interface: IMetaStoreClient
Specified by:
getAllTables in interface IMetaStoreClient
Overrides:
getAllTables in class HiveMetaStoreClient
Parameters:
dbName - database name
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - something went wrong with the fetch from the RDBMS
public List<String> getTables(String dbName, String tablePattern) throws org.apache.hadoop.hive.metastore.api.MetaException

Description copied from interface: IMetaStoreClient
Specified by:
getTables in interface IMetaStoreClient
Overrides:
getTables in class HiveMetaStoreClient
Parameters:
dbName - database name.
tablePattern - pattern for table name to conform to
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - error fetching information from the RDBMS
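A sketch contrasting getAllTables with the pattern-based getTables (placeholder names and pattern; msc as above):

```java
import java.util.List;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

public class ListTablesExample {
  static void list(IMetaStoreClient msc) throws Exception {
    List<String> all = msc.getAllTables("sales_db");          // every table name in the database
    List<String> facts = msc.getTables("sales_db", "fact_*"); // only names matching the pattern
    System.out.println(all.size() + " tables total, " + facts.size() + " matching fact_*");
  }
}
```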
public List<org.apache.hadoop.hive.metastore.api.TableMeta> getTableMeta(String dbPatterns, String tablePatterns, List<String> tableTypes) throws org.apache.hadoop.hive.metastore.api.MetaException

Description copied from interface: IMetaStoreClient
Specified by:
getTableMeta in interface IMetaStoreClient
Overrides:
getTableMeta in class HiveMetaStoreClient
Parameters:
dbPatterns - database pattern to match, or null for all databases
tablePatterns - table pattern to match.
tableTypes - list of table types to fetch.
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - something went wrong with the fetch from the RDBMS
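A lightweight-listing sketch; the patterns and table types here are illustrative placeholders:

```java
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.TableMeta;

public class TableMetaExample {
  static void print(IMetaStoreClient msc) throws Exception {
    // Only names, types and comments come back, not full Table objects.
    for (TableMeta m : msc.getTableMeta("sales_*", "*",
        Arrays.asList("MANAGED_TABLE", "EXTERNAL_TABLE"))) {
      System.out.println(m.getDbName() + "." + m.getTableName() + " [" + m.getTableType() + "]");
    }
  }
}
```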
public List<org.apache.hadoop.hive.metastore.api.Table> getTableObjectsByName(String dbName, List<String> tableNames) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.hadoop.hive.metastore.api.InvalidOperationException, org.apache.hadoop.hive.metastore.api.UnknownDBException, org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
getTableObjectsByName in interface IMetaStoreClient
Overrides:
getTableObjectsByName in class HiveMetaStoreClient
Parameters:
dbName - The database the tables are located in.
tableNames - The names of the tables to fetch
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - Any other errors
org.apache.hadoop.hive.metastore.api.InvalidOperationException - The input to this operation is invalid (e.g., the list of table names is null)
org.apache.hadoop.hive.metastore.api.UnknownDBException - The requested database could not be fetched.
org.apache.thrift.TException - A thrift communication error occurred
public boolean tableExists(String databaseName, String tableName) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.UnknownDBException

Description copied from interface: IMetaStoreClient
Specified by:
tableExists in interface IMetaStoreClient
Overrides:
tableExists in class HiveMetaStoreClient
Parameters:
databaseName - database name
tableName - table name
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - error fetching from the RDBMS
org.apache.thrift.TException - thrift transport error
org.apache.hadoop.hive.metastore.api.UnknownDBException - the indicated database does not exist.
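A simple existence check (placeholder names; given this client's temp-table layering, a session temp table with the same name is presumably also reported as existing):

```java
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

public class TableExistsExample {
  static void check(IMetaStoreClient msc) throws Exception {
    if (!msc.tableExists("sales_db", "orders_staging")) {
      System.out.println("orders_staging is missing; run the bootstrap DDL first");
    }
  }
}
```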
public List<org.apache.hadoop.hive.metastore.api.FieldSchema> getSchema(String dbName, String tableName) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.UnknownTableException, org.apache.hadoop.hive.metastore.api.UnknownDBException

Description copied from interface: IMetaStoreClient
Specified by:
getSchema in interface IMetaStoreClient
Overrides:
getSchema in class HiveMetaStoreClient
Parameters:
dbName - database name
tableName - table name
Throws:
org.apache.hadoop.hive.metastore.api.MetaException - error accessing the RDBMS
org.apache.hadoop.hive.metastore.api.UnknownTableException - no such table
org.apache.hadoop.hive.metastore.api.UnknownDBException - no such database
org.apache.thrift.TException - thrift transport error
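A describe-style sketch (placeholder names); per the summary above, the returned list includes the partition columns:

```java
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FieldSchema;

public class GetSchemaExample {
  static void describe(IMetaStoreClient msc) throws Exception {
    for (FieldSchema col : msc.getSchema("sales_db", "orders")) {
      System.out.println(col.getName() + "\t" + col.getType());   // data columns followed by partition columns
    }
  }
}
```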
@Deprecated
public void alter_table(String dbname, String tbl_name, org.apache.hadoop.hive.metastore.api.Table new_tbl, boolean cascade) throws org.apache.hadoop.hive.metastore.api.InvalidOperationException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException

Specified by:
alter_table in interface IMetaStoreClient
Overrides:
alter_table in class HiveMetaStoreClient
Throws:
org.apache.hadoop.hive.metastore.api.InvalidOperationException
org.apache.hadoop.hive.metastore.api.MetaException
org.apache.thrift.TException
public void alter_table(String dbname, String tbl_name, org.apache.hadoop.hive.metastore.api.Table new_tbl) throws org.apache.hadoop.hive.metastore.api.InvalidOperationException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
alter_table in interface IMetaStoreClient
Overrides:
alter_table in class HiveMetaStoreClient
Parameters:
dbname - database name
tbl_name - table name
new_tbl - new table object; should be a complete representation of the table, not just the things you want to change.
Throws:
org.apache.hadoop.hive.metastore.api.InvalidOperationException - something is wrong with the new table object or an operation was attempted that is not allowed (such as changing partition columns).
org.apache.hadoop.hive.metastore.api.MetaException - something went wrong, usually in the RDBMS
org.apache.thrift.TException - general thrift exception
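Since the new table object must be a complete representation, a common pattern (sketched here with placeholder names and a hypothetical comment value) is fetch, modify, send back:

```java
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public class AlterTableExample {
  static void addComment(IMetaStoreClient msc) throws Exception {
    // Fetch the full table object, tweak one property, and submit the whole thing.
    Table t = msc.getTable("sales_db", "orders");
    t.putToParameters("comment", "orders fact table, partitioned by dt");
    msc.alter_table("sales_db", "orders", t);
  }
}
```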
public void alter_table_with_environmentContext(String dbname, String tbl_name, org.apache.hadoop.hive.metastore.api.Table new_tbl, org.apache.hadoop.hive.metastore.api.EnvironmentContext envContext) throws org.apache.hadoop.hive.metastore.api.InvalidOperationException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
alter_table_with_environmentContext in interface IMetaStoreClient
Overrides:
alter_table_with_environmentContext in class HiveMetaStoreClient
Parameters:
dbname - database name
tbl_name - table name
new_tbl - new table object; should be a complete representation of the table, not just the things you want to change.
envContext - options for the alter.
Throws:
org.apache.hadoop.hive.metastore.api.InvalidOperationException - something is wrong with the new table object or an operation was attempted that is not allowed (such as changing partition columns).
org.apache.hadoop.hive.metastore.api.MetaException - something went wrong, usually in the RDBMS
org.apache.thrift.TException - general thrift exception
public org.apache.hadoop.hive.metastore.api.PrincipalPrivilegeSet get_privilege_set(org.apache.hadoop.hive.metastore.api.HiveObjectRef hiveObject, String userName, List<String> groupNames) throws org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
get_privilege_set in interface IMetaStoreClient
Overrides:
get_privilege_set in class HiveMetaStoreClient
Throws:
org.apache.hadoop.hive.metastore.api.MetaException
org.apache.thrift.TException
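A privilege-lookup sketch; the user, group, and object names are placeholders, and the HiveObjectRef construction assumes the usual Thrift all-args constructor (objectType, dbName, objectName, partValues, columnName):

```java
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.HiveObjectRef;
import org.apache.hadoop.hive.metastore.api.HiveObjectType;
import org.apache.hadoop.hive.metastore.api.PrincipalPrivilegeSet;

public class PrivilegeSetExample {
  static void show(IMetaStoreClient msc) throws Exception {
    // Reference the table sales_db.orders (no partition values, no column).
    HiveObjectRef ref = new HiveObjectRef(HiveObjectType.TABLE, "sales_db", "orders", null, null);
    PrincipalPrivilegeSet privs = msc.get_privilege_set(ref, "alice", Arrays.asList("analysts"));
    System.out.println(privs == null ? "no explicit grants" : privs.toString());
  }
}
```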
public boolean setPartitionColumnStatistics(org.apache.hadoop.hive.metastore.api.SetPartitionsStatsRequest request) throws org.apache.hadoop.hive.metastore.api.NoSuchObjectException, org.apache.hadoop.hive.metastore.api.InvalidObjectException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.InvalidInputException

Specified by:
setPartitionColumnStatistics in interface IMetaStoreClient
Overrides:
setPartitionColumnStatistics in class HiveMetaStoreClient
Parameters:
request - request object, contains all the table, partition, and statistics information
Throws:
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - the table, partition, or columns specified do not exist.
org.apache.hadoop.hive.metastore.api.InvalidObjectException - the stats object is not valid.
org.apache.hadoop.hive.metastore.api.MetaException - error accessing the RDBMS.
org.apache.thrift.TException - thrift transport error.
org.apache.hadoop.hive.metastore.api.InvalidInputException - the input is invalid (e.g., a null table name)
public List<org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj> getTableColumnStatistics(String dbName, String tableName, List<String> colNames) throws org.apache.hadoop.hive.metastore.api.NoSuchObjectException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.InvalidInputException, org.apache.hadoop.hive.metastore.api.InvalidObjectException

Description copied from interface: IMetaStoreClient
Get the column statistics for a set of columns in a table; see also IMetaStoreClient.getPartitionColumnStatistics(String, String, List, List).
Specified by:
getTableColumnStatistics in interface IMetaStoreClient
Overrides:
getTableColumnStatistics in class HiveMetaStoreClient
Parameters:
dbName - database name
tableName - table name
colNames - list of column names
Throws:
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - no such table
org.apache.hadoop.hive.metastore.api.MetaException - error accessing the RDBMS
org.apache.thrift.TException - thrift transport error
org.apache.hadoop.hive.metastore.api.InvalidInputException
org.apache.hadoop.hive.metastore.api.InvalidObjectException
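A read-only statistics sketch (placeholder table and column names; msc as above):

```java
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;

public class ColumnStatsExample {
  static void show(IMetaStoreClient msc) throws Exception {
    for (ColumnStatisticsObj stat : msc.getTableColumnStatistics(
        "sales_db", "orders", Arrays.asList("order_id", "amount"))) {
      System.out.println(stat.getColName() + " -> " + stat.getStatsData());   // union of per-type stats
    }
  }
}
```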
public boolean deleteTableColumnStatistics(String dbName, String tableName, String colName) throws org.apache.hadoop.hive.metastore.api.NoSuchObjectException, org.apache.hadoop.hive.metastore.api.InvalidObjectException, org.apache.hadoop.hive.metastore.api.MetaException, org.apache.thrift.TException, org.apache.hadoop.hive.metastore.api.InvalidInputException

Specified by:
deleteTableColumnStatistics in interface IMetaStoreClient
Overrides:
deleteTableColumnStatistics in class HiveMetaStoreClient
Parameters:
dbName - database name
tableName - table name
colName - column name, or null to drop stats for all columns
Throws:
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - No such table
org.apache.hadoop.hive.metastore.api.InvalidObjectException - error dropping the stats
org.apache.hadoop.hive.metastore.api.MetaException - error accessing the RDBMS
org.apache.thrift.TException - thrift transport error
org.apache.hadoop.hive.metastore.api.InvalidInputException - bad input, like a null table name.
public static Map<String,Table> getTempTablesForDatabase(String dbName, String tblName)

Parameters:
dbName - actual database name
tblName - actual table name or search pattern (for error message)
public org.apache.hadoop.hive.metastore.api.Partition add_partition(org.apache.hadoop.hive.metastore.api.Partition partition) throws org.apache.thrift.TException

Specified by:
add_partition in interface IMetaStoreClient
Overrides:
add_partition in class HiveMetaStoreClient
Parameters:
partition - The partition to add
Throws:
org.apache.hadoop.hive.metastore.api.InvalidObjectException - Could not find table to add to
org.apache.hadoop.hive.metastore.api.AlreadyExistsException - Partition already exists
org.apache.hadoop.hive.metastore.api.MetaException - Could not add partition
org.apache.thrift.TException - Thrift exception
See Also:
ThriftHiveMetastore.Iface.add_partition(org.apache.hadoop.hive.metastore.api.Partition)
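A sketch of building a Partition by cloning the table's storage descriptor; names, the partition value, and the location suffix are placeholders, and real callers may set additional fields:

```java
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;

public class AddPartitionExample {
  static void add(IMetaStoreClient msc) throws Exception {
    Table t = msc.getTable("sales_db", "orders");
    // Clone the table's storage descriptor and point it at the new partition directory.
    StorageDescriptor sd = t.getSd().deepCopy();
    sd.setLocation(t.getSd().getLocation() + "/dt=2019-01-03");
    Partition p = new Partition();
    p.setDbName("sales_db");
    p.setTableName("orders");
    p.setValues(Arrays.asList("2019-01-03"));   // one value per partition column, in order
    p.setSd(sd);
    msc.add_partition(p);
  }
}
```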
public List<org.apache.hadoop.hive.metastore.api.Partition> listPartitionsWithAuthInfo(String dbName, String tableName, List<String> partialPvals, short maxParts, String userName, List<String> groupNames) throws org.apache.thrift.TException

Description copied from interface: IMetaStoreClient
Specified by:
listPartitionsWithAuthInfo in interface IMetaStoreClient
Overrides:
listPartitionsWithAuthInfo in class HiveMetaStoreClient
Parameters:
partialPvals - partition values, can be partial. This really means that missing values are represented by an empty string.
maxParts - maximum number of partitions to fetch, or -1 for all
dbName - database name
tableName - table name
userName - user to fetch privilege information for
groupNames - group to fetch privilege information for
Throws:
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - no partitions matching the criteria were found
org.apache.hadoop.hive.metastore.api.MetaException - error accessing the RDBMS
org.apache.thrift.TException - thrift transport error
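A sketch listing partitions plus privilege information; the table, user, group, and partition value are placeholders, and the single value supplied corresponds to the first (here the only) partition column:

```java
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;

public class AuthPartitionsExample {
  static void list(IMetaStoreClient msc) throws Exception {
    for (Partition p : msc.listPartitionsWithAuthInfo(
        "sales_db", "orders", Arrays.asList("2019-01-01"),   // values in partition-column order
        (short) -1, "alice", Arrays.asList("analysts"))) {    // -1 = no limit
      System.out.println(p.getValues() + " privileges=" + p.getPrivileges());
    }
  }
}
```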
public List<String> listPartitionNames(String dbName, String tableName, short maxParts) throws org.apache.thrift.TException

Specified by:
listPartitionNames in interface IMetaStoreClient
Overrides:
listPartitionNames in class HiveMetaStoreClient
Parameters:
dbName - database name.
tableName - table name.
maxParts - maximum number of parts to fetch, or -1 to fetch them all.
Throws:
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - No such table.
org.apache.hadoop.hive.metastore.api.MetaException - Error accessing the RDBMS.
org.apache.thrift.TException - thrift transport error
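A sketch that lists partition names and feeds them straight into getPartitionsByNames (documented below); names and the limit are placeholders:

```java
import java.util.List;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;

public class PartitionNamesExample {
  static void walk(IMetaStoreClient msc) throws Exception {
    // Names come back as "p=1/q=2" style strings.
    List<String> names = msc.listPartitionNames("sales_db", "orders", (short) 100);
    List<Partition> parts = msc.getPartitionsByNames("sales_db", "orders", names);
    System.out.println(parts.size() + " of " + names.size() + " partitions materialized");
  }
}
```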
public List<org.apache.hadoop.hive.metastore.api.Partition> getPartitionsByNames(String db_name, String tblName, List<String> partNames) throws org.apache.thrift.TException

Specified by:
getPartitionsByNames in interface IMetaStoreClient
Overrides:
getPartitionsByNames in class HiveMetaStoreClient
Parameters:
db_name - database name
tblName - table name
partNames - list of partition names
Throws:
org.apache.hadoop.hive.metastore.api.NoSuchObjectException - No such partitions
org.apache.hadoop.hive.metastore.api.MetaException - error accessing the RDBMS.
org.apache.thrift.TException - thrift transport error

Copyright © 2019 The Apache Software Foundation. All Rights Reserved.