public interface HiveSparkClient extends Serializable, Closeable
| Modifier and Type | Method and Description |
|---|---|
| `SparkJobRef` | `execute(DriverContext driverContext, SparkWork sparkWork)` — Generates a Spark RDD graph from the given `sparkWork` and `driverContext`, and submits the RDD graph to the Spark cluster. |
| `int` | `getDefaultParallelism()` — In standalone mode, this can be used to get the total number of cores. |
| `int` | `getExecutorCount()` |
| `org.apache.spark.SparkConf` | `getSparkConf()` |
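To illustrate the call pattern this interface implies, here is a minimal, self-contained sketch. The nested `SparkJobRef`, `DriverContext`, and `SparkWork` classes and the `LocalStubClient` implementation are simplified stand-ins invented for this example, not Hive's real classes; a real client would build and submit an actual RDD graph to a Spark cluster.

```java
import java.io.Closeable;
import java.io.Serializable;

public class Demo {
    // Simplified stand-ins for Hive's SparkJobRef, DriverContext, and SparkWork.
    static class SparkJobRef { final String jobId; SparkJobRef(String id) { jobId = id; } }
    static class DriverContext {}
    static class SparkWork { final String name; SparkWork(String n) { name = n; } }

    // Mirrors the interface shape: Serializable + Closeable, checked exceptions.
    interface HiveSparkClient extends Serializable, Closeable {
        SparkJobRef execute(DriverContext driverContext, SparkWork sparkWork) throws Exception;
        int getDefaultParallelism() throws Exception;
        int getExecutorCount() throws Exception;
    }

    // Hypothetical local stub: returns a job reference without touching a cluster.
    static class LocalStubClient implements HiveSparkClient {
        public SparkJobRef execute(DriverContext dc, SparkWork work) {
            return new SparkJobRef("job-" + work.name);
        }
        public int getDefaultParallelism() { return Runtime.getRuntime().availableProcessors(); }
        public int getExecutorCount() { return 1; }
        public void close() {}
    }

    public static void main(String[] args) throws Exception {
        // Closeable means callers can manage the client with try-with-resources.
        try (HiveSparkClient client = new LocalStubClient()) {
            SparkJobRef ref = client.execute(new DriverContext(), new SparkWork("scan-join"));
            System.out.println(ref.jobId);                 // prints job-scan-join
            System.out.println(client.getExecutorCount()); // prints 1
        }
    }
}
```

Because the interface extends `Closeable`, the try-with-resources block above releases the client's cluster connection automatically, and `Serializable` allows the client handle to be shipped across process boundaries.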
SparkJobRef execute(DriverContext driverContext, SparkWork sparkWork) throws Exception

Generates a Spark RDD graph from the given sparkWork and driverContext, and submits the RDD graph to the Spark cluster.

Parameters: driverContext, sparkWork
Throws: Exception

org.apache.spark.SparkConf getSparkConf()

int getExecutorCount() throws Exception

Throws: Exception

Copyright © 2019 The Apache Software Foundation. All Rights Reserved.