Package com.linkedin.venice.spark
Class SparkConstants
java.lang.Object
com.linkedin.venice.spark.SparkConstants
Field Summary

static final org.apache.spark.sql.types.StructType DEFAULT_SCHEMA
static final org.apache.spark.sql.types.StructType DEFAULT_SCHEMA_WITH_PARTITION
static final String DEFAULT_SPARK_CLUSTER
static final String KEY_COLUMN_NAME
static final String PARTITION_COLUMN_NAME
static final String SPARK_APP_NAME_CONFIG
static final String SPARK_CASE_SENSITIVE_CONFIG
static final String SPARK_CLUSTER_CONFIG
static final String SPARK_DATA_WRITER_CONF_PREFIX
    Configs with this prefix will be set when building the data writer Spark job and passed as job properties.
static final String SPARK_LEADER_CONFIG
static final String SPARK_SESSION_CONF_PREFIX
    Configs with this prefix will be set when building the Spark session.
static final String VALUE_COLUMN_NAME
Constructor Summary

SparkConstants()

Method Summary
Field Details

KEY_COLUMN_NAME
public static final String KEY_COLUMN_NAME
See Also:
    Constant Field Values

VALUE_COLUMN_NAME
public static final String VALUE_COLUMN_NAME
See Also:
    Constant Field Values

PARTITION_COLUMN_NAME
public static final String PARTITION_COLUMN_NAME
See Also:
    Constant Field Values
DEFAULT_SCHEMA
public static final org.apache.spark.sql.types.StructType DEFAULT_SCHEMA

DEFAULT_SCHEMA_WITH_PARTITION
public static final org.apache.spark.sql.types.StructType DEFAULT_SCHEMA_WITH_PARTITION
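This page does not show the column layout of DEFAULT_SCHEMA. The sketch below assumes it is a two-column schema of binary key and value columns (named by KEY_COLUMN_NAME and VALUE_COLUMN_NAME) and illustrates one way an input DataFrame conforming to the constant could be assembled; treat it as an assumption, not a description of the actual schema.

import static com.linkedin.venice.spark.SparkConstants.DEFAULT_SCHEMA;

import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;

public class DefaultSchemaSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("default-schema-sketch")
        .master("local[1]")
        .getOrCreate();

    // Assumption: DEFAULT_SCHEMA is (key: binary, value: binary); rows must match it.
    List<Row> rows = Arrays.asList(
        RowFactory.create("k1".getBytes(), "v1".getBytes()),
        RowFactory.create("k2".getBytes(), "v2".getBytes()));

    Dataset<Row> df = spark.createDataFrame(rows, DEFAULT_SCHEMA);
    df.printSchema();
    df.show();

    spark.stop();
  }
}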
SPARK_SESSION_CONF_PREFIX
public static final String SPARK_SESSION_CONF_PREFIX
Configs with this prefix will be set when building the Spark session. They are applied to all Spark jobs triggered as part of VPJ and can be used to configure arbitrary cluster properties, such as the cluster address.
See Also:
    Constant Field Values
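As a rough sketch of how this prefix is meant to be used, a push-job properties set could carry Spark session settings under it. The property names after the prefix (spark.master, spark.executor.memory) are ordinary Spark configs chosen here for illustration, and how the properties are handed to VPJ is outside the scope of this page.

import static com.linkedin.venice.spark.SparkConstants.SPARK_SESSION_CONF_PREFIX;

import java.util.Properties;

public class SessionConfSketch {
  public static void main(String[] args) {
    Properties vpjProps = new Properties();
    // Keys under SPARK_SESSION_CONF_PREFIX are applied when VPJ builds its SparkSession,
    // so they affect every Spark job the push job triggers.
    vpjProps.setProperty(SPARK_SESSION_CONF_PREFIX + "spark.master", "spark://cluster-host:7077");
    vpjProps.setProperty(SPARK_SESSION_CONF_PREFIX + "spark.executor.memory", "4g");
    // vpjProps would then be supplied to the push job as its job properties.
  }
}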
SPARK_APP_NAME_CONFIG
public static final String SPARK_APP_NAME_CONFIG
See Also:
    Constant Field Values

SPARK_CASE_SENSITIVE_CONFIG
public static final String SPARK_CASE_SENSITIVE_CONFIG
See Also:
    Constant Field Values

SPARK_CLUSTER_CONFIG
public static final String SPARK_CLUSTER_CONFIG
See Also:
    Constant Field Values

SPARK_LEADER_CONFIG
public static final String SPARK_LEADER_CONFIG
See Also:
    Constant Field Values

DEFAULT_SPARK_CLUSTER
public static final String DEFAULT_SPARK_CLUSTER
See Also:
    Constant Field Values
SPARK_DATA_WRITER_CONF_PREFIX
public static final String SPARK_DATA_WRITER_CONF_PREFIX
Configs with this prefix will be set when building the data writer Spark job and passed as job properties. They are applied only to the DataWriter Spark jobs. This is useful when custom input formats need additional configs in order to read the data.
See Also:
    Constant Field Values
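In the same spirit, a config needed by a custom input format could be scoped to the DataWriter jobs via this prefix. The key after the prefix in the sketch below is purely hypothetical; only the prefix constant comes from this class.

import static com.linkedin.venice.spark.SparkConstants.SPARK_DATA_WRITER_CONF_PREFIX;

import java.util.Properties;

public class DataWriterConfSketch {
  public static void main(String[] args) {
    Properties vpjProps = new Properties();
    // Hypothetical key read by a custom input format; properties under this prefix
    // are forwarded only to the DataWriter Spark jobs, not to other VPJ-triggered jobs.
    vpjProps.setProperty(SPARK_DATA_WRITER_CONF_PREFIX + "custom.input.format.recursive", "true");
  }
}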
Constructor Details

SparkConstants
public SparkConstants()