Class VeniceHdfsInputTable
java.lang.Object
com.linkedin.venice.spark.input.hdfs.VeniceHdfsInputTable
All Implemented Interfaces:
org.apache.spark.sql.connector.catalog.SupportsRead, org.apache.spark.sql.connector.catalog.Table
public class VeniceHdfsInputTable
extends Object
implements org.apache.spark.sql.connector.catalog.SupportsRead
A table format that is used by Spark to read Avro files from HDFS for use in VenicePushJob.
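VeniceHdfsInputTable plugs into Spark's DataSource V2 read path through the SupportsRead and Table contracts listed above. The sketch below is not taken from Venice source: the helper method and the contents of the option map are illustrative assumptions, and construction of the table itself is omitted because the constructor signature is not documented on this page. It shows how Spark-side code typically plans a scan from such a table:

  import java.util.Map;
  import org.apache.spark.sql.connector.catalog.SupportsRead;
  import org.apache.spark.sql.connector.catalog.Table;
  import org.apache.spark.sql.connector.read.Scan;
  import org.apache.spark.sql.connector.read.ScanBuilder;
  import org.apache.spark.sql.util.CaseInsensitiveStringMap;

  public final class VeniceHdfsScanPlanningSketch {
    // Hypothetical helper: "table" is assumed to be an already-constructed VeniceHdfsInputTable.
    static Scan planScan(Table table, Map<String, String> jobOptions) {
      // SupportsRead is the contract this class implements for reads.
      SupportsRead readable = (SupportsRead) table;
      CaseInsensitiveStringMap options = new CaseInsensitiveStringMap(jobOptions);
      // newScanBuilder(...) is documented in the Method Details section below.
      ScanBuilder builder = readable.newScanBuilder(options);
      // Spark drives the returned Scan to read the Avro input for VenicePushJob.
      return builder.build();
    }
  }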
Constructor Summary

VeniceHdfsInputTable

Method Summary

Modifier and Type / Method
Set<org.apache.spark.sql.connector.catalog.TableCapability>  capabilities()
String  name()
org.apache.spark.sql.connector.read.ScanBuilder  newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
org.apache.spark.sql.types.StructType  schema()
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.sql.connector.catalog.Table
partitioning, properties
Constructor Details

VeniceHdfsInputTable
Method Details

newScanBuilder
public org.apache.spark.sql.connector.read.ScanBuilder newScanBuilder(org.apache.spark.sql.util.CaseInsensitiveStringMap options)
Specified by:
newScanBuilder in interface org.apache.spark.sql.connector.catalog.SupportsRead
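A minimal calling sketch, assuming table is an already-constructed VeniceHdfsInputTable; the option key and value are hypothetical placeholders, not documented VenicePushJob configuration names:

  java.util.Map<String, String> raw = new java.util.HashMap<>();
  raw.put("path", "hdfs://namenode/venice/push-input");  // hypothetical key and value
  org.apache.spark.sql.util.CaseInsensitiveStringMap options =
      new org.apache.spark.sql.util.CaseInsensitiveStringMap(raw);
  org.apache.spark.sql.connector.read.ScanBuilder scanBuilder = table.newScanBuilder(options);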
name
public String name()
Specified by:
name in interface org.apache.spark.sql.connector.catalog.Table
schema
public org.apache.spark.sql.types.StructType schema()
Specified by:
schema in interface org.apache.spark.sql.connector.catalog.Table
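For illustration only (assuming an existing table instance), the returned StructType can be inspected like any Spark schema:

  org.apache.spark.sql.types.StructType schema = table.schema();
  // Prints the column layout Spark will expose for the HDFS Avro input.
  System.out.println(schema.treeString());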
capabilities
public Set<org.apache.spark.sql.connector.catalog.TableCapability> capabilities()
Specified by:
capabilities in interface org.apache.spark.sql.connector.catalog.Table
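A hedged sketch of checking the returned set: Spark requires a table that is read in a batch job to advertise TableCapability.BATCH_READ, although the exact set this class reports is not stated on this page.

  java.util.Set<org.apache.spark.sql.connector.catalog.TableCapability> caps = table.capabilities();
  if (caps.contains(org.apache.spark.sql.connector.catalog.TableCapability.BATCH_READ)) {
    // Spark can plan a batch read through newScanBuilder(...).
  }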