Class StartOfPush
- java.lang.Object
-
- org.apache.avro.specific.SpecificRecordBase
-
- com.linkedin.venice.kafka.protocol.StartOfPush
-
- All Implemented Interfaces:
java.io.Externalizable, java.io.Serializable, java.lang.Comparable<org.apache.avro.specific.SpecificRecord>, org.apache.avro.generic.GenericContainer, org.apache.avro.generic.GenericRecord, org.apache.avro.generic.IndexedRecord, org.apache.avro.specific.SpecificRecord
public class StartOfPush extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord
This ControlMessage is sent once per partition, at the beginning of a bulk load, before any of the data producers come online. It carries no data beyond what is common to all ControlMessageType values.
- See Also:
- Serialized Form
-
-
Field Summary
Fields (Modifier and Type, Field, Description):
- boolean chunked: Whether the messages inside the current push are encoded with chunking support.
- java.nio.ByteBuffer compressionDictionary: The raw bytes of the dictionary used to compress/decompress records.
- int compressionStrategy: What type of compression strategy the current push uses.
- static org.apache.avro.Schema SCHEMA$
- boolean sorted: Whether the messages in the current topic partition, between the 'StartOfPush' and 'EndOfPush' control messages, are lexicographically sorted by key bytes.
- int timestampPolicy: The policy used to determine the timestamps of batch push records.
-
Constructor Summary
Constructors:
- StartOfPush(): Default constructor.
- StartOfPush(java.lang.Boolean sorted, java.lang.Boolean chunked, java.lang.Integer compressionStrategy, java.nio.ByteBuffer compressionDictionary, java.lang.Integer timestampPolicy): All-args constructor.
-
Method Summary
All Methods Static Methods Instance Methods Concrete Methods Modifier and Type Method Description java.lang.Object
get(int field$)
boolean
getChunked()
Gets the value of the 'chunked' field.static org.apache.avro.Schema
getClassSchema()
java.nio.ByteBuffer
getCompressionDictionary()
Gets the value of the 'compressionDictionary' field.int
getCompressionStrategy()
Gets the value of the 'compressionStrategy' field.org.apache.avro.Schema
getSchema()
boolean
getSorted()
Gets the value of the 'sorted' field.org.apache.avro.specific.SpecificData
getSpecificData()
int
getTimestampPolicy()
Gets the value of the 'timestampPolicy' field.void
put(int field$, java.lang.Object value$)
void
readExternal(java.io.ObjectInput in)
void
setChunked(boolean value)
Sets the value of the 'chunked' field.void
setCompressionDictionary(java.nio.ByteBuffer value)
Sets the value of the 'compressionDictionary' field.void
setCompressionStrategy(int value)
Sets the value of the 'compressionStrategy' field.void
setSorted(boolean value)
Sets the value of the 'sorted' field.void
setTimestampPolicy(int value)
Sets the value of the 'timestampPolicy' field.void
writeExternal(java.io.ObjectOutput out)
-
Methods inherited from class org.apache.avro.specific.SpecificRecordBase
compareTo, customDecode, customEncode, equals, get, getConversion, getConversion, hasCustomCoders, hashCode, put, toString
-
-
-
-
Field Detail
-
SCHEMA$
public static final org.apache.avro.Schema SCHEMA$
-
sorted
public boolean sorted
Whether the messages in the current topic partition, between the 'StartOfPush' and 'EndOfPush' control messages, are lexicographically sorted by key bytes.
-
chunked
public boolean chunked
Whether the messages inside the current push are encoded with chunking support. If true, keys will be prefixed with a ChunkId, and values may contain a ChunkedValueManifest (if the schema ID is set to -20).
-
compressionStrategy
public int compressionStrategy
What type of compression strategy the current push uses. Using int because Avro Enums are not evolvable. The mapping is the following: 0 => NO_OP, 1 => GZIP, 2 => ZSTD, 3 => ZSTD_WITH_DICT
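The integer codes above can be decoded with a plain switch; the helper below is an illustrative, standalone sketch based only on the mapping documented here and is not part of the generated class.

// Illustrative helper, not part of the generated class: maps the documented
// compressionStrategy codes to human-readable names.
public final class CompressionStrategyNames {
  private CompressionStrategyNames() {}

  public static String nameOf(int code) {
    switch (code) {
      case 0: return "NO_OP";
      case 1: return "GZIP";
      case 2: return "ZSTD";
      case 3: return "ZSTD_WITH_DICT";
      default: throw new IllegalArgumentException("Unknown compressionStrategy code: " + code);
    }
  }
}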
-
compressionDictionary
public java.nio.ByteBuffer compressionDictionary
The raw bytes of the dictionary used to compress/decompress records.
-
timestampPolicy
public int timestampPolicy
The policy used to determine the timestamps of batch push records:
- 0 => no per-record replication metadata is stored; hybrid writes always win over batch.
- 1 => no per-record timestamp metadata is stored; the Start-Of-Push control message's logicalTimestamp is treated as the last-update timestamp for all batch records, and hybrid writes win only when their own logicalTimestamp is higher.
- 2 => per-record timestamp metadata is provided by the push job and stored for each key, enabling full conflict-resolution granularity on a per-field basis, just as when merging concurrent update operations.
-
-
Constructor Detail
-
StartOfPush
public StartOfPush()
Default constructor. Note that this does not initialize fields to their default values from the schema. If that is desired, use newBuilder().
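A minimal usage sketch, assuming the generated setters listed in the Method Summary; the field values (and the empty placeholder dictionary) are illustrative, not recommended defaults.

import java.nio.ByteBuffer;

import com.linkedin.venice.kafka.protocol.StartOfPush;

public class StartOfPushSetterSketch {
  public static void main(String[] args) {
    // The default constructor does not apply schema defaults, so every field
    // is populated explicitly through the generated setters.
    StartOfPush sop = new StartOfPush();
    sop.setSorted(false);                                  // batch data not sorted by key bytes
    sop.setChunked(false);                                 // chunking disabled for this push
    sop.setCompressionStrategy(0);                         // 0 => NO_OP
    sop.setCompressionDictionary(ByteBuffer.allocate(0));  // placeholder: no real dictionary
    sop.setTimestampPolicy(0);                             // 0 => hybrid writes always win over batch
    System.out.println(sop);
  }
}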
-
StartOfPush
public StartOfPush(java.lang.Boolean sorted, java.lang.Boolean chunked, java.lang.Integer compressionStrategy, java.nio.ByteBuffer compressionDictionary, java.lang.Integer timestampPolicy)
All-args constructor.
- Parameters:
sorted - Whether the messages in the current topic partition, between the 'StartOfPush' and 'EndOfPush' control messages, are lexicographically sorted by key bytes.
chunked - Whether the messages inside the current push are encoded with chunking support. If true, keys will be prefixed with a ChunkId, and values may contain a ChunkedValueManifest (if the schema ID is set to -20).
compressionStrategy - What type of compression strategy the current push uses. Using int because Avro Enums are not evolvable. The mapping is the following: 0 => NO_OP, 1 => GZIP, 2 => ZSTD, 3 => ZSTD_WITH_DICT.
compressionDictionary - The raw bytes of the dictionary used to compress/decompress records.
timestampPolicy - The policy used to determine the timestamps of batch push records: 0 => no per-record replication metadata is stored, hybrid writes always win over batch; 1 => no per-record timestamp metadata is stored, the Start-Of-Push control message's logicalTimestamp is treated as the last-update timestamp for all batch records, and hybrid writes win only when their own logicalTimestamp is higher; 2 => per-record timestamp metadata is provided by the push job and stored for each key, enabling full conflict-resolution granularity on a per-field basis, just as when merging concurrent update operations.
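A sketch of the all-args constructor for a hypothetical ZSTD-with-dictionary push; the dictionary bytes are placeholders, not a real trained dictionary.

import java.nio.ByteBuffer;

import com.linkedin.venice.kafka.protocol.StartOfPush;

public class StartOfPushAllArgsSketch {
  public static void main(String[] args) {
    // Placeholder bytes; a real ZSTD_WITH_DICT push would carry a trained dictionary.
    ByteBuffer dictionary = ByteBuffer.wrap(new byte[] {1, 2, 3});

    StartOfPush sop = new StartOfPush(
        true,        // sorted: batch records are sorted by key bytes
        true,        // chunked: large values may be split into chunks
        3,           // compressionStrategy: 3 => ZSTD_WITH_DICT
        dictionary,  // compressionDictionary
        0);          // timestampPolicy: 0 => hybrid writes always win over batch
    System.out.println(sop);
  }
}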
-
-
Method Detail
-
getClassSchema
public static org.apache.avro.Schema getClassSchema()
-
getSpecificData
public org.apache.avro.specific.SpecificData getSpecificData()
- Overrides:
getSpecificData
in class org.apache.avro.specific.SpecificRecordBase
-
getSchema
public org.apache.avro.Schema getSchema()
- Specified by:
getSchema
in interface org.apache.avro.generic.GenericContainer
- Specified by:
getSchema
in class org.apache.avro.specific.SpecificRecordBase
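getClassSchema() and getSchema() return the generated schema; a common use is Avro binary serialization with the standard SpecificDatumWriter/SpecificDatumReader, as in the sketch below (the empty dictionary is a placeholder value).

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificDatumWriter;

import com.linkedin.venice.kafka.protocol.StartOfPush;

public class StartOfPushAvroRoundTrip {
  public static void main(String[] args) throws IOException {
    StartOfPush sop = new StartOfPush(false, false, 0, ByteBuffer.allocate(0), 0);

    // Encode to Avro binary using the generated schema.
    SpecificDatumWriter<StartOfPush> writer = new SpecificDatumWriter<>(StartOfPush.getClassSchema());
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    writer.write(sop, encoder);
    encoder.flush();

    // Decode back into a StartOfPush instance.
    SpecificDatumReader<StartOfPush> reader = new SpecificDatumReader<>(StartOfPush.class);
    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
    StartOfPush copy = reader.read(null, decoder);
    System.out.println(copy);
  }
}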
-
get
public java.lang.Object get(int field$)
- Specified by:
get
in interface org.apache.avro.generic.IndexedRecord
- Specified by:
get
in class org.apache.avro.specific.SpecificRecordBase
-
put
public void put(int field$, java.lang.Object value$)
- Specified by:
put
in interface org.apache.avro.generic.IndexedRecord
- Specified by:
put
in class org.apache.avro.specific.SpecificRecordBase
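get(int) and put(int, Object) are the positional IndexedRecord accessors. The sketch below assumes the schema declares fields in the same order as the all-args constructor (sorted, chunked, compressionStrategy, compressionDictionary, timestampPolicy); the named getters and setters are usually preferable.

import com.linkedin.venice.kafka.protocol.StartOfPush;

public class StartOfPushIndexedAccessSketch {
  public static void main(String[] args) {
    // Positional access through the IndexedRecord API. Index assumption (not
    // verified here): fields follow the all-args constructor order, i.e.
    // 0 = sorted, 1 = chunked, 2 = compressionStrategy,
    // 3 = compressionDictionary, 4 = timestampPolicy.
    StartOfPush sop = new StartOfPush();
    sop.put(0, Boolean.TRUE); // sorted
    sop.put(2, 1);            // compressionStrategy: 1 => GZIP

    Object sorted = sop.get(0);
    Object strategy = sop.get(2);
    System.out.println("sorted=" + sorted + ", compressionStrategy=" + strategy);
  }
}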
-
getSorted
public boolean getSorted()
Gets the value of the 'sorted' field.
- Returns:
- Whether the messages in the current topic partition, between the 'StartOfPush' and 'EndOfPush' control messages, are lexicographically sorted by key bytes.
-
setSorted
public void setSorted(boolean value)
Sets the value of the 'sorted' field. Whether the messages in the current topic partition, between the 'StartOfPush' and 'EndOfPush' control messages, are lexicographically sorted by key bytes.
- Parameters:
value
- the value to set.
-
getChunked
public boolean getChunked()
Gets the value of the 'chunked' field.
- Returns:
- Whether the messages inside the current push are encoded with chunking support. If true, keys will be prefixed with a ChunkId, and values may contain a ChunkedValueManifest (if the schema ID is set to -20).
-
setChunked
public void setChunked(boolean value)
Sets the value of the 'chunked' field. Whether the messages inside the current push are encoded with chunking support. If true, keys will be prefixed with a ChunkId, and values may contain a ChunkedValueManifest (if the schema ID is set to -20).
- Parameters:
value
- the value to set.
-
getCompressionStrategy
public int getCompressionStrategy()
Gets the value of the 'compressionStrategy' field.
- Returns:
- What type of compression strategy the current push uses. Using int because Avro Enums are not evolvable. The mapping is the following: 0 => NO_OP, 1 => GZIP, 2 => ZSTD, 3 => ZSTD_WITH_DICT
-
setCompressionStrategy
public void setCompressionStrategy(int value)
Sets the value of the 'compressionStrategy' field. What type of compression strategy the current push uses. Using int because Avro Enums are not evolvable. The mapping is the following: 0 => NO_OP, 1 => GZIP, 2 => ZSTD, 3 => ZSTD_WITH_DICT.
- Parameters:
value
- the value to set.
-
getCompressionDictionary
public java.nio.ByteBuffer getCompressionDictionary()
Gets the value of the 'compressionDictionary' field.
- Returns:
- The raw bytes of the dictionary used to compress/decompress records.
-
setCompressionDictionary
public void setCompressionDictionary(java.nio.ByteBuffer value)
Sets the value of the 'compressionDictionary' field. The raw bytes of the dictionary used to compress/decompress records.
- Parameters:
value
- the value to set.
-
getTimestampPolicy
public int getTimestampPolicy()
Gets the value of the 'timestampPolicy' field.
- Returns:
- The policy used to determine the timestamps of batch push records: 0 => no per-record replication metadata is stored, hybrid writes always win over batch; 1 => no per-record timestamp metadata is stored, the Start-Of-Push control message's logicalTimestamp is treated as the last-update timestamp for all batch records, and hybrid writes win only when their own logicalTimestamp is higher; 2 => per-record timestamp metadata is provided by the push job and stored for each key, enabling full conflict-resolution granularity on a per-field basis, just as when merging concurrent update operations.
-
setTimestampPolicy
public void setTimestampPolicy(int value)
Sets the value of the 'timestampPolicy' field. The policy used to determine the timestamps of batch push records: 0 => no per-record replication metadata is stored, hybrid writes always win over batch; 1 => no per-record timestamp metadata is stored, the Start-Of-Push control message's logicalTimestamp is treated as the last-update timestamp for all batch records, and hybrid writes win only when their own logicalTimestamp is higher; 2 => per-record timestamp metadata is provided by the push job and stored for each key, enabling full conflict-resolution granularity on a per-field basis, just as when merging concurrent update operations.
- Parameters:
value
- the value to set.
-
writeExternal
public void writeExternal(java.io.ObjectOutput out) throws java.io.IOException
- Specified by:
writeExternal
in interface java.io.Externalizable
- Overrides:
writeExternal
in class org.apache.avro.specific.SpecificRecordBase
- Throws:
java.io.IOException
-
readExternal
public void readExternal(java.io.ObjectInput in) throws java.io.IOException
- Specified by:
readExternal
in interface java.io.Externalizable
- Overrides:
readExternal
in class org.apache.avro.specific.SpecificRecordBase
- Throws:
java.io.IOException
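Because the class implements java.io.Externalizable, readExternal and writeExternal are invoked by the standard Java object streams; a minimal round-trip sketch (the empty dictionary is a placeholder value):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.ByteBuffer;

import com.linkedin.venice.kafka.protocol.StartOfPush;

public class StartOfPushExternalizableRoundTrip {
  public static void main(String[] args) throws IOException, ClassNotFoundException {
    StartOfPush original = new StartOfPush(true, false, 0, ByteBuffer.allocate(0), 0);

    // ObjectOutputStream calls writeExternal for Externalizable classes.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(original);
    }

    // ObjectInputStream calls the no-arg constructor and then readExternal.
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
      StartOfPush copy = (StartOfPush) in.readObject();
      System.out.println(copy);
    }
  }
}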
-
-