Class MergeGenericRecord

java.lang.Object
  com.linkedin.davinci.replication.merge.MergeGenericRecord

All Implemented Interfaces:
Merge<org.apache.avro.generic.GenericRecord>

public class MergeGenericRecord extends java.lang.Object
Implementation of the API defined in Merge, based on the V1 metadata timestamp schema generated by RmdSchemaGeneratorV1. All the implementations assume the replication metadata format is a union type [long, record], where the record holds a top-level fieldName:timestamp entry per value field.

1. Collection merging is currently not supported, as the replication metadata does not support it yet.
2. Schema evolution is not supported: the incoming and old schemas are assumed to be the same, otherwise a VeniceException is thrown.
3. The new value is assumed to be a GenericRecord; non-record values are not supported.
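To make the assumed RMD shape concrete, here is a minimal, hypothetical sketch using Avro's Java API. The value fields ("name", "age") and timestamps are invented for illustration; only the [long, record] union shape and the per-field fieldName:timestamp layout come from the description above.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class RmdShapeSketch {
  public static void main(String[] args) {
    // Hypothetical per-field timestamp record mirroring a value schema
    // with two fields, "name" and "age": one long timestamp per field.
    Schema perFieldTsSchema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"ValueTimestamps\",\"fields\":["
            + "{\"name\":\"name\",\"type\":\"long\"},"
            + "{\"name\":\"age\",\"type\":\"long\"}]}");

    // The RMD timestamp is a union [long, record]: either a single
    // value-level timestamp, or a record of field-level timestamps like this.
    GenericRecord perFieldTs = new GenericData.Record(perFieldTsSchema);
    perFieldTs.put("name", 100L); // field "name" last written at ts=100
    perFieldTs.put("age", 105L);  // field "age" last written at ts=105
    System.out.println(perFieldTs);
  }
}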
Constructor Summary

Constructors:
MergeGenericRecord(WriteComputeProcessor writeComputeProcessor, MergeRecordHelper mergeRecordHelper)
Method Summary

All Methods · Instance Methods · Concrete Methods

ValueAndRmd<org.apache.avro.generic.GenericRecord>
  delete(ValueAndRmd<org.apache.avro.generic.GenericRecord> oldValueAndRmd, long deleteOperationTimestamp, int deleteOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)

protected ValueAndRmd<T>
  deleteWithValueLevelTimestamp(long oldTimestamp, long deleteOperationTimestamp, long newValueSourceOffset, int newValueSourceBrokerID, ValueAndRmd<T> oldValueAndRmd)

ValueAndRmd<org.apache.avro.generic.GenericRecord>
  put(ValueAndRmd<org.apache.avro.generic.GenericRecord> oldValueAndRmd, org.apache.avro.generic.GenericRecord newValue, long putOperationTimestamp, int putOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)
  Three important requirements regarding input params (see the method detail below).

protected ValueAndRmd<T>
  putWithRecordLevelTimestamp(long oldTimestamp, ValueAndRmd<T> oldValueAndRmd, long putOperationTimestamp, long newValueSourceOffset, int newValueSourceBrokerID, T newValue)

ValueAndRmd<org.apache.avro.generic.GenericRecord>
  update(ValueAndRmd<org.apache.avro.generic.GenericRecord> oldValueAndRmd, Lazy<org.apache.avro.generic.GenericRecord> writeComputeRecord, org.apache.avro.Schema currValueSchema, long updateOperationTimestamp, int updateOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)

protected void
  updateReplicationCheckpointVector(org.apache.avro.generic.GenericRecord oldRmd, long newValueSourceOffset, int newValueSourceBrokerID)
Constructor Detail

MergeGenericRecord
public MergeGenericRecord(WriteComputeProcessor writeComputeProcessor, MergeRecordHelper mergeRecordHelper)
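A minimal construction sketch, relying only on the constructor signature above. How a WriteComputeProcessor and a MergeRecordHelper are obtained depends on the surrounding ingestion code, so both are taken as inputs; the import paths for those two types are assumptions and may differ across Venice versions.

import com.linkedin.davinci.replication.merge.MergeGenericRecord;
// The two imports below are assumed package paths; adjust to your Venice version.
import com.linkedin.venice.schema.merge.MergeRecordHelper;
import com.linkedin.venice.schema.writecompute.WriteComputeProcessor;

public final class MergeGenericRecordFactory {
  private MergeGenericRecordFactory() {
  }

  // Pass-through: both collaborators are expected to come from the ingestion stack.
  public static MergeGenericRecord create(
      WriteComputeProcessor writeComputeProcessor, MergeRecordHelper mergeRecordHelper) {
    return new MergeGenericRecord(writeComputeProcessor, mergeRecordHelper);
  }
}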
Method Detail
put
public ValueAndRmd<org.apache.avro.generic.GenericRecord> put(ValueAndRmd<org.apache.avro.generic.GenericRecord> oldValueAndRmd, org.apache.avro.generic.GenericRecord newValue, long putOperationTimestamp, int putOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)
Three important requirements regarding the input params:
1. The old value and the old RMD must share the same value schema ID.
2. The old value schema must be a superset of the new value schema.
3. Neither the old value nor the old RMD may be null.

Parameters:
oldValueAndRmd - the old value and replication metadata which are persisted in the server prior to the write operation. The old value must NOT be null. If the old value does not exist, the caller of this method must create a GenericRecord of the old value with default values set for all fields.
newValue - a record with all fields populated, using one of the registered value schemas
putOperationTimestamp - the timestamp of the incoming write operation
putOperationColoID - ID of the colo/fabric where this PUT request was originally received
newValueSourceOffset - the offset from which the new value originates in the real-time stream. Used to build the ReplicationMetadata for the newly inserted record.
newValueSourceBrokerID - the ID of the broker from which the new value originates. IDs should correspond to the kafkaClusterUrlIdMap configured in the LeaderFollowerIngestionTask. Used to build the ReplicationMetadata for the newly inserted record.

Returns:
the resulting ValueAndRmd after merging the old one with the incoming write operation. The returned object is guaranteed to be "==" to the input oldValueAndRmd, and its internal members are possibly mutated.
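As a hedged usage sketch (not part of the Javadoc contract): the helper below only exercises the call shape of put and restates the caller obligations in comments. The package path for ValueAndRmd is an assumption.

import com.linkedin.davinci.replication.ValueAndRmd; // package path assumed
import com.linkedin.davinci.replication.merge.MergeGenericRecord;
import org.apache.avro.generic.GenericRecord;

public final class PutMergeSketch {
  private PutMergeSketch() {
  }

  // Applies an incoming PUT to the state previously read from storage.
  // Caller obligations restated from the contract above:
  //  - oldValueAndRmd must be non-null (materialize a default-valued record if absent),
  //  - the old value and old RMD must share the same value schema ID,
  //  - the old value schema must be a superset of the new value schema.
  public static ValueAndRmd<GenericRecord> applyPut(
      MergeGenericRecord merge,
      ValueAndRmd<GenericRecord> oldValueAndRmd,
      GenericRecord newValue,
      long putOperationTimestamp,
      int putOperationColoID,
      long newValueSourceOffset,
      int newValueSourceBrokerID) {
    ValueAndRmd<GenericRecord> merged = merge.put(
        oldValueAndRmd, newValue, putOperationTimestamp, putOperationColoID,
        newValueSourceOffset, newValueSourceBrokerID);
    // Per the Returns note above: put() mutates and returns the same instance.
    assert merged == oldValueAndRmd;
    return merged;
  }
}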
delete
public ValueAndRmd<org.apache.avro.generic.GenericRecord> delete(ValueAndRmd<org.apache.avro.generic.GenericRecord> oldValueAndRmd, long deleteOperationTimestamp, int deleteOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)
Parameters:
oldValueAndRmd - the old value and replication metadata which are persisted in the server prior to the write operation
deleteOperationTimestamp - the timestamp of the incoming delete operation
deleteOperationColoID - ID of the colo/fabric where this DELETE request was originally received
newValueSourceOffset - the offset from which the new value originates in the real-time stream. Used to build the ReplicationMetadata for the newly inserted record.
newValueSourceBrokerID - the ID of the broker from which the new value originates. IDs should correspond to the kafkaClusterUrlIdMap configured in the LeaderFollowerIngestionTask. Used to build the ReplicationMetadata for the newly inserted record.

Returns:
the resulting ValueAndRmd after merging the old one with the incoming delete operation. The returned object is guaranteed to be "==" to the input oldValueAndRmd, and its internal members are possibly mutated.
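A matching hedged sketch for delete, again only exercising the documented call shape (package path for ValueAndRmd assumed):

import com.linkedin.davinci.replication.ValueAndRmd; // package path assumed
import com.linkedin.davinci.replication.merge.MergeGenericRecord;
import org.apache.avro.generic.GenericRecord;

public final class DeleteMergeSketch {
  private DeleteMergeSketch() {
  }

  // Applies an incoming DELETE; whether the delete wins is decided inside
  // delete() by comparing timestamps in the replication metadata.
  public static ValueAndRmd<GenericRecord> applyDelete(
      MergeGenericRecord merge,
      ValueAndRmd<GenericRecord> oldValueAndRmd,
      long deleteOperationTimestamp,
      int deleteOperationColoID,
      long newValueSourceOffset,
      int newValueSourceBrokerID) {
    // Same instance is returned, possibly with mutated internals.
    return merge.delete(oldValueAndRmd, deleteOperationTimestamp,
        deleteOperationColoID, newValueSourceOffset, newValueSourceBrokerID);
  }
}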
update
public ValueAndRmd<org.apache.avro.generic.GenericRecord> update(ValueAndRmd<org.apache.avro.generic.GenericRecord> oldValueAndRmd, Lazy<org.apache.avro.generic.GenericRecord> writeComputeRecord, org.apache.avro.Schema currValueSchema, long updateOperationTimestamp, int updateOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)
Parameters:
oldValueAndRmd - the old value and replication metadata which are persisted in the server prior to the write operation
writeComputeRecord - a record with a write-compute schema
currValueSchema - schema of the current value that is to be updated here
updateOperationTimestamp - the timestamp of the incoming write operation
updateOperationColoID - ID of the colo/fabric where this UPDATE request was originally received
newValueSourceOffset - the offset from which the new value originates in the real-time stream. Used to build the ReplicationMetadata for the newly inserted record.
newValueSourceBrokerID - the ID of the broker from which the new value originates. IDs should correspond to the kafkaClusterUrlIdMap configured in the LeaderFollowerIngestionTask. Used to build the ReplicationMetadata for the newly inserted record.

Returns:
the resulting ValueAndRmd after merging the old one with the incoming write operation. The returned object is guaranteed to be "==" to the input oldValueAndRmd, and its internal members are possibly mutated.
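And a hedged sketch for update: the write-compute record is supplied lazily. Lazy.of(...) is assumed to be the factory method, and the package paths are again assumptions.

import com.linkedin.davinci.replication.ValueAndRmd; // package path assumed
import com.linkedin.davinci.replication.merge.MergeGenericRecord;
import com.linkedin.venice.utils.lazy.Lazy; // package path and Lazy.of(...) assumed
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;

public final class UpdateMergeSketch {
  private UpdateMergeSketch() {
  }

  // Applies a partial UPDATE expressed as a write-compute record. The record
  // is wrapped in Lazy so it can be materialized only if the merge needs it.
  public static ValueAndRmd<GenericRecord> applyUpdate(
      MergeGenericRecord merge,
      ValueAndRmd<GenericRecord> oldValueAndRmd,
      GenericRecord writeComputeRecord,
      Schema currValueSchema,
      long updateOperationTimestamp,
      int updateOperationColoID,
      long newValueSourceOffset,
      int newValueSourceBrokerID) {
    return merge.update(oldValueAndRmd, Lazy.of(() -> writeComputeRecord),
        currValueSchema, updateOperationTimestamp, updateOperationColoID,
        newValueSourceOffset, newValueSourceBrokerID);
  }
}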
putWithRecordLevelTimestamp
protected ValueAndRmd<T> putWithRecordLevelTimestamp(long oldTimestamp, ValueAndRmd<T> oldValueAndRmd, long putOperationTimestamp, long newValueSourceOffset, int newValueSourceBrokerID, T newValue)

deleteWithValueLevelTimestamp
protected ValueAndRmd<T> deleteWithValueLevelTimestamp(long oldTimestamp, long deleteOperationTimestamp, long newValueSourceOffset, int newValueSourceBrokerID, ValueAndRmd<T> oldValueAndRmd)

updateReplicationCheckpointVector
protected void updateReplicationCheckpointVector(org.apache.avro.generic.GenericRecord oldRmd, long newValueSourceOffset, int newValueSourceBrokerID)