Class MergeByteBuffer

java.lang.Object
com.linkedin.davinci.replication.merge.MergeByteBuffer
All Implemented Interfaces:
Merge<ByteBuffer>

public class MergeByteBuffer extends Object
This class handles byte-level merge. Each byte buffer carries a single (value-level) timestamp, so conflict resolution (DCR) happens only at the whole-byte-buffer level: given two byte buffers to merge, exactly one of them wins completely.
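The merge rule can be illustrated with a small, self-contained sketch (illustrative names only, not the Venice API): the buffer with the higher value-level timestamp wins in its entirety, and ties here are assumed to favor the incoming value.

  import java.nio.ByteBuffer;

  // Illustrative sketch of whole-buffer conflict resolution. Each value
  // carries exactly one timestamp; no partial merge of opaque bytes is
  // possible, so exactly one side wins.
  final class WholeBufferMergeSketch {
    static ByteBuffer resolve(ByteBuffer oldValue, long oldTimestamp, ByteBuffer newValue, long newTimestamp) {
      // Assumption for this sketch: ties favor the incoming value.
      return newTimestamp >= oldTimestamp ? newValue : oldValue;
    }
  }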
  • Constructor Details

    • MergeByteBuffer

      public MergeByteBuffer()
  • Method Details

    • put

      public ValueAndRmd<ByteBuffer> put(ValueAndRmd<ByteBuffer> oldValueAndRmd, ByteBuffer newValue, long putOperationTimestamp, int writeOperationColoID, long sourceOffsetOfNewValue, int newValueSourceBrokerID)
      Parameters:
      oldValueAndRmd - the old value and replication metadata which are persisted in the server prior to the write operation. Note that some implementations may require the old value to be non-null; refer to the Javadoc of the specific implementation.
      newValue - a record with all fields populated, conforming to one of the registered value schemas
      putOperationTimestamp - the timestamp of the incoming write operation
      writeOperationColoID - ID of the colo/fabric where this PUT request was originally received.
      sourceOffsetOfNewValue - The offset from which the new value originates in the realtime stream. Used to build the ReplicationMetadata for the newly inserted record.
      newValueSourceBrokerID - The ID of the broker from which the new value originates. IDs should correspond to the kafkaClusterUrlIdMap configured in the LeaderFollowerIngestionTask. Used to build the ReplicationMetadata for the newly inserted record.
      Returns:
      the resulting ValueAndRmd after merging the old one with the incoming write operation. The returned object is guaranteed to be "==" to the input oldValueAndRmd object, though its internal members may have been mutated.
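      The "==" contract implies the method mutates the given container in place rather than allocating a new one. A hedged sketch with a hypothetical ValueAndRmd-like holder (the field names and the tie-breaking rule are assumptions, not the Venice types):

        import java.nio.ByteBuffer;

        // Hypothetical stand-in for ValueAndRmd<ByteBuffer>; not the Venice type.
        final class ValueAndTimestamp {
          ByteBuffer value;
          long timestamp;
        }

        final class PutSketch {
          // Mirrors the documented contract: mutate the input container and return
          // the very same instance, so the result is "==" to the input.
          static ValueAndTimestamp put(ValueAndTimestamp old, ByteBuffer newValue, long putOperationTimestamp) {
            if (putOperationTimestamp >= old.timestamp) { // tie rule is an assumption
              old.value = newValue;
              old.timestamp = putOperationTimestamp;
            }
            return old; // always the same object that was passed in
          }
        }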
    • delete

      public ValueAndRmd<ByteBuffer> delete(ValueAndRmd<ByteBuffer> oldValueAndRmd, long deleteOperationTimestamp, int deleteOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)
      Parameters:
      oldValueAndRmd - the old value and replication metadata which are persisted in the server prior to the write operation
      deleteOperationTimestamp - the timestamp of the incoming delete operation
      deleteOperationColoID - ID of the colo/fabric where this DELETE request was originally received.
      newValueSourceOffset - The offset from which the new value originates in the realtime stream. Used to build the ReplicationMetadata for the newly inserted record.
      newValueSourceBrokerID - The ID of the broker from which the new value originates. IDs should correspond to the kafkaClusterUrlIdMap configured in the LeaderFollowerIngestionTask. Used to build the ReplicationMetadata for the newly inserted record.
      Returns:
      the resulting ValueAndRmd after merging the old one with the incoming delete operation. The returned object is guaranteed to be "==" to the input oldValueAndRmd object, though its internal members may have been mutated.
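      Delete follows the same shape: a delete that is at least as recent as the stored timestamp turns the value into a tombstone. A hedged sketch, reusing the hypothetical ValueAndTimestamp holder from the put sketch above (the tie rule is again an assumption):

        final class DeleteSketch {
          // A winning delete nulls the value out but keeps the container, so the
          // returned object is still "==" to the input, as documented above.
          static ValueAndTimestamp delete(ValueAndTimestamp old, long deleteOperationTimestamp) {
            if (deleteOperationTimestamp >= old.timestamp) { // assumed: delete wins ties
              old.value = null; // tombstone
              old.timestamp = deleteOperationTimestamp;
            }
            return old;
          }
        }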
    • update

      public ValueAndRmd<ByteBuffer> update(ValueAndRmd<ByteBuffer> oldValueAndRmd, Lazy<org.apache.avro.generic.GenericRecord> writeOperation, org.apache.avro.Schema currValueSchema, long updateOperationTimestamp, int updateOperationColoID, long newValueSourceOffset, int newValueSourceBrokerID)
      Parameters:
      oldValueAndRmd - the old value and replication metadata which are persisted in the server prior to the write operation
      writeOperation - a record conforming to a write-compute schema
      currValueSchema - Schema of the current value that is to be updated.
      updateOperationTimestamp - the timestamp of the incoming write operation
      updateOperationColoID - ID of the colo/fabric where this UPDATE request was originally received.
      newValueSourceOffset - The offset from which the new value originates in the realtime stream. Used to build the ReplicationMetadata for the newly inserted record.
      newValueSourceBrokerID - The ID of the broker from which the new value originates. IDs should correspond to the kafkaClusterUrlIdMap configured in the LeaderFollowerIngestionTask. Used to build the ReplicationMetadata for the newly inserted record.
      Returns:
      the resulting ValueAndRmd after merging the old one with the incoming write operation. The returned object is guaranteed to be "==" to the input oldValueAndRmd object, though its internal members may have been mutated.
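      A write-compute update targets individual fields, but an opaque byte buffer has no field structure, so a byte-level merger has nothing field-granular to reconcile. One plausible behavior (an assumption, not confirmed by this page) is to reject the operation outright:

        final class UpdateSketch {
          // Hypothetical guard: byte-level merge cannot apply a field-level
          // write-compute operation to opaque bytes, so fail fast instead of
          // silently corrupting the value.
          static <T> T update() {
            throw new UnsupportedOperationException("Byte-level merge cannot apply field-level updates");
          }
        }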
    • putWithRecordLevelTimestamp

      protected ValueAndRmd<ByteBuffer> putWithRecordLevelTimestamp(long oldTimestamp, ValueAndRmd<ByteBuffer> oldValueAndRmd, long putOperationTimestamp, long newValueSourceOffset, int newValueSourceBrokerID, ByteBuffer newValue)
    • deleteWithValueLevelTimestamp

      protected ValueAndRmd<ByteBuffer> deleteWithValueLevelTimestamp(long oldTimestamp, long deleteOperationTimestamp, long newValueSourceOffset, int newValueSourceBrokerID, ValueAndRmd<ByteBuffer> oldValueAndRmd)
    • updateReplicationCheckpointVector

      protected void updateReplicationCheckpointVector(org.apache.avro.generic.GenericRecord oldRmd, long newValueSourceOffset, int newValueSourceBrokerID)
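      Judging by its name and parameters, this helper records the latest consumed offset per source broker in the replication metadata. A hedged sketch of that bookkeeping over a plain list (the real RMD layout is Venice-internal; padding unseen broker slots with zero is an assumption):

        import java.util.List;

        final class CheckpointVectorSketch {
          // Grow the vector on demand so it has a slot for the source broker,
          // then record the offset the new value came from.
          static void update(List<Long> offsetVector, long newValueSourceOffset, int newValueSourceBrokerID) {
            while (offsetVector.size() <= newValueSourceBrokerID) {
              offsetVector.add(0L); // assumption: unseen brokers start at offset 0
            }
            offsetVector.set(newValueSourceBrokerID, newValueSourceOffset);
          }
        }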