Class KafkaDataIntegrityValidator


  • public class KafkaDataIntegrityValidator
    extends java.lang.Object
    This class is a library that validates Kafka messages during consumption; it can be used in Venice-Server/Da-Vinci and ETL. At a high level, it keeps track of the messages produced by each producer and validates data integrity from four perspectives: 1. whether a segment starts from a non-zero sequence number (UNREGISTERED_PRODUCER); 2. whether there is a gap between segments or within a segment (MISSING); 3. whether data within a segment is corrupted (CORRUPT); 4. whether producers have produced duplicate messages, which is fine and expected due to producer retries (DUPLICATE).
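    The per-producer sequence-number bookkeeping behind three of these checks can be sketched as follows. This is a toy illustration, not Venice code: the class, method, and verdict names are hypothetical, and the CORRUPT (checksum) check is omitted.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Toy sketch of per-producer sequence tracking for the
    // UNREGISTERED_PRODUCER / MISSING / DUPLICATE checks described above.
    public class ToyDivTracker {
        public enum Verdict { OK, UNREGISTERED_PRODUCER, MISSING, DUPLICATE }

        // Last sequence number seen per producer GUID.
        private final Map<String, Integer> lastSeqPerProducer = new HashMap<>();

        public Verdict track(String producerGuid, int seq) {
            Integer last = lastSeqPerProducer.get(producerGuid);
            if (last == null) {
                lastSeqPerProducer.put(producerGuid, seq);
                // A brand-new producer whose segment starts from a non-zero
                // sequence number was never registered with a start-of-segment.
                return seq == 0 ? Verdict.OK : Verdict.UNREGISTERED_PRODUCER;
            }
            if (seq <= last) {
                // Producer retry: expected and benign.
                return Verdict.DUPLICATE;
            }
            if (seq == last + 1) {
                lastSeqPerProducer.put(producerGuid, seq);
                return Verdict.OK;
            }
            // Gap within the segment.
            return Verdict.MISSING;
        }
    }
    ```

    The real validator keeps this state per partition (see partitionTrackers below) and per upstream producer, rather than in a single flat map.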
    • Field Detail

      • partitionTrackers

        protected final SparseConcurrentList<PartitionTracker> partitionTrackers
        Keeps track of every upstream producer this consumer task has seen so far for each partition.
      • partitionTrackerCreator

        protected final java.util.function.IntFunction<PartitionTracker> partitionTrackerCreator
    • Constructor Detail

      • KafkaDataIntegrityValidator

        public KafkaDataIntegrityValidator​(java.lang.String topicName)
      • KafkaDataIntegrityValidator

        public KafkaDataIntegrityValidator​(java.lang.String topicName,
                                           long kafkaLogCompactionDelayInMs)
        This constructor is used by a proprietary ETL project. Do not clean up (yet)! TODO: Open source the ETL or make it stop depending on an exotic open source API
      • KafkaDataIntegrityValidator

        public KafkaDataIntegrityValidator​(java.lang.String topicName,
                                           long kafkaLogCompactionDelayInMs,
                                           long maxAgeInMs)
    • Method Detail

      • clearPartition

        public void clearPartition​(int partition)
        In some cases, such as when resetting offsets or unsubscribing from a partition, the PartitionTracker should forget about the state that it accumulated for a given partition.
        Parameters:
        partition - to clear state for
      • setPartitionState

        public void setPartitionState​(int partition,
                                      OffsetRecord offsetRecord)
      • updateOffsetRecordForPartition

        public void updateOffsetRecordForPartition​(int partition,
                                                   OffsetRecord offsetRecord)
        For a given partition, finds all the producers that have written to it and updates the offsetRecord using their segment information. Prior to this, any state that has expired according to maxAgeInMs is cleared.
        Parameters:
        partition - to extract info for
        offsetRecord - to modify
      • checkMissingMessage

        public void checkMissingMessage​(PubSubMessage<KafkaKey,​KafkaMessageEnvelope,​java.lang.Long> consumerRecord,
                                        java.util.Optional<PartitionTracker.DIVErrorMetricCallback> errorMetricCallback)
                                 throws DataValidationException
        Only checks for missing sequence numbers; a segment starting from a positive sequence number is acceptable, since a real-time buffer replay can begin in the middle of a segment, and the checksum is ignored for the same reason. If a missing message is detected but the message is older than the Kafka log compaction lag threshold, no MISSING exception is thrown, because log compaction is expected to have compacted old messages. However, if the data is fresh and a missing message is detected, a MISSING exception is thrown. This API is used by a proprietary ETL project. Do not clean up (yet)! TODO: Open source the ETL or make it stop depending on an exotic open source API
        Throws:
        DataValidationException
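        Taken together, a consumer-side use of this class might look like the following sketch. The surrounding loop, variable names, and the handleDivError helper are assumptions for illustration; only the KafkaDataIntegrityValidator calls use signatures documented on this page.

        ```java
        // Hypothetical consumption loop (not Venice code).
        KafkaDataIntegrityValidator validator = new KafkaDataIntegrityValidator(topicName);

        for (PubSubMessage<KafkaKey, KafkaMessageEnvelope, Long> record : polledRecords) {
            try {
                // Lenient check: tolerates segments starting mid-stream and
                // missing messages older than the log compaction lag threshold.
                validator.checkMissingMessage(record, Optional.empty());
            } catch (DataValidationException e) {
                // Fresh data with a genuine gap: surface the MISSING error.
                handleDivError(e); // hypothetical error handler
            }
        }

        // At checkpoint time, persist per-producer segment state into the
        // OffsetRecord (state expired per maxAgeInMs is cleared first):
        validator.updateOffsetRecordForPartition(partition, offsetRecord);

        // On unsubscribe or offset reset, drop the accumulated state:
        validator.clearPartition(partition);
        ```

        Passing a PartitionTracker.DIVErrorMetricCallback instead of Optional.empty() would let the caller emit a metric on each validation error in addition to (or instead of) handling the thrown exception.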