Class ValueRecord

java.lang.Object
com.linkedin.davinci.store.record.ValueRecord

public class ValueRecord extends Object
This class provides two functionalities:
  1. Concatenating the schema id and the data array into a single binary array, which is then stored in the DB.
  2. Parsing a binary array read from the DB back into a schema id and a data array.

Currently, the concatenation allocates a new byte array and copies both the schema id and the data into it. Because this happens on every 'PUT', it may cause GC pressure. If that becomes a problem, possible improvements include:
  1. Performing the concatenation in VeniceWriter, which is used by VenicePushJob.
  2. Investigating whether the DB can accept multiple binary arrays for a single 'PUT' operation.
  3. ...

For deserialization, this class uses Netty's SlicedByteBuf, which is backed by the same byte array and handles the offset transparently.
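
The sketch below illustrates a hypothetical round trip through this class using only the methods listed under Method Details; the schema id (1) and the string payload are made-up illustration values, and the sketch assumes serialize() produces the combined array that parseAndCreate(byte[]) expects.

  import java.nio.charset.StandardCharsets;

  import com.linkedin.davinci.store.record.ValueRecord;

  public class ValueRecordRoundTripExample {
    public static void main(String[] args) {
      byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);

      // Concatenate the schema id and the data into the combined array stored in the DB.
      ValueRecord record = ValueRecord.create(1, payload);
      byte[] combined = record.serialize();

      // Parse the combined array back into a schema id and data.
      ValueRecord parsed = ValueRecord.parseAndCreate(combined);
      System.out.println("schema id: " + parsed.getSchemaId());
      System.out.println("data size: " + parsed.getDataSize());
    }
  }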
  • Field Details

  • Method Details

    • create

      public static ValueRecord create(int schemaId, byte[] data)
    • create

      public static ValueRecord create(int schemaId, io.netty.buffer.ByteBuf data)
    • parseAndCreate

      public static ValueRecord parseAndCreate(byte[] combinedData)
    • parseSchemaId

      public static int parseSchemaId(byte[] combinedData)
    • parseDataAsByteBuf

      public static io.netty.buffer.ByteBuf parseDataAsByteBuf(byte[] combinedData)
    • parseDataAsNIOByteBuffer

      public static ByteBuffer parseDataAsNIOByteBuffer(byte[] combinedData)
    • getSchemaId

      public int getSchemaId()
    • getData

      public io.netty.buffer.ByteBuf getData()
    • getDataSize

      public int getDataSize()
    • getDataInBytes

      public byte[] getDataInBytes()
    • serialize

      public byte[] serialize()
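
The static parse helpers can also be used directly on a combined array without constructing a ValueRecord. The sketch below is illustrative only: the combined array is produced inline for the example, and whether the returned ByteBuffer shares its backing array with the combined array is an assumption based on the slicing behavior described in the class overview.

  import java.nio.ByteBuffer;

  import com.linkedin.davinci.store.record.ValueRecord;

  public class ValueRecordParseExample {
    public static void main(String[] args) {
      // Build a combined array for the example; in practice this would be read from the DB.
      byte[] combined = ValueRecord.create(1, new byte[] { 1, 2, 3 }).serialize();

      // Read the schema id header without materializing a ValueRecord.
      int schemaId = ValueRecord.parseSchemaId(combined);

      // Obtain a java.nio.ByteBuffer view of the data portion of the combined array.
      ByteBuffer data = ValueRecord.parseDataAsNIOByteBuffer(combined);

      System.out.println("schema id: " + schemaId + ", data bytes: " + data.remaining());
    }
  }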