Class ValueRecord
java.lang.Object
com.linkedin.davinci.store.record.ValueRecord
This class provides the following functionality:
1. Concatenate the schema id and the data array into a single binary array, which will be stored in the DB;
2. Parse the binary array read from the DB back into a schema id and a data array.
Right now, the concatenation step allocates a new byte array and copies the schema id and data into it,
which might cause GC pressure since this operation is triggered for every 'PUT'.
If this becomes an issue, we need to consider other ways to improve it:
1. Maybe we can do the concatenation in VeniceWriter, which is used by VenicePushJob;
2. Investigate whether the DB can accept multiple binary arrays for a 'PUT' operation;
3. ...
For the deserialization part, this class uses Netty's SlicedByteBuf, which is backed by the same
byte array and transparently takes care of the offset.
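Usage example (a minimal sketch of the round trip using only the methods documented below; the schema id and payload are placeholder values):

    import com.linkedin.davinci.store.record.ValueRecord;

    // Round trip: concatenate schema id + data, then parse them back out.
    byte[] payload = "hello".getBytes();
    ValueRecord record = ValueRecord.create(1, payload);   // schema id 1 is a placeholder
    byte[] combined = record.serialize();                  // schema id header + payload

    int schemaId = ValueRecord.parseSchemaId(combined);    // -> 1
    ValueRecord parsed = ValueRecord.parseAndCreate(combined);
    assert parsed.getDataSize() == payload.length;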
Field Summary
Modifier and Type        Field
static final int         SCHEMA_HEADER_LENGTH
Method Summary
Modifier and Type                 Method
static ValueRecord                create(int schemaId, byte[] data)
static ValueRecord                create(int schemaId, io.netty.buffer.ByteBuf data)
io.netty.buffer.ByteBuf           getData()
byte[]                            getDataInBytes()
int                               getDataSize()
int                               getSchemaId()
static ValueRecord                parseAndCreate(byte[] combinedData)
static io.netty.buffer.ByteBuf    parseDataAsByteBuf(byte[] combinedData)
static ByteBuffer                 parseDataAsNIOByteBuffer(byte[] combinedData)
static int                        parseSchemaId(byte[] combinedData)
byte[]                            serialize()
Field Details
SCHEMA_HEADER_LENGTH
public static final int SCHEMA_HEADER_LENGTH
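The constant's value is not shown on this page; given the layout described in the class overview (a schema id header followed by the data), it presumably covers the serialized schema id. A hedged sketch of slicing a combined array by hand, assuming the header is a big-endian int:

    import java.nio.ByteBuffer;

    // Assumption: the first SCHEMA_HEADER_LENGTH bytes encode the schema id as an int.
    byte[] combined = ValueRecord.create(1, "hello".getBytes()).serialize();
    int schemaId = ByteBuffer.wrap(combined).getInt();  // matches parseSchemaId(combined), if the assumption holds
    int dataLength = combined.length - ValueRecord.SCHEMA_HEADER_LENGTH;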
Method Details
create
public static ValueRecord create(int schemaId, byte[] data)
create
public static ValueRecord create(int schemaId, io.netty.buffer.ByteBuf data)
parseAndCreate
public static ValueRecord parseAndCreate(byte[] combinedData)
parseSchemaId
public static int parseSchemaId(byte[] combinedData)
parseDataAsByteBuf
public static io.netty.buffer.ByteBuf parseDataAsByteBuf(byte[] combinedData)
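As the class overview notes, the returned ByteBuf is a slice backed by the same underlying array, so no copy is made at parse time. A minimal sketch of reading it, copying out only when a plain array is required:

    io.netty.buffer.ByteBuf data = ValueRecord.parseDataAsByteBuf(combined);
    byte[] copy = new byte[data.readableBytes()];
    data.getBytes(data.readerIndex(), copy);  // explicit copy; skip this step to stay zero-copy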
parseDataAsNIOByteBuffer
public static ByteBuffer parseDataAsNIOByteBuffer(byte[] combinedData)
getSchemaId
public int getSchemaId()
getData
public io.netty.buffer.ByteBuf getData()
getDataSize
public int getDataSize()
getDataInBytes
public byte[] getDataInBytes()
serialize
public byte[] serialize()
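Per the GC note in the class overview, serialize() allocates a fresh array on every call. A hedged sketch of the concatenation it describes (an assumption based on that description, not the actual implementation):

    import java.nio.ByteBuffer;

    // Assumed layout: 4-byte schema id header followed by the raw data.
    static byte[] concatenate(int schemaId, byte[] data) {
      ByteBuffer out = ByteBuffer.allocate(Integer.BYTES + data.length);  // new allocation per 'PUT'
      out.putInt(schemaId);
      out.put(data);
      return out.array();
    }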