Package io.deephaven.parquet.compress
Interface CompressorAdapter
- All Superinterfaces:
AutoCloseable, SafeCloseable
An intermediate adapter interface between Deephaven column writing and Parquet compression.
-
Field Summary
Fields
static final CompressorAdapter PASSTHRU
A CompressorAdapter instance that reads and writes uncompressed data directly.
Method Summary
OutputStream compress(OutputStream os)
Creates a new output stream that will take uncompressed writes, and flush data to the provided stream as compressed data.
org.apache.parquet.bytes.BytesInput decompress(InputStream inputStream, int compressedSize, int uncompressedSize, Function<Supplier<SafeCloseable>, SafeCloseable> decompressorCache)
Returns an in-memory instance of BytesInput containing the fully decompressed results of the input stream.
org.apache.parquet.hadoop.metadata.CompressionCodecName getCodecName()
Returns the CompressionCodecName enum value that represents this compressor.
void reset()
Reset the internal state of this CompressorAdapter so more rows can be read or written.
Methods inherited from interface io.deephaven.util.SafeCloseable
close
-
Field Details
-
PASSTHRU
static final CompressorAdapter PASSTHRU
A CompressorAdapter instance that reads and writes uncompressed data directly.
-
-
Method Details
-
compress
OutputStream compress(OutputStream os) throws IOException
Creates a new output stream that will take uncompressed writes, and flush data to the provided stream as compressed data.
Note that this method is not thread safe.
- Parameters:
os - the output stream to write compressed contents to
- Returns:
- an output stream that can accept writes
- Throws:
IOException - thrown if an error occurs writing data
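The contract above can be sketched with a standard-library analogue. The snippet below is not the Deephaven implementation; it uses java.util.zip.GZIPOutputStream to illustrate the same shape: a `compress(OutputStream)` method wraps the target stream so that uncompressed writes are flushed to it as compressed bytes, and closing the wrapper flushes any remaining compressed data.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

public class CompressSketch {
    // Analogue of CompressorAdapter.compress(OutputStream): wrap the target
    // stream so uncompressed writes arrive at it as compressed data.
    static OutputStream compress(OutputStream os) throws IOException {
        return new GZIPOutputStream(os);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        try (OutputStream compressed = compress(target)) {
            compressed.write("uncompressed column bytes".getBytes("UTF-8"));
        } // closing the wrapper flushes the remaining compressed bytes to target
        System.out.println(target.size() > 0);
    }
}
```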
-
decompress
org.apache.parquet.bytes.BytesInput decompress(InputStream inputStream, int compressedSize, int uncompressedSize, Function<Supplier<SafeCloseable>, SafeCloseable> decompressorCache) throws IOException
Returns an in-memory instance of BytesInput containing the fully decompressed results of the input stream. The provided DecompressorHolder is used for decompressing if compatible with the compression codec. Otherwise, a new decompressor is created and set in the DecompressorHolder.
Note that this method is thread safe, assuming the cached decompressor instances are not shared across threads.
- Parameters:
inputStream - an input stream containing compressed data
compressedSize - the number of bytes in the compressed data
uncompressedSize - the number of bytes that should be present when decompressed
decompressorCache - used to cache Decompressor instances for reuse
- Returns:
- the decompressed bytes, copied into memory
- Throws:
IOException - thrown if an error occurs reading data
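A minimal sketch of the decompress contract, again using java.util.zip rather than the Deephaven codec machinery: read the compressed input and return the fully decompressed result in memory, sized by the caller-supplied `uncompressedSize` (the decompressor-cache parameter is omitted here for brevity).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class DecompressSketch {
    // Analogue of CompressorAdapter.decompress(...): fully decompress the
    // input into an in-memory buffer of exactly uncompressedSize bytes.
    static byte[] decompress(InputStream in, int uncompressedSize) throws IOException {
        byte[] out = new byte[uncompressedSize];
        try (GZIPInputStream gz = new GZIPInputStream(in)) {
            int off = 0;
            while (off < uncompressedSize) {
                int n = gz.read(out, off, uncompressedSize - off);
                if (n < 0) {
                    throw new IOException("stream ended before uncompressedSize bytes were read");
                }
                off += n;
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "fully decompressed results".getBytes("UTF-8");
        // Produce some compressed bytes to feed the sketch.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(original);
        }
        byte[] roundTrip = decompress(new ByteArrayInputStream(buf.toByteArray()), original.length);
        System.out.println(new String(roundTrip, "UTF-8"));
    }
}
```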
-
getCodecName
org.apache.parquet.hadoop.metadata.CompressionCodecName getCodecName()
- Returns:
- the CompressionCodecName enum value that represents this compressor.
-
reset
void reset()
Reset the internal state of this CompressorAdapter so more rows can be read or written.
This method can be called after compress(java.io.OutputStream) to reset the internal state of the compressor. It is not required before compress(java.io.OutputStream), or before and after decompress(java.io.InputStream, int, int, java.util.function.Function<java.util.function.Supplier<io.deephaven.util.SafeCloseable>, io.deephaven.util.SafeCloseable>), because those methods internally manage their own state.
-