Package io.deephaven.kafka.ingest
Interface ConsumerRecordToTableWriterAdapter
- All Known Implementing Classes:
GenericRecordConsumerRecordToTableWriterAdapter, JsonConsumerRecordToTableWriterAdapter, PojoConsumerRecordToTableWriterAdapter, ProtobufConsumerRecordToTableWriterAdapter, SimpleConsumerRecordToTableWriterAdapter
- Functional Interface:
- This is a functional interface and can therefore be used as the assignment target for a lambda expression or method reference.
Converter from a stream of Kafka records to a table writer. Typically, this should be created by a factory that takes a TableWriter as its input.
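Because this is a functional interface, a simple single-row adapter can be written as a lambda. The sketch below is illustrative only: it assumes a TableWriter exposing getSetter(String) and writeRow() methods, and the column names are hypothetical; production adapters would normally come from one of the implementing classes listed above.

    import org.apache.kafka.clients.consumer.ConsumerRecord;

    // A minimal sketch, assuming TableWriter.getSetter(String)/writeRow();
    // "RecvTime", "Offset", and "Value" are made-up column names.
    static ConsumerRecordToTableWriterAdapter simpleAdapter(final TableWriter tableWriter) {
        return (record, recvTime) -> {
            tableWriter.getSetter("RecvTime").set(recvTime);
            tableWriter.getSetter("Offset").setLong(record.offset());
            tableWriter.getSetter("Value").set(record.value());
            tableWriter.writeRow();
        };
    }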
Method Summary
- default void close()
  Provides an implementation with a way to terminate any threads it may have spawned.
- void consumeRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record, DBDateTime recvTime)
  Consume a Kafka record, producing zero or more rows in the output.
- default void consumeRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record, DBDateTime recvTime, ProcessDataLocker processor, Runnable afterConsumed)
  Consume a Kafka record, producing zero or more rows in the output.
- default void defaultConsumeRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record, DBDateTime recvTime, ProcessDataLocker processor, Runnable afterConsumed)
  A sane default for single-threaded parsing/writing, which executes entirely under the write-lock, ensuring proper ordering of written row(s).
- default ConsumerRecordToTableWriterAdapter setAsyncErrorHandler(Consumer<Exception> exceptionHandler)
  Provides a mechanism for helper threads to report an Exception back to the main KafkaConsumer.poll(Duration) loop.
- default ConsumerRecordToTableWriterAdapter setAsyncStatusUpdater(BiConsumer<Integer, Long> updateHandler)
  Provides a mechanism for notification of the status of the parallel parser, if one exists.
-
Method Details
-
consumeRecord
void consumeRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record, DBDateTime recvTime) throws IOException
Consume a Kafka record, producing zero or more rows in the output.
Parameters:
record - the record received from KafkaConsumer.poll(Duration)
recvTime - the time the record was received from KafkaConsumer.poll(Duration)
Throws:
IOException - if there was an error writing to the output table
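For context, a hedged sketch of the poll loop this method is designed to sit inside; the consumer construction, the endless loop, and DBTimeUtils.currentTime() as the clock source are assumptions, not part of this interface:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    static void pollLoop(final KafkaConsumer<?, ?> consumer,
                         final ConsumerRecordToTableWriterAdapter adapter) {
        try {
            while (true) { // a real loop would check a shutdown flag
                for (final ConsumerRecord<?, ?> record : consumer.poll(Duration.ofMillis(100))) {
                    // recvTime is when the record came back from poll();
                    // the clock source used here is an assumption
                    adapter.consumeRecord(record, DBTimeUtils.currentTime());
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException("error writing to the output table", e);
        } finally {
            adapter.close(); // terminate any helper threads the adapter spawned
        }
    }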
-
consumeRecord
default void consumeRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record, DBDateTime recvTime, ProcessDataLocker processor, Runnable afterConsumed)
Consume a Kafka record, producing zero or more rows in the output. The default implementation parses and writes the row(s) to disk within the `processor.processData()` lock, which is a reasonable default for single-threaded implementations. Implementations which parse inbound messages on multiple threads must override this method, as sketched below.
Parameters:
record - the record received from KafkaConsumer.poll(Duration)
recvTime - the time the record was received from KafkaConsumer.poll(Duration)
processor - a ProcessDataLocker instance, used as a write-lock to serialize table-write operations
afterConsumed - a Runnable executed after successful completion of the potential write(s), run within the context of the write-lock
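A hedged sketch of a multi-threaded override. The parser pool and the parse()/writeParsed() helpers are hypothetical, and ProcessDataLocker.processData(Runnable) is an assumed signature; the documentation above names only the processor.processData() lock:

    // Inside a class implementing ConsumerRecordToTableWriterAdapter;
    // parserPool is a hypothetical ExecutorService field.
    @Override
    public void consumeRecord(final org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record,
                              final DBDateTime recvTime,
                              final ProcessDataLocker processor,
                              final Runnable afterConsumed) {
        // Parse off the write-lock, on a helper thread.
        parserPool.submit(() -> {
            final Object parsed = parse(record, recvTime); // hypothetical parse step
            // Serialize only the write and the post-write callback under the lock.
            processor.processData(() -> {
                writeParsed(parsed); // hypothetical table-write
                afterConsumed.run(); // runs within the context of the write-lock
            });
        });
    }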
-
defaultConsumeRecord
default void defaultConsumeRecord(org.apache.kafka.clients.consumer.ConsumerRecord<?, ?> record, DBDateTime recvTime, ProcessDataLocker processor, Runnable afterConsumed)
A sane default for single-threaded parsing/writing, which executes entirely under the write-lock, ensuring proper ordering of written row(s). See the sketch after the parameter list.
Parameters:
record - the record received from KafkaConsumer.poll(Duration)
recvTime - the time the record was received from KafkaConsumer.poll(Duration)
processor - a ProcessDataLocker instance, used as a write-lock to serialize table-write operations
afterConsumed - a Runnable executed after successful completion of the potential write(s), run within the context of the write-lock
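A hedged sketch of what this default amounts to, again assuming ProcessDataLocker.processData(Runnable) as the locking entry point; parse, write, and the callback all run under the lock, which is what preserves row ordering:

    // processData(Runnable) is an assumed signature.
    processor.processData(() -> {
        try {
            consumeRecord(record, recvTime); // the two-argument overload
        } catch (java.io.IOException e) {
            throw new java.io.UncheckedIOException(e);
        }
        afterConsumed.run(); // still within the write-lock
    });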
-
setAsyncErrorHandler
default ConsumerRecordToTableWriterAdapter setAsyncErrorHandler(Consumer<Exception> exceptionHandler)
Provides a mechanism for helper threads to report an Exception back to the main KafkaConsumer.poll(Duration) loop.
Parameters:
exceptionHandler - an Exception handler which may process Exceptions thrown by an asynchronous message parser or by the table-writer
Returns:
this ConsumerRecordToTableWriterAdapter instance
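One plausible wiring, shown as a sketch: stash the Exception in an AtomicReference and surface it from the poll loop. The rethrow policy is an assumption, not prescribed by this interface:

    import java.util.concurrent.atomic.AtomicReference;

    final AtomicReference<Exception> asyncError = new AtomicReference<>();
    adapter.setAsyncErrorHandler(asyncError::set); // returns the adapter, so it chains

    // Inside the KafkaConsumer.poll(Duration) loop:
    final Exception pending = asyncError.getAndSet(null);
    if (pending != null) {
        throw new RuntimeException("asynchronous parse/write failure", pending);
    }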
-
setAsyncStatusUpdater
default ConsumerRecordToTableWriterAdapter setAsyncStatusUpdater(BiConsumer<Integer, Long> updateHandler)
Provides a mechanism for notification of the status of the parallel parser, if one exists. The number of items currently waiting for execution and the total number of records successfully processed are handed to the consumer.
Parameters:
updateHandler - a BiConsumer which is handed the number of items currently waiting for execution and the total number of records successfully processed
Returns:
this ConsumerRecordToTableWriterAdapter instance
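For example, a periodic log of backlog and throughput; the logging target is illustrative:

    // Integer: items currently waiting for execution;
    // Long: total records successfully processed.
    adapter.setAsyncStatusUpdater((pending, processed) ->
            System.out.println("parallel parser: " + pending + " queued, "
                    + processed + " records processed"));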
-
close
default void close()
Provides an implementation with a way to terminate any threads it may have spawned.