Class PublishToKafka<K,V>
- All Implemented Interfaces:
LogOutputAppendable, LivenessManager, LivenessNode, LivenessReferent, Serializable
- See Also:
KafkaTools.produceFromTable(KafkaPublishOptions)
Field Summary
Fields: static final int CHUNK_SIZE
Constructor Summary
Constructors:
PublishToKafka(Properties props, Table table, String defaultTopic, Integer defaultPartition, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, ColumnName topicColumn, ColumnName partitionColumn, ColumnName timestampColumn, boolean publishInitial)
Construct a publisher for table according to the Kafka props for the supplied topic.
PublishToKafka(Properties props, Table table, String topic, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, boolean publishInitial)
Deprecated, for removal: This API element is subject to removal in a future version.
Method Summary
Modifier and Type / Method / Description:
protected void destroy()
Attempt to release (destructively when necessary) resources held by this object.
static Long timestampMillis(LongChunk<?> nanosChunk, int index)
Methods inherited from class io.deephaven.engine.liveness.LivenessArtifact:
manageWithCurrentScope
Methods inherited from class io.deephaven.engine.liveness.ReferenceCountedLivenessNode:
getWeakReference, initializeTransientFieldsForLiveness, onReferenceCountAtZero, tryManage, tryUnmanage, tryUnmanage
Methods inherited from class io.deephaven.engine.liveness.ReferenceCountedLivenessReferent:
dropReference, tryRetainReference
Methods inherited from class io.deephaven.util.referencecounting.ReferenceCounted:
append, decrementReferenceCount, forceReferenceCountToZero, getReferenceCountDebug, incrementReferenceCount, resetReferenceCount, toString, tryDecrementReferenceCount, tryIncrementReferenceCount
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface io.deephaven.engine.liveness.LivenessManager:
manage, unmanage, unmanage
Methods inherited from interface io.deephaven.engine.liveness.LivenessReferent:
dropReference, getReferentDescription, retainReference, tryRetainReference
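The inherited reference-counting methods above follow a common protocol: tryIncrementReferenceCount fails once the count has reached zero, so a dead referent can never be revived, and the decrement that reaches zero triggers cleanup (destroy). A minimal plain-Java sketch of that protocol, for illustration only (this is not the actual io.deephaven.util.referencecounting.ReferenceCounted code; the class name is made up):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the try-increment/decrement protocol; NOT the
// actual Deephaven ReferenceCounted implementation.
final class RefCounted {
    private final AtomicInteger count = new AtomicInteger(1); // starts owned once

    // Succeeds only while the count is still positive: a referent whose
    // count has already dropped to zero can never be revived.
    boolean tryIncrementReferenceCount() {
        int c;
        do {
            c = count.get();
            if (c <= 0) {
                return false;
            }
        } while (!count.compareAndSet(c, c + 1));
        return true;
    }

    // Returns true when this call dropped the count to zero, meaning the
    // caller should now release resources (compare destroy()).
    boolean decrementReferenceCount() {
        return count.decrementAndGet() == 0;
    }
}
```

The compare-and-set loop is what makes the "no resurrection" guarantee sound under concurrency: an increment can never race past a decrement that already observed zero.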
-
Field Details
-
CHUNK_SIZE
public static final int CHUNK_SIZE
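CHUNK_SIZE bounds how many rows the publisher serializes per batch (the "chunk-oriented fashion" mentioned for the serializer parameters below). The constant's actual value is not shown on this page; the sketch below assumes a value purely for illustration:

```java
// Illustrative chunk-oriented batching; CHUNK_SIZE's actual value is
// defined by PublishToKafka and is assumed here for demonstration only.
public class ChunkDemo {
    static final int CHUNK_SIZE = 2048; // assumed, not the real constant

    // Count how many fixed-size batches are needed to cover totalRows rows.
    static int batchCount(int totalRows) {
        int batches = 0;
        for (int start = 0; start < totalRows; start += CHUNK_SIZE) {
            // serialize rows [start, min(start + CHUNK_SIZE, totalRows)) as one batch
            batches++;
        }
        return batches;
    }
}
```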
-
-
Constructor Details
-
PublishToKafka
@Deprecated(forRemoval=true)
public PublishToKafka(Properties props, Table table, String topic, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, boolean publishInitial)
Deprecated, for removal: This API element is subject to removal in a future version.
PublishToKafka
public PublishToKafka(Properties props, Table table, String defaultTopic, Integer defaultPartition, String[] keyColumns, org.apache.kafka.common.serialization.Serializer<K> kafkaKeySerializer, KeyOrValueSerializer<K> keyChunkSerializer, String[] valueColumns, org.apache.kafka.common.serialization.Serializer<V> kafkaValueSerializer, KeyOrValueSerializer<V> valueChunkSerializer, ColumnName topicColumn, ColumnName partitionColumn, ColumnName timestampColumn, boolean publishInitial)

Construct a publisher for table according to the Kafka props for the supplied topic. The new publisher will produce records for existing table data at construction.

If table is a dynamic, refreshing table (Table.isRefreshing()), the calling thread must block the update graph by holding either its exclusive lock or its shared lock. The publisher will install a listener in order to produce new records as updates become available. Callers must be sure to maintain a reference to the publisher and ensure that it remains live. The easiest way to do this may be to construct the publisher enclosed by a liveness scope with enforceStrongReachability specified as true, and release the scope when publication is no longer needed. For example:

    // To initiate publication:
    final LivenessScope publisherScope = new LivenessScope(true);
    try (final SafeCloseable ignored = LivenessScopeStack.open(publisherScope, false)) {
        new PublishToKafka(...);
    }
    // To cease publication:
    publisherScope.release();

Parameters:
props - The Kafka Properties
table - The source Table
defaultTopic - The default destination topic
defaultPartition - The default destination partition
keyColumns - Optional array of string column names from table for the columns corresponding to Kafka's Key field
kafkaKeySerializer - The Kafka Serializer to use for keys
keyChunkSerializer - Optional KeyOrValueSerializer to consume table data and produce Kafka record keys in chunk-oriented fashion
valueColumns - Optional array of string column names from table for the columns corresponding to Kafka's Value field
kafkaValueSerializer - The Kafka Serializer to use for values
valueChunkSerializer - Optional KeyOrValueSerializer to consume table data and produce Kafka record values in chunk-oriented fashion
topicColumn - The topic column. When set, uses the given CharSequence column from table as the first source for setting the Kafka record topic.
partitionColumn - The partition column. When set, uses the given int column from table as the first source for setting the Kafka record partition.
timestampColumn - The timestamp column. When set, uses the given Instant column from table as the first source for setting the Kafka record timestamp.
publishInitial - Whether the initial data in table should be published
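The props argument carries standard Kafka producer configuration. A minimal, hedged sketch of assembling it (the broker address is a placeholder; a real deployment needs its own producer settings, and key/value serialization is typically supplied through the kafkaKeySerializer/kafkaValueSerializer arguments rather than props):

```java
import java.util.Properties;

// Hedged sketch of building the props argument for PublishToKafka;
// "localhost:9092" is a placeholder broker address.
public class KafkaPropsDemo {
    static Properties makeProps(String bootstrapServers) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers); // standard producer key
        props.put("acks", "all"); // wait for full acknowledgement of each record
        return props;
    }
}
```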
-
-
Method Details
-
timestampMillis
static Long timestampMillis(LongChunk<?> nanosChunk, int index)
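timestampMillis presumably converts an epoch-nanoseconds value read from the chunk into the epoch-milliseconds form Kafka record timestamps use. A plain-Java sketch of that conversion (the null sentinel and the method body are assumptions for illustration, not the actual Deephaven implementation):

```java
// Plain-Java sketch of a nanos-to-millis conversion like the one
// timestampMillis presumably performs; the sentinel and body are assumed.
public class TimestampDemo {
    // Deephaven represents null longs with a sentinel; Long.MIN_VALUE is assumed here.
    static final long NULL_LONG = Long.MIN_VALUE;

    // Map epoch-nanoseconds to epoch-milliseconds, passing the null
    // sentinel through as a Java null.
    static Long timestampMillis(long nanos) {
        return nanos == NULL_LONG ? null : nanos / 1_000_000L;
    }
}
```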
destroy
@OverridingMethodsMustInvokeSuper
protected void destroy()
Description copied from class: ReferenceCountedLivenessReferent
Attempt to release (destructively when necessary) resources held by this object. This may render the object unusable for subsequent operations. Implementations should be sure to call super.destroy(). This is intended to only ever be used as a side effect of decreasing the reference count to 0.
- Overrides:
destroy in class ReferenceCountedLivenessReferent
-