Class KafkaTableWriter.Options
- Enclosing class:
- KafkaTableWriter
- 
Constructor Summary
Constructors
- Options()
- 
Method Summary
All methods return KafkaTableWriter.Options (this builder).
- commitToBroker(boolean doCommitToBroker) - Optionally, you can specify whether the ingester commits offsets to the Kafka broker.
- dataImportServer(io.deephaven.shadow.enterprise.com.illumon.iris.db.tables.dataimport.logtailer.DataImportServer dataImportServer) - The DataImportServer to use for ingestion.
- disName(String disName) - The name of the DataImportServer (dis) to use for ingestion.
- disNameWithStorage(@NotNull String disName, @NotNull String disStoragePath) - The name of the DataImportServer (dis) to use for ingestion, with a private storage path.
- dynamicPartitionFunction(String inputColumn, LongFunction<String> function) - An input column (which must be a long) and a function to determine which column partition a row belongs to.
- ignoreColumn(String columnName) - Specify a column from the source Kafka table to ignore.
- ignoreColumns(Collection<String> columnNames) - Specify columns from the source Kafka table to ignore.
- ignoreOffsetFromBroker(boolean ignoreOffsetFromBroker) - Set a flag to determine whether previously-committed offsets at the broker will be respected.
- ignoreOffsetFromCheckpoints(boolean ignoreOffsetFromCheckpoints) - Set a flag to determine whether previously-committed offsets from earlier checkpoint records will be respected.
- kafkaProperties(Properties kafkaProperties) - Set the properties for the underlying KafkaConsumer; this is a required option.
- keySpec(KafkaTools.Consume.KeyOrValueSpec keySpec) - Determine how the key is converted to table columns.
- lastBy(...) - Specify that this DIS should produce a lastBy view of the table with the given key columns using a CoreLastByTableImportState under the hood.
- lastBy(String name, Collection<String> keyColumnNames) - Specify that this DIS should produce a lastBy view of the table with the given key columns using a CoreLastByTableImportState under the hood.
- lastBy(Collection<String> keyColumnNames) - Specify that this DIS should produce a lastBy view of the table with the given key columns using a CoreLastByTableImportState under the hood.
- namespace(String namespace) - The namespace of the table to ingest; this is a required option.
- partitionFilter(IntPredicate partitionFilter) - Set a filter to select which partitions to ingest.
- partitionToInitialOffsetFallback(IntToLongFunction partitionToInitialOffsetFallback) - Set a function to determine what offset to consume from when other methods fail.
- partitionValue(String partitionValue) - The fixed column partition value to ingest into.
- resumeFrom(UnaryOperator<String> resumeFrom) - Optional function that determines a prior column partition to read a size record from.
- tableName(String tableName) - The name of the table to ingest; this is a required option.
- topic(String topic) - Set the topic to consume; this is a required option.
- transactionsEnabled(boolean enabled) - Deephaven tables support transactions, which allow a set of contiguous rows to be appended to a table atomically or not at all.
- transformation(Function<Table, Table> transformation) - A transformation to apply after Core produces data from Kafka.
- valueSpec(KafkaTools.Consume.KeyOrValueSpec valueSpec) - Determine how the value is converted to table columns.
- 
Constructor Details
- 
Options
public Options()
 
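The options are normally built with a fluent chain and then passed to KafkaTableWriter.consumeToDis(Options). The sketch below is illustrative only: the namespace, table, topic, DIS name, and column definitions are hypothetical, and the import paths and the KafkaTools.Consume helpers (IGNORE, jsonSpec) are assumptions based on the Core and Enterprise classes referenced on this page, not taken from it.

```java
// Minimal sketch of building Options and starting an ingester.
// Names, topic, and columns are hypothetical; the package paths and the
// KafkaTools.Consume helpers are assumptions.
import java.util.Properties;

import io.deephaven.engine.table.ColumnDefinition;
import io.deephaven.kafka.KafkaTools;
import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class QuoteIngester {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical broker
        props.put("group.id", "dis-quote-ingester");    // hypothetical consumer group

        final KafkaTableWriter.Options options = new KafkaTableWriter.Options()
                .namespace("MarketData")                 // required
                .tableName("Quotes")                     // required
                .topic("quotes")                         // required
                .kafkaProperties(props)                  // required
                .keySpec(KafkaTools.Consume.IGNORE)      // required: drop the Kafka key
                .valueSpec(KafkaTools.Consume.jsonSpec(new ColumnDefinition[] {
                        ColumnDefinition.ofString("Sym"),
                        ColumnDefinition.ofDouble("Bid"),
                        ColumnDefinition.ofDouble("Ask")}))  // required
                .partitionValue("2024-01-02")            // or dynamicPartitionFunction(...)
                .disName("KafkaQuotes");                 // or dataImportServer(...)

        // Starts the named DIS and begins consuming the topic.
        KafkaTableWriter.consumeToDis(options);
    }
}
```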
- 
- 
Method Details
- 
namespace
The namespace of the table to ingest. This is a required option.
- Parameters:
- namespace- the namespace to ingest.
- Returns:
- this builder
 
- 
tableName
The name of the table to ingest. This is a required option.
- Parameters:
- tableName- the table name to ingest.
- Returns:
- this builder
 
- 
disName
The name of the DataImportServer (dis) to use for ingestion.
This is mutually exclusive with dataImportServer(DataImportServer). Exactly one of these options must be set. If you are going to start multiple ingesters, then you should use dataImportServer(DataImportServer). If the name option is used, then the DataImportServer is started within the KafkaTableWriter.consumeToDis(Options) call.
- Parameters:
- disName- the name of the data import server from the routing configuration
- Returns:
- this builder
 
- 
disNameWithStorage
@ScriptApi public KafkaTableWriter.Options disNameWithStorage(@NotNull String disName, @NotNull String disStoragePath)
The name of the DataImportServer (dis) to use for ingestion, with a private storage path.
This is mutually exclusive with dataImportServer(DataImportServer). Exactly one of these options must be set. If you are going to start multiple ingesters, then you should use dataImportServer(DataImportServer). If the name option is used, then the DataImportServer is started within the KafkaTableWriter.consumeToDis(Options) call.
- Parameters:
- disName- the name of the data import server from the routing configuration
- disStoragePath- the storage path for a DIS configured with private storage
- Returns:
- this builder
 
- 
dataImportServer
@ScriptApi public KafkaTableWriter.Options dataImportServer(io.deephaven.shadow.enterprise.com.illumon.iris.db.tables.dataimport.logtailer.DataImportServer dataImportServer)
The DataImportServer to use for ingestion.
This is mutually exclusive with disName(String). Exactly one of these options must be set. If you are going to start multiple ingesters, then you should use this method. If only one ingester is used, the name can be used for convenience. You can retrieve a DIS with the KafkaTableWriter.getDisByName(String) method.
- Parameters:
- dataImportServer- the DataImportServer to use
- Returns:
- this builder
 
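When several ingesters share one DIS, the description above suggests fetching the DIS once and passing it to each Options instance. A hedged sketch, reusing the imports and required options from the earlier example; the DIS name "Ingester1" and the table names are hypothetical:

```java
// Sketch: share one DataImportServer across several ingesters.
static void startIngesters(final java.util.Properties props,
                           final KafkaTools.Consume.KeyOrValueSpec keySpec,
                           final KafkaTools.Consume.KeyOrValueSpec valueSpec) {
    // "Ingester1" is a hypothetical name from the data routing configuration.
    final io.deephaven.shadow.enterprise.com.illumon.iris.db.tables.dataimport.logtailer.DataImportServer dis =
            KafkaTableWriter.getDisByName("Ingester1");

    final KafkaTableWriter.Options quotes = new KafkaTableWriter.Options()
            .dataImportServer(dis)
            .namespace("MarketData").tableName("Quotes").topic("quotes")
            .kafkaProperties(props).keySpec(keySpec).valueSpec(valueSpec)
            .partitionValue("2024-01-02");

    final KafkaTableWriter.Options trades = new KafkaTableWriter.Options()
            .dataImportServer(dis)
            .namespace("MarketData").tableName("Trades").topic("trades")
            .kafkaProperties(props).keySpec(keySpec).valueSpec(valueSpec)
            .partitionValue("2024-01-02");

    // Both ingesters run on the shared DIS.
    KafkaTableWriter.consumeToDis(quotes);
    KafkaTableWriter.consumeToDis(trades);
}
```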
- 
partitionValue
The fixed column partition value to ingest into.
This is mutually exclusive with dynamicPartitionFunction(String, LongFunction). Exactly one of a fixed column partition or a dynamic partition function must be specified.
- Parameters:
- partitionValue- the column partition value to ingest into
- Returns:
- this builder
 
- 
dynamicPartitionFunction
@ScriptApi public KafkaTableWriter.Options dynamicPartitionFunction(String inputColumn, LongFunction<String> function)
An input column (which must be a long) and a function to determine which column partition a row belongs to.
This is mutually exclusive with partitionValue(String). Exactly one of a fixed column partition or a dynamic partition function must be specified.
- Parameters:
- inputColumn- the column to use for determining the column partition of a row
- function- the function to apply to the input column
- Returns:
- this builder
 
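A common use is deriving a daily column partition from a long timestamp column. A sketch under assumptions: "Timestamp" is a hypothetical long column holding epoch milliseconds, and the Enterprise package path is assumed as in the earlier example.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.function.LongFunction;

import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class DynamicPartitionExample {
    // Format a long epoch-millisecond timestamp as a yyyy-MM-dd partition (UTC).
    private static final DateTimeFormatter DAY =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    static KafkaTableWriter.Options withDailyPartition(final KafkaTableWriter.Options options) {
        final LongFunction<String> toPartition =
                millis -> DAY.format(Instant.ofEpochMilli(millis));
        // "Timestamp" is a hypothetical long column in the ingested table.
        return options.dynamicPartitionFunction("Timestamp", toPartition);
    }
}
```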
- 
kafkaProperties
Set the properties for the underlying KafkaConsumer. This is a required option.
- Parameters:
- kafkaProperties- the properties for the underlying KafkaConsumer
- Returns:
- this builder
 
- 
topic
Set the topic to consume. This is a required option.
- Parameters:
- topic- the topic to consume
- Returns:
- this builder
 
- 
partitionFilter
Set a filter to select which partitions to ingest. Defaults to all partitions.
- Parameters:
- partitionFilter- a predicate indicating whether we should ingest a partition
- Returns:
- this builder
 
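If only a subset of Kafka partitions should be ingested (for example, to spread a topic across several ingesters), the IntPredicate selects them. A sketch, assuming the Enterprise package path used earlier:

```java
import java.util.function.IntPredicate;

import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class PartitionFilterExample {
    static KafkaTableWriter.Options evenPartitionsOnly(final KafkaTableWriter.Options options) {
        // Ingest only even-numbered Kafka partitions; another ingester could take the odd ones.
        final IntPredicate evenOnly = partition -> partition % 2 == 0;
        return options.partitionFilter(evenOnly);
    }
}
```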
- 
partitionToInitialOffsetFallback
@ScriptApi public KafkaTableWriter.Options partitionToInitialOffsetFallback(IntToLongFunction partitionToInitialOffsetFallback)
Set a function to determine what offset to consume from when other methods fail.
On startup, the ingester must determine where to begin processing records. When resuming an existing partition, the checkpoint record is used to ensure exactly-once delivery of messages. When the current checkpoint record is not available (i.e., for a new partition), then the following steps are taken:
- If resumeFrom(UnaryOperator) is specified, then the prior partition is determined by invoking the resumeFrom function with the current column partition. If a matching internal partition is found with a checkpoint record, then the ingestion is resumed from that offset.
- The Kafka broker is queried for committed values. If there is a committed value, then ingestion is resumed from that offset.
- Finally, the fallback function is called, and ingestion is resumed from the returned offset.
If not set, the fallback defaults to seeking to the beginning of the topic.
- Parameters:
- partitionToInitialOffsetFallback- a function from partition to offset
- Returns:
- this builder
 
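The fallback only runs when neither a checkpoint record nor a broker-committed offset is available. A sketch that starts new partitions at externally chosen offsets, defaulting to the beginning of the topic; the offset map and the Enterprise package path are assumptions:

```java
import java.util.Map;
import java.util.function.IntToLongFunction;

import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class OffsetFallbackExample {
    // Hypothetical externally maintained starting offsets per Kafka partition.
    private static final Map<Integer, Long> STARTING_OFFSETS = Map.of(0, 1_000L, 1, 2_500L);

    static KafkaTableWriter.Options withOffsetFallback(final KafkaTableWriter.Options options) {
        final IntToLongFunction fallback =
                partition -> STARTING_OFFSETS.getOrDefault(partition, 0L); // 0 = start of topic
        return options.partitionToInitialOffsetFallback(fallback);
    }
}
```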
- 
ignoreOffsetFromCheckpoints
@ScriptApi public KafkaTableWriter.Options ignoreOffsetFromCheckpoints(boolean ignoreOffsetFromCheckpoints)
Set a flag to determine whether previously-committed offsets from earlier checkpoint records will be respected. If false (the default), then the ingester will seek to an offset determined by the process described at partitionToInitialOffsetFallback(IntToLongFunction). If true, then on startup the ingester will ignore the first phase of that process (i.e., determining the offset to seek to from the latest checkpoint record).
- Parameters:
- ignoreOffsetFromCheckpoints- whether offsets stored in checkpoint records should be ignored
- Returns:
- this builder
 
- 
ignoreOffsetFromBroker
Set a flag to determine whether previously-committed offsets at the broker will be respected. If false (the default), then the ingester will seek to an offset determined by the process described at partitionToInitialOffsetFallback(IntToLongFunction). If true, then on startup the ingester will ignore step 2 of the second phase of that process (i.e., determining the offset to seek to from the last offset committed to the broker).
- Parameters:
- ignoreOffsetFromBroker- whether offsets committed to the broker should be ignored
- Returns:
- this builder
 
- 
resumeFrom
Optional function that determines a prior column partition to read a size record from.
When beginning to ingest a new column partition, you may want to read the checkpoint record from the previous column partition and start from that point. This allows each record to be ingested into exactly one day's partition.
- Parameters:
- resumeFrom- a function from the current column partition to the previous column partition
- Returns:
- this builder
 
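With daily, date-named column partitions, the resumeFrom function typically maps a partition such as "2024-01-02" to the previous day. A sketch; date-formatted partition names and the Enterprise package path are assumptions:

```java
import java.time.LocalDate;
import java.util.function.UnaryOperator;

import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class ResumeFromExample {
    static KafkaTableWriter.Options resumeFromPreviousDay(final KafkaTableWriter.Options options) {
        // Assumes partitions are named yyyy-MM-dd; "2024-01-02" resumes from "2024-01-01".
        final UnaryOperator<String> previousDay =
                partition -> LocalDate.parse(partition).minusDays(1).toString();
        return options.resumeFrom(previousDay);
    }
}
```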
- 
keySpec
Determine how the key is converted to table columns. This is a required option.
- Parameters:
- keySpec- the key specification
- Returns:
- this builder
 
- 
valueSpec
Determine how the value is converted to table columns. This is a required option.
- Parameters:
- valueSpec- the value specification
- Returns:
- this builder
 
- 
commitToBroker
Optionally, you can specify whether the ingester commits offsets to the Kafka broker. Defaults to true.
For testing, you may want to avoid committing offsets to the broker so that data can be reingested without changing the Kafka consumer group.
- Parameters:
- doCommitToBroker- if true, offsets are committed to the broker; if false, no commits are performed
- Returns:
- this builder
 
- 
transactionsEnabled
Deephaven tables support transactions, which allow a set of contiguous rows to be appended to a table atomically or not at all.
For Kafka feeds, each row is independent, so transactions are usually not necessary. For key and value parsers that expand an input row into multiple output rows, transactions are useful to ensure that no partial rows are written. (Currently, the Core Kafka ingester does not support multiple output rows for a single input row.)
- Parameters:
- enabled- if true, each received group of Kafka messages is wrapped in a transaction; if false, no transactions are used. Defaults to false.
- Returns:
- this builder
 
- 
ignoreColumn
Specify a column from the source Kafka table to ignore.
- Parameters:
- columnName- the name of the column to ignore
- Returns:
- this builder
 
- 
ignoreColumns
Specify columns from the source Kafka table to ignore.
- Parameters:
- columnNames- the names of the columns to ignore
- Returns:
- this builder
 
- 
lastBy
Specify that this DIS should produce a lastBy view of the table with the given key columns using a CoreLastByTableImportState under the hood.
- Parameters:
- name- the name of the lastBy view
- keyColumnNames- the key columns for the lastBy view
- Returns:
- this builder
 
- 
lastBy
Specify that this DIS should produce a lastBy view of the table with the given key columns using a CoreLastByTableImportState under the hood.
- Parameters:
- keyColumnNames- the key columns for the lastBy view
- Returns:
- this builder
 
- 
lastBy
Specify that this DIS should produce a lastBy view of the table with the given key columns using a CoreLastByTableImportState under the hood.
- Parameters:
- keyColumnNames- the key columns for the lastBy view
- Returns:
- this builder
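For example, a lastBy view keyed on a symbol column can be requested while building the options, using the named overload documented above. A sketch; "QuotesLastBySym", "Sym", and the Enterprise package path are assumptions:

```java
import java.util.List;

import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class LastByExample {
    static KafkaTableWriter.Options withLastByView(final KafkaTableWriter.Options options) {
        // Maintain a lastBy view named "QuotesLastBySym" keyed on the "Sym" column.
        return options.lastBy("QuotesLastBySym", List.of("Sym"));
    }
}
```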
 
- 
transformation
A transformation to apply after Core produces data from Kafka.
The Core KafkaTools.Consume.KeyOrValueSpec provides limited options for parsing data from Kafka. When persisting the data to disk, the Enterprise schema must match the input, which may not precisely map to the incoming Kafka data. By specifying a transformation function that operates on the original Table as produced by a Core Kafka ingester, you can adjust each row almost arbitrarily. The transformation function must respect the following limitations:
- The input to the transformation function is a blink table. The output must also be a blink table.
- Each row in the input must map to exactly zero or one rows in the output.
- The offset column in the input must not be changed (every offset in the output must map to a row in the input, and the order must be maintained).
- Parameters:
- transformation- a function from one Table to another, applied to data after it is parsed from Kafka but before it is persisted to disk
- Returns:
- this builder
 
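A transformation commonly renames or derives columns so the parsed blink table matches the Enterprise schema, without adding, dropping, or reordering rows. A sketch; the column names are hypothetical, Table is Core's io.deephaven.engine.table.Table, and the Enterprise package path is assumed as above:

```java
import java.util.function.Function;

import io.deephaven.engine.table.Table;
import io.deephaven.enterprise.kafkawriter.KafkaTableWriter; // assumed package

public class TransformationExample {
    static KafkaTableWriter.Options withTransformation(final KafkaTableWriter.Options options) {
        // Rename a parsed column and derive a new one; both operations keep the
        // row count, row order, and offset column intact, as required above.
        final Function<Table, Table> fixup = t -> t
                .renameColumns("Sym=ticker")
                .updateView("Mid = (Bid + Ask) / 2");
        return options.transformation(fixup);
    }
}
```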