Quick Reference Guide
Table Operations
Accessing Tables
db.t("Namespace", "Table") | Accesses historical data
db.i("Namespace", "Table") | Accesses intraday data
db.i("Namespace", "Table", isLive) | Accesses intraday data; when isLive is false, the table does not tick
Finding Data Sources
db.getNamespaces() | Returns a table with all available namespaces
db.getTableNames("<namespace>") | Returns a table listing all available tables in the given namespace
db.getCatalog() | Returns a table with all available namespace and table names
Filtering Data
Note: Backticks (`) are used for strings within strings and single quotes (') are used for times within strings.
where
The where method returns all rows from the source table for which the logical expressions evaluate to true. Typically these expressions check the values contained in one or more columns.
t.where("Condition"…)

t.where("Date > `2018-01-01`")
t.where("Timestamp <= '2018-01-01T16:00:00 NY'")
t.where("USym = `AAPL`")
t.where("USym in `AAPL`, `MSFT`, `GOOG`") //filters a list of values
t.where("USym icase in `aapl`, `msft`, `goog`") //case insensitive
t.where("USym not in `NVDA`, `AMZN`")
whereIn/whereNotIn
The whereIn and whereNotIn methods filter one table based on the contents of another table and are re-evaluated whenever either table changes.
t1.whereIn(t2, "Match"…) | Filters to rows that match
t1.whereNotIn(t2, "Match"…) | Filters to rows that do not match

t1.whereIn(t2, "USym")
t1.whereIn(t2, "USym=UnderlyingSym") //can match different column names
head/tail
Head and tail filters return the first or last rows of a table, either by specifying a number of rows or a percentage of the table.
t.head(rows) | Returns the first specified number of rows
t.headPct(decimalNum) | Returns the first percentage of rows (e.g., 0.10 for 10%)
t.headBy(rows, "Column"…) | Returns the first specified number of rows for each key in the named column(s)
t.tail(rows) | Returns the last specified number of rows
t.tailPct(decimalNum) | Returns the last percentage of rows
t.tailBy(rows, "Column"…) | Returns the last specified number of rows for each key in the named column(s)
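For example, a minimal sketch using the LearnDeephaven namespace and the StockTrades table that appear in examples later in this guide (the date and column names are taken from those examples):
trades = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
first100 = trades.head(100)          //first 100 rows of the day
lastPerSym = trades.tailBy(5, "Sym") //last 5 rows for each distinct Sym
firstTenth = trades.headPct(0.10)    //first 10% of the rows, assuming decimalNum is a fraction between 0 and 1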
Data Selection
Deephaven offers several methods for selecting (or deselecting) columns of data in a query. These methods also enable users to eliminate or modify existing columns of data, or create new columns of data.

select, view | Keep only the specified columns from the source table.
update, updateView, lazyUpdate | Keep all columns from the source table and add the columns defined in their arguments.
select, update | Content is evaluated once and saved into memory. Requires a large amount of memory to store large datasets. Recommended for content that is expensive to evaluate or accessed frequently.
view, updateView | Content is calculated on the fly and never saved. Low memory requirements. Recommended when the formula is fast to compute or only a small portion of the data is accessed.
lazyUpdate | Content is evaluated on demand; formula inputs and results are cached to avoid recomputation. Uses less memory than update when the formula inputs have few unique values. Best when a column has a small number of unique values.
In-memory
Data obtained by these methods is evaluated once and saved into memory. Recommended when the content is expensive to evaluate or accessed frequently.
t.select("Column"…)
t.update("ColumnFormula"…)
On-the-fly
Data obtained by these methods is referenced via a formula and is not saved in memory. Recommended when the formula computes quickly or if you are only accessing a small portion of the data.
t.view("Column"…)
t.updateView("ColumnFormula"…)
t.lazyUpdate("ColumnFormula"…)
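A short sketch contrasting the methods (the Sym, Bid, Ask, and Exchange columns are taken from the StockQuotes examples elsewhere in this guide; the Mid and Venue formulas are illustrative):
quotes = db.t("LearnDeephaven", "StockQuotes").where("Date=`2017-08-25`")
q1 = quotes.select("Sym", "Bid", "Ask")      //in-memory copy of only these columns
q2 = quotes.view("Sym", "Mid=(Bid+Ask)/2")   //only these columns, computed on the fly
q3 = quotes.update("Mid=(Bid+Ask)/2")        //all source columns plus Mid, computed once and stored
q4 = quotes.updateView("Mid=(Bid+Ask)/2")    //all source columns plus Mid, computed on access
q5 = quotes.lazyUpdate("Venue=Exchange.toUpperCase()") //all source columns plus Venue, cached per unique Exchange value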
Other Methods
t.selectDistinct("Column"…) | Creates a new table containing only the listed columns from the source table; the number of rows is determined by the number of distinct values in those columns.
t.dropColumns("Column"…) | Creates a table with the same number of rows as the source table, omitting the listed columns.
t.renameColumns("ColumnRename"…) | Creates a table with the same rows and columns as the source table, renaming the columns listed in the argument (e.g., "NewName=OldName").
t.moveColumns(index, "Column"…) | Creates a table with the same rows and columns as the source table, moving the listed columns to the given column index. Deephaven uses a zero-based index, so column index 3 is the fourth column.
t.moveUpColumns("Column"…) | Creates a table with the same rows and columns as the source table, moving the listed columns to the first column positions.
Sorting Data
t.sort("Column"…)
|
Rearranges rows in a table from smallest to largest based on the column(s) listed |
t.sortDescending("Column"…)
|
Rearranges rows in a table from largest to smallest order based on the columns listed |
t.sort(SortPair.ascending("Column"), SortPair.descending("Column")
|
Simultaneously rearranges rows in the table in ascending and descending order based on the columns listed. |
t.restrictSortTo("Column"…)
|
Restricts sorting to specified columns |
t.clearSortingRestrictions()
|
Removes sorting restrictions |
t.reverse()
|
Reverses order of entire table |
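For example, a hedged sketch (the table and the Sym, Timestamp, and Size columns come from the StockTrades examples used elsewhere in this guide):
t = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
t1 = t.sort("Sym", "Timestamp")           //ascending by Sym, then by Timestamp
t2 = t.sortDescending("Size")             //largest trades first
t3 = t.restrictSortTo("Sym", "Timestamp") //downstream sorts are limited to these columns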
Grouping and Aggregating Data
By and Ungroup
t.by("Column"…)
|
Groups data in columns other than those specified into arrays. |
t.ungroup()
|
Distributes values into arrays; each array value becomes its own row. Ungroup is usually called with no arguments, but columns may be specified. |
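A round-trip sketch (the table and Sym column are assumed from the StockTrades examples in this guide):
t = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
grouped = t.by("Sym")         //one row per Sym; the other columns become arrays
expanded = grouped.ungroup()  //arrays expanded back to one row per original trade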
ByExternal
Divides a single table into multiple tables, which are defined by unique key-value pairs in a TableMap. The keys are determined by the columnNames used in the argument.
tables = t.byExternal("Column"…)
table = tables.get("Key")
tables.getKeySet() //returns array of keys
The following is required if the table is broken down by multiple columns:
import com.fishlib.datastructures.util.SmartKey
table = tables.get(new SmartKey("Key"...))
Simple Aggregations
A group of predefined aggregation methods that each perform a single calculation, collapsing the rows of each group into a single value:
t.firstBy("Column"…)
t.lastBy("Column"…)
t.sumBy("Column"…)
t.avgBy("Column"…)
t.stdBy("Column"…)
t.varBy("Column"…)
t.medianBy("Column"…)
t.minBy("Column"…)
t.maxBy("Column"…)
t.countBy("CountColumn", "Column"…)
Combined Aggregations
These methods perform multiple aggregations at once, defined by a comboBy method signature:
t.by(AggCombo(Agg…), "Column"…)
AggFirst("Column"…)
AggLast("Column"…)
AggSum("Column"…)
AggAvg("Column"…)
AggStd("Column"…)
AggVar("Column"…)
AggMed("Column"…)
AggMin("Column"…)
AggMax("Column"…)
AggCount("Column"…)
AggArray("Column"…)
AggPct(pct, "Column"…)
AggWAvg("WeightColumn", "Column"…)

t.by(AggCombo(AggSum("Dollars", "Size"), AggAvg("AvgSize=Size")),"Symbol")
Joining Data from Multiple Tables
Methods to combine (join) certain columns of data from multiple tables.
t1.naturalJoin(t2, "Match…", "ColumnToAdd…") | Returns a table with all rows from the left table, combined with matching data from the right table; left rows with no match receive null values in the added columns.
t1.exactJoin(t2, "Match…", "ColumnToAdd…") | Requires exactly one right-table match for each left-table row. The operation fails if a left row has no match or multiple matches in the right table.
t1.join(t2, "Match…", "ColumnToAdd…") | Returns the cartesian product of left rows x right rows, filtered by the matching criteria. Rows are ordered first by the left table, then by the right table.
t1.leftJoin(t2, "Match…", "ColumnToAdd…") | Joins the right table's columns to the left table, grouping the joined data into arrays for each left-table row that shares the same matching criteria.
t1.aj(t2, "Match…, AjMatch", "ColumnToAdd…") | As-of join: for each left row, finds the closest right-table value without going over and joins the listed columns. Note: in as-of joins, matching subsets must be ordered.
t1.raj(t2, "Match…, AjMatch", "ColumnToAdd…") | Reverse as-of join: for each left row, finds the closest right-table value without going under and joins the listed columns.
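A sketch of the two most common joins, assuming the StockTrades and StockQuotes tables from the LearnDeephaven examples in this guide share Sym and Timestamp columns and cover the same date:
trades = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
quotes = db.t("LearnDeephaven", "StockQuotes").where("Date=`2017-08-21`")
withLastQuote = trades.naturalJoin(quotes.lastBy("Sym"), "Sym", "Bid,Ask") //at most one match per trade row
withQuoteAsOf = trades.aj(quotes, "Sym,Timestamp", "Bid,Ask")              //most recent quote at or before each trade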
Formatting Tables
Locking Column Order
Users can lock columns to the front or end of a Deephaven table, and prevent layout changes to that table from being saved.
t.layoutHints(
    com.illumon.iris.db.tables.utils.LayoutHintBuilder.get()
    .atFront("ColName") //locks column(s) to front of table
    .atEnd("ColName") //locks column(s) to end of table
    .savedLayouts(false)) //prevents layout changes from being saved

importjava("com.illumon.iris.db.tables.utils.LayoutHintBuilder")
t3=db.t("LearnDeephaven", "StockQuotes").where("Date=`2017-08-25`")
t4=t3.update(java_array("java.lang.String", [])) \
    .layoutHints(LayoutHintBuilder.get()
        .atFront("USym","Timestamp")
        .atEnd("Sym", "SecurityType")
        .savedLayouts(False))
import com.illumon.iris.db.tables.utils.LayoutHintBuilder
t1=db.t("LearnDeephaven", "StockQuotes").where("Date=`2017-08-25`")
t2=t1.update().layoutHints(LayoutHintBuilder
.get()
.atFront("USym","Timestamp")
.atEnd("Sym", "SecurityType")
.savedLayouts(false)
)
Hiding Columns
Columns in Deephaven tables can be hidden from view by default, ensuring they are hidden for all users who access that table.
t.layoutHints(LayoutHintBuilder.get().hide("ColumnA","ColumnB","ColumnC"))

from deephaven import *
LayoutHintBuilder = jpy.get_type("com.illumon.iris.db.tables.utils.LayoutHintBuilder")
t1=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`") \
.layoutHints(LayoutHintBuilder.get().hide("USym","SecurityType","Source"))
import com.illumon.iris.db.tables.utils.LayoutHintBuilder
t1=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
.layoutHints(LayoutHintBuilder.get().hide("USym","SecurityType","Source"))
Freezing Columns
Freezing designated columns keeps them in place on the left side of the table so that they remain in view even when horizontally scrolling through the table.
t.layoutHints(LayoutHintBuilder.get().freeze("ColumnA","ColumnB","ColumnC"))

LayoutHintBuilder = jpy.get_type("com.illumon.iris.db.tables.utils.LayoutHintBuilder")
t=db.t("LearnDeephaven", "StockTrades")\
.where("Date=`2017-08-21`")\
.layoutHints(LayoutHintBuilder\
.get()\
.freeze("USym","Exchange"))
import com.illumon.iris.db.tables.utils.LayoutHintBuilder
t=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
.layoutHints(LayoutHintBuilder
.get()
.freeze("USym","Exchange")
)
AutoFilter
When enabled via query, an AutoFilter lets users filter a Deephaven table column based on an automatically generated list of the unique values found in that column.
t.layoutHints(LayoutHintBuilder.get().autoFilter("ColumnA","ColumnB","ColumnC"))

from deephaven import *
LayoutHintBuilder = jpy.get_type("com.illumon.iris.db.tables.utils.LayoutHintBuilder")
t1=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")\
.layoutHints(LayoutHintBuilder.get().autoFilter("USym", "Exchange", "Size"))
import com.illumon.iris.db.tables.utils.LayoutHintBuilder
t1=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
.layoutHints(LayoutHintBuilder.get().autoFilter("USym","Exchange","Size"))
Totals
Users can include Totals and Grand Totals sections in their tables that display aggregated column values, such as the total sum of values in a column, or an average value of a particular column. Totals display aggregated values calculated from a filtered table, while Grand Totals display aggregated values calculated from the entire source table.
import com.illumon.iris.db.v2.TotalsTableBuilder //imports the TotalsTableBuilder class
t1 = db.t("Namespace", "Table") //source table
t2 = t1.setTotalsTable(new TotalsTableBuilder() //creates a new TotalsTableBuilder
    .setOperation("Bid", "Avg")) //sets the desired aggregation ("Avg") on the specified column ("Bid")

from deephaven import *
t1 = db.t("LearnDeephaven", "StockQuotes").where("Date=`2017-08-25`")
t2 = t1.setTotalsTable(TotalsTableBuilder()
.setOperation("Bid", "Avg")
.setOperation("Ask", "Avg"))
import com.illumon.iris.db.v2.TotalsTableBuilder
t1 = db.t("LearnDeephaven" , "StockQuotes").where("Date=`2017-08-25`")
t2 = t1.setTotalsTable(new TotalsTableBuilder()
.setOperation("Bid", "Avg")
.setOperation("Ask", "Avg"))
Numeric
t.formatColumns("ColumnName=Decimal(`#0.00`)"…) //or Decimal(`\$.00`) or Decimal(`0.00%`)
Note: # means only display non-zero numbers in position; 0 means also display zeroes in position
Color
t.formatColumns("ColumnName=Color")
t.formatColumnWhere("ColumnName", "Condition", "Color")
t.formatRowWhere("Condition", "Color")
t.formatColumns("ColumnName=heatmap(Column, minValue, maxValue, minColor, maxColor)")
Creating Tables
Deephaven allows users to store their own tables in the database. There are two types of user tables: directly manipulated and centrally managed.
Non-Partitioned Tables
A non-partitioned, or splayed, table is stored as a whole rather than as subtables.
db.addTable("Namespace", "TableName", table)
db.replaceTable("Namespace", "TableName", table)
db.removeTable("Namespace", "TableName")
Partitioned Tables
A partitioned table places rows with the same value in a designated "partitioning column" (the most common is Date) into the same partition. Each partition is stored as a separate subtable. Individual partitions can be added, replaced, or removed without altering data from other partitions.
db.addPartitionedTableDefinition("Namespace", "TableName", "PartitioningColumn", table.getDefinition())
db.addPartitionedTableDefinitionIfAbsent("Namespace", "TableName", "PartitioningColumn", table.getDefinition())
db.addTablePartition("Namespace", "TableName", "PartitioningValue", table)
db.replaceTablePartition("Namespace", "TableName", "PartitioningValue", table)
db.removeTablePartition("Namespace", "TableName", "PartitioningValue")
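For example, a minimal sketch that writes one date partition into a user table (the "ExampleUser" namespace and "MyTrades" table name are hypothetical; you need write permission to the namespace):
newData = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
db.addPartitionedTableDefinitionIfAbsent("ExampleUser", "MyTrades", "Date", newData.getDefinition())
db.addTablePartition("ExampleUser", "MyTrades", "2017-08-21", newData)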
Centrally Managed Tables
db.addPartitionedTableDefinition("Namespace", "TableName", "PartitioningColumn", table.getDefinition())
db.appendCentral("Namespace", "TableName", "ColumnPartitionValue", newData, flush) //newData is a table; flush is a boolean
db.closeAndDeleteCentral("Namespace", "TableName")
db.removeTableDefinition("Namespace", "TableName")
Tree Tables
Tree tables conveniently display your data by allowing you to expand or collapse "branches" of the table as needed. When fully collapsed, the "tree" component resembles a navigation menu: clicking on the arrow in a root row opens the data for that particular branch.
ids = db.t("Namespace","TableName").firstBy("Sym").updateView("ID=Sym", "Parent=(String)null", "Date=(String)null", "Timestamp=(com.illumon.iris.db.tables.utils.DBDateTime)null"…) //orders the table by the chosen ID (Sym) and nulls the values in the other columns
data = db.t("Namespace","TableName").where().updateView("ID=Long.toString(k)", "Parent=Sym") //the data to display in each branch
combo = merge(ids, data)
comboTree = combo.treeTable("ID","Parent") //creates the actual tree table

from deephaven import *
syms = db.t("LearnDeephaven","StockTrades") \
.firstBy("Sym") \
.updateView("ID=Sym","Parent=(String)null","Date=(String) null", "Timestamp=(com.illumon.iris.db.tables.utils.DBDateTime) null", "SecurityType = (String) null", "Exchange=(String)null", "Last=(Double) null", "Size=(Integer)null", "Source=(String)null", "ExchangeId=(Long) null", "ExchangeTimestamp=(DBDateTime)null", "SaleCondition=(String)null")
data = db.t("LearnDeephaven","StockTrades").where() \
.updateView("ID=Long.toString(k)","Parent=Sym")
combo = ttools.merge(syms,data)
comboTree = combo.treeTable("ID","Parent")
syms = db.t("LearnDeephaven","StockTrades")
.firstBy("Sym")
.updateView("ID=Sym","Parent=(String)null","Date=(String) null", "Timestamp=(com.illumon.iris.db.tables.utils.DBDateTime) null", "SecurityType = (String) null", "Exchange=(String)null", "Last=(Double) null", "Size=(Integer)null", "Source=(String)null", "ExchangeId=(Long) null", "ExchangeTimestamp=(DBDateTime)null", "SaleCondition=(String)null")
data = db.t("LearnDeephaven","StockTrades").where()
.updateView("ID=Long.toString(k)","Parent=Sym")
combo = merge(syms,data)
comboTree = combo.treeTable("ID","Parent")
Roll-up Tables
Roll-up Tables show only certain, customized aggregated column values. Organized by a user-specified "Group By" column, Roll-up Tables display total values (such as sum or average) of other columns in the tables for those distinct items in the "Group By" column.
t.rollup(AggCombo([Aggregations...]), groupByColumns...)

from deephaven import *
t1=db.t("LearnDeephaven", "StockQuotes").where()
t2=t1.rollup(caf.AggCombo(caf.AggMin("Ask_Min=Ask"),caf.AggAvg("AskSize_Avg=AskSize"), caf.AggMax("Bid_Max=Bid"), caf.AggAvg("BidSize_Avg=BidSize")), "Sym", "Exchange")
t1=db.t("LearnDeephaven", "StockQuotes").where()
t2=t1.rollup(AggCombo(AggMin("Ask_Min=Ask"),AggAvg("AskSize_Avg=AskSize"), AggMax("Bid_Max=Bid"), AggAvg("BidSize_Avg=BidSize")), "Sym", "Exchange")
Pivot Widgets
Pivot Widgets summarize and reorder table data into a multi-pane widget that shows values for particular keys chosen from the source table and, if desired, aggregated values and grand totals.
pw = PivotWidgetBuilder.pivot(t, "RowCol", "ColumnCol", "ValueCol")

from deephaven import *
t=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
pwAvg = PivotWidgetBuilder.pivot(t, "USym", "Exchange", "Last")\
    .addValueColumns("Size")\
    .avg()\
    .show()
import com.illumon.iris.console.utils.PivotWidgetBuilder
t=db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`")
pwAvg = PivotWidgetBuilder.pivot(t, "USym", "Exchange", "Last")
.addValueColumns("Size")
.avg()
.show()
Input Tables
Input Tables are user-modifiable, real-time database tables. Much like a spreadsheet, Input Tables store user-provided data that can be added, changed, or deleted.
Creation
input = InputTable.newInputTable(database, "Namespace", "TableName",
cKey("ColName", dataType)..., //key columns
c("ColName", dataType)... //non-key columns
)
Retrieval
t = input.getTable()
t = InputTable.inputTable(db, "Namespace", "TableName")
Note: you can look at the history of changes with db.i("Namespace", "InputTableName").
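A sketch of the creation pattern above (the "ExampleUser" namespace, table name, and column definitions are hypothetical, and the exact form of the dataType arguments may differ in your installation):
input = InputTable.newInputTable(db, "ExampleUser", "Watchlist",
    cKey("USym", String.class),  //key column
    c("Limit", double.class),    //non-key columns
    c("Note", String.class))
watchlist = input.getTable()     //live view of the input table's current contents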
Time Tables
A time table is a special type of table that adds new rows at a regular, user-defined interval; its sole column is Timestamp.
db.timeTable("hh:mm:ss")
CSV Tables
CSV (which stands for "comma-separated values") is a simple file format used to store tabular data; these files can be imported into and exported from programs, such as Deephaven, that store data in tables.
readCsv("filepath", "delimiter") //reads a CSV file on the query server into a table
writeCsv(table, "filePath") //writes a table to a CSV file on the query server
Preemptive Tables
If a table is a Preemptive Table, the entire table is sent over the network to subscribed clients when the table is refreshed.
Publishing
table.preemptiveUpdatesTable(milliseconds)
Subscribing
client = PersistentQueryTableHelper.getClientForPersistentQuery(log, "Username", "ScriptName", timeoutMilliseconds)
preemptiveTable = client.getPreemptiveTable("PreemptiveTableName")
Special Table Operations
t.getColumn("Column").getDirect()
Gets column values as an array
t.getColumn("Column").get(index)
Gets cell value
col("ColumnName", value…)
longCol("ColumnName", value…)
intCol("ColumnName",value…)
shortCol("ColumnName", value…)
byteCol("ColumnName", value…)
charCol("ColumnName", value…)
doubleCol("ColumnName", value…)
floatCol("ColumnName", value…)
roundDecimalColumns(tableToRound) | Rounds all decimal columns in the given table to whole numbers
roundDecimalColumnsExcept(tableToRound, "Column"…) | Rounds all decimal columns in the given table except those specified
Table Tools
The TableTools class contains many methods for printing, loading, saving and modifying tables.
emptyTable(rows) | Creates a new empty table
newTable(column…) | Creates a table with the specified columns
merge(table…) | Appends the rows of the given tables, which must have the same columns
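For example, a small static table built with the column methods listed under Special Table Operations (the symbols and values are illustrative; the TableTools methods are typically available in the console, otherwise they can be imported from the TableTools class):
priceTable = newTable(
    col("Sym", "AAPL", "MSFT", "GOOG"),     //String column
    doubleCol("Price", 157.5, 73.2, 930.4), //double column
    intCol("Qty", 100, 200, 300))           //int column
stacked = merge(priceTable, priceTable)     //rows of both tables appended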
Metadata
Shows all of the column names and data types within a specific table:
t.getMeta()
Snapshotting Methods
Snapshots allow you to explicitly set the interval at which a table updates to reflect changing data. The snapshot operation produces an in-memory copy of a table (the "target table"), which updates the entire table every time another table (the "trigger table", which is often a time table) ticks.
controlTicks will only update when triggerTable changes:
controlTicks = triggerTable.snapshot(targetTable, "Timestamp")
With snapshotIncremental, only rows that have changed will update:
controlTicks = triggerTable.snapshotIncremental(targetTable, "Timestamp")
With snapshotHistory, a full copy of the target table is appended to the result on each update:
controlTicks = triggerTable.snapshotHistory(targetTable)
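For example, a sketch pairing a time table trigger with a ticking target (the namespace and table are placeholders, as in the templates above):
trigger = db.timeTable("00:00:05")                         //ticks every 5 seconds
target = db.i("Namespace", "Table")                        //any ticking table
every5s = trigger.snapshot(target, "Timestamp")            //full refresh every 5 seconds
changed = trigger.snapshotIncremental(target, "Timestamp") //only changed rows refresh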
Formula Operations
Arrays
binSearchIndex(array, value, equalsPreference) | Finds the index at which the value would fall in a sorted array; the preference can be BS_LOWEST, BS_HIGHEST, or BS_ANY
column.subArray(startIndex, endIndex) | Returns a slice of the array; start-inclusive, end-exclusive
nullToValue(column, value) | Replaces null values in the column with the specified value
str.substring(str.indexOf(character)) | Extracts the substring starting at the first instance of the specified character
Applying Row Index
Note: ii represents the positional row index and should not be used with ticking data.
t.update("Index=ii")
Placing an underscore after a column name allows array access to its values:
t.update("Difference=Column - Column_[ii - 1]")
Create percentile bins based on row index:
upperBin(ii / Column_.size() * 100, 1)
Numeric
sqrt(input)
abs(input)
Calculate an exponential moving average without grouping/ungrouping behaviors. Bad-data behaviors: ByEma.BadDataBehavior.BD_RESET, BD_SKIP, or BD_PROCESS; modes: AbstractMa.Mode.TICK or TIME:
ema = new ByEmaSimple(nullBehavior, nanBehavior, mode, timescale, timeUnit)
ema.update(Timestamp, Value)
isNormal(value) | Checks that the value is not null, NaN, or infinite
Generates random values for any numeric primitive type:
r = new Random()
Note: use with the .update method only to avoid undefined results
r.nextInt()
r.nextLong()
r.nextFloat()
r.nextDouble()
Working with Time
Note: Single quotes (') rather than backticks (`) are used for times within strings.
String Formats
yyyy-mm-ddThh:mm:ss.nanos TZ //date-time
#y#m#w#dT#h#m#s //period
hh:mm:ss.nanos //duration
Constants representing nanoseconds:
threeSeconds = 3 * SECOND
fiveMinutes = 5 * MINUTE
twoHours = 2 * HOUR
sevenDays = 7 * DAY
Arithmetic
t.updateView("TimeSum = Timestamp + '3y7mT12h'") //period
t.updateView("TimeSum = Timestamp + '01:30:00'") //duration
t.updateView("TimeSum = Timestamp + 6 * HOUR") //nanosecond constants
Downsampling
Round timebin up and get last:
t.updateView("TimeBin=upperBin(Timestamp, TimeLength)")
.lastBy("Sym", "TimeBin").dropColumns("TimeBin")
Round timebin down and get first:
t.updateView("TimeBin=lowerBin(Timestamp, TimeLength)")
.firstBy("Sym", "TimeBin").dropColumns("TimeBin")
Note: TimeLength can use duration or nanosecond constants.
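For example, five-minute bars taking the last quote per Sym in each bin (a sketch using the StockQuotes examples and the nanosecond constants from the Working with Time section below):
quotes = db.t("LearnDeephaven", "StockQuotes").where("Date=`2017-08-25`")
binned = quotes.updateView("TimeBin=upperBin(Timestamp, 5 * MINUTE)")
bars = binned.lastBy("Sym", "TimeBin").dropColumns("TimeBin") //one row per Sym per 5-minute bin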
Useful Methods
currentDateNy()
lastBusinessDateNy()
currentTime()
isBusinessTime(dateTime)
inRange(value, startLimit, endLimit) //start and end limit inclusive
Convert a string to DBDateTime:
convertDateTime("Date string")
Convert a DBDateTime to a date string:
formatDateNy(DateTime)
Return the string representation of the date a specified number of business days previous:
previousBusinessDay("Date", numberOfDays)
previousBusinessDay(DateTime, numberOfDays)
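For example, a sketch filtering to the morning of a trading day (the date and table come from the StockTrades examples; referencing the start variable inside the where string relies on the query scope):
start = convertDateTime("2017-08-21T09:30:00 NY")
morning = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`", "Timestamp >= start")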
Plotting Operations
Note that Python users should import the following:
from deephaven import Plot
Plot should prefix each method; e.g., plotSym = Plot.plot("SymValue", sourcetable, "Timestamp", "Last").xBusinessTime().show()
XY Series
plot("SeriesName", source, "xCol", "yCol") //table source
plot("SeriesName", [x],[y]) //array source
plot("SeriesName", function) //function source
Category
catPlot("SeriesName", source, "CategoryCol", "ValueCol") //table source
catPlot("SeriesName", [category], [values]) //array source
Histogram
histPlot("seriesName", source, "ValueCol", nbins) //table source
histPlot("SeriesName", [x], nbins) //array source
Category Histogram
catHistPlot("seriesName", source, "ValueCol") //table source
catHistPlot("SeriesName", [Values]) //array source
Pie
piePlot("SeriesName", source, "CategoryCol", "ValueCol") //table source
piePlot("SeriesName", [category], [values]") //array source
Open, High, Low, and Close (OHLC)
ohlcPlot("SeriesName", source, "Time", "Open", "High", "Low", "Close") //table source
ohlcPlot("SeriesName",[Time], [Open], [High], [Low], [Close]) //array source
ErrorBar Charts
errorBarX("SeriesName", source, "x", "y", "xLow", "xHigh") //table source
errorBarX("SeriesName", [x], [y], [xLow], [xHigh]) //array source
errorBarY("SeriesName", source, "x", "y", "yLow", "yHigh") //table source
errorBarY("SeriesName", [x], [y], [yLow], [yHigh]) //array source
errorBarXY("SeriesName",source,"x","xLow","xHigh","y","yLow","yHigh") //table source
errorBarXY("SeriesName", [x], [xLow], [xHigh], [y], [yLow], [yHigh]) //array source
catErrorBar("SeriesName", source, "x", "y", "yLow", "yHigh") //table source
catErrorBar("SeriesName", [x],[y],[yLow],[yHigh]) //array source
Scatter Plot Matrix
scatterPlotMatrix(table, "series1", "series2", …)
Multiple Charts in a Figure
figure(int Rows, int Columns).newChart(r,c).plot…
.rowSpan(r) // span multiple rows
.colSpan(c) // span multiple columns
.span(r,c) // span multiple rows and columns
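For example, a 1x2 figure with two charts drawn from the same table (a sketch; the USym filter and column choices are illustrative):
t = db.t("LearnDeephaven", "StockTrades").where("Date=`2017-08-21`", "USym=`AAPL`")
f = figure(1, 2).newChart(0, 0).plot("Last", t, "Timestamp", "Last").newChart(0, 1).plot("Size", t, "Timestamp", "Size").show()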
plotBy, catPlotBy, and ohlcPlotBy
plotBy("Series1", source, "xCol", "yCol", "groupByCol")
catPlotBy("SeriesName", source, "CategoryCol", "ValueCol", "groupByCol")
ohlcPlotBy("SeriesName", source, "Time", "Open", "High", "Low", "Close", "groupByCol")
OneClick
oneClick(source, "ColName1", "ColName2", …)
Formatting
Plot Styles: Apply to XY Series and Category Charts only
Options: Bar, Stacked_Bar, Line, Area, Stacked_Area, Scatter, Step
plot("SeriesName", t, "x", "y")
    .plotStyle("optionName")
Time Formatting
xFormatPattern("yyyy-MM-dd") // e.g., 2017-03-08
xFormatPattern("HH:mm") // e.g., 14:45
xFormatPattern("hh:mm a") // e.g., 02:45 p.m.
xFormatPattern("HH:mm:ss") // e.g., 14:45:59
xFormatPattern("yyyy-MM-dd'T'HH:mm:ss") // e.g., 2017-03-08T14:45:59
Decimal Formatting
yFormatPattern("###,###.00") // e.g., 123,456.78
yFormatPattern("\$###,###.00") // e.g., $123,456.78
yFormatPattern("\$###,###.00 Million") // e.g., $123,456 Million
yFormatPattern("###.00%") // e.g., 100.11%
yFormatPattern("##0.00E00") // e.g., 321.34E8
Themes
figure().theme("Dark")
Titles/Labels
figureTitle("Title")
chartTitle("Title")
xLabel("Label")
yLabel("Label")
Axis Ranges
xRange(value, value)
xMin(value)
yMax(value)
Shared axis: twinX() twinY()
Ticks - Angle, Gap, Visibility
xTickLabelAngle(45.0)
xTicks(10) - or - yMinorTicks(5)
xTicksVisible(false) - or - yMinorTicksVisible(false)
Font Formatting
.fontObject("FontFamily", "Style", size)
figureTitleFont("Arial Black", "p", 32)
chartTitleFont("Arial", "Bold", 24)
xLabelFont("SansSerif", "P", 10)
legendFont("Tahoma", "BI", 9)
Line Formatting
linesVisible(true)
lineStyle(lineStyle(width))
lineStyle(lineStyle("endStyle","joinStyle")) // ROUND, SQUARE, BUTT and ROUND, MITER, BEVEL
lineStyle(lineStyle([dashPattern])) // dashLength, gapLength
lineStyle(lineStyle(width,[dashPattern]))
lineStyle(lineStyle(width,"endStyle","joinStyle",[dashPattern]))
Point Formatting
pointsVisible(true)
pointSize(2)
pointSize(0.5)
pointSize([1,2,3]) // array-based sizing
pointSize(t,"Size") // table columns sizing
Color Formatting
Color values can include RGB, RGBA, HSL, HSLA, Hexadecimal or HTML Color Name
pointColor(colorRGB(153,204,255))
lineColor(colorHSL(0,100,50))
errorBarColor("green")
chartTitleColor("#9F9F9F")
chartTitleColor("BLACK")
Plot Orientation
plotOrientation("horizontal")
Legend/Axis
legendVisible(false) | Turns legend visibility on or off
gridLinesVisible(false) | Turns grid line visibility on or off
Last Updated: 16 February 2021 18:06 -04:00 UTC Deephaven v.1.20200928
Deephaven Documentation Copyright 2016-2020 Deephaven Data Labs, LLC All Rights Reserved