Troubleshooting Logging
If logged events are not visible in a `db.i` query in the console, the following steps can help narrow down where the problem lies.
Check that bin files are being created and are growing.
- By default, the tailer will look for binary logs in the `/var/log/deephaven/binlogs`, `/var/log/deephaven/binlogs/pel`, and `/var/log/deephaven/binlogs/perflogs` directories. Log files will normally have a name of the form `<namespace>.<tablename>.System.<internal partition value>.<column partition value>.bin.<date and time stamp>`. They should be owned by `irisadmin` and readable by the `dbmergegrp` group.
- If no such files exist, or the files are not growing, it may be that the logging application is not running, that there is a problem with the logging application, or that no events are arriving. Most troubleshooting from here will be specific to the custom logging application itself.
- If needed, binary log file contents can be "dumped" using the `iriscat` tool (`/usr/illumon/latest/bin/iriscat`).
- Another useful tool when developing new logging sources is the `readBin` command. `readBin` loads a binary log file into an in-memory table in the Deephaven console. It requires that the schema for the table be deployed on the query server and that the listener class corresponding to the table and logger version be available. Several usages are available; the most common is of the form: `myTable = readBin("namespace", "table_name", "path and file of binary log file relative to the server")`
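The file-level checks above can be scripted from a shell on the Deephaven server. This is a minimal sketch: the namespace and table name (`MyNamespace.MyTable`) are hypothetical placeholders, and `iriscat` is only invoked if it is actually present at the standard install path.

```shell
# Hypothetical example table: MyNamespace.MyTable -- substitute your own.
BINLOG_DIR=/var/log/deephaven/binlogs

# Are bin files being created and growing? List the newest matches with sizes;
# run this twice a few seconds apart and compare. (Quiet if no files exist yet.)
ls -lt "$BINLOG_DIR"/MyNamespace.MyTable.System.*.bin.* 2>/dev/null | head -5

# Dump file contents with iriscat, if the tool is installed on this host.
IRISCAT=/usr/illumon/latest/bin/iriscat
if [ -x "$IRISCAT" ]; then
  "$IRISCAT" "$BINLOG_DIR"/MyNamespace.MyTable.System.*.bin.*
else
  echo "iriscat not found at $IRISCAT"
fi
```

If the `ls` shows files whose sizes do not change between runs while events should be flowing, the problem is upstream of the tailer, in the logging application itself.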
Check that the tailer is tailing the files
- When the tailer starts, it should indicate which files it has found. This is logged near the top of its log file (`/var/log/deephaven/tailer/LogTailerMain[number - usually 1].log.[current or datetimestamp]`). Use grep to check these log files for a message that includes the keyword `Opening:` and the name of the particular binary log file. If this message is not found, there may be a problem with the configuration of `tailerConfig.xml` or the host config file. After a restart, the top of the new tailer log file shows which XML file the tailer is using. If these files are not correctly configured with the service name and the file names and patterns for the service, the tailer will not pick the files up and tail them to the DIS.
- Also check that the tailer is sending its data to the correct DIS and that the needed ports are open. These properties are set with `data.import.server.[DISname].host` and `data.import.server.port` in the property file used by the tailer. The default DIS port is 22021. (See: Property File Configuration Parameters in the Binary Log Tailer.)
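The tailer-side checks can be run with grep from the shell. In this sketch the table name and the property-file path are hypothetical placeholders; substitute the property file your tailer actually loads.

```shell
# Hypothetical example table and property file -- substitute your own.
TABLE_PATTERN="MyNamespace.MyTable"
TAILER_PROP_FILE=/etc/sysconfig/deephaven/tailer.prop  # hypothetical path

# Did the tailer open the table's bin files? Look for the "Opening:" keyword.
grep "Opening:" /var/log/deephaven/tailer/LogTailerMain*.log.* 2>/dev/null \
  | grep "$TABLE_PATTERN" \
  || echo "no 'Opening:' message found for $TABLE_PATTERN"

# Which DIS host and port is the tailer configured to send to?
grep -E "data\.import\.server\..*host|data\.import\.server\.port" \
  "$TAILER_PROP_FILE" 2>/dev/null \
  || echo "DIS host/port properties not found in $TAILER_PROP_FILE"
```

If the `Opening:` message is missing, revisit `tailerConfig.xml` and the host config file before looking any further downstream.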
Check for errors in the DIS log
- The DIS runs under the `dbmerge` account, and its log files are located under `/var/log/deephaven/dis/DataImportServer.log.[current or datetimestamp]`. Use grep to check these logs for `ERROR` or for references to the namespace and table name that should be logged. One possible error is that the listener code generated from the schema has not been deployed to the DIS system, or, similarly, that the schema for the new table has not been made available to the DIS. After new schemas or listeners are deployed, the DIS must be restarted to pick up the changes (`sudo monit restart db_dis`).
- If changes are made to a schema, logger, or listener mid-day (after data has already been logged), the DIS should be stopped and the corresponding intraday directory (`/db/Intraday/[namespace]/[table name]/[host name]/[date]`) deleted. When the DIS is restarted, it will reprocess today's events for the table using the new schema and listener.
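The DIS-side checks above can be sketched as the shell session below. The table name is a hypothetical placeholder, and the cleanup commands are left commented out because they delete today's intraday data for the table.

```shell
DIS_LOG_DIR=/var/log/deephaven/dis

# Any errors in the DIS logs, or references to the table that should be logging?
grep -H "ERROR" "$DIS_LOG_DIR"/DataImportServer.log.* 2>/dev/null \
  || echo "no ERROR lines found"
grep -H "MyNamespace.MyTable" "$DIS_LOG_DIR"/DataImportServer.log.* 2>/dev/null \
  || echo "no references to MyNamespace.MyTable found"

# After deploying a new schema or listener, restart the DIS:
#   sudo monit restart db_dis

# Mid-day schema/logger/listener change: stop the DIS, delete the table's
# intraday directory, then restart. WARNING: this discards today's intraday
# data for the table; the DIS reprocesses the bin files on restart.
#   sudo monit stop db_dis
#   rm -rf "/db/Intraday/MyNamespace/MyTable/$(hostname)/$(date +%Y-%m-%d)"
#   sudo monit start db_dis
```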
Last Updated: 16 February 2021 16:22 -05:00 UTC Deephaven v.1.20190607
Deephaven Documentation Copyright 2016-2019 Deephaven Data Labs, LLC All Rights Reserved