Executor logs
Property/Description | Set by user | Unit | Default |
---|---|---|---|
`com.unraveldata.job.collector.running.load.conf` When set to true | | boolean | false |
`com.unraveldata.job.collector.hive.queries.cache.size` Used to improve the Hive-MR pipeline by caching data so it can be retrieved from the cache instead of from an external API. You should not have to change this value. | | count | 1000 |
`com.unraveldata.max.attempt.log.dir.size.in.bytes` Maximum size of the aggregated executor logs that are imported and processed by the Spark worker for a successful application. | | byte | 500000000 (~500 MB) |
`com.unraveldata.max.failed.attempt.log.dir.size.in.bytes` Maximum size of the aggregated executor logs that are imported and processed by the Spark worker for a failed application. | | byte | 2000000000 (~2 GB) |
`com.unraveldata.min.job.duration.for.attempt.log` Minimum duration of a successful application for which executor logs are processed (in milliseconds). | | ms | 600000 (10 mins) |
`com.unraveldata.min.failed.job.duration.for.attempt.log` Minimum duration of a failed/killed application for which executor logs are processed (in milliseconds). | | ms | 60000 (1 min) |
`com.unraveldata.attempt.log.max.containers` Maximum number of containers for the application. If the application has more than the configured number of containers, the aggregated executor log is processed for the application. | | count | 500 |
`com.unraveldata.spark.master` Default master for Spark applications. (Used to download executor logs using the correct APIs.) Valid options: | | string | yarn |
`com.unraveldata.process.executor.log` Set this flag to true to process the executor logs. | | boolean | true |
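As a reference, here is a minimal sketch of how these executor-log settings might look in Unravel's properties file (the exact file name and location depend on your installation); the values shown are the defaults from the table above:

```properties
# Process executor logs (default: true)
com.unraveldata.process.executor.log=true

# Only process executor logs for successful applications that ran at least 10 minutes
com.unraveldata.min.job.duration.for.attempt.log=600000

# Cap the aggregated executor log size at ~500 MB for successful applications
com.unraveldata.max.attempt.log.dir.size.in.bytes=500000000

# Spark applications run on YARN, so executor logs are fetched through the corresponding APIs
com.unraveldata.spark.master=yarn
```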
Event logs
Property/Description | Set by user | Unit | Default |
---|---|---|---|
`com.unraveldata.process.event.log` Processes event logs. | optional | boolean | true |
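A minimal sketch, assuming you want to turn event-log processing off (the property defaults to true):

```properties
# Skip event log processing entirely
com.unraveldata.process.event.log=false
```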
HDFS logs
Property/Description | Set by user | Unit | Default |
---|---|---|---|
`com.unraveldata.job.collector.done.log.base` HDFS path to the MapReduce job history done directory. | | string | /user/history/done |
`com.unraveldata.job.collector.log.aggregation.base` HDFS path to the aggregated container logs (logs to process). Do not include the hdfs:// prefix. The log format defaults to TFile. You can specify multiple log paths and log formats (TFile or IndexedFormat). Example: `com.unraveldata.job.collector.log.aggregation.base=TFile:/tmp/logs/*/logs/,IndexedFormat:/tmp/logs/*/logs-ifile/` | | CSL | /tmp/logs/*/logs/ |
`com.unraveldata.spark.eventlog.location` Comma-separated list of HDFS paths to the Spark event logs as per the cluster configuration. Each path must include the hdfs:/// prefix. For example: `com.unraveldata.spark.eventlog.location=hdfs:///spark1-history/,hdfs:///spark2-history/` | | CSL | hdfs:///user/spark/applicationHistory/ |
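A minimal sketch of the HDFS path settings, reusing the examples and defaults from the table above (the actual paths depend on your cluster configuration):

```properties
# Aggregated YARN container logs in two formats (no hdfs:// prefix)
com.unraveldata.job.collector.log.aggregation.base=TFile:/tmp/logs/*/logs/,IndexedFormat:/tmp/logs/*/logs-ifile/

# Spark event log locations (hdfs:/// prefix required)
com.unraveldata.spark.eventlog.location=hdfs:///spark1-history/,hdfs:///spark2-history/

# Job history done directory (default shown)
com.unraveldata.job.collector.done.log.base=/user/history/done
```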