# Monitoring and Instrumentation
There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.
## Web Interfaces
Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:
- A list of scheduler stages and tasks
- A summary of RDD sizes and memory usage
- Environmental information
- Information about the running executors
You can access this interface by simply opening `http://<driver-node>:4040` in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc.).
Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set `spark.eventLog.enabled` to `true` before starting the application. This configures Spark to log the Spark events that encode the information displayed in the UI to persisted storage.
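For example, event logging can be enabled at submit time with `--conf` flags. A minimal sketch, assuming a shared HDFS log directory; the application class and JAR are placeholders:

```bash
# Enable event logging for this application and write the events to a
# shared directory (com.example.MyApp and my-app.jar are hypothetical).
./bin/spark-submit \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=hdfs://namenode/shared/spark-logs \
  --class com.example.MyApp \
  my-app.jar
```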
## Viewing After the Fact
If Spark is run on Mesos or YARN, it is still possible to construct the UI of an application through Spark’s history server, provided that the application’s event logs exist. You can start the history server by executing:
```bash
./sbin/start-history-server.sh
```
This creates a web interface at `http://<server-url>:18080` by default, listing incomplete and completed applications and attempts.
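The start script forwards its arguments to the history server process, so in recent Spark versions the server can also be pointed at a custom configuration file; a sketch, where `history.properties` is a hypothetical file of `spark.history.*` settings:

```bash
# Start the history server, reading configuration from a custom
# properties file instead of conf/spark-defaults.conf.
./sbin/start-history-server.sh --properties-file history.properties
```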
When using the file-system provider class (see `spark.history.provider` below), the base logging directory must be supplied in the `spark.history.fs.logDirectory` configuration option, and should contain sub-directories that each represent an application's event logs.
The Spark jobs themselves must be configured to log events, and to log them to the same shared, writable directory. For example, if the server was configured with a log directory of `hdfs://namenode/shared/spark-logs`, then the client-side options would be:
```
spark.eventLog.enabled true
spark.eventLog.dir hdfs://namenode/shared/spark-logs
```
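Equivalently, these lines can be appended to `conf/spark-defaults.conf` on the submitting machine so that every application logs events without per-job flags. A sketch; `$SPARK_HOME` and the HDFS path are assumptions about your layout:

```bash
# Make event logging the default for all jobs submitted from this client.
cat >> "$SPARK_HOME/conf/spark-defaults.conf" <<'EOF'
spark.eventLog.enabled true
spark.eventLog.dir     hdfs://namenode/shared/spark-logs
EOF
```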
The history server can be configured as follows:
### Environment Variables
Environment Variable | Meaning |
---|---|
SPARK_DAEMON_MEMORY | Memory to allocate to the history server (default: 1g). |
SPARK_DAEMON_JAVA_OPTS | JVM options for the history server (default: none). |
SPARK_PUBLIC_DNS | The public address for the history server. If this is not set, links to application history may use the internal address of the server, resulting in broken links (default: none). |
SPARK_HISTORY_OPTS | spark.history.* configuration options for the history server (default: none). |
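In practice, `spark.history.*` options are passed to the history server as JVM system properties through `SPARK_HISTORY_OPTS`, typically set in `conf/spark-env.sh`. A sketch under the same shared-log-directory assumption as above:

```bash
# conf/spark-env.sh: give the history server 2g of memory and point it
# at the shared event-log directory via -D system properties.
export SPARK_DAEMON_MEMORY=2g
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://namenode/shared/spark-logs \
  -Dspark.history.ui.port=18080"
```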
### Spark configuration options

Property Name | Default | Meaning |
---|---|---|
spark.history.provider | org.apache.spark.deploy.history.FsHistoryProvider | Name of the class implementing the application history backend. Currently there is only one implementation, provided by Spark, which looks for application logs stored in the file system. |
spark.history.fs.logDirectory | file:/tmp/spark-events | For the filesystem history provider, the URL to the directory containing application event logs to load. This can be a local `file://` path, an HDFS path such as `hdfs://namenode/shared/spark-logs`, or that of an alternative filesystem supported by the Hadoop APIs. |
spark.history.fs.update.interval | 10s | The period at which the filesystem history provider checks for new or updated logs in the log directory. A shorter interval detects new applications faster, at the expense of more server load re-reading updated applications. As soon as an update has completed, listings of the completed and incomplete applications will reflect the changes. |
spark.history.retainedApplications | 50 | The number of application UIs to retain. If this cap is exceeded, then the oldest applications will be removed. |
spark.history.ui.port | 18080 | The port to which the web interface of the history server binds. |
spark.history.kerberos.enabled | false | Indicates whether the history server should use Kerberos to log in. This is required if the history server is accessing HDFS files on a secure Hadoop cluster. If this is true, it uses the configs `spark.history.kerberos.principal` and `spark.history.kerberos.keytab`. |
spark.history.kerberos.principal | (none) | Kerberos principal name for the History Server. |
spark.history.kerberos.keytab | (none) | Location of the Kerberos keytab file for the History Server. |
spark.history.ui.acls.enable | false | Specifies whether ACLs should be checked to authorize users viewing the applications. If enabled, access control checks are made regardless of what the individual application had set for `spark.ui.acls.enable` when the application was run. The application owner will always have authorization to view their own application, as will any users specified via `spark.ui.view.acls` and groups specified via `spark.ui.view.acls.groups`. |
spark.history.fs.cleaner.enabled | false | Specifies whether the History Server should periodically clean up event logs from storage. |
spark.history.fs.cleaner.interval | 1d | How often the filesystem job history cleaner checks for files to delete. Files are only deleted if they are older than `spark.history.fs.cleaner.maxAge`. |
spark.history.fs.cleaner.maxAge | 7d | Job history files older than this will be deleted when the filesystem history cleaner runs. |
spark.history.fs.numReplayThreads | 25% of available cores | Number of threads that will be used by the history server to process event logs. |
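For instance, to have the server purge old event logs automatically, the cleaner options above can be appended to `SPARK_HISTORY_OPTS` before starting it. A sketch; the retention values are illustrative, not recommendations:

```bash
# Delete event logs older than 14 days, checking once per day.
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS \
  -Dspark.history.fs.cleaner.enabled=true \
  -Dspark.history.fs.cleaner.interval=1d \
  -Dspark.history.fs.cleaner.maxAge=14d"
./sbin/start-history-server.sh
```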
Note that in all of these UIs, the tables are sortable by clicking their headers, making it easy to identify slow tasks, data skew, etc.

Note:

1. The history server displays both completed and incomplete Spark jobs. If an application makes multiple attempts after failures, the failed attempts will be displayed, as well as any ongoing incomplete attempt or the final successful attempt.
2. Incomplete applications are only updated intermittently. The time between updates is defined by the interval between checks for changed files (`spark.history.fs.update.interval`). On larger clusters, the update interval may be set to large values. The way to view a running application is actually to view its own web UI.
3. Applications which exited without registering themselves as completed will be listed as incomplete, even though they are no longer running. This can happen if an application crashes.
4. One way to signal the completion of a Spark job is to stop the Spark Context explicitly (`sc.stop()`), or in Python, to use the `with SparkContext() as sc:` construct to handle Spark Context setup and teardown.

## REST API

In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available both for running applications and in the history server. The endpoints are mounted at `/api/v1`. For example, for the history server they would typically be accessible at `http://<server-url>:18080/api/v1`, and for a running application at `http://<driver-node>:4040/api/v1`.

Endpoint | Meaning |
---|---|
/applications | A list of all applications. `?status=[completed\|running]` lists only applications in the chosen state. `?minDate=[date]` gives the earliest date/time to list (examples: `?minDate=2015-02-10`, `?minDate=2015-02-03T16:42:40.000GMT`). `?maxDate=[date]` gives the latest date/time to list; it uses the same format as `minDate`. |
/applications/[app-id]/jobs | A list of all jobs for a given application. `?status=[complete\|succeeded\|failed]` lists only jobs in the given state. |
/applications/[app-id]/jobs/[job-id] | Details for the given job. |
/applications/[app-id]/stages | A list of all stages for a given application. |
/applications/[app-id]/stages/[stage-id] | A list of all attempts for the given stage. `?status=[active\|complete\|pending\|failed]` lists only stages in the given state. |
/applications/[app-id]/stages/[stage-id]/[stage-attempt-id] | Details for the given stage attempt. |
/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskSummary | Summary metrics of all tasks in the given stage attempt. `?quantiles` summarizes the metrics with the given quantiles (example: `?quantiles=0.01,0.5,0.99`). |
/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskList | A list of all tasks for the given stage attempt. `?offset=[offset]&length=[len]` lists tasks in the given range. `?sortBy=[runtime\|-runtime]` sorts the tasks (example: `?offset=10&length=50&sortBy=runtime`). |
/applications/[app-id]/executors | A list of all executors for the given application. |
/applications/[app-id]/storage/rdd | A list of stored RDDs for the given application. |
/applications/[app-id]/storage/rdd/[rdd-id] | Details for the storage status of a given RDD. |
/applications/[base-app-id]/logs | Download the event logs for all attempts of the given application as files within a zip file. |
/applications/[base-app-id]/[attempt-id]/logs | Download the event logs for a specific application attempt as a zip file. |
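As a quick check, these endpoints can be exercised with `curl`. A sketch assuming a history server on localhost and a made-up application ID:

```bash
# List all completed applications known to the history server.
curl "http://localhost:18080/api/v1/applications?status=completed"

# Task metric quantiles for attempt 0 of stage 1 of a running application;
# app-20161115183242-0002 is a placeholder application ID.
curl "http://localhost:4040/api/v1/applications/app-20161115183242-0002/stages/1/0/taskSummary?quantiles=0.05,0.5,0.95"
```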