org.apache.spark (Spark 4.2.0 JavaDoc)

:: DeveloperApi :: A set of functions used to aggregate data.

:: Experimental :: A TaskContext with extra contextual info and tooling for tasks in a barrier stage.

:: Experimental :: Carries all task infos of a barrier task.

Additional information if the error was caused by a breaking change.

Listener class used when any item has been cleaned by the Cleaner class.

Classes that represent cleaning tasks.

A WeakReference associated with a CleanupTask.

A FutureAction for actions that could trigger multiple Spark jobs.

For each barrier stage attempt, at most one barrier() call can be active at any time; thus the pair (stageId, stageAttemptId) identifies the stage attempt that a barrier() call came from.
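The bookkeeping described above can be sketched in plain Java: because at most one barrier() call is active per stage attempt, a set keyed on (stageId, stageAttemptId) suffices to track active calls. This is a hypothetical illustration of the idea, not Spark's internal implementation; the class and method names are invented.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of barrier bookkeeping keyed on (stageId, stageAttemptId);
// not Spark's actual coordinator.
public class BarrierCoordinatorSketch {
    // Records give value-based equals/hashCode, so the pair works as a set key.
    record StageAttempt(int stageId, int stageAttemptId) {}

    private final Set<StageAttempt> activeBarriers = new HashSet<>();

    // Returns false if a barrier() call is already active for this stage attempt.
    public boolean tryBeginBarrier(int stageId, int stageAttemptId) {
        return activeBarriers.add(new StageAttempt(stageId, stageAttemptId));
    }

    // Clears the active barrier() call for this stage attempt.
    public void endBarrier(int stageId, int stageAttemptId) {
        activeBarriers.remove(new StageAttempt(stageId, stageAttemptId));
    }
}
```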

:: DeveloperApi :: Base class for dependencies.

A reader to load error information from one or more JSON files.

Information associated with an error class.

Information associated with an error state / SQLSTATE.

Information associated with an error subclass.

:: DeveloperApi :: Task failed due to a runtime exception.

:: DeveloperApi :: The task failed because the executor that it was running on was lost.

:: DeveloperApi :: Task failed to fetch shuffle data from a remote node.

A future for the result of an action to support cancellation.

A Partitioner that implements hash-based partitioning using Java's Object.hashCode.
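The hash-based scheme above can be sketched in a few lines of plain Java. This is a simplified illustration of the technique (key's hashCode modulo the partition count, with negative results shifted into range), not Spark's actual HashPartitioner; the class name is hypothetical.

```java
// Simplified sketch of hash-based partitioning via Object.hashCode;
// not Spark's actual HashPartitioner class.
public class HashPartitioningSketch {
    private final int numPartitions;

    public HashPartitioningSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Java's % can yield a negative result for negative hash codes,
    // so shift it into [0, numPartitions).
    public int getPartition(Object key) {
        if (key == null) return 0; // send null keys to partition 0
        int mod = key.hashCode() % numPartitions;
        return mod < 0 ? mod + numPartitions : mod;
    }
}
```

Because the assignment depends only on hashCode, equal keys always land in the same partition, which is what makes hash partitioning suitable for key-based shuffles.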

A collection of fields and methods concerned with internal accumulators that represent task level metrics.

:: DeveloperApi :: An iterator that wraps around an existing iterator to provide task killing functionality.

Handle via which a "run" function passed to a ComplexFutureAction can submit jobs for execution.

A spark config flag that can be used to mitigate a breaking change.

:: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD.

:: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs.

An identifier for a partition in an RDD.

An object that defines how the elements in a key-value pair RDD are partitioned by key.

An evaluator for computing RDD partitions.

:: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.

A Partitioner that partitions sortable records by range into roughly equal ranges.
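Range partitioning as described above can be sketched with a binary search over precomputed upper bounds. This is a simplified illustration, not Spark's actual RangePartitioner (which samples the data to choose bounds that yield roughly equal ranges); the class name and int-only keys are assumptions for brevity.

```java
import java.util.Arrays;

// Simplified sketch of range partitioning: each partition but the last has an
// upper bound, and a key is routed to the first range that contains it.
// Not Spark's actual RangePartitioner.
public class RangePartitioningSketch {
    private final int[] rangeBounds; // sorted upper bounds (inclusive)

    public RangePartitioningSketch(int[] rangeBounds) {
        this.rangeBounds = rangeBounds.clone();
    }

    public int numPartitions() {
        return rangeBounds.length + 1;
    }

    // Binary-search the bounds; a key equal to a bound stays in that range,
    // otherwise binarySearch's (-(insertion point) - 1) encoding gives the range.
    public int getPartition(int key) {
        int idx = Arrays.binarySearch(rangeBounds, key);
        return idx >= 0 ? idx : -idx - 1;
    }
}
```

Since partitions cover contiguous key ranges in sorted order, concatenating the partitions in index order yields a globally sorted dataset, which is why Spark uses this scheme for sortByKey.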

:: DeveloperApi :: A org.apache.spark.scheduler.ShuffleMapTask that completed successfully earlier, but we lost the executor before the stage completed.

Helper class used by the MapOutputTrackerMaster to perform bookkeeping for a single ShuffleMapStage.

A FutureAction holding the result of an action that triggers a single job.

Configuration for a Spark application.

Main entry point for Spark functionality.

:: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, RpcEnv, block manager, map output tracker, etc.

Exposes information about Spark Executors.

Resolves paths to files added through SparkContext.addFile().

Class that allows users to receive all SparkListener events.

Exposes information about Spark Jobs.

A collection of regexes for extracting information from the master string.

Exposes information about Spark Stages.

Low-level status reporting APIs for monitoring job and stage progress.

Interface mixed into Throwables thrown from Spark.

Companion object used by instances of SparkThrowable to access error class information and construct error messages.

A SparkListener that detects whether spills have occurred in Spark jobs.

:: DeveloperApi :: Task succeeded.

:: DeveloperApi :: Task requested the driver to commit, but was denied.

Contextual information about a task which can be read or mutated during execution.

:: DeveloperApi :: Various possible reasons why a task ended.

:: DeveloperApi :: Various possible reasons why a task failed.

:: DeveloperApi :: Task was killed intentionally and needs to be rescheduled.

:: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected).

:: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched.

An event that SparkContext uses to notify HeartbeatReceiver that SparkContext.taskScheduler is created.

:: DeveloperApi :: We don't know why the task ended, for example because of a ClassNotFound exception when deserializing the task result.